Pumpkin is a framework for distributed Data Transformation Networks, developed in the COMMIT Project. It implements a protocol for distributed data processing in which a data packet is a self-contained processing unit that incorporates data, state, code and routing information. An automaton models the transformation of the packet's state from one node to the next, and the automaton graph doubles as the packet's routing information. A decentralized distributed system takes care of routing the data packets. Read more
GitHub: https://github.com/recap/pumpkin
Contact person: Reginald Cushing MSc, dr. Adam Belloum
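To make the packet model concrete, below is a minimal, hypothetical Python sketch of a self-contained data packet whose per-node transformation and routing are driven by an automaton graph; the class and field names are illustrative and are not Pumpkin's actual API.

```python
# Illustrative sketch only: a self-contained packet whose per-node
# transformation and routing follow an automaton graph, in the spirit of
# Pumpkin's model. Class and field names are hypothetical, not Pumpkin's API.
from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class DataPacket:
    data: bytes                                # payload to be transformed
    state: str                                 # current automaton state
    code: Dict[str, Callable[[bytes], bytes]]  # state -> transformation function
    transitions: Dict[str, Optional[str]]      # automaton graph = routing info

    def next_hop(self) -> Optional[str]:
        """The automaton graph doubles as the packet's routing information."""
        return self.transitions.get(self.state)

    def process(self) -> None:
        """Apply the transformation bound to the current state, then advance."""
        self.data = self.code[self.state](self.data)
        self.state = self.next_hop()


# Example: a two-stage transformation network (uppercase, then reverse).
packet = DataPacket(
    data=b"hello",
    state="uppercase",
    code={"uppercase": bytes.upper, "reverse": lambda b: b[::-1]},
    transitions={"uppercase": "reverse", "reverse": None},
)
packet.process()    # executed at the first node
packet.process()    # executed at the second node
print(packet.data)  # b'OLLEH'
```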
LOBCDER is a data management system that federates storage resources. It is part of the Data and Compute Cloud Platform of the VPH-Share project [8]. Its function is to provide large and scalable file storage that mimics a local file system. Read more
GitHub: https://github.com/skoulouzis/lobcder
Contact person: Spiros Koulouzis MSc, dr. Adam Belloum
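LOBCDER deployments have typically exposed this federated storage over WebDAV so that it behaves like a file system; as an illustration, the sketch below talks to a WebDAV-style endpoint. The host, path and credentials are placeholders rather than a real deployment, and the exact interface of a given LOBCDER instance may differ.

```python
# Minimal sketch of accessing a WebDAV-style endpoint such as the one a
# LOBCDER deployment can expose. Host, path and credentials are placeholders.
import requests

BASE = "https://lobcder.example.org/dav"  # hypothetical endpoint
AUTH = ("username", "token")              # placeholder credentials

# List a directory (WebDAV PROPFIND with depth 1).
resp = requests.request(
    "PROPFIND", f"{BASE}/myproject/", auth=AUTH, headers={"Depth": "1"}
)
print(resp.status_code, len(resp.content), "bytes of XML listing")

# Upload a file (WebDAV PUT), as if writing to a local file system.
with open("results.csv", "rb") as fh:
    requests.put(f"{BASE}/myproject/results.csv", data=fh, auth=AUTH)
```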
WeevilScout. Given the ubiquity of Web browsers and the performance gains being achieved by JavaScript virtual machines, a question arises: could Internet browsers become yet another middleware for distributed computing? With 2 billion users online, computing through Internet browsers could amass immense resources and transform the Internet into a distributed computer ideal for common classes of distributed scientific applications, such as parametric studies. The WeevilScout framework builds on this idea: it can set up a cluster of globally distributed Internet browsers that computes thousands of tasks. Read more
GitHub: https://github.com/recap/weevilscout
Contact person: Reginald Cushing MSc, dr. Adam Belloum
Cookery is a framework for designing scientific applications in a cloud environment. Cookery implements a design pattern that splits an application into three layers, each meant for a user in a different role and each representing a different level of abstraction. Cookery makes it easy for researchers to design workflows and reuse them beyond the original environment. Cookery is developed in the Python programming language and can thus benefit from the rich application resources of the Python ecosystem. The current version of the framework has access to several cloud services, such as the Google Prediction API. Applications can be developed entirely in a cloud environment thanks to the framework's integration with Jupyter Notebook, which makes Cookery a cloud service itself. Read more
GitHub: https://github.com/mikolajb/cookery
Contact person: Mikolaj Baranowski MSc, dr. Adam Belloum
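The three-layer split can be illustrated with the simplified Python sketch below; it is not Cookery's actual DSL or API, only a schematic of how implementation, composition and application layers might be separated for users in different roles.

```python
# Illustrative sketch of Cookery's three-layer idea (this is not Cookery's
# actual DSL or API; layer names and helpers are simplified for exposition).

# Layer 1 - implementation: a programmer writes ordinary Python functions.
def fetch_text(url):
    import urllib.request
    return urllib.request.urlopen(url).read().decode()

def count_lines(text):
    return len(text.splitlines())

# Layer 2 - composition: a tool developer registers the functions as named
# actions that can be chained without knowing how they are implemented.
ACTIONS = {"fetch": fetch_text, "count lines": count_lines}

def run(recipe, argument):
    value = argument
    for step in recipe:
        value = ACTIONS[step](value)
    return value

# Layer 3 - application: a researcher describes the workflow as a short,
# declarative recipe and can reuse it with different inputs.
recipe = ["fetch", "count lines"]
print(run(recipe, "https://example.org/data.txt"))  # placeholder URL
```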
The intuition our group had years ago was that we could leverage Semantic Web technologies to identify the exchangeable knowledge required to federate networks. We proved that we can manage the complexity and heterogeneity of the Internet with semantically rich models.
The result of this work has been the definition of the INDL ontologies. INDL was the first language of its type, and the community embraced it internationally. INDL has become the foundation of the OMN language, an international effort that sees participation and adoption from colleagues all over the world. Read more
UvA: https://staff.fnwi.uva.nl/p.grosso/INDL.html
Contact person: dr. Paola Grosso
The Dynamic Real-time Infrastructure Planner (DRIP) is a microservice suite for planning and provisioning networked virtual machines, deploying application components and managing runtime infrastructures based on time-critical constraints. DRIP has been developed in the context of the EU H2020 projects SWITCH and ENVRIplus. The DRIP system provides an engine for automating all these procedures, enabling a more holistic approach to optimising resources and meeting application-level performance constraints. It allows application developers to seamlessly plan a customised virtual infrastructure based on constraints on quality of service and budget. Based on such a plan, DRIP can provision an infrastructure across several cloud providers, deploy application components, start execution and scale individual components on demand.
GitHub: https://github.com/QCAPI-DRIP/DRIP-integradation/wiki
Contact person: dr. Zhiming Zhao
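To give an idea of the kind of input such planning works from, the sketch below pairs application components with quality-of-service and budget constraints and runs a toy planner over them; the structure, field names, prices and providers are illustrative only and do not reflect DRIP's actual interfaces.

```python
# Hypothetical sketch of the kind of request a DRIP-style planner works from:
# application components plus quality-of-service and budget constraints.
request = {
    "application": {
        "components": [
            {"name": "ingest",  "cpu": 2, "memory_gb": 4},
            {"name": "process", "cpu": 8, "memory_gb": 16, "scalable": True},
            {"name": "store",   "cpu": 2, "memory_gb": 4},
        ],
        "workflow": [("ingest", "process"), ("process", "store")],
    },
    "constraints": {
        "deadline_seconds": 600,      # time-critical constraint
        "budget_eur_per_hour": 3.50,  # budget constraint
    },
    "providers": ["provider-a", "provider-b"],  # candidate clouds (placeholders)
}

def plan(req):
    """Toy planner: pick the cheapest VM size that fits each component. A real
    planner optimises across providers, QoS and budget jointly."""
    catalogue = {"small": (2, 4, 0.05), "large": (8, 16, 0.40)}  # cpu, GB, EUR/h
    assignment = {}
    for comp in req["application"]["components"]:
        for size, (cpu, mem, _price) in sorted(catalogue.items(),
                                               key=lambda kv: kv[1][2]):
            if cpu >= comp["cpu"] and mem >= comp["memory_gb"]:
                assignment[comp["name"]] = size
                break
    return assignment

print(plan(request))  # {'ingest': 'small', 'process': 'large', 'store': 'small'}
```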
Open Information Linking for Environmental research infrastructures (OIL-E) is a framework for addressing the semantic linking requirements of environmental science research infrastructures. It aims to provide a machine-readable bridge between the ENVRI Reference Model (ENVRI RM), used within the ENVRI cluster of European environmental science research infrastructures to model their architecture and design, and other concept models related to research infrastructure, architecture and scientific (meta)data. The ENVRI RM ontology within OIL-E captures all the archetypes defined across the three views for science, information and computation, providing a standard vocabulary for many of the actors, resources, information objects and computational services used in environmental science research infrastructures. OIL-E is intended to link concepts used in a variety of standards and specifications as a means to map out and harmonise technical developments in environmental science research infrastructures.
OIL-E: http://oil-e.net/ontology/
Contact person: dr. Zhiming Zhao
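Assuming the ontology is published as an RDF/OWL document at (or reachable from) the URL above, it can be inspected with standard Semantic Web tooling; the Python sketch below uses rdflib and may need the location or serialisation format adjusted to the actual publication of the ontology.

```python
# Sketch of inspecting the OIL-E ontology with rdflib, assuming an RDF/OWL
# document is reachable at the URL above (otherwise point parse() at a
# downloaded copy and adjust the serialisation format).
from rdflib import Graph
from rdflib.namespace import RDF, RDFS, OWL

g = Graph()
g.parse("http://oil-e.net/ontology/", format="xml")  # or a local .owl/.ttl file

# List the classes (e.g. ENVRI RM archetypes) together with their labels.
for cls in g.subjects(RDF.type, OWL.Class):
    for label in g.objects(cls, RDFS.label):
        print(cls, "-", label)
```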
The ENVRI knowledge base uses the OIL-E framework to structure information about environmental science research infrastructures and the semantic landscape of environmental science in Europe. The main purpose of the knowledge base is twofold: to gather the body of knowledge accumulated via application of the ENVRI Reference Model in machine-readable form, publicly queryable via Semantic Web standards and subject to ontological validation; and to gather concrete information about, and links to, the services and technologies (software, standards and vocabularies) currently used by research infrastructures, so as to provide a navigable map of active resources built on Linked Open Data principles and contextualised via ENVRI RM archetypes and relationships. The knowledge base also has a secondary role as a context in which to experiment with semantic linking and mapping approaches involving other Semantic Web data sources.
OIL-E: http://oil-e.vlan400.uvalight.net/
Contact person: dr. Zhiming Zhao
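Since the knowledge base is meant to be publicly queryable via Semantic Web standards, a query might look like the sketch below; the SPARQL endpoint path and the vocabulary used in the query are assumptions, so consult the knowledge base documentation for the actual endpoint and the ENVRI RM / OIL-E terms.

```python
# Sketch of querying the knowledge base over SPARQL. The endpoint path and the
# vocabulary in the query are assumptions, not documented interfaces.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://oil-e.vlan400.uvalight.net/sparql")  # hypothetical path
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?resource ?label
    WHERE { ?resource rdfs:label ?label }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["resource"]["value"], "-", row["label"]["value"])
```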
Sesame is a system-level modeling and simulation framework that allows for efficient exploration of embedded MultiProcessor System-on-Chip (MPSoC) systems and of the mapping of concurrent applications onto them. It separates application models from architecture models, with an explicit mapping step that maps application tasks onto architecture resources. In this approach, an application model describes the functional behavior of an application in a timing- and architecture-independent manner, while a (platform) architecture model defines architecture resources and captures their performance and/or power constraints. To perform quantitative performance analysis, application models are first mapped onto and then co-simulated with the architecture model under investigation, after which the performance of each application-architecture combination can be evaluated. Read more
Contact person: drs. Simon Polstra, dr. Andy Pimentel
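The separation between application model, architecture model and mapping can be caricatured in a few lines of Python, as below; this is not Sesame's modelling language or simulator, only a toy illustration of the idea with made-up numbers.

```python
# Toy illustration of Sesame's separation between an application model, an
# architecture model and an explicit mapping step (not Sesame's actual
# modelling languages or simulator; all numbers are made up).

# Application model: tasks with architecture-independent workloads (abstract operations).
application = {"decode": 4_000, "filter": 2_500, "display": 1_000}

# Architecture model: processing resources with performance characteristics (ops per ms).
architecture = {"cpu0": 2_000, "cpu1": 1_000, "accel": 8_000}

# Mapping step: bind each application task to an architecture resource.
mapping = {"decode": "accel", "filter": "cpu0", "display": "cpu1"}

def estimate_makespan(app, arch, binding):
    """Crude performance estimate: each resource processes its tasks serially
    and the makespan is set by the busiest resource (tasks assumed independent)."""
    busy = {resource: 0.0 for resource in arch}
    for task, workload in app.items():
        resource = binding[task]
        busy[resource] += workload / arch[resource]
    return max(busy.values())

print(f"Estimated makespan: {estimate_makespan(application, architecture, mapping):.2f} ms")
```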