Parallel Computing Systems (PCS)

The Parallel Computing Systems (PCS) theme performs research on the design, programming and run-time management of multi-core and multi-processor computer systems. The modeling, analysis and optimization of the extra-functional aspects of these systems, such as performance and power/energy consumption, but also the productivity with which these systems can be designed and programmed, play a pivotal role in this work. The System and Network Engineering Lab does research on the following topics in the area of PCS.

Modeling, Simulation and Exploration of Embedded Systems 
This research centers on the system-level modeling, simulation and exploration of multi- and many-core embedded computer systems, with the purpose of efficiently and effectively designing, programming and (at run time) managing these systems. More specifically, the research focuses on the development of techniques, algorithms and tools for the analysis and optimization of so-called extra-functional system behavior, such as system performance, power/energy consumption, reliability and cost. For more information see: dr. A.D. Pimentel.

Design and Verification of Real-Time Embedded Systems
Real-time embedded systems are computing systems that are subject to stringent timing constraints, formulated in real, i.e. physical, time. Many of these systems are safety-critical, meaning that a failure, including a timing error, may have costly or even catastrophic consequences. Examples of such systems are the airbag controller in a car, the flight control system of a modern airplane, or the control systems in a nuclear power plant. This research covers various aspects of the development, design and validation of these real-time embedded systems. For more information see: dr. ing. S.J. Altmeyer.
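
To give a flavor of what timing verification involves, the sketch below shows one classic analytical schedulability test: the Liu and Layland utilization bound for rate-monotonic scheduling of independent periodic tasks. It is a generic textbook example with hypothetical task parameters, not a description of the specific methods developed in this research.

# A minimal sketch: the Liu & Layland utilization bound, a sufficient
# (but not necessary) condition for meeting all deadlines under
# rate-monotonic scheduling. The task parameters below are hypothetical.

def rm_schedulable(tasks):
    """tasks: list of (wcet, period) pairs of independent periodic tasks
    with implicit deadlines; returns True if the utilization bound
    guarantees schedulability under rate-monotonic scheduling."""
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Example: three hypothetical tasks (WCET, period), e.g. in milliseconds.
print(rm_schedulable([(1, 4), (1, 5), (2, 10)]))  # U = 0.65 <= ~0.78 -> True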

Programming Languages and Compilers for the Age of Many-Core
To effectively harness the compute power of today’s highly parallel and increasingly heterogeneous systems, programmers are forced to use a variety of low-level, machine-oriented and ever-changing programming models. This research focuses on high-level languages for concise, resource-agnostic programs and on the compiler and runtime-system technologies needed to map these programs effectively and efficiently to diverse compute architectures. Our objective is to reconcile software-engineering productivity, application portability and runtime performance in terms of latency, throughput, dependability and energy consumption. For more information see: dr. C. Grelck.

Performance Analysis, Modeling and Engineering
Today we build massively parallel, heterogeneous systems for many domains and applications, and it is becoming increasingly difficult to assess their performance, let alone to improve it further. In this research, we focus on analytical and statistical frameworks for the performance analysis and modeling of parallel, heterogeneous systems and workloads. We further pursue the design and development of systematic performance-engineering methods. For more information see: dr. A.L. Varbanescu.
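
As a purely illustrative example of an analytical performance model (a textbook sketch, not the specific models developed in this research), Amdahl's law bounds the speedup of a program of which only a fraction can be parallelized; the numbers used below are hypothetical.

# A minimal sketch of one of the simplest analytical performance models.

def amdahl_speedup(p, n):
    """Upper bound on speedup with n processors when a fraction p of the
    sequential execution time is perfectly parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Example with hypothetical numbers: 95% parallel work on 64 processors.
print(round(amdahl_speedup(0.95, 64), 2))  # ~15.42: the serial 5% dominates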

Performance Portability
With computing systems becoming increasingly diverse, portability can no longer be limited to functionality alone. In this research, we investigate the concept of performance portability. We focus on the quantification and qualification of performance portability for systems, programming models, and applications. For more information see: dr. A.L. Varbanescu.
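
One well-known way to quantify performance portability, shown below purely as an illustration (and not necessarily the quantification pursued in this research), is the harmonic-mean metric proposed by Pennycook, Sewall and Lee (2016); the efficiency values in the example are hypothetical.

# A minimal sketch of the Pennycook et al. performance-portability metric:
# the harmonic mean of per-platform performance efficiencies, or 0 if the
# application does not run on every platform in the set.

def performance_portability(efficiencies):
    """efficiencies: per-platform efficiency in (0, 1] for one application
    and problem across a platform set H; 0.0 encodes an unsupported platform."""
    if any(e == 0.0 for e in efficiencies):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies)

# Example: hypothetical efficiencies on three platforms (CPU, GPU, FPGA).
print(round(performance_portability([0.80, 0.60, 0.40]), 3))  # ~0.554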
