MoeWE

Modular Vocational Training on High Performance Computing

The MoeWE project developed the Supercomputing-Akademie, a training program in high-performance computing designed to address the specific needs of researchers and of IT professionals in industry.

The need for experts in simulation, programming, visualization, and optimization for high-performance computing (HPC) is constantly growing due to ever-increasing digitalization. To cover the long-term need for supercomputing experts, HLRS and SICOS BW GmbH are cooperating with the Universities of Ulm and Freiburg to design up-to-date HPC training courses for IT personnel. The training is organized in a “blended learning” model that combines online digital media with classroom instruction. It is aimed at IT professionals working in science or industry, including software engineers, end users, and IT administrators.

The training program is structured into modules. It covers the following areas:

Basics of High-Performance Computing

Some of today’s applications are so computationally demanding or complex that they can no longer be run on a normal workstation. In such cases, it can be beneficial to use supercomputers. To understand how these HPC systems actually work, one must first understand the basic functioning of computers and their extension to parallel processing.

Parallel Programming

An inherent feature of supercomputers is their parallel architecture. For code development on parallel systems, application programmers not only need to be familiar with relevant parallelization models, libraries, and directives, but they also need to fully comprehend central parallelization concepts.
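
As a minimal sketch of the directive-based parallelization covered in this module, the following C snippet distributes the iterations of a vector addition across threads with a single OpenMP pragma. It assumes a compiler with OpenMP support (e.g., gcc -fopenmp); the array size is purely illustrative.

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N], b[N], c[N];   /* static: too large for the stack */
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        /* The pragma asks the OpenMP runtime to split the loop
           iterations across all available threads. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        printf("c[N-1] = %f (up to %d threads)\n", c[N - 1], omp_get_max_threads());
        return 0;
    }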

Simulation

Many companies rely on simulations in their product development. Successful simulations answer a wide range of physical questions by solving the underlying equations with highly sophisticated numerical methods. How does the simulation process work, and how can it be optimized for HPC?
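
In its simplest form, such a simulation can be sketched in a few lines: the following C program applies an explicit finite-difference scheme to the one-dimensional heat equation. The grid size, number of time steps, and coefficient are illustrative choices, not prescriptions from the curriculum.

    #include <stdio.h>

    #define N 100       /* grid points */
    #define STEPS 500   /* time steps  */

    int main(void) {
        double u[N] = {0}, unew[N] = {0};
        u[N / 2] = 1.0;             /* initial heat spike in the middle       */
        const double alpha = 0.25;  /* diffusion coeff * dt / dx^2 (<= 0.5
                                       keeps this explicit scheme stable)     */

        for (int t = 0; t < STEPS; t++) {
            /* New temperature at each interior point from its neighbors. */
            for (int i = 1; i < N - 1; i++)
                unew[i] = u[i] + alpha * (u[i - 1] - 2.0 * u[i] + u[i + 1]);
            for (int i = 1; i < N - 1; i++)
                u[i] = unew[i];
        }
        printf("temperature at center after %d steps: %f\n", STEPS, u[N / 2]);
        return 0;
    }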

Ecology and Economy

The ecological and economic costs of IT systems have been discussed for some time under the catchphrase “Green IT”. This also applies to the operation of supercomputers, which consume huge amounts of electricity. Which strategies and measures are sensible and sustainable?

Cluster, Cloud & HPC

Distributing computational work across many processors is the basic idea by which multi-core systems increase their speed. For the developer, this means implementing a task in such a way that it efficiently exploits the many cores of a system. The best way to do this can vary depending on the platform available for parallel computing.
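
As one illustrative contrast to shared-memory threading, the sketch below distributes a summation across the processes of a distributed-memory cluster using MPI; the round-robin work split and the problem size are arbitrary demonstration choices (build with mpicc, run with mpirun).

    #include <mpi.h>
    #include <stdio.h>

    /* Each process sums a disjoint slice of 1..N; MPI_Reduce then
       combines the partial results on rank 0. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long N = 1000000;
        long long local = 0, total = 0;
        for (long i = rank + 1; i <= N; i += size)  /* round-robin split */
            local += i;

        MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum 1..%ld = %lld\n", N, total);

        MPI_Finalize();
        return 0;
    }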

Data Management, Data Analytics, Smart Data Analytics

Large amounts of unstructured data require sophisticated methods of data analysis, also called smart data analytics. Supercomputers are in many cases the appropriate technology for deriving event patterns and correlations from big data, i.e., huge data streams.
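
One elementary building block of such analyses, shown here purely for illustration in C, is a single-pass (“streaming”) Pearson correlation between two data streams; the synthetic sine signals stand in for real sensor data (compile with -lm).

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        const long n = 1000000;

        /* Consume both streams in one pass, keeping only running sums. */
        for (long i = 0; i < n; i++) {
            double x = sin(0.001 * i);        /* synthetic "sensor" stream */
            double y = sin(0.001 * i + 0.1);  /* correlated second stream  */
            sx += x; sy += y;
            sxx += x * x; syy += y * y; sxy += x * y;
        }

        double r = (n * sxy - sx * sy) /
                   sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
        printf("correlation r = %f\n", r);    /* near 1 for these streams */
        return 0;
    }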

Visualization

Raw simulation data usually consist of streams of numbers that are difficult to interpret on their own. To facilitate the understanding of such data, tools have been developed that visualize simulation results more intuitively. Visualization tools can also help with analyzing the behavior of the programs themselves.

Performance Optimization

Performance is the central challenge of HPC. Calculations that are too large for a single computer can consume significant HPC resources. The aim of optimization is to substantially reduce the runtime of a program without reducing the quality or validity of the solution, thereby conserving these resources.
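
A classic example of such an optimization is improving cache locality by reordering loops. The C sketch below (matrix size illustrative) computes a matrix product in the (i, k, j) order, which typically runs several times faster than the naive (i, j, k) order while producing the same result.

    #include <stdio.h>

    #define N 512

    static double A[N][N], B[N][N], C[N][N];  /* static arrays start at zero */

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 2.0; }

        /* (i, k, j) order: the inner loop strides through B and C
           row-wise, which is cache-friendly; the naive (i, j, k) order
           would stride through B column-wise instead. */
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++) {
                double aik = A[i][k];
                for (int j = 0; j < N; j++)
                    C[i][j] += aik * B[k][j];
            }

        printf("C[0][0] = %f\n", C[0][0]);    /* expect 1024.0 */
        return 0;
    }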

The modular structure of the curriculum allows students to tailor the training program to their personal background and needs at a flexible, self-defined pace.

For more information, see Supercomputing-Akademie.

Runtime

1 July 2016 – 31 March 2021

Funding

MWK Baden-Württemberg & ESF