The focus is on advanced programming with MPI and OpenMP. The course addresses participants who already have some experience with C/C++, Fortran, or Python as well as with MPI and OpenMP, the most popular programming models in high-performance computing (HPC).
The course will teach the newest methods in MPI-3.0/3.1/4.0/4.1 and OpenMP-4.5 and 5.0, which were developed for the efficient use of current HPC hardware. MPI topics are the group and communicator concept, process topologies, derived data types, the new MPI-3.0 Fortran language binding, one-sided communication, and the shared-memory programming model within MPI. OpenMP topics are the OpenMP-4.0/4.5/5.0 extensions, such as the vectorization directives, thread affinity, and OpenMP places. (GPU programming with OpenMP directives is not part of this course.) The course also covers performance and best-practice considerations.
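For illustration, the following minimal sketch (not taken from the course material) touches one of these topics, the MPI-3.0 shared-memory model within MPI: processes on the same node are grouped into a communicator with MPI_Comm_split_type and share a window allocated with MPI_Win_allocate_shared; error handling is omitted.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm node_comm;
        MPI_Win  win;
        int     *base, node_rank, node_size;

        MPI_Init(&argc, &argv);

        /* One communicator per shared-memory node. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        /* Each process contributes one int to a contiguous shared window. */
        MPI_Win_allocate_shared((MPI_Aint) sizeof(int), sizeof(int),
                                MPI_INFO_NULL, node_comm, &base, &win);
        base[0] = 100 + node_rank;   /* store into the process's own segment */
        MPI_Win_fence(0, win);       /* simple synchronization of the window */

        if (node_rank == 0) {
            int i, disp_unit, *first;
            MPI_Aint seg_size;
            /* Pointer to the start of the contiguous node-shared segment. */
            MPI_Win_shared_query(win, MPI_PROC_NULL, &seg_size, &disp_unit, &first);
            for (i = 0; i < node_size; i++)
                printf("shared element %d = %d\n", i, first[i]);
        }

        MPI_Win_free(&win);
        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }

With the default contiguous allocation, the segments of consecutive ranks are adjacent in memory, so rank 0 can read all elements of the node through ordinary loads after the synchronization.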
Hands-on sessions (in C, Fortran and Python+mpi4py+numpy) will allow users to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP (in C and Fortran).
This course provides scientific training in Computational Science and, in addition, fosters scientific exchange among the participants. It is organized by JSC in cooperation with HLRS.
Online course
Organizer: JSC, Forschungszentrum Jülich, Germany
01 Dec 2025, 08:45
04 Dec 2025, 16:15
Online by JSC
English
Intermediate
Parallel Programming
MPI
OpenMP
Unix / C or Fortran / familiarity with the principles of MPI, e.g., to the extent of the introductory course MPI and OpenMP, i.e., at least the MPI process model, blocking point-to-point message passing, collective communication, and the single-program concept of parallelizing applications. For the afternoon session of the last day, familiarity with OpenMP 3.0 is required.
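For orientation only, the assumed entry level corresponds roughly to being able to read and write a small blocking point-to-point program such as the following sketch (illustrative, not part of the course material; it needs at least two MPI processes).

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);          /* to rank 1 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status); /* from rank 0 */
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }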
To be able to do the hands-on exercises of this course, you need a computer with an OpenMP-capable C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (in the case of Fortran, the mpi_f08 module is required). Please note that the course organizers will not grant you access to an HPC system or any other compute environment. Therefore, please make sure that you have a functioning working environment or access to an HPC cluster prior to the course.
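As an additional quick check (independent of the official test archive mentioned below), a minimal hybrid MPI + OpenMP program such as the following sketch can be compiled and run; typical commands are mpicc -fopenmp smoke.c -o smoke and mpirun -np 2 ./smoke, but the compiler wrappers and flags depend on your installation.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided, rank;

        /* Request the thread support a hybrid MPI+OpenMP program needs. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        {
            #pragma omp critical
            printf("MPI rank %d, OpenMP thread %d of %d (thread support level %d)\n",
                   rank, omp_get_thread_num(), omp_get_num_threads(), provided);
        }

        MPI_Finalize();
        return 0;
    }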
In addition, you can perform the MPI exercises in Python with mpi4py + numpy. In this case, an appropriate installation on your system is required (together with a C/C++ or Fortran installation for the OpenMP exercises).
Please download https://fs.hlrs.de/projects/par/events/TEST.tar.gz and extract it with tar -xvzf TEST.tar.gz (or download https://fs.hlrs.de/projects/par/events/TEST.zip and extract it with unzip TEST.zip), and verify your MPI and OpenMP installation with the tests described in TEST/README.txt within the archive.
The exercise on race-condition detection (at the end of the course) is optional. It requires a race-condition detection tool, e.g., the Intel Inspector (available until 2024) together with the Intel compiler, or the Intel compilers 2024 or 2025, which include the ThreadSanitizer, or both. Installing such a tool is recommended.
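For illustration, the kind of defect such tools report is an unsynchronized update of a shared variable, as in the following sketch (illustrative only, not the course exercise). With GCC/Clang-based compilers the ThreadSanitizer is typically enabled with -fsanitize=thread in addition to the OpenMP flag; the exact options depend on the compiler and version.

    #include <stdio.h>

    int main(void)
    {
        int i, sum = 0;

        /* Data race: all threads update "sum" without a reduction or atomic. */
        #pragma omp parallel for
        for (i = 0; i < 1000; i++)
            sum += i;

        printf("sum = %d (expected 499500; may differ because of the race)\n", sum);
        return 0;
    }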
Learn more about course curricula and content levels.
Dr. Rolf Rabenseifner (Stuttgart)
A few days before the course starts, you will receive PDF files of the slides and tar/zip archives for installing the exercises on your system.
An older version of this course with most of the material (including the audio information) can also be viewed in the HLRS self-study materials.
A detailed program can be found here (PDF, preliminary). The course ends at 16:45 on the first three days and at 16:15 on the last day.
Please register via the course website at JSC.
Thomas Breuer, phone 02461 61-96742, t.breuer(at)fz-juelich.de
See the training overview and the Supercomputing Academy pages.
May 05 - 08, 2025
Online
May 09 - 23, 2025
Hybrid Event - Stuttgart, Germany
June 16 - 17, 2025
June 17 - 18, 2025
August 20 - 29, 2025
Online by ETH
October 13 - 17, 2025
Stuttgart, Germany
November 03 - December 12, 2025
Online (flexible)