Parallelization with MPI and OpenMP



The focus is on the programming models MPI and OpenMP. Hands-on sessions (in C and Fortran) will allow participants to immediately test and understand the basic constructs of the Message Passing Interface (MPI) and the shared memory directives of OpenMP. This course is organized by PC2 and the Marie Curie ITN, Paderborn University, in cooperation with HLRS. (Content level: 70% for beginners, 30% advanced)


The goal is to learn methods of parallel programming.

On clusters and distributed memory architectures, parallel programming with the Message Passing Interface (MPI) is the dominant programming model. The course gives a full introduction into the basic and intermediate features of MPI, such as blocking and nonblocking point-to-point communication, collective communication, subcommunicators, virtual topologies, and derived datatypes. Modern methods such as one-sided communication and the new MPI-3.0 shared memory model are also taught.
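To give a flavour of the blocking point-to-point communication mentioned above, a minimal C sketch (not course material; the calls are standard MPI) might send one integer from rank 0 to rank 1:

```c
/* Minimal MPI point-to-point example: rank 0 sends an integer to rank 1.
 * Build and run with an MPI installation, e.g.:
 *   mpicc send.c -o send && mpirun -np 2 ./send
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            value = 42;
            /* Blocking send: returns once the buffer may be reused. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Blocking receive: returns once the message has arrived. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d\n", value);
        }
    }

    MPI_Finalize();
    return 0;
}
```

The course covers the nonblocking counterparts (MPI_Isend/MPI_Irecv) and when each is appropriate.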
Additionally, this course teaches shared memory OpenMP parallelization, which is a key concept on multi-core shared memory and ccNUMA platforms. A race-condition debugging tool is also presented. The course is based on OpenMP-3.1, but also includes new features of OpenMP-4.0 and 4.5, like pinning of threads, vectorization, and taskloops.
The course is rounded off with a talk on hybrid MPI+X programming of clusters of shared memory nodes, and it ends with Algorithmic Differentiation as an additional topic, especially for the participants of the Marie Curie ITN.

Hands-on sessions are included on all days. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants.

The preliminary course outline can be found here (PDF download).


Prerequisites

Unix / C or Fortran


Lecturers

Dr. Rolf Rabenseifner (Stuttgart, member of the MPI-2/3/4 Forum) [MPI and OpenMP course], and
PD Dr. Kshitij Kulshreshtha (Universität Paderborn, Institute for Mathematics) [Algorithmic Differentiation]


The course language is English.


Handouts

Each participant will receive a paper copy of all slides.
The MPI-1 part of the course is based on the MPI course developed by the EPCC Training and Education Centre, Edinburgh Parallel Computing Centre.
If you wish, you may also buy printed copies of the MPI-3.1 standard (hardcover, 17 Euro) and of the OpenMP standard (about 14 Euro).
An older version of this course, with most of the material (including the audio information), can also be viewed in the ONLINE Parallel Programming Workshop (members of the HLRS mailing list only; course participants can get access after the course).


Registration

Online form not yet available.

The registration deadline is January 14, 2018, with priority rules; acceptance decisions will be made on January 15, 2018. As long as seats are available, there will be an extended registration period without priority rules (until January 24, 2018).

Priority for acceptance: first, participants of the Marie Curie ITN; second, students and members of Paderborn University; third, if slots remain, participants from other universities in Nordrhein-Westfalen with an interest in programming modern HPC hardware.


Travel Information and Accommodation

See the travel advice of the Paderborn Center for Parallel Computing.

Local Organizer and Contact

Bernard Bauer, phone +49 5251 60-1737, and
Michaela Kemper, phone +49 5251 60-17356 (Paderborn Center for Parallel Computing, Paderborn University)

Shortcut-URL & Course Number
Course pages at the PC2: tba