Distributed memory parallelization with the Message Passing Interface (MPI) (Mon, for beginners):
On clusters and distributed memory architectures, the Message Passing Interface (MPI) is the dominant parallel programming model. The course gives an introduction to MPI-1. Hands-on sessions (in C and Fortran) allow users to immediately test and understand the basic constructs of MPI.
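For a first flavor of those basic constructs, here is a minimal MPI-1 sketch in C (an illustration only, not part of the official course material): rank 0 sends one integer to rank 1.

    /* Illustrative MPI-1 sketch (not official course material): rank 0
       sends one integer to rank 1. Compile with an MPI compiler wrapper
       such as mpicc; run with e.g. mpirun -np 2 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, token = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* my process number */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        if (rank == 0 && size > 1) {
            /* send one integer to rank 1 with message tag 0 */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("rank 1 received %d from rank 0\n", token);
        }

        MPI_Finalize();                       /* shut down MPI */
        return 0;
    }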
Shared memory parallelization with OpenMP (Tue, for beginners):
The focus is on shared memory parallelization with OpenMP, the key concept for parallelization on hyper-threading, dual-core, multi-core, shared memory, and ccNUMA platforms. Hands-on sessions (in C and Fortran) allow users to immediately test and understand the directives and other interfaces of OpenMP. Tools for debugging race conditions are also presented.
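As a small taste of the directives covered, here is a minimal OpenMP sketch in C (an illustration only, not part of the official course material): a reduction clause gives each thread a private partial sum, avoiding the race condition that unsynchronized updates of a shared variable would cause.

    /* Illustrative OpenMP sketch (not official course material).
       Compile with OpenMP enabled, e.g. gcc -fopenmp. */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;

        /* reduction(+:sum) gives every thread a private partial sum that is
           combined at the end; updating a shared 'sum' without it would be
           a race condition */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);

        printf("harmonic sum = %f (max threads: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }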
Intermediate and advanced topics in parallel programming (Wed-Fri):
Topics include the advanced use of communicators and virtual topologies, one-sided communication, derived datatypes, MPI-2 parallel file I/O, hybrid mixed-model MPI+OpenMP parallelization, parallelization of explicit and implicit solvers and of particle-based applications, parallel numerics and libraries, and parallelization with PETSc. MPI-3.0 introduced a new shared memory programming interface, which can be combined with MPI message passing and remote memory access on the cluster interconnect. It can be used for direct neighbor accesses, similar to OpenMP, or for direct halo copies, and it enables new hybrid programming models; a minimal sketch follows below. In the hybrid mixed-model MPI+OpenMP parallelization session, these models are compared with various hybrid MPI+OpenMP approaches and with pure MPI. Further aspects are domain decomposition, load balancing, and debugging.
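The following sketch of the MPI-3.0 shared memory interface in C is an illustration only (not official course material) and assumes all ranks run on one node: ranks sharing a node allocate a shared window and read a neighbor's data with a direct load, similar to OpenMP, instead of sending a message.

    /* Illustrative MPI-3.0 shared memory sketch (not official course
       material); assumes all ranks are on the same node. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* split COMM_WORLD into communicators of ranks that share memory */
        MPI_Comm nodecomm;
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);

        int rank, size;
        MPI_Comm_rank(nodecomm, &rank);
        MPI_Comm_size(nodecomm, &size);

        /* each rank contributes one double to a node-wide shared window */
        double *mybase;
        MPI_Win win;
        MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                MPI_INFO_NULL, nodecomm, &mybase, &win);
        *mybase = (double)rank;

        /* query the base address of the right neighbor's segment */
        int right = (rank + 1) % size;
        MPI_Aint nbytes; int disp_unit; double *nbase;
        MPI_Win_shared_query(win, right, &nbytes, &disp_unit, &nbase);

        MPI_Win_fence(0, win);   /* synchronize before the direct load */
        printf("rank %d reads %f directly from rank %d\n",
               rank, *nbase, right);
        MPI_Win_fence(0, win);

        MPI_Win_free(&win);
        MPI_Comm_free(&nodecomm);
        MPI_Finalize();
        return 0;
    }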
Hands-on sessions are included on all days. This course provides scientific training in Computational Science and also fosters scientific exchange among the participants.
See the link to the detailed program (preliminary).
Prerequisites: Unix / C or Fortran
Each participant will get a paper copy of all slides.
The MPI-1 part of the course is based on the MPI course developed by the EPCC Training and Education Centre, Edinburgh Parallel Computing Centre.
If you wish, you may also buy printed copies of the MPI-3.1 standard (hardcover, 17 Euro) and the OpenMP standard (about 13 Euro).
An older version of this course, with most of the material (including audio), can also be viewed in the ONLINE Parallel Programming Workshop.
Students and academic participants within PRACE member countries:
All other participants (i.e., not from academia or from outside PRACE):
The course fee includes coffee breaks.
Registration link: see above.
See our How to find us page.
HLRS is part of the Gauss Centre for Supercomputing (GCS), which is one of the six PRACE Advanced Training Centres (PATCs) that started in Feb. 2012.
A part of this course is a PATC course, see also the PRACE Training Portal and Events. For participants from public research institutions in PRACE countries, the course fee is sponsored for this part of the course through the PRACE PATC program. For details, see the section about the course fee above.
In the context of PRACE-4IP, the first two days of this course also serve as an on-demand course for the Centre of Excellence for Global Systems Science (CoeGSS) project.
In conjunction with this course, a Train the Trainer Program is provided. Whereas this regular course teaches parallel programming, the Train the Trainer Program educates future trainers in parallel programming. For further details, see here.
(The 2016 links will be provided as soon as the PRACE PATC curriculum 2016/2017 is finalized. As preliminary information, you may visit the 2015 TtT invitation.)