MPI course material
Using the material on this web page requires you to accept our terms and conditions of use.
Slides (please download; the internal hyperlinks work best with Acrobat):
- MPI course with C, Fortran and Python (mpi4py) language bindings: mpi_3.1_rab.pdf
- This file is always kept up to date with the newest version.
- The pdf file contains internal cross-references (see the blue boxes on the 2nd page).
- It is strongly recommended to use Acrobat, because some other viewers may jump to the wrong page (for example, one page too late).
Content:
- MPI Overview
- one program on several processors
- work and data distribution
- Process model and language bindings
- starting several MPI processes
- Messages and point-to-point communication
- Nonblocking communication
- to avoid idle time, deadlocks and serializations
- The New Fortran Module mpi_f08
- Collective communication
- (1) e.g., broadcast slides
- (2) e.g., nonblocking collectives, neighborhood communication
- Error handling
- Groups & Communicators, Environment Management
- (1) MPI_Comm_split, intra- & inter-communicators
- (2) Re-numbering on a cluster, collective communication on inter-communicators, info object, naming & attribute caching, implementation information
- Virtual topologies
- (1) A multi-dimensional process naming scheme
- (2) Neighborhood communication + MPI_BOTTOM
- (3) Optimization through reordering
- One-sided Communication
- Shared Memory One-sided Communication
- (1) MPI_Comm_split_type & MPI_Win_allocate_shared, hybrid MPI and MPI-3 shared memory programming
- (2) MPI memory models and synchronization rules
- Derived datatypes
- (1) transfer any combination of typed data
- (2) advanced features, alignment, resizing
- Parallel File I/O
- (1) Writing and reading a file in parallel
- (2) Fileviews
- (3) Shared Filepointers, Collective I/O
- MPI and Threads – e.g., hybrid MPI and OpenMP
- Probe, Persistent Requests, Cancel
- Process Creation and Management
- Spawning additional processes
- Singleton MPI_INIT
- Connecting two independent sets of MPI processes
- Other MPI features
- Best practice
- Parallelization strategies (e.g. Foster’s Design Methodology)
- Performance considerations
- Pitfalls
- Summary
- Appendix
Exercises: please download one of the archives (tar.gz or zip) and expand it:
- preferred: MPI31single.tar.gz (alternative: MPI31single.zip)
- The exercises are provided in C, Fortran (with the mpi_f08 module) and Python (through mpi4py)
Your environment for the exercises:
To be able to do the hands-on exercises of this course, you need a computer with a C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (in case of Fortran, the mpi_f08 module is required).
Please download the TEST archive file TEST.tar.gz or TEST.zip.
After uncompressing the archive file via
tar -xvzf TEST.tar.gz
or unzip TEST.zip
please verify your MPI and OpenMP installation with the tests described in TEST/README.txt inside the archive.
Standards and API definitions:
- Please download the current MPI standard from the official web page: MPI documents
- The MPI standard includes the MPI language bindings for C and Fortran.
- For the Python language bindings with mpi4py, you may look at the mpi4py documentation.
Recordings:
- For recordings, please visit: Online course recordings and look for the latest MPI course