Self-Study Materials

MPI course material (password-free, current version, without recordings)


Using the material on this web page requires accepting our terms and conditions of use.

This web page is part of the HLRS self-study materials, which also contain recordings of several courses, including the most recent recording of an MPI course based on the material shown here.

Highly recommended for MPI and OpenMP learning:

A structured study plan for the recorded material is available below.
The recorded material is useful for self-study of MPI and OpenMP.
However, for advanced parallelization (100+ cores), attending an on-site course is highly recommended:

  • On-site interaction provides important communication and networking opportunities for all participants.
  • Such communication can be very helpful or even essential to the success of your scientific work.
  • On-site courses encourage active discussion and collaboration.
  • Online and hybrid formats offer only limited opportunities for such informal interaction.

All of our MPI and OpenMP courses in on-site, online, and hybrid formats include hands-on labs and interaction with the instructor and with the other participants:

  • They are offered by different organizers (HLRS, JSC, LRZ, ZIH, VSC Vienna, Uni Mainz, ETH).
  • Our Supercomputing Academy (SCA) offers hybrid formats that combine asynchronous learning with active participation in seminars and an online forum.
  • All MPI parts are based on the same regularly updated course material.

Please visit https://www.hlrs.de/training

Slides (please download; the internal hyperlinks work best in Acrobat):

  • MPI course with C, Fortran and Python (mpi4py) language bindings: mpi_3.1_rab.pdf
    • This file is always updated to the latest version.
    • The PDF file contains internal cross-references (see the blue boxes on the 2nd page).
    • Using Acrobat is highly recommended, as some other viewers seem to jump to the wrong page, e.g., one page too late.
  • Same content, but with the animated slides expanded into one slide per animation step: mpi_3.1_rab-animated.pdf
    • This PDF file was created with the PPspliT add-in for PowerPoint, developed by Massimo Rimondini.
    • Many thanks to him for version 2.5, which added the features needed to split the many animated slides of this course while preserving their links to other slides within the slide set.
    • If you find any bugs in this slide set, please report them directly to Rolf Rabenseifner.

Content:

  1. MPI Overview
    • one program on several processors
    • work and data distribution
  2. Process model and language bindings
    • starting several MPI processes
  3. Messages and point-to-point communication
  4. Nonblocking communication
    • to avoid idle time, deadlocks, and serializations (a minimal sketch follows after this outline)
  5. The New Fortran Module mpi_f08
  6. Collective communication
    • e.g., broadcast
    • e.g., nonblocking collectives, neighborhood communication
  7. Error handling
  8. Groups & Communicators, Environment Management
    • MPI_Comm_split, intra- & inter-communicators
    • Re-numbering on a cluster, collective communication on inter-communicators,
      info object, naming & attribute caching, implementation information
  9. Virtual topologies
    • A multi-dimensional process naming scheme
    • Neighborhood communication + MPI_BOTTOM
    • Optimization through reordering
  10. One-sided Communication
  11. Shared Memory One-sided Communication
    • MPI_Comm_split_type & MPI_Win_allocate_shared
      Hybrid MPI and MPI-3 shared memory programming
    • MPI memory models and synchronization rules
  12. Derived datatypes
    • transfer any combination of typed data
    • advanced features, alignment, resizing
  13. Parallel File I/O
    • Writing and reading a file in parallel
    • Fileviews
    • Shared Filepointers, Collective I/O
  14. MPI and Threads – e.g., hybrid MPI and OpenMP
  15. Probe, Persistent Requests, Cancel
  16. Process Creation and Management
    • Spawning additional processes
    • Singleton MPI_INIT
    • Connecting two independent sets of MPI processes
  17. Other MPI features
  18. Best practice
    • Parallelization strategies (e.g. Foster’s Design Methodology)
    • Performance considerations
    • Pitfalls and progress / weak local
  19. Heat example
  20. Summary
  21. Appendix 
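
As a first taste of topic 4 above, here is a minimal sketch (our own illustration, not taken from the course slides; it assumes a working C compiler and MPI library) of how posting a nonblocking receive avoids the deadlock that two mutually blocking sends can cause:

        /* Illustration only (not from the course material): ranks 0 and 1
           exchange a buffer. If both processes called a blocking MPI_Send
           first, the program could deadlock for large messages; posting
           MPI_Irecv before the send makes the exchange safe. */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            double sendbuf[1000] = {0.0}, recvbuf[1000];
            int rank, size;
            MPI_Request req;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            if (size >= 2 && rank < 2) {      /* only ranks 0 and 1 take part */
                int partner = 1 - rank;
                MPI_Irecv(recvbuf, 1000, MPI_DOUBLE, partner, 0,
                          MPI_COMM_WORLD, &req);        /* post receive first */
                MPI_Send(sendbuf, 1000, MPI_DOUBLE, partner, 0,
                         MPI_COMM_WORLD);
                MPI_Wait(&req, MPI_STATUS_IGNORE);      /* complete the receive */
                printf("Rank %d received data from rank %d\n", rank, partner);
            }
            MPI_Finalize();
            return 0;
        }

Because each of the first two ranks posts its receive before sending, neither blocks waiting for the other. (The blocking MPI_Sendrecv offers another safe alternative for such pairwise exchanges.)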

Exercises - please download (zip or tar.gz) and unpack the MPI exercise archives:

Your environment for the exercises:

To do the hands-on exercises of this course, you need a computer with a C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (in the case of Fortran, the mpi_f08 module is required).

Please download the TEST archive file TEST.tar.gz or TEST.zip.
After unpacking the archive via
        tar -xvzf TEST.tar.gz
or     unzip TEST.zip
please verify your MPI and OpenMP installation with the tests described in TEST/README.txt within the archive.
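
If you want a quick first check before running the official tests, a minimal program along the following lines (a sketch of ours; the authoritative tests are those in TEST/README.txt) verifies that compiler, MPI library, and launcher cooperate:

        /* Minimal MPI installation check (our sketch; the official tests
           are in TEST/README.txt). Compile with, e.g., "mpicc hello.c -o hello"
           and run with, e.g., "mpirun -np 4 ./hello". */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char *argv[])
        {
            int rank, size;
            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
            MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count */
            printf("Hello from rank %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }

Each started process should print exactly one line with its own rank; if all ranks report, proceed to the tests from the archive.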


Standards and API definitions:


Recordings: