Self-Study Materials

MPI course material (password-free, current version, without recordings)


Using the material on this web page requires you to accept our terms and conditions of use.


Slides (please download; internal hyperlinks work best with Acrobat):

  • MPI course with C, Fortran and Python (mpi4py) language bindings: mpi_3.1_rab.pdf
    • This file is always updated to the newest version.
    • The pdf file contains internal cross-references (see the blue boxes on the 2nd page).
    • It is strongly recommended to use Acrobat, because some other viewers may jump to the wrong page (for example, one page later).
  • Same content, but with the animated slides expanded into one slide per animation step: mpi_3.1_rab-animated.pdf
    • Our apologies: due to a bug in the PPsplit technology used, hyperlinks to animated slides do not work  )-:
      • These include, e.g., the hyperlinks to the quiz solutions, as well as the back-links from solution slides to animated exercise slides.
    • All links to non-animated slides seem to work correctly (with Acrobat)  (-:
      • These include, e.g., the hyperlinks from slides 2-5 to all the course chapters and most of the hyperlinks to the solution slides.

Content:

  1. MPI Overview

    • one program on several processors
    • work and data distribution
  2. Process model and language bindings
     
    • starting several MPI processes
  3. Messages and point-to-point communication
  4. Nonblocking communication
     
    • to avoid idle time, deadlocks and serializations
  5. The New Fortran Module mpi_f08
  6. Collective communication
     
    • (1) e.g., broadcast slides
    • (2) e.g., nonblocking collectives, neighborhood communication
  7. Error handling
  8. Groups & Communicators, Environment Management
     
    • (1) MPI_Comm_split, intra- & inter-communicators
    • (2) Re-numbering on a cluster, collective communication on inter-communicators, info object, naming & attribute caching, implementation information
  9. Virtual topologies
     
    • (1) A multi-dimensional process naming scheme
    • (2) Neighborhood communication + MPI_BOTTOM
    • (3) Optimization through reordering
  10. One-sided Communication
  11. Shared Memory One-sided Communication
     
    • (1) MPI_Comm_split_type & MPI_Win_allocate_shared
            Hybrid MPI and MPI-3 shared memory programming
    • (2) MPI memory models and synchronization rules
  12. Derived datatypes
     
    • (1) transfer any combination of typed data
    • (2) advanced features, alignment, resizing
  13. Parallel File I/O
     
    • (1) Writing and reading a file in parallel
    • (2) Fileviews
    • (3) Shared Filepointers, Collective I/O
  14. MPI and Threads – e.g., hybrid MPI and OpenMP
  15. Probe, Persistent Requests, Cancel
  16. Process Creation and Management
     
    • Spawning additional processes
    • Singleton MPI_INIT
    • Connecting two independent sets of MPI processes
  17. Other MPI features
  18. Best practice
     
    • Parallelization strategies (e.g. Foster’s Design Methodology)
    • Performance considerations
    • Pitfalls
  19. Summary
  20. Appendix 

Exercises - please download (zip or tar.gz) and expand the MPI exercise archives:

Your environment for the exercises:

To do the hands-on exercises in this course, you need a computer with a C/C++ or Fortran compiler and a corresponding, up-to-date MPI library (in the case of Fortran, the mpi_f08 module is required).
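A quick way to check whether an MPI toolchain is already on your PATH is the sketch below. Note that the tool names mpicc, mpifort and mpirun are common defaults, not guaranteed; your MPI installation may use differently named wrappers or launchers.

```python
# Check for the usual MPI compiler wrappers and launcher on PATH.
# The tool names below are common defaults; your installation may differ.
import shutil

for tool in ("mpicc", "mpifort", "mpirun"):
    path = shutil.which(tool)
    print(f"{tool}: {path if path else 'not found'}")
```

If any tool is reported as "not found", install or load an MPI environment (e.g. via your system's module command) before running the tests below.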

Please download the TEST archive file TEST.tar.gz or TEST.zip.
After uncompressing the archive via
        tar -xvzf TEST.tar.gz
or     unzip TEST.zip
please verify your MPI and OpenMP installation with the tests described in TEST/README.txt inside the archive (or here).


Standards and API definitions:


Recordings: