Julia for High-Performance Computing

This course takes place ON-SITE in Stuttgart, Germany. Please refer to our COVID-19 rules.

This 4-day course introduces the Julia programming language as a modern approach to high-performance numerical computing. Starting from the foundations and characteristic language features (e.g. multiple dispatch, type inference), the course will discuss and demonstrate how Julia manages to deliver high performance while also being high-level and dynamic. It will teach participants the language concepts necessary to achieve high performance in Julia and to avoid common pitfalls. The course will cover "serial" and parallel computing (multithreading, distributed computing, MPI) and will provide insights into how to readily offload computations to NVIDIA GPUs.

Hands-on sessions on each day will allow the participants to interactively explore the language and immediately test and apply the discussed concepts.

This course is especially appropriate for you if

  • you have HPC experience and are interested in Julia, or
  • you have basic Julia knowledge and want to dive into the HPC aspects of the language.

This course is organized by HLRS in cooperation with the Paderborn Center for Parallel Computing (PC2) and the NHR alliance.

Location

HLRS, University of Stuttgart
Nobelstraße 19
70569 Stuttgart, Germany
Room 0.439 / Rühle Saal
Location and nearby accommodations

Start date

Sep 20, 2022
08:30

End date

Sep 23, 2022
13:00

Language

English

Entry level

Intermediate

Course subject areas

Parallel Programming

Programming Languages for Scientific Computing


Prerequisites and content levels

Prerequisites
  • Basic programming experience in any language.
  • Familiarity with UNIX/Linux (esp. basic terminal usage) is recommended.
  • Elemental Julia and/or HPC knowledge is a plus.

(If you want to follow along on your personal laptop, make sure to have Julia 1.8 and Jupyter installed and working.)

Content levels
  • Basic: 7 hours
  • Intermediate: 11 hours
  • Advanced: 3 hours

Learn more about course curricula and content levels.

Instructors

Dr. Carsten Bauer (Paderborn Center for Parallel Computing)
Dr. Michael Schlottke-Lakemper (HLRS)

Learning outcomes

After this course, participants will: 

  • have a basic understanding of Julia's fundamental concepts and compilation process
  • understand how to write efficient Julia code
  • know how to do performance benchmarking
  • avoid common performance pitfalls
  • be able to parallelize Julia code using multithreading, distributed computing, and GPUs
  • be familiar with common development workflows and package management

Agenda

Local registration starts on the first course day at 8:30.

1st course day (8:30 - 16:30)

  • Overview of Julia for HPC
  • Julia's type system
  • Multiple dispatch
  • Compilation pipeline
  • Code specialization
  • Generic programming
  • Evening: social event with city tour
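To give a flavor of the day-1 topics (type system, multiple dispatch, generic programming), here is a minimal, hypothetical sketch not taken from the course material: the method that runs is selected based on the concrete types of the arguments, which lets generic code stay fast.

```julia
# A small abstract type hierarchy ...
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Square <: Shape
    a::Float64
end

# ... with one `area` method per concrete type (multiple dispatch):
area(c::Circle) = π * c.r^2
area(s::Square) = s.a^2

# Generic code that works for any Shape, yet compiles to specialized code:
total_area(shapes) = sum(area, shapes)

total_area([Circle(1.0), Square(2.0)])  # ≈ π + 4
```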

2nd course day (9:00 - 16:30)

  • Microbenchmarking
  • Interoperability (with C and Python)
  • Performance programming
    • Type instabilities
    • Broadcasting / syntactic loop fusion
    • Linear memory layout
  • Profiling (statistical, instrumented, and hardware-level)
  • Workflow tips (VSCode, REPL, Revise.jl)
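One of the day-2 performance pitfalls, a type instability, can be sketched as follows (a hypothetical minimal example, assuming nothing beyond base Julia):

```julia
# Type-unstable: `x` starts as an Int but becomes a Float64 inside the
# loop, which prevents the compiler from emitting specialized code.
function unstable_sum(n)
    x = 0            # Int
    for i in 1:n
        x += i / 2   # `/` yields Float64, so `x` changes type
    end
    return x
end

# Type-stable variant: initialize `x` with the type the loop produces.
function stable_sum(n)
    x = 0.0
    for i in 1:n
        x += i / 2
    end
    return x
end
```

Running `@code_warntype unstable_sum(10)` in the REPL highlights the problematic `Union{Int64, Float64}` annotation, which is one of the inspection techniques the course covers.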

3rd course day (9:00 - 16:30)

  • Multithreading
    • Tasks and threads
    • Composable multithreading
  • Multiprocessing / distributed computing
    • Distributed standard library
    • MPI.jl
  • Package management
    • Reproducible Julia environments
    • Binary dependencies (JLLs)
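As a taste of the day-3 multithreading material, here is a minimal, hypothetical sketch: the `@threads` macro from the `Base.Threads` standard library splits the iterations of a loop across the available threads (start Julia with `julia -t N` to actually use N threads; with a single thread the code still runs correctly).

```julia
using Base.Threads

# Parallelize a loop over independent iterations with @threads.
function threaded_square!(out, x)
    @threads for i in eachindex(x)
        out[i] = x[i]^2
    end
    return out
end

x = collect(1.0:4.0)
threaded_square!(similar(x), x)  # [1.0, 4.0, 9.0, 16.0]
```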

4th course day (9:00 - 13:00)

  • Computing on NVIDIA GPUs
    • High-level abstractions
    • Custom CUDA kernels
    • Vendor libraries
  • Using Julia for HPC in practice: an experience report
  • Final Q&A
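The day-4 GPU topics can be sketched with the CUDA.jl package (an assumption on our part; the sketch requires an NVIDIA GPU and is not runnable without one). Broadcasting on GPU arrays illustrates the high-level abstractions, while the explicit kernel illustrates the custom-kernel part:

```julia
using CUDA

a = CUDA.rand(1000)
b = CUDA.rand(1000)

# High-level abstraction: broadcasting on CuArrays fuses the whole
# expression into a single GPU kernel.
c = a .+ 2 .* b

# Custom CUDA kernel computing the same thing element-wise:
function axpy_kernel!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + 2 * b[i]
    end
    return nothing
end

@cuda threads=256 blocks=cld(length(c), 256) axpy_kernel!(c, a, b)
```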

Course material

Slides and Jupyter notebooks will be available for all participants.
The material can be found at https://github.com/carstenbauer/JuliaHLRS22

On-site course & COVID rules

Besides the content of the training itself, another important aspect of this event is the scientific exchange among the participants. We try to facilitate such communication by

  • offering common coffee and lunch breaks and
  • working together in groups of two during the exercises (if desired by the individual participants and permitted by the COVID-19 rules).

For your safety, on all days we will only admit participants who are fully vaccinated, fully recovered, or have tested negative for COVID-19. You must wear a medical face mask or an FFP2 mask everywhere on site. If a distance of 1.5 m cannot be guaranteed indoors, e.g. when working in pairs during the exercises, an FFP2 mask is required. Details can be found on the registration page.

We strongly recommend choosing travel options and hotels that can be cancelled (even close to the event), because we might be forced to deliver the course online.

Registration information

Register via the button at the top of this page.
If the course is full, we encourage you to register for the waiting list; places might become available.

Fees

Students without Diploma/Master: none.
Members of German universities and public research institutes: none.
Members of universities and public research institutes within EU or PRACE member countries: none. 
Members of other universities and public research institutes: 300 EUR.
Others: 780 EUR.

Our course fee includes coffee breaks (in classroom courses only).

Contact

Lucienne Dettki, phone 0711 685 63894, dettki(at)hlrs.de
Michael Schlottke-Lakemper, phone 0711 685 87162, m.schlottke-lakemper(at)hlrs.de
training(at)hlrs.de

PRACE PATC and bwHPC

HLRS is part of the Gauss Centre for Supercomputing (GCS), which is one of the six PRACE Advanced Training Centres (PATCs) that started in Feb. 2012.

This course is a PATC course, see also the PRACE Training Portal and Events. For participants from public research institutions in PRACE countries, the course fee is sponsored through the PRACE PATC program.

HLRS is also member of the Baden-Württemberg initiative bwHPC.

This course is also provided within the framework of the bwHPC training program.

Further courses

See the training overview and the Supercomputing Academy pages.