AMD Instinct™ GPU Training

All communication will be done through Zoom, Slack and email.

This course gives a deep dive into the AMD Instinct™ GPU architecture and its ROCm™ ecosystem, including the tools to develop or port HPC or AI applications to AMD GPUs. Participants will be introduced to the programming models for the MI200 series GPUs and the MI300A APU. The new unified memory programming model makes writing HPC applications much easier across a wide range of GPU programming models. We will cover pragma-based languages such as OpenMP, the basic GPU programming language HIP, and performance-portable frameworks such as Kokkos and RAJA. In addition, there will be presentations on other important topics such as GPU-aware MPI and affinity. The AMD tool suite, including the debugger rocgdb and the profiling tools rocprof, omnitrace, and omniperf, will also be covered. A short introduction will be given to the AMD machine learning software stack, including PyTorch and TensorFlow, and how they have been used in HPC.

After this course, participants will

  • have learned about the many GPU programming languages for AMD GPUs,
  • understand how to achieve performance scaling,
  • have gained knowledge of the AMD programming tools,
  • have received an introduction to the AMD machine learning software, and
  • know about profiling and debugging.

Location

Online course
Organizer: HLRS, University of Stuttgart, Germany

Start date

Apr 22, 2024
13:00

End date

Apr 25, 2024
17:00

Language

English

Entry level

Intermediate

Course subject areas

Data in HPC / Deep Learning / Machine Learning

Parallel Programming

Performance Optimization & Debugging

Topics

Accelerators

Code Optimization

GPU Programming

Machine Learning

MPI+OpenMP

OpenMP


Prerequisites and content levels

Prerequisites

Some knowledge in GPU and/or HPC programming. Participants should have an application developer's general knowledge of computer hardware, operating systems, and at least one HPC programming language.

See also the suggested prereading below (resources and public videos).

Content levels

Basic: 1 hour
Intermediate: 7 hours
Advanced: 6 hours

Learn more about course curricula and content levels.

Resources
  • Book on HIP programming and porting from CUDA
    • Accelerated Computing with HIP, Yifan Sun, Trinayan Baruah, David R. Kaeli,
      ISBN-13: 979-8218107444
  • Book on OpenMP GPU programming
    • Programming Your GPU with OpenMP, Tom Deakin and Tim Mattson,
      ISBN-13: 978-0262547536
  • Book on parallel and high performance computing topics
    • Parallel and High Performance Computing, Manning Publications, Robert Robey and Yuliana Zamora,
      ISBN-13: 978-1617296468
  • ENCCS resources
  • AMD Lab Notes series on GPUOpen.com

    • Finite difference method - Laplacian part 1
    • Finite difference method - Laplacian part 2
    • Finite difference method - Laplacian part 3
    • Finite difference method - Laplacian part 4
    • AMD matrix cores
    • Introduction to profiling tools for AMD hardware
    • AMD ROCm™ installation
    • AMD Instinct™ MI200 GPU memory space overview 
    • Register pressure in AMD CDNA2™ GPUs
    • GPU-Aware MPI with ROCm
    • Creating a PyTorch/TensorFlow Code Environment on AMD GPUs
    • Jacobi Solver with HIP and OpenMP offloading
    • Sparse matrix vector multiplication - part 1
  • Quick start guides at Oak Ridge National Laboratory

Instructors

Bob Robey, AMD Global Training Lead for Data Center GPUs, and other AMD guest lecturers.

Agenda (subject to change)

All times are CEST.
Day 1 (Mon) - AMD Programming Model, OpenMP and MPI

12:45 - 13:00 Drop in to Zoom

  • 13:00 HLRS Intro
  • 13:10 AMD Presentation Roadmap and Introduction to the System for Exercises
  • 13:30 Programming Model for MI200 and MI300 series
  • 13:50 Programming Model Exercises
  • 14:00 Break
  • 14:10 Introduction to OpenMP® Offloading
  • 14:40 OpenMP® Exercises
  • 14:55 Break
  • 15:10 Real-World OpenMP® Language Constructs
  • 15:45 OpenMP® Language Constructs Exercises
  • 16:00 Advanced OpenMP® - zero-copy, debugging and optimization
  • 16:30 Advanced OpenMP® Exercises
  • 16:50 Wrapup
Day 2 (Tue) - MPI and HIP and interoperability

12:45 - 13:00 Drop in to Zoom

  • 13:00 GPU-Aware MPI on AMD GPUs
  • 13:30 MPI Exercises
  • 13:40 HIP and ROCm
  • 14:20 HIP Exercises
  • 14:30 Break
  • 14:40 Porting code to HIP
  • 15:00 Porting Exercises
  • 15:15 Optimizing HIP Code
  • 15:45 HIP Optimization Exercises
  • 16:00 Break
  • 16:15 OpenMP® and HIP Interoperability
  • 16:40 Interoperability Exercises
  • 16:55 Wrapup
Day 3 (Wed) - Performance Portable languages, AMD Matrix Cores, Affinity and Machine Learning

12:45 - 13:00 Drop in to Zoom

  • 13:00 Performance Portability Frameworks; Intro to Kokkos
  • 13:30 Kokkos Exercises
  • 13:50 AMD Matrix Cores
  • 14:15 Break
  • 14:30 Affinity - Process Placement, Order and Binding
  • 15:15 Affinity Exercises
  • 15:45 Break
  • 16:00 ML/AI on AMD GPUs
  • 16:30 ML/AI on AMD GPUs Exercises
  • 16:55 Wrapup
Day 4 (Thu) - AMD Debuggers and Profiling Tools

12:45 - 13:00 Drop in to Zoom

  • 13:00 Debugging with rocgdb
  • 13:40 rocgdb Exercises
  • 14:00 Break
  • 14:15 GPU Profiling - Performance Timelines
  • 14:55 Timeline Profiling Exercises
  • 15:15 Break
  • 15:30 Kernel Profiling with Omniperf
  • 16:15 Kernel Profiling Exercises
  • 16:45 Additional Training Resources
  • 16:55 Wrapup

Registration information

Register via the button at the top of this page.
We encourage you to register for the waiting list if the course is full. Places may become available.

Registration deadline: April 4, 2024.

Fees

This course is free of charge.

Contact

Khatuna Kakhiani, phone: 0711 685 65796, kakhiani(at)hlrs.de
training@hlrs.de

HLRS Training Collaborations in HPC

HLRS is part of the Gauss Centre for Supercomputing (GCS), together with JSC in Jülich and LRZ in Garching near Munich. EuroCC@GCS is the German National Competence Centre (NCC) for High-Performance Computing. HLRS is also a member of the Baden-Württemberg initiative bwHPC.

Further courses

See the training overview and the Supercomputing Academy pages.
