Introduction to SYCL2020


Most current HPC systems are heterogeneous and use accelerators. oneAPI/SYCL is a standardized, portable programming model designed for heterogeneous computing. This course provides an introduction to Intel's oneAPI implementation, Data Parallel C++ (DPC++), which is based on SYCL. SYCL code can run on any (Intel) x86 CPU and on Intel accelerators, but also on GPUs from other vendors. The course introduces the SYCL programming model as well as parallelization and optimization strategies.

Location

Flexible online course: Combination of self-study and two Q&A sessions (via the Code Reckons platform)
Organizer: HLRS, University of Stuttgart, Germany

Start date

Aug 18, 2025

End date

Sep 26, 2025

Language

English

Entry level

Basic

Course subject areas

Hardware Accelerators

Parallel Programming

Topics

C/C++

Code Optimization

GPU Programming


Prerequisites and content levels

Knowledge of C++11 or later is recommended; knowing C++17 makes SYCL 2020 programming much easier.

Content levels
  • Basic: 8.5 hours
  • Intermediate: 7 hours
  • Advanced: 7 hours

Learn more about course curricula and content levels.

Learning outcomes

After this course, participants will:

  • have an overview of DPC++/SYCL programming,
  • have an overview of common parallelization techniques,
  • be able to make an informed choice about execution places,
  • know about the different memory models offered,
  • have an overview of different optimization options.

Agenda

The self-paced course includes lessons and exercises on:

1: A quick introduction to SYCL

  • The SYCL Standard
  • The oneAPI Environment

2: Our First SYCL Program

  • Getting Started
  • Data Transfer
  • Starting the Computation
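
To give a flavor of what lesson 2 builds up to, here is a minimal sketch of a first SYCL 2020 program, written as a simple vector addition using buffers. The example program itself is an illustrative assumption and may differ from the course's own exercises; only standard SYCL 2020 API calls are used.

    #include <sycl/sycl.hpp>
    #include <vector>
    #include <iostream>

    int main() {
        constexpr size_t n = 1024;
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        sycl::queue q;  // default device (CPU or GPU, depending on the system)

        {   // buffers manage the host data for the duration of this scope
            sycl::buffer<float> buf_a{a.data(), sycl::range<1>{n}};
            sycl::buffer<float> buf_b{b.data(), sycl::range<1>{n}};
            sycl::buffer<float> buf_c{c.data(), sycl::range<1>{n}};

            q.submit([&](sycl::handler &cgh) {
                sycl::accessor va{buf_a, cgh, sycl::read_only};
                sycl::accessor vb{buf_b, cgh, sycl::read_only};
                sycl::accessor vc{buf_c, cgh, sycl::write_only};
                cgh.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
                    vc[i] = va[i] + vb[i];
                });
            });
        }   // leaving the scope copies the result back into c

        std::cout << "c[0] = " << c[0] << '\n';  // expected: 3
    }

With the oneAPI environment set up (lesson 1), such a program is typically compiled with the DPC++ compiler, e.g. icpx -fsycl.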

3: Managing SYCL Devices

  • Host and Device Codes
  • Device Selection
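
A sketch of the kind of device selection covered in lesson 3, using the SYCL 2020 selector variables and device queries; the exact devices listed will of course depend on your system.

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        // Pick a device class explicitly; gpu_selector_v throws if no GPU is available.
        sycl::queue gpu_q{sycl::gpu_selector_v};
        std::cout << "GPU queue runs on: "
                  << gpu_q.get_device().get_info<sycl::info::device::name>() << '\n';

        // Or enumerate everything the runtime can see and choose programmatically.
        for (const auto &dev : sycl::device::get_devices())
            std::cout << dev.get_info<sycl::info::device::name>() << '\n';
    }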

4: Shared Memory, SYCL Buffers and Accessors

  • Unified Shared Memory
  • Buffers
  • Accessors
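
As a contrast to the buffer/accessor example above, lesson 4 also covers unified shared memory (USM). A minimal sketch, assuming a device that supports shared allocations:

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        constexpr size_t n = 1024;
        sycl::queue q;

        // Unified shared memory: one pointer usable on both host and device.
        float *data = sycl::malloc_shared<float>(n, q);
        for (size_t i = 0; i < n; ++i) data[i] = float(i);

        q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
            data[i] *= 2.0f;
        }).wait();  // with USM, synchronization is explicit

        std::cout << data[n - 1] << '\n';
        sycl::free(data, q);
    }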

5: Kernel Management

  • The SYCL Task Graph
  • SYCL Actions
  • Dependency Management
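
A small illustration of the task-graph and dependency ideas in lesson 5: two actions submitted to the same queue, with the second explicitly made dependent on the first via its event. The concrete kernels are placeholders.

    #include <sycl/sycl.hpp>

    int main() {
        constexpr size_t n = 1024;
        sycl::queue q;
        float *x = sycl::malloc_device<float>(n, q);

        // First action: initialize the data.
        sycl::event init = q.parallel_for(sycl::range<1>{n},
            [=](sycl::id<1> i) { x[i] = 1.0f; });

        // Second action: depends on the first, forming a small task graph.
        sycl::event scale = q.submit([&](sycl::handler &cgh) {
            cgh.depends_on(init);
            cgh.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) { x[i] *= 3.0f; });
        });

        scale.wait();
        sycl::free(x, q);
    }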

6: SYCL Data Parallelism

  • Implicit Data-Parallel Kernel
  • Explicit Data-Parallel Kernel
  • Hierarchical Kernel
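
Lesson 6 distinguishes the kernel forms SYCL 2020 offers. A rough sketch of the first two, assuming a shared-memory allocation as above: a basic range kernel, where the runtime chooses the work-group size, and an nd_range kernel, where global and local sizes are fixed by the programmer.

    #include <sycl/sycl.hpp>

    int main() {
        constexpr size_t n = 1 << 20;
        sycl::queue q;
        float *v = sycl::malloc_shared<float>(n, q);

        // Implicit data-parallel kernel: only the global range is given.
        q.parallel_for(sycl::range<1>{n},
            [=](sycl::id<1> i) { v[i] = 1.0f; }).wait();

        // Explicit data-parallel kernel: nd_range fixes global and local sizes,
        // giving access to work-group and sub-group machinery.
        q.parallel_for(sycl::nd_range<1>{sycl::range<1>{n}, sycl::range<1>{256}},
            [=](sycl::nd_item<1> it) { v[it.get_global_id(0)] += 1.0f; }).wait();

        sycl::free(v, q);
    }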

7: Kernel Optimizations

  • Local Memory
  • Collective Operations
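
A sketch combining the two optimization topics of lesson 7: a local accessor (on-chip work-group memory) and a group collective (reduce_over_group) that sums the values of each work-group. The per-group partial-sum example is an illustrative assumption, not the course's own exercise.

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        constexpr size_t n = 1 << 16, wg = 256;
        sycl::queue q;
        float *in  = sycl::malloc_shared<float>(n, q);
        float *out = sycl::malloc_shared<float>(n / wg, q);
        for (size_t i = 0; i < n; ++i) in[i] = 1.0f;

        q.submit([&](sycl::handler &cgh) {
            // Local (on-chip) memory shared by one work-group.
            sycl::local_accessor<float, 1> tile{sycl::range<1>{wg}, cgh};
            cgh.parallel_for(
                sycl::nd_range<1>{sycl::range<1>{n}, sycl::range<1>{wg}},
                [=](sycl::nd_item<1> it) {
                    tile[it.get_local_id(0)] = in[it.get_global_id(0)];
                    // Collective operation: every work-item contributes to one group sum.
                    float sum = sycl::reduce_over_group(it.get_group(),
                                                        tile[it.get_local_id(0)],
                                                        sycl::plus<float>());
                    if (it.get_local_id(0) == 0)
                        out[it.get_group(0)] = sum;
                });
        }).wait();

        std::cout << out[0] << '\n';  // expected: 256 (one work-group's worth of 1.0f)
        sycl::free(in, q); sycl::free(out, q);
    }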

8: Parallelization Strategies

  • Structured Parallel Programming
  • MAP
  • REDUCE
  • STENCIL
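
The MAP and REDUCE patterns of lesson 8 map directly onto SYCL 2020 constructs. A minimal sketch, assuming USM allocations; the reduction object lets the runtime choose an efficient reduction strategy.

    #include <sycl/sycl.hpp>
    #include <iostream>

    int main() {
        constexpr size_t n = 1 << 20;
        sycl::queue q;
        float *v   = sycl::malloc_shared<float>(n, q);
        float *sum = sycl::malloc_shared<float>(1, q);
        *sum = 0.0f;

        // MAP: apply the same operation independently to every element.
        q.parallel_for(sycl::range<1>{n},
            [=](sycl::id<1> i) { v[i] = 0.5f; }).wait();

        // REDUCE: combine all elements with one operator via a reduction object.
        q.submit([&](sycl::handler &cgh) {
            auto red = sycl::reduction(sum, sycl::plus<float>());
            cgh.parallel_for(sycl::range<1>{n}, red,
                [=](sycl::id<1> i, auto &acc) { acc += v[i]; });
        }).wait();

        std::cout << *sum << '\n';  // expected: n * 0.5
        sycl::free(v, q); sycl::free(sum, q);
    }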

9: Algorithms/Architecture Adequation

  • Comparisons between GPU/CPU
  • Coarse-grained parallelism
  • Fine-grained parallelism
  • Memories and Caches

Each lesson concludes with a short quiz so you can check your progress in SYCL. In addition, two Q&A sessions will be offered: the first covers beginner topics, the second more advanced ones. The Q&A sessions are scheduled for August 27 at 14:00 and September 16 at 14:00 via Zoom.

Flexible learning

This course offers flexible learning: you can work through the online course materials and use the cluster resources at your own pace. Two online Q&A sessions are planned to discuss the learning modules and answer your questions. We also provide a communication channel that enables you to communicate with your peers, get help, and share your experiences.

Learning Duration

The course is divided into 9 learning units of about 2.5 hours each, with quizzes and exercises. Participants can work through the individual learning units on their own schedule. In addition, online Q&A sessions are offered on fixed dates.

Confirmation of Attendance

The High-Performance Computing Center Stuttgart (HLRS) issues participants a confirmation of attendance if they can prove that they have finished all lessons.

Technical Requirements
  • Stable Internet connection so you can access the learning materials.
  • Access to a video conferencing tool with a microphone for participation in the Q&A sessions.

Registration information

Register via the button at the top of this page (will be available soon).

This course is offered in cooperation with Intel® and the self-paced learning materials are provided at https://codereckons.com/. By registering you explicitly consent that the necessary data (your name and e-mail address) is forwarded to the sponsor Intel® for the purpose of opening an account at https://codereckons.com/.

Registration closes on August 10, 2025 or when course capacity is reached.

Fees

This course is free of charge.

Contact

Lucas Jordan, phone +49 711 685-87206, training(at)hlrs.de

HLRS Training Collaborations in HPC

HLRS is part of the Gauss Centre for Supercomputing (GCS), together with JSC in Jülich and LRZ in Garching near Munich. EuroCC@GCS is the German National Competence Centre (NCC) for High-Performance Computing. HLRS is also a member of the Baden-Württemberg initiative bwHPC.

Further courses

See the training overview and the Supercomputing Academy pages.
