Efficient Distributed GPU Programming for Exascale

Monday, June 22, 2026 9:00 AM to 6:00 PM · 9 hr. (Europe/Berlin)
Hall X1 - 1st Floor
Tutorial
AI Applications powered by HPC Technologies · Application Workflows for Discovery · Development of HPC Skills · Extreme-scale Systems · Parallel Programming Languages

Information

Over the past decade, GPUs have become ubiquitous in HPC installations around the world, delivering the majority of the performance of some of the largest supercomputers and steadily increasing the available compute capacity. Now, four Exascale systems are deployed (Frontier, Aurora, El Capitan, and JUPITER), all using GPUs as the core computing devices of this era of HPC.
To take advantage of these GPU-accelerated systems with tens of thousands of devices, application developers need the proper skills and tools to understand, manage, and optimize distributed GPU applications.
In this tutorial, participants will learn techniques for efficiently programming large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained in detail, and advanced tuning techniques as well as complementary programming models like NCCL and NVSHMEM are presented (see the sketch below). Analysis tools are demonstrated and used to motivate and implement performance optimizations. The tutorial teaches fundamental concepts that apply to GPU-accelerated systems of any vendor, using the NVIDIA platform as an example. It combines lectures and hands-on exercises, using the JUPITER system for interactive learning and discovery.
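To give a flavor of the kind of code the tutorial covers, here is a minimal sketch of multi-GPU communication with CUDA-aware MPI, where GPU device pointers are passed directly to MPI calls. It is illustrative only, not course material; it assumes an MPI library built with CUDA support (e.g. Open MPI over UCX), and the buffer size and ring exchange pattern are arbitrary choices for the example.

// Minimal sketch: ring exchange of GPU-resident buffers with CUDA-aware MPI.
// Assumption: the MPI library accepts device pointers directly (e.g. Open MPI
// with UCX/CUDA support); otherwise buffers must be staged through host memory.
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Map one GPU per MPI rank, a common convention on multi-GPU nodes.
    int num_devices = 0;
    cudaGetDeviceCount(&num_devices);
    cudaSetDevice(rank % num_devices);

    const int n = 1 << 20;  // illustrative buffer size (1 Mi doubles)
    double *d_send = nullptr, *d_recv = nullptr;
    cudaMalloc(&d_send, n * sizeof(double));
    cudaMalloc(&d_recv, n * sizeof(double));
    cudaMemset(d_send, 0, n * sizeof(double));

    // With CUDA-aware MPI, device buffers go straight into the MPI call;
    // no explicit copy to host memory is required.
    int next = (rank + 1) % size;
    int prev = (rank + size - 1) % size;
    MPI_Sendrecv(d_send, n, MPI_DOUBLE, next, 0,
                 d_recv, n, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}

In the tutorial itself, patterns like this are the starting point; NCCL and NVSHMEM are then presented as complementary APIs for the same kind of GPU-to-GPU communication.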
Format
on-site
Targeted Audience
Scientific software developers, scientists, and students aiming to scale their applications efficiently across many GPUs. Attendees interested in identifying, understanding, and resolving performance bottlenecks in multi-GPU applications. Established researchers familiar with multi-GPU applications who want to learn new techniques and use the latest software and hardware features.
Beginner Level: 20%
Intermediate Level: 80%
Prerequisites
We strive to make the tutorial as accessible as possible. However, as this is an intermediate-level tutorial, we expect basic knowledge of distributed computing with MPI, CUDA/HIP C++, and general programming in C/C++. Additionally, experience using HPC systems (Linux shell, make, Slurm) is needed. Participants are expected to bring a laptop with which they can access the HPC system. Access will be facilitated via individual accounts using the Jupyter platform.