

Efficient Distributed GPU Programming for Exascale
Friday, June 13, 2025 2:00 PM to 6:00 PM · 4 hr. (Europe/Berlin)
Hall Y7 - 2nd floor
Tutorial
AI Applications powered by HPC Technologies · Application Workflows for Discovery · Development of HPC Skills · Extreme-scale Systems · Parallel Programming Languages
Information
Over the past decade, GPUs became ubiquitous in HPC installations around the world, delivering the majority of performance of some of the largest supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in the recently deployed and upcoming Pre-Exascale and Exascale systems (JUPITER, LUMI, Leonardo; El Capitan, Frontier, Aurora): GPUs are chosen as the core computing devices for this era of HPC.
To take advantage of these GPU-accelerated systems with tens of thousands of devices, application developers need the proper skills and tools to understand, manage, and optimize distributed GPU applications.
In this tutorial, participants will learn techniques to efficiently program large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained in detail, and advanced tuning techniques and complementing programming models like NCCL and NVSHMEM are outlined. The tutorial teaches fundamental concepts that apply to GPU-accelerated systems in general, taking the NVIDIA platform as an example. It is a combination of lectures and hands-on exercises, using the JUPITER system for interactive learning and discovery.
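As a flavor of the kind of multi-GPU MPI programming covered, the following is a minimal sketch (not taken from the tutorial materials) of a ring exchange of GPU-resident buffers, assuming one GPU per MPI rank and a CUDA-aware MPI implementation that accepts device pointers directly; the buffer size and neighbor logic are purely illustrative.

// Minimal sketch: CUDA-aware MPI ring exchange of device buffers.
// Assumes one GPU per rank and an MPI build that accepts device pointers.
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Map each rank to a GPU (typical one-GPU-per-rank setup).
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    cudaSetDevice(rank % ndev);

    const int N = 1 << 20;
    double *d_send, *d_recv;
    cudaMalloc(&d_send, N * sizeof(double));
    cudaMalloc(&d_recv, N * sizeof(double));
    cudaMemset(d_send, 0, N * sizeof(double));

    // Ring exchange: device pointers are passed straight to MPI,
    // avoiding explicit staging through host memory.
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    MPI_Sendrecv(d_send, N, MPI_DOUBLE, right, 0,
                 d_recv, N, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    if (rank == 0)
        printf("Ring exchange of %d doubles completed on %d ranks\n", N, size);

    cudaFree(d_send);
    cudaFree(d_recv);
    MPI_Finalize();
    return 0;
}

The tutorial goes well beyond this baseline, covering how to overlap such communication with computation and when libraries like NCCL or NVSHMEM are preferable to plain MPI.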
Format
On Site
Targeted Audience
Scientific software developers, scientists, and students aiming to scale their applications efficiently across many GPUs.
Attendees interested in identifying, understanding, and resolving performance bottlenecks in multi-GPU applications.
Established researchers familiar with multi-GPU applications who want to learn new techniques and use the latest software and hardware features.
Beginner Level: 20%
Intermediate Level: 80%
Advanced Level: 30%





