

Mastering Tasking with OpenMP 6.0
Friday, June 13, 2025 2:00 PM to 6:00 PM · 4 hr. (Europe/Berlin)
Hall Y12 - 2nd floor
Tutorial
Compiler and Tools for Parallel Programming · Optimizing for Energy and Performance · Parallel Programming Languages
Information
OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. OpenMP has long offered tasking to support the creation of composable parallel software blocks and the parallelization of irregular algorithms. However, mastering the tasking concept of OpenMP requires a change in the way developers reason about the structure of their code and how to expose its parallelism. Our tutorial addresses this critical aspect by examining the tasking concept in detail and presenting patterns as solutions to many common problems.
With the recent release of OpenMP 6.0, tasking has been greatly extended. Most prominently, it now supports free-agent threads, which are threads that are not assigned to any team that is executing a parallel region, as well as task graphs for efficient replay. In this tutorial, we specifically present the new release.
We assume attendees understand basic parallelization concepts and know the fundamentals of OpenMP. We present the OpenMP tasking language features in detail and focus on performance aspects, such as introducing cut-off mechanisms, exploiting task dependencies, and preserving locality. All aspects are accompanied by extensive case studies.
Format
On Site
Targeted Audience
Our primary target is HPC programmers with some knowledge of OpenMP who want to implement efficient shared-memory code, particularly for irregular algorithms or composable parallel software components.
Intermediate Level
50%
Advanced Level
50%





