

The Future of Benchmarks in Supercomputing
Friday, June 26, 2026 2:00 PM to 6:00 PM · 4 hr. (Europe/Berlin)
Hall X11 - 1st Floor
Workshop
Community Engagement · Optimizing for Energy and Performance · Performance and Resource Modeling · Performance Measurement · Performance Tools and Simulators
Information
The goal of this workshop is to pose, engage, debate, and address the question: "How should the supercomputing community evolve performance benchmarks?" The workshop will be organized as presentations and panel discussions with audience participation, inviting active members of the Top500, HPCG, MLPerf, Semi Analysis, and similar communities, along with key personnel from industry, academia, and government. Together they will discuss the value of, need for, and interest in an evolved benchmark suite that is inclusive of emerging applications in HPC, AI, and data analytics and can guide future supercomputing system design and architecture.
This workshop is a follow-on to well-attended (>100 participants) sessions at SC23, SC24, ISC24, and ISC25 under the same name. The past workshops are archived at https://sites.google.com/view/thefutureofbenchmarks. Stalwarts from the community have presented their viewpoints, and a significant majority of participants continue to express a strong desire for deeper conversations on the following topics: (i) design of benchmarks (representative, cost-conscious, etc.), (ii) choice of metrics (energy efficiency, I/O inclusive, etc.), (iii) agility to evolve (problem size, newer kernels, mini-apps, containerized harness, etc.), and (iv) articulation of purpose (ranking list, co-design, marketing).
Organizers:
Format
on-site
Targeted Audience
The targeted audience for the workshop includes scientists, system architects, benchmarking teams, and vendors. The workshop especially encourages participation from those working with new workloads that combine modeling, simulation, data science, and artificial intelligence, as existing benchmarks are considered less representative of these emerging, complex end-to-end workflows.
Beginner Level
30%
Intermediate Level
50%
Advanced Level
20%



