Accelerated Hybrid Supercomputer Platform and Its Software Environment: at the Core of the HPC & AI convergence

Wednesday, May 15, 2024 2:05 PM to 2:25 PM · 20 min. (Europe/Berlin)
Hall 4 - Ground floor
Vendor Roadmaps
AI Applications powered by HPC Technologies · Heterogeneous System Architectures · Interconnects and Networks · Large Language Models and Generative AI in HPC · Sustainability and Energy Efficiency

Information

High-performance computing has largely influenced and enabled the core technologies now powering the Generative AI revolution. While each domain has unique requirements, accelerated hybrid supercomputers are designed to accommodate the convergence of HPC and the most demanding AI workloads. Massive scale is a defining characteristic of this convergence: the same exascale-class HPC system that performs quintillions of floating-point operations can also run, or closely match the size required for, a Generative AI training of a trillion-parameter Large Language Model. Scale matters and is multidimensional, spanning high-speed, low-latency networks, coherent memory management, and cooling technologies, up through collective-operation frameworks and the software environment, and finally to the applications themselves, whether traditional simulations or AI training. This is a race for speed and size, but in the end what matters most are the substantial efficiency gains achievable on a converged HPC-AI supercomputer such as the BullSequana XH3000.
Format
On-site · On Demand