From Parallel File Systems to Data Platforms: Advancing HPC in an AI-Driven, Sovereign Data World

Tuesday, June 23, 2026 3:20 PM to 3:40 PM · 20 min. (Europe/Berlin)
Hall H, Booth L01 - Ground Floor
HPC Solutions Forum
AI Factories · Heterogeneous System Architectures · Sovereignty in AI · Storage Technologies and Architectures

Information

HPC has long set the standard for extracting value from large-scale, distributed data through parallelism and open access models. As AI workloads move into the enterprise, those same principles are becoming essential beyond traditional HPC environments.

The challenge is that enterprise data remains fragmented across systems, sites, and clouds, while AI workflows require coordinated, high-performance access to all of it. The limiting factor in AI factories isn’t infrastructure—it’s the ability to access and govern distributed data as a unified, policy-aware namespace.

At the same time, data sovereignty requirements, particularly across Europe, are reshaping how data can be used. AI initiatives must now operate within strict boundaries around data residency, governance, and control. This creates a fundamental tension: how to enable global, parallel access to data without violating regional policies or introducing new silos.

This session explores how HPC architectures are evolving beyond individual technologies such as parallel file systems and burst buffers into a unified, standards-based data platform. Rather than relying on data replication or proprietary stacks, this approach applies familiar HPC capabilities—parallel I/O, global namespaces, and policy-driven data placement—across heterogeneous infrastructure using standard protocols.
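The global-namespace idea above can be illustrated with a minimal sketch: one logical path resolves to whichever backend (parallel file system, object store, cloud) actually holds the data. The paths, backend names, and sites below are hypothetical, and a real platform would resolve through standard protocols rather than an in-memory table.

```python
# Minimal sketch of a unified namespace: a logical path maps to the
# backend and site that hold the data, via longest-prefix matching.
# All entries here are illustrative, not a real deployment.
NAMESPACE = {
    "/projects/genomics/raw":      {"backend": "lustre", "site": "eu-central"},
    "/projects/genomics/datasets": {"backend": "s3",     "site": "eu-west"},
}

def resolve(logical_path: str) -> dict:
    """Return the backend location for a logical path."""
    match = max((p for p in NAMESPACE if logical_path.startswith(p)),
                key=len, default=None)
    if match is None:
        raise FileNotFoundError(logical_path)
    return NAMESPACE[match]
```

For example, `resolve("/projects/genomics/raw/sample1.fastq")` would return the Lustre entry, while paths under `/projects/genomics/datasets` resolve to the object store, so applications see one namespace regardless of where data physically lives.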

We will examine how this model enables enterprises to operationalize AI directly on existing data, without large-scale migration or duplication, while enforcing sovereignty and governance at the data level. Policies are applied to data itself, allowing it to be placed, accessed, and processed appropriately across regions without compromising compliance.
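Attaching policy to the data itself, as described above, can be sketched as a per-dataset policy object that placement and scheduling decisions consult. The region names and the two-field policy model are illustrative assumptions, not a description of any specific product.

```python
from dataclasses import dataclass

@dataclass
class DataPolicy:
    """Hypothetical per-dataset policy carried with the data."""
    residency_regions: set  # regions where copies may be stored
    compute_regions: set    # regions where the data may be processed

def may_place(policy: DataPolicy, region: str) -> bool:
    """Check a proposed placement against the dataset's residency rules."""
    return region in policy.residency_regions

def may_process(policy: DataPolicy, region: str) -> bool:
    """Check a proposed compute location against the dataset's rules."""
    return region in policy.compute_regions

# Example: an EU-resident dataset that must also be processed in the EU.
eu_policy = DataPolicy(residency_regions={"eu-central", "eu-west"},
                       compute_regions={"eu-central", "eu-west"})
```

Because every placement or access decision runs through checks like these, compliance travels with the dataset rather than being re-implemented per system or per site.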

The discussion will focus on practical use cases, including:
• Accelerating AI training by dynamically placing data near GPU resources without manual staging
• Enabling distributed inference pipelines that operate on live enterprise data across regions
• Supporting cross-border collaboration while enforcing data residency and access controls
• Extending HPC data access models into hybrid environments without re-platforming existing storage systems
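The first use case above, placing data near GPU resources without manual staging, can be sketched as a small selection routine: prefer a copy already at the GPU site, otherwise let the platform stage from any policy-compliant region. This is a simplified illustration under assumed region names, not an actual scheduler.

```python
def select_source(dataset_locations: list, gpu_site: str,
                  allowed_regions: set) -> str:
    """Choose where a training job at gpu_site reads its data from.

    dataset_locations: regions that currently hold a copy
    allowed_regions:   regions permitted by the dataset's policy
    """
    compliant = [loc for loc in dataset_locations if loc in allowed_regions]
    if gpu_site in compliant:
        return gpu_site      # data is already near the GPUs: no staging
    if compliant:
        return compliant[0]  # platform stages transparently from here
    raise PermissionError("no policy-compliant copy available")
```

For instance, a job on `eu-central` GPUs with data only in `eu-west` would read (or be staged) from `eu-west` automatically, while a request that no compliant copy can satisfy is refused rather than silently violating residency rules.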

By treating data as a first-class, orchestrated resource, organizations can apply HPC-class performance and parallelism to enterprise AI workloads while maintaining sovereignty, avoiding duplication, and reducing operational complexity.

The result is a data-centric foundation that bridges HPC and enterprise environments, enabling AI to scale without creating new silos or losing control of where and how data is used.
HPC Solutions Forum Questions
• How are high-performance data platforms evolving beyond individual technologies like parallel file systems and burst buffers?
• What is the best way to keep advancing HPC in an AI-driven world?
• Discuss your solution in terms of benefits for specific use cases, rather than general horizontal terms like HPC, AI, performance, or scalability.
Format
on-site