

GIGAPOD: Redefining the Future Data Center by Scaling Beyond Servers and Software Stack
Wednesday, June 11, 2025 3:00 PM to 3:20 PM · 20 min. (Europe/Berlin)
Hall H, Booth L01 - Ground floor
HPC Solutions Forum
Data Center Infrastructure and Cooling, ML Systems and Tools
Information
GigaPOD is a turnkey, rack-scale AI supercomputing solution designed by GIGABYTE to help enterprises build and deploy powerful AI data centers. Here is a summary:
>Integrated AI Infrastructure: GigaPOD is an integrated solution that combines high-performance GIGABYTE GPU servers, networking equipment, and storage into a pre-configured rack or multiple interconnected racks.
>Scalability: It's designed to be highly scalable, allowing businesses to start with a smaller configuration and expand by adding racks as their AI computational needs grow.
>High GPU Density: A base GigaPOD configuration can include a large number of interconnected GPUs spanning multiple racks.
>Flexibility in Cooling: GigaPOD supports both air-cooling and liquid-cooling solutions. This allows customers to choose the cooling method that best suits their data center infrastructure, power availability, and performance requirements. GIGABYTE also offers direct liquid cooling (DLC) options for enhanced thermal management and energy efficiency.
>Optimized Hardware Configuration: GigaPOD solutions are designed with optimized hardware configurations to ensure efficient data flow and communication between GPUs, utilizing technologies like GPU RDMA and NVMe for direct data access. The network topology often employs a non-blocking fat-tree structure to prevent bottlenecks (see the sizing sketch after this list).
>Management Platform: GIGABYTE offers a management platform called GIGABYTE POD Manager (GPM) to help manage and monitor the GigaPOD infrastructure, including real-time monitoring, workload scheduling, and automation features.
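The non-blocking fat-tree mentioned above can be sized with simple arithmetic. Below is a minimal, illustrative Python sketch (not GIGABYTE's actual sizing tool); the GPU count, NICs per server, and switch port count are assumed example values, not GigaPOD specifications.

def fat_tree_two_tier(num_gpus, gpus_per_server, nics_per_server, switch_ports):
    # Estimate leaf and spine switch counts for a non-blocking (1:1) two-tier fabric.
    servers = -(-num_gpus // gpus_per_server)        # ceiling division
    downlinks = servers * nics_per_server            # server-facing ports required
    ports_down_per_leaf = switch_ports // 2          # non-blocking: half down, half up
    leaves = -(-downlinks // ports_down_per_leaf)
    uplinks = leaves * (switch_ports - ports_down_per_leaf)
    spines = -(-uplinks // switch_ports)
    return leaves, spines

# Example: 256 GPUs in 8-GPU servers, one RDMA NIC per GPU, 64-port switches.
print(fat_tree_two_tier(256, 8, 8, 64))   # -> (8, 4): 8 leaf and 4 spine switches

Because each leaf switch reserves half of its ports for uplinks, any server can reach any other at full line rate, which is what keeps GPU-to-GPU RDMA traffic free of oversubscription bottlenecks as a pod scales out.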
HPC Solutions Forum Questions
For organizations pursuing both traditional HPC and new AI workloads, to what extent should they have shared infrastructure / budget / personnel versus separate?
Format
On Site

