At the 2025 Open Compute Project (OCP) Summit, AMD officially introduced its new “Helios” rack-scale platform, a modular system targeting the fast-growing AI and data center markets. The announcement, which follows AMD’s Advancing AI 2025 event, marks the company’s formal entry into the rack-scale systems segment—directly challenging NVIDIA’s Rubin platform and expanding AMD’s role beyond chips into full-stack solutions.
A Modular, Open Rack Design #
Helios is based on the Open Rack Wide (ORW) standard pioneered by Meta, emphasizing modularity, scalability, and open design principles. AMD’s Executive Vice President for Data Center Solutions described the system’s mission as “translating open standards into deployable, high-performance systems.”
The Helios rack integrates EPYC CPUs, Instinct GPUs, and a new open interconnect fabric to create a flexible and future-ready foundation for AI training and inference workloads.
Public demonstrations revealed a double-width ORW chassis featuring a central equipment bay flanked by service and cooling compartments—balancing density with ease of maintenance. Inside, horizontally mounted compute modules occupy 70–80% of the rack space. Two primary optical links—nicknamed “Blue Aqua” and “Yellow Link”—carry separate communication channels for high-speed data transfer. This layered topology reflects AMD’s evolving system-level design philosophy and offers a contrast to NVIDIA’s proprietary Kyber rack architecture.
Powered by EPYC and Instinct #
Helios will debut with AMD’s upcoming EPYC “Venice” CPUs and Instinct MI400 accelerators, both central to AMD’s next wave of AI and HPC products.
- EPYC Venice is expected to leverage the Zen 6 architecture, boosting inter-chip bandwidth and memory throughput.
- The Instinct MI400 continues AMD’s CDNA architecture evolution, adding more efficient matrix engines and improved multi-GPU scaling.
Networking and management are handled by integrated AMD Pensando modules, enabling advanced virtualization, security enforcement, and large-scale orchestration via a unified control plane.
Dual Open Interconnect Standards: UALink and UEC #
A highlight of Helios is its dual adoption of open interconnect technologies:
- UALink (Ultra Accelerator Link) enables vertical scaling (scale-up) through a high-speed GPU-to-GPU communication protocol co-developed by multiple vendors, breaking the limitations of closed interconnects.
- Ultra Ethernet, from the Ultra Ethernet Consortium (UEC), supports horizontal scaling (scale-out) over Ethernet, allowing data and memory access across compute nodes using a programmable, open fabric.
This dual approach exemplifies AMD’s “hardware and software openness” strategy—creating an alternative to the proprietary AI ecosystems that dominate today’s infrastructure market.
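To make the scale-up/scale-out split concrete, the sketch below models a two-tier fabric in Python: GPUs within a pod talk over a UALink-style scale-up domain, while traffic between pods crosses the Ethernet-based scale-out network. The pod size and topology here are hypothetical illustrations, not AMD specifications.

```python
# Illustrative two-tier AI fabric: GPUs in the same pod share a scale-up
# (UALink-style) domain; traffic between pods crosses the scale-out
# (Ultra Ethernet-style) network. GPUS_PER_POD is a hypothetical figure.

GPUS_PER_POD = 8  # assumed scale-up domain size for illustration only

def fabric_tier(gpu_a: int, gpu_b: int) -> str:
    """Return which tier of the fabric a pair of GPUs communicates over."""
    if gpu_a == gpu_b:
        return "local"
    if gpu_a // GPUS_PER_POD == gpu_b // GPUS_PER_POD:
        return "scale-up"   # same pod: direct accelerator-to-accelerator link
    return "scale-out"      # different pods: routed over the Ethernet fabric

print(fabric_tier(0, 3))   # same pod
print(fabric_tier(0, 9))   # crosses pod boundary
```

The point of the model is the routing decision itself: keeping latency-sensitive collective operations inside the scale-up domain while the open Ethernet fabric handles cluster-wide traffic.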
Liquid Cooling and Maintenance Efficiency #
Helios also introduces a quick-disconnect liquid cooling system, designed for tool-free maintenance and improved serviceability. The system supports higher power density and continuous high-load operation, crucial for cloud providers and large-scale AI clusters. By integrating liquid cooling at the rack level, AMD is addressing the growing demand for energy efficiency and thermal stability in dense AI environments.
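The power-density argument can be illustrated with a back-of-envelope heat-balance calculation, Q = ṁ·c_p·ΔT. The rack power (140 kW) and coolant temperature rise (10 K) below are hypothetical values chosen for illustration, not published Helios figures.

```python
# Back-of-envelope coolant sizing for a liquid-cooled rack using
# Q = m_dot * c_p * delta_T. Input figures are hypothetical, not AMD specs.

C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

def coolant_flow_lpm(rack_power_w: float, delta_t_k: float) -> float:
    """Water flow in liters/minute needed to absorb rack_power_w heat."""
    mass_flow = rack_power_w / (C_P_WATER * delta_t_k)  # kg/s
    return mass_flow * 60.0  # water is ~1 kg per liter

print(coolant_flow_lpm(140_000, 10.0))  # roughly 200 L/min
```

Flows of this order are why rack-level liquid loops with quick-disconnect fittings, rather than per-server air cooling, become the practical choice at these densities.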
Market Impact and Strategic Direction #
While AMD has yet to announce a commercial release date, the showcased hardware indicates that Helios is already in engineering validation. More importantly, the launch signals AMD’s transformation from a component supplier to a system-level solutions provider. Together with Instinct accelerators and EPYC CPUs, Helios completes AMD’s AI data center stack—from silicon to system.
In today’s market, NVIDIA’s rack-scale systems remain the de facto standard for hyperscale AI clusters—currently the GB200 NVL72, built on Grace Blackwell silicon and NVLink 5, with the Rubin-based generation next on the roadmap. AMD’s Helios stands out as the first open alternative—with modular design, standardized interconnects, and accessible maintenance—appealing to cloud service providers seeking vendor-neutral AI infrastructure.
Conclusion: A New Chapter in Open AI Infrastructure #
The debut of Helios marks a strategic milestone for AMD. As AI workloads scale exponentially, demand for open, interoperable rack systems is rising. With Helios, AMD positions itself as a driving force behind this shift—championing open standards and ecosystem diversity.
Far beyond a hardware showcase, Helios represents AMD’s commitment to redefining AI infrastructure—by merging compute performance, scalability, and openness into a cohesive platform capable of challenging NVIDIA’s dominance and shaping the next phase of AI data center evolution.
Source: AMD Unveils ‘Helios’ Rack System for AI and Data Centers