- HPE will ship 72-GPU racks with next-generation AMD Instinct accelerators globally
- Venice CPUs paired with the GPUs target exascale-level AI performance per rack
- Helios relies on liquid cooling and a double-wide chassis for thermal management
HPE has announced plans to integrate AMD's Helios rack-scale AI architecture into its product lineup beginning in 2026.
The collaboration gives Helios its first major OEM partner and positions HPE to ship full 72-GPU AI racks built around AMD's next-generation Instinct MI455X accelerators.
These racks will pair the GPUs with EPYC Venice CPUs and use an Ethernet-based scale-up fabric developed with Broadcom.
Rack layout and performance targets
The Helios reference design is based on Meta's Open Rack Wide standard.
It uses a double-wide, liquid-cooled chassis to house the MI450-series GPUs, Venice CPUs, and Pensando networking hardware.
AMD targets up to 2.9 exaFLOPS of FP4 compute per rack with the MI455X generation, along with 31TB of HBM4 memory.
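As a rough sanity check on those rack-level figures, the per-accelerator numbers can be derived by dividing across the 72 GPUs. The sketch below is a minimal illustration assuming an even split; the per-GPU values are inferred here, not published AMD specifications.

```python
# Rough per-GPU breakdown of AMD's quoted Helios rack figures.
# Assumes an even split across 72 MI455X accelerators (inferred, not official).
GPUS_PER_RACK = 72
RACK_FP4_EXAFLOPS = 2.9   # AMD's quoted FP4 compute per rack
RACK_HBM4_TB = 31         # AMD's quoted HBM4 capacity per rack

fp4_per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
hbm4_per_gpu_gb = RACK_HBM4_TB * 1000 / GPUS_PER_RACK

print(f"FP4 per GPU: ~{fp4_per_gpu_pflops:.0f} PFLOPS")  # roughly 40 PFLOPS
print(f"HBM4 per GPU: ~{hbm4_per_gpu_gb:.0f} GB")        # roughly 430 GB
```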
The system presents every GPU as part of a single pod, which allows workloads to span all accelerators without local bottlenecks.
A purpose-built HPE Juniper switch supporting Ultra Accelerator Link over Ethernet forms the high-bandwidth GPU interconnect.
It offers an alternative to Nvidia's NVLink-centric approach.
The High-Performance Computing Center Stuttgart has chosen HPE's Cray GX5000 platform for its next flagship system, named Herder.
Herder will use MI430X GPUs and Venice CPUs across direct liquid-cooled blades and will replace the current Hunter system in 2027.
HPE said that waste heat from the GX5000 racks will warm campus buildings, reflecting environmental considerations alongside performance targets.
AMD and HPE plan to make Helios-based systems globally available next year, expanding access to rack-scale AI hardware for research institutions and enterprises.
Helios uses an Ethernet fabric to connect GPUs and CPUs, which contrasts with Nvidia's NVLink approach.
The use of Ultra Accelerator Link over Ethernet and Ultra Ethernet Consortium-aligned hardware supports scale-out designs within an open-standards framework.
Although this approach allows GPU counts theoretically comparable to other high-end AI racks, performance under sustained multi-node workloads remains untested.
Reliance on a single Ethernet layer could also introduce latency or bandwidth constraints in real-world applications.
That said, these specifications don't predict real-world performance, which will depend on effective cooling, network traffic handling, and software optimization.
Via Tom's Hardware