
AMD and HPE Expand Strategic Collaboration to Accelerate Open Rack-Scale AI Infrastructure for the Next Generation of Computing
Today, AMD announced a significantly expanded collaboration with Hewlett Packard Enterprise (HPE), aimed at accelerating the development and deployment of the next generation of open, scalable AI infrastructure. The expanded partnership pairs AMD’s leadership in high-performance computing technologies with HPE’s proven expertise in system architecture and enterprise AI solutions. The companies are jointly working to deliver an end-to-end platform optimized for large-scale AI and high-performance computing (HPC) environments.
One of the most important elements of this collaboration is HPE’s role as one of the first system providers to adopt and commercialize the new AMD “Helios” rack-scale AI architecture. Designed from the ground up for massive AI workloads, the Helios platform integrates a purpose-built HPE Juniper Networking scale-up switch, created in close collaboration with Broadcom, and an advanced software stack aimed at enabling seamless, high-bandwidth connectivity over Ethernet. This new approach supports extremely large model training and inference workloads by simplifying networking complexity and maximizing performance throughout the rack.
A Fully Integrated, Open Rack-Scale AI Platform
The Helios architecture combines the most advanced components in the AMD portfolio—including AMD EPYC™ CPUs, AMD Instinct™ GPUs, AMD Pensando™ accelerated networking, and the AMD ROCm™ open software environment—to create a unified compute platform specifically optimized for performance, efficiency, and scalability. By consolidating multiple layers of hardware and software into a cohesive system, AMD and HPE aim to reduce the time, cost, and complexity typically associated with deploying AI clusters at scale.
The platform is engineered to simplify the lifecycle of large-scale AI systems from initial deployment through ongoing expansion, providing organizations with accelerated time-to-solution and improved infrastructure flexibility whether operating in research environments, cloud data centers, or global enterprise networks. For organizations needing fast and predictable scaling to support rapidly evolving AI demands, Helios is positioned as a transformational step beyond traditional architectures.
Leadership and Vision from AMD and HPE Executives
“We are incredibly proud to strengthen our collaboration with HPE, a partner that has consistently shared our vision for innovation in high-performance computing,” said Dr. Lisa Su, Chair and CEO of AMD. “With Helios, we are bringing together the full stack of AMD compute technologies and HPE’s system design capabilities to deliver an open, rack-scale AI platform that enables groundbreaking performance, efficiency, and scalability. This expansion of our partnership underscores our commitment to empowering customers in the AI era with flexible, standards-based solutions capable of handling the world’s most demanding workloads.”
Antonio Neri, President and CEO of HPE, echoed this perspective, emphasizing the companies’ shared legacy in advancing supercomputing and open standards:
“For more than a decade, HPE and AMD have worked together to push the boundaries of what’s possible in HPC, delivering multiple exascale-class systems and driving innovations that help customers dramatically accelerate their research and business objectives. With the introduction of the new AMD Helios platform and our purpose-built HPE scale-up networking solution, we are providing cloud service providers and enterprise customers with faster deployment capabilities, greater infrastructure flexibility, and reduced risk as they scale AI computing to new levels.”
Advancing the Next Era of AI and HPC through Open Standards
The AMD Helios rack-scale AI platform delivers up to 2.9 exaFLOPS of FP4 compute performance per rack by pairing AMD Instinct MI455X GPUs, next-generation AMD EPYC “Venice” CPUs, and AMD Pensando Vulcano NICs for scale-out networking, all unified through the open ROCm software ecosystem. This architecture is specifically optimized to support the explosive demand for training and inference of increasingly large foundation models and multimodal AI systems.
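The quoted 2.9 exaFLOPS of FP4 compute is a rack-level aggregate, so it can be decomposed into a rough per-GPU figure once a rack population is assumed. The sketch below is purely illustrative: the 72-GPU count is an assumption, not a figure stated in the announcement.

```python
# Back-of-envelope decomposition of the quoted rack-level FP4 figure.
# ASSUMPTION: 72 Instinct GPUs per Helios rack (illustrative, not from the announcement).
RACK_FP4_EXAFLOPS = 2.9       # figure quoted for the Helios rack
ASSUMED_GPUS_PER_RACK = 72    # hypothetical rack population

# 1 exaFLOPS = 1000 petaFLOPS
per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / ASSUMED_GPUS_PER_RACK
print(f"~{per_gpu_pflops:.1f} PFLOPS FP4 per GPU under these assumptions")
```

Varying the assumed GPU count shifts the per-GPU number proportionally; the point is only that the rack-level total is the product of per-accelerator throughput and rack density.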
Helios is built on the Open Compute Project (OCP) Open Rack Wide design, simplifying global deployment and enabling partners and customers to integrate solutions faster and more efficiently. By building on open-standards-based hardware and networking, AMD and HPE aim to encourage broad industry participation and innovation. The platform’s modular design gives organizations flexibility to configure compute and networking to meet both current and future performance requirements while protecting long-term investment.
As part of their collaboration, HPE has integrated a differentiated scale-up Ethernet switch and supporting software built in partnership with Broadcom. This switch delivers optimized performance for large-scale AI deployments using the Ultra Accelerator Link over Ethernet (UALoE) standard, which supports high-bandwidth, low-latency communication between accelerators. The approach reinforces the companies’ longstanding commitment to open technology ecosystems and avoids lock-in to proprietary networking solutions.
HPE plans to make the AMD Helios AI Rack-Scale Architecture available worldwide beginning in 2026.
Driving the Next Wave of HPC and AI Innovation in Europe
In addition to the expanded product collaboration, AMD and HPE are powering the next generation of AI and scientific research infrastructure in Europe through Herder, a new supercomputer announced for the High-Performance Computing Center Stuttgart (HLRS) in Germany. Powered by AMD Instinct MI430X GPUs and next-generation AMD EPYC “Venice” CPUs, Herder is being built on the HPE Cray Supercomputing GX5000 platform. Once deployed, the system will enable world-class performance and energy efficiency for both traditional HPC simulation workloads and emerging large-scale AI applications.
“The pairing of AMD Instinct MI430X GPUs and EPYC processors within HPE’s GX5000 platform is an ideal technology solution for HLRS,” said Prof. Dr. Michael Resch, Director of HLRS. “Our scientific community requires extremely powerful infrastructure to support traditional simulation-based research, while we are also seeing rapidly increasing demand for machine learning and artificial intelligence. Herder’s hybrid HPC-AI architecture enables us to support both simultaneously, while providing the ability to develop new computational approaches that combine the strengths of both fields.”
Herder is expected to begin delivery in the second half of 2027 and to enter full service by the end of 2027, replacing HLRS’s current flagship system, Hunter. The expanded computing power and architectural flexibility of Herder will enable European researchers to pursue groundbreaking discoveries across climate science, engineering, biomedical research, manufacturing innovation, and other mission-critical domains.
Source Link: https://www.amd.com/en