www.magazine-industry-usa.com
AMD & Nutanix Develop Open Enterprise AI Platform
Multi-year partnership integrates AMD EPYC CPUs, Instinct GPUs and ROCm software into Nutanix Cloud and Kubernetes platforms for scalable agentic AI infrastructure.
www.amd.com

Enterprise data centers, hybrid cloud environments and edge computing infrastructures are increasingly shifting toward AI inference–driven workloads. To address these requirements, AMD and Nutanix have announced a multi-year strategic partnership to co-develop an open, full-stack AI infrastructure platform designed for agentic AI applications across enterprise and service provider environments.
The collaboration aligns silicon, runtime software and enterprise cloud orchestration technologies to deliver scalable, production-ready AI platforms optimized for inference workloads.
Integrated full-stack platform for enterprise AI
Under the agreement, Nutanix Cloud Platform and Nutanix Kubernetes Platform will be optimized for AMD EPYC CPUs and AMD Instinct GPUs. The roadmap includes integration of AMD ROCm software and the AMD Enterprise AI platform into Nutanix AI solutions, enabling enterprises to deploy AI workloads on open infrastructure without dependence on vertically integrated AI stacks.
The co-engineered platform is designed to support high-performance inference acceleration using AMD Instinct GPUs, high-core-density compute through AMD EPYC processors and unified lifecycle management via Nutanix Enterprise AI. The solution targets enterprise AI agents, multimodal inference services and industry-specific intelligent applications deployed across data centers, hybrid cloud and edge environments.
The first jointly developed agentic AI platform is expected to reach the market in late 2026.
Strategic investment and joint development roadmap
As part of the agreement, AMD will invest $150 million in Nutanix common stock at a purchase price of $36.26 per share. In addition, AMD will fund up to $100 million to support joint engineering initiatives and go-to-market activities. The equity investment is expected to close in the second quarter of 2026, subject to regulatory approvals and customary conditions.
The partnership is supported by a broad ecosystem of OEM server providers to ensure deployment flexibility across enterprise infrastructure environments.
According to Dan McNamara, senior vice president and general manager of Compute and Enterprise AI at AMD, the collaboration focuses on delivering scalable AI platforms rooted in openness, providing enterprises and service providers with flexibility to deploy and expand AI workloads.
Tarkan Maner, President and Chief Commercial Officer at Nutanix, stated that the partnership aims to provide integrated platforms optimized for inference and agentic AI applications across hybrid environments.
Supporting open and scalable AI infrastructure
As enterprise AI increasingly centers on inference rather than training, infrastructure requirements emphasize performance-per-watt efficiency, operational simplicity and interoperability. The joint AMD–Nutanix platform is designed to align open standards, interoperable software frameworks and architectural flexibility with enterprise-grade orchestration and lifecycle management.
By combining accelerated compute hardware with cloud-native orchestration and open runtime software, the companies aim to deliver an AI infrastructure model that supports scalable deployment of both open-source and commercial AI models across distributed enterprise environments.
The collaboration reflects the growing demand for open, scalable enterprise AI infrastructure capable of supporting production workloads across diverse computing environments.