AMD Unleashes Open AI Ecosystem Vision with New Silicon, Software, and Systems at Advancing AI 2025

At the Advancing AI 2025 event in San Jose, AMD pulled the curtain back on its most ambitious AI strategy yet—delivering a sweeping portfolio of hardware, software, and full-stack infrastructure to power an open, integrated, and scalable AI future.

Key Highlights:

  • Instinct MI350 Series GPUs launched with 4x AI compute performance gen-on-gen, a generational leap in both throughput and efficiency.

  • Helios, AMD’s next-gen rack-scale AI platform, was unveiled, targeting up to 10x inference performance and built around future Instinct MI400 GPUs and Zen 6 EPYC CPUs.

  • AMD reinforced its commitment to open AI, introducing ROCm 7.0 and a global Developer Cloud Access Program.

  • Industry giants—including Meta, Microsoft, OpenAI, Oracle, xAI, Cohere, HUMAIN, and others—showcased real-world deployments and future collaborations with AMD AI.

AMD Chair and CEO Dr. Lisa Su, joined on stage by partners like Meta, Microsoft, Oracle, and OpenAI, declared:

“We are entering the next phase of AI, driven by open standards, shared innovation, and AMD’s expanding leadership across a broad ecosystem.”

New AI Muscle: Instinct MI350 Series

The new Instinct MI350X and MI355X GPUs bring:

  • 4x AI compute uplift over the previous generation¹

  • 35x increase in inferencing performance²

  • Up to 40% more tokens-per-dollar³ vs. competitors

  • Already deployed in Oracle Cloud Infrastructure (OCI), with wider availability targeted for 2H 2025

ROCm 7.0: AI Software, Reinvented

AMD’s open-source stack now supports:

  • Broader framework compatibility

  • Expanded hardware support

  • Enhanced tools, APIs, and libraries for developers

Additionally, the AMD Developer Cloud is now broadly available, purpose-built for high-performance AI development with ROCm 7 pre-installed.

Rack-Scale Innovation: “Helios” and Beyond

AMD previewed its Helios rack platform, coming in 2026:

  • Built on Instinct MI400 GPUs, Zen 6 “Venice” EPYC CPUs, and Pensando “Vulcano” NICs

  • Designed to deliver 10x inference performance on complex Mixture-of-Experts models⁴

And that’s just the beginning. AMD also set a new 2030 sustainability goal:
➡️ A 20x increase in rack-scale energy efficiency⁵, shrinking the hardware needed to train a typical AI model from 275 racks to just one⁶.

Ecosystem Power Moves

  • Meta uses MI300X for Llama 3 & 4 inference, with plans for MI400.

  • OpenAI runs GPT models on MI300X in Azure and is collaborating closely with AMD on MI400.

  • Oracle to deploy 131,072 MI355X GPUs in new zettascale clusters.

  • Microsoft runs both proprietary and open models using MI300X on Azure.

  • Cohere, HUMAIN, Red Hat, Astera Labs, and Marvell showcased AMD-powered AI breakthroughs.

From next-gen chips to open software and collaborative AI infrastructure, AMD is going all-in to rewrite the rules of the AI race—with openness, scale, and performance at the core.
