
▲AMD CEO Lisa Su introduces AMD’s next-generation integrated AI platform and solution portfolio.
Helios·MI350·ROCm integration, satisfying performance, efficiency, and economy
Close collaboration with global OEMs such as Oracle, Dell, HPE, and Supermicro
“Only AMD can integrate market-leading GPUs, CPUs, networking, and open software to provide unparalleled flexibility and performance across the entire AI spectrum.”
AMD held '2025 Advancing AI' on the 12th (local time) in San Jose, California, and unveiled its next-generation integrated AI platform and solution portfolio.
At the event, AMD announced its latest Instinct GPU family for the data center and high-performance computing (HPC) markets, as well as integrated solutions that will dramatically improve performance and energy efficiency across artificial intelligence (AI) training and inference.
AMD's core strategy is to go beyond competition over individual components and combine every computing element, including CPUs, GPUs, and network interface controllers (NICs), into a single optimized platform, delivering breakthrough price-performance across the entire AI infrastructure.
AMD CEO Lisa Su said, &l“AMD is driving AI innovation at an unprecedented pace, highlighted by the launch of the AMD Instinct MI350 Series accelerators, advancements in our next-generation AMD ‘Helios’ rack-scale solutions, and growing momentum with the ROCm open software stack,” said Ericsson, AMD’s vice president and general manager of AI at Intel. “We are entering the next phase of AI by extending our leadership across open standards, co-innovation, and a broad ecosystem of hardware and software partners working together to define the future of AI.”

▲AMD's latest Instinct MI350 series
Announced at the event, AMD’s latest Instinct MI350 series is a data center GPU based on the CDNA 4 architecture. It features 288GB of high-bandwidth HBM3E memory and delivers a 2.6x to 4.2x improvement in AI compute over the previous generation, depending on the workload.
The MI350 series comes in two models, the air-cooled MI350X and the liquid-cooled MI355X, each designed to optimize energy efficiency and density so that it can be deployed across a variety of data center environments.
With these, AMD claims up to 40% more tokens per dollar in AI inference, enabling more cost-effective AI systems.
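To make the economics concrete, the short sketch below shows how a 40% tokens-per-dollar gain would translate into cost per million generated tokens. All throughput and pricing figures are hypothetical placeholders, not published AMD or competitor numbers.

```python
# Hypothetical illustration of a "40% more tokens per dollar" claim.
# BASELINE_* figures are made-up assumptions, not real GPU pricing or throughput.

BASELINE_TOKENS_PER_SEC = 10_000   # assumed throughput of a competing accelerator
COST_PER_HOUR_USD = 4.00           # assumed hourly rental cost for either GPU

# 40% more tokens per dollar at the same hourly price means 40% more throughput.
improved_tokens_per_sec = BASELINE_TOKENS_PER_SEC * 1.40

def cost_per_million_tokens(tokens_per_sec: float, cost_per_hour: float) -> float:
    """USD cost to generate one million tokens at a given sustained throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return cost_per_hour / tokens_per_hour * 1_000_000

print(f"baseline: ${cost_per_million_tokens(BASELINE_TOKENS_PER_SEC, COST_PER_HOUR_USD):.3f} per 1M tokens")
print(f"+40%/$ :  ${cost_per_million_tokens(improved_tokens_per_sec, COST_PER_HOUR_USD):.3f} per 1M tokens")
```

Under these assumed numbers, the per-token cost drops by about 29% (a factor of 1/1.4), which is what a 40% tokens-per-dollar improvement means in practice.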
In particular, AMD previewed its next-generation AI accelerator, the Instinct MI400 series, and the 'Helios' rack system that integrates it, drawing attention as a fully integrated solution to be introduced between the end of this year and early next year.
The MI400 GPU is optimized for large model training and high-load AI workloads, with 432GB of HBM4 memory and 300GB/s of scale-out bandwidth per GPU.
In addition to MI400 GPUs, the Helios Rack combines the latest EPYC CPUs, Pensando’s high-performance AI NICs, and next-generation interconnect technologies such as UALink, allowing up to 72 GPUs to collaborate seamlessly within a single system.
This is expected to enable parallel operations of large-scale AI models and excellent scalability in distributed learning environments.
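As a rough illustration of what software-side scaling across such a rack involves, the sketch below initializes PyTorch distributed data parallelism. The 9-node-by-8-GPU layout, launch command, and toy model are assumptions for illustration; on ROCm builds of PyTorch, the "nccl" backend name maps to AMD's RCCL collective library and GPUs are addressed through the familiar torch.cuda API.

```python
# Minimal distributed data-parallel sketch, as it might be launched across a
# 72-GPU rack. Hypothetical launch (9 nodes x 8 GPUs is an assumed layout):
#   torchrun --nnodes 9 --nproc-per-node 8 train.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # On ROCm builds, "nccl" transparently selects AMD's RCCL library.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)           # ROCm reuses the torch.cuda API

    model = DDP(torch.nn.Linear(4096, 4096).cuda(), device_ids=[local_rank])

    x = torch.randn(32, 4096, device="cuda")    # placeholder batch
    loss = model(x).square().mean()
    loss.backward()                             # gradients all-reduced rack-wide

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```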
AMD is also helping AI developers and researchers maximize the performance of GPUs through its open software platform, ROCm 7.0.
ROCm 7.0 supports mixed-precision formats such as FP4, FP6, and FP8 to improve the efficiency of AI inference and training, and focuses on boosting developer productivity through close integration with open-source frameworks such as SGLang and vLLM.
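As a concrete example of that framework linkage, the sketch below serves an FP8-quantized model through vLLM's Python API. The model name is a placeholder, and FP8 support depends on the specific vLLM build and ROCm version installed.

```python
# Minimal FP8 inference sketch with vLLM; assumes a ROCm-enabled vLLM build.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
    quantization="fp8",                        # request the FP8 weight path
    tensor_parallel_size=1,                    # raise to shard across GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain mixed-precision inference in one sentence."], params)
print(outputs[0].outputs[0].text)
```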
In addition, with ecosystem cooperation strengthening through the AMD Developer Cloud and partnerships with OEMs and cloud service providers in full swing, rapid commercialization of dedicated AI servers and integrated infrastructure is expected.
AMD said, “The future era of AI will be one where innovation is achieved through an open, collaborative ecosystem, rather than through the dominance of a single company.” It added, “With Helios, the new Instinct GPU product line, and the ROCm software stack, we aim to build an integrated AI platform that satisfies performance, efficiency, and economy.”
In particular, through close collaboration with Oracle Cloud Infrastructure and global OEMs such as Dell, HPE, and Supermicro, AMD’s solutions are expected to bring significant changes to the global data center and AI research environment.
Along with this, AMD unveiled an AI infrastructure roadmap targeting roughly 20x or greater scalability by 2030, with the goal of building ultra-large-scale AI training and inference systems through cluster configurations of up to 100 units.
This future vision is expected to move away from the existing CPU-centric computing architecture toward an integrated solution in which all elements, including CPU, GPU, networking, and storage, are organically combined.
Industry experts commented, “As competition in the dedicated AI chip market intensifies, AMD’s open platform strategy and integrated solutions will be a new breakthrough for challenging established powerhouses such as Nvidia.”
The announcements at this event are seen as the technological culmination of AMD's R&D efforts over the past several years and as a model for innovation in the next-generation AI era. The tight combination of CPUs, GPUs, and NICs, the core components of data center infrastructure, together with strengthened software-ecosystem collaboration, is expected to present a new paradigm for AI training and inference and to reshape the global AI market landscape.
Beyond a single technological innovation, AMD's announcement is seen as signaling the start of an ecosystem shift across the AI industry. The integrated AI platform AMD plans to deliver is expected to accelerate AI innovation worldwide by providing stable, scalable AI infrastructure to a broad range of customers, from large enterprises to small and medium-sized businesses and startups.