Super Micro Computer (hereafter Supermicro) announced on the 22nd that it has added an end-to-end AI data center solution powered by the NVIDIA Blackwell platform to its SuperCluster portfolio.

▲Direct liquid-cooled SuperCluster for NVIDIA Blackwell unveiled / (Image: Supermicro)
Integrated AI computing built to handle trillion-parameter-scale generative AI
Equipped with NVIDIA HGX B200 8-GPU, GB200 NVL4, and GB200 NVL72 systems
Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in enterprise computing, storage, and networking solutions, today announced the addition of an end-to-end AI data center solution powered by the NVIDIA Blackwell platform to its SuperCluster portfolio.
The new SuperCluster packs significantly more NVIDIA HGX B200 8-GPU systems into each liquid-cooled rack, delivering a marked increase in GPU compute density over previous liquid-cooled Supermicro SuperClusters based on NVIDIA HGX H100 and H200.
Supermicro is also strengthening its NVIDIA Hopper system lineup to address the rapidly growing adoption of accelerated computing for HPC applications and mainstream enterprise AI.
The upgraded SuperCluster scalable units are based on a rack-scale design with an innovative vertical coolant distribution manifold (CDM), allowing more compute nodes to fit in a single rack. A newly developed high-efficiency cold plate and advanced hose design reportedly improve the efficiency of the liquid-cooling system.
A new in-row redundant coolant distribution unit (CDU) option is also available for large-scale deployments. Existing air-cooled data centers can likewise maximize performance and efficiency with the NVIDIA HGX B200 8-GPU system, which features a newly designed air-cooled chassis.
The NVIDIA HGX B200 8-GPU server features enhanced thermal management and power delivery, and supports dual 500W Intel Xeon 6 processors (with 8800 MT/s DDR5 MRDIMMs) or AMD EPYC™ 9005 Series processors.
The newly designed air-cooled 10U form factor offers a chassis with expanded thermal headroom to accommodate eight 1000W TDP Blackwell GPUs. The server provides a 1:1 GPU-to-NIC ratio (NVIDIA BlueField-3 SuperNIC or NVIDIA ConnectX-7) for scalability in high-performance computing environments, and includes two NVIDIA BlueField-3 DPUs per server to efficiently streamline data processing with AI storage.
Supermicro’s NVIDIA MGX design lineup will support the NVIDIA GB200 Grace Blackwell NVL4 Superchip, which connects four Blackwell GPUs integrated with two NVIDIA Grace CPUs via NVLink-C2C technology. It is compatible with Supermicro’s liquid-cooled NVIDIA MGX modular servers and is said to deliver up to 2x the performance over the previous generation for scientific computing, graph neural network (GNN) training, and inference applications.
“Supermicro has the expertise, deployment speed, and delivery capabilities to contribute to the world’s largest liquid-cooled AI data center projects,” said Charles Liang, president and CEO of Supermicro. “Supermicro and NVIDIA recently completed the successful build-out of an AI data center with 100,000 GPUs.”
“SuperClusters use direct liquid cooling to cut power demand, improving overall data center performance while lowering power consumption and operating costs,” he explained.