▲DGX B200 provided by NVIDIA to OpenAI / (Photo: NVIDIA)
MS CEO Nadella: “Support for sophisticated AI workloads”
OpenAI Gets First Samples of Blackwell DGX B200
NVIDIA announced on the 11th that Microsoft (MS) and OpenAI will be the first to receive its Blackwell systems.
NVIDIA said Microsoft Azure is the first cloud service provider to run NVIDIA Blackwell systems featuring GB200-based AI servers. Microsoft Azure said it is optimizing its compute capabilities to run AI models using InfiniBand networking and innovative closed-loop liquid cooling.
In response to the news, Microsoft CEO Satya Nadella said on the company's official X account, "Our long-standing partnership and deep innovation with NVIDIA continues to lead the industry, supporting the most sophisticated AI workloads."
The GB200 Grace Blackwell superchip is a core component of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale solution that connects 72 Blackwell GPUs and 36 Grace CPUs.
It delivers up to 30x faster inference for large language model (LLM) workloads and can run trillion-parameter models in real time.
NVIDIA also provided OpenAI with one of the first engineering samples of the Blackwell DGX B200. OpenAI will use the NVIDIA Blackwell B200 data center GPUs on the latest DGX B200 platform for AI training. NVIDIA released a picture of the DGX B200 delivered to OpenAI on its official X account.
The NVIDIA DGX B200 system is a unified AI platform for AI model training, fine-tuning, and inference. It features eight Blackwell GPUs interconnected with fifth-generation NVIDIA NVLink and delivers 3x the training performance and 15x the inference performance of the previous-generation DGX H100.
The platform is being used for diverse workloads such as LLMs, recommendation systems, and chatbots, accelerating AI innovation in the enterprise.