
Intel and OSC Double AI Processing with HPC Clusters

Published 2024.02.23 15:57


▲Intel-Ohio Supercomputer Center New HPC Cluster (Photo: Intel)
Intel, Dell, NVIDIA, OSC Unveil Next-Generation HPC Cluster Plans

AI and machine learning are essential for solving complex research problems in science, engineering, and biomedicine. As the efficacy of these technologies continues to be proven, their use is also increasing in academic fields such as agricultural science, architecture, and sociology.

Intel announced on the 23rd that, in collaboration with Dell Technologies, NVIDIA, and the Ohio Supercomputer Center (OSC), it has unveiled Cardinal, a cutting-edge high-performance computing (HPC) cluster.

Cardinal is designed to meet the region's growing demand for HPC resources, particularly in artificial intelligence (AI), across research, education, and industrial innovation.

The Cardinal cluster's hardware is built to meet the demands of growing AI workloads. In both capability and capacity, the new system is a significant upgrade over the Owens cluster it replaces, which launched in 2016.

The Cardinal cluster is a heterogeneous system built on Dell PowerEdge servers and the Intel® Xeon® CPU Max Series with high-bandwidth memory (HBM), providing a foundation for efficiently handling memory-intensive HPC and AI workloads while promoting programmability, portability, and ecosystem adoption.

The cluster's CPU partition includes:

△756 Max Series 9470 processors providing a total of 39,312 CPU cores
△128 gigabytes (GB) of HBM2e and 512 GB of DDR5 memory per node
△A single software stack and the traditional x86-based programming model

Together, these more than double OSC's processing capacity, address a wide range of use cases, and support easy adoption and deployment.
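As a quick sanity check, the core count above is consistent with the processor count, assuming each Xeon CPU Max 9470 provides 52 cores (Intel's published specification; the per-chip core count is not stated explicitly in the article):

```python
# Cross-check of the Cardinal CPU-partition figures quoted above.
# Assumption (not stated in the article): the Intel Xeon CPU Max 9470
# has 52 cores per processor, per Intel's published spec.
PROCESSORS = 756
CORES_PER_PROCESSOR = 52

total_cores = PROCESSORS * CORES_PER_PROCESSOR
print(total_cores)  # 39312 — matches the article's 39,312 total
```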

Other key specifications include:

△32 GPU nodes, each with 104 cores, 1 terabyte (TB) of memory, and four NVIDIA Hopper-architecture H100 Tensor Core GPUs with 94 GB of HBM2e memory, interconnected by four NVLink links
△500 petaflops of peak AI performance (FP8 Tensor Core, including sparsity) for large-scale AI-driven scientific applications, with 400 Gbps of low-latency NVIDIA Quantum-2 InfiniBand networking
△16 nodes with 104 cores, 128 GB of HBM2e, and 2 TB of DDR5 memory for handling large symmetric multiprocessing (SMP)-style workloads
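The ~500-petaflops figure can be roughly reconstructed from the GPU count, assuming 32 GPU nodes with four H100s each and NVIDIA's published peak of about 3,958 TFLOPS for FP8 Tensor Core operations with sparsity on an H100 SXM (the per-GPU peak is an assumption not given in the article):

```python
# Rough cross-check of the quoted ~500 petaflops peak FP8 figure.
# Assumptions (not all stated in the article): 32 GPU nodes, 4 H100
# GPUs per node, and ~3,958 TFLOPS FP8-with-sparsity peak per H100
# SXM, per NVIDIA's published datasheet numbers.
NODES = 32
GPUS_PER_NODE = 4
FP8_TFLOPS_PER_GPU = 3958  # H100 SXM, FP8 Tensor Core w/ sparsity

total_petaflops = NODES * GPUS_PER_NODE * FP8_TFLOPS_PER_GPU / 1000
print(round(total_petaflops))  # 507 — consistent with "500 petaflops"
```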

“The Intel Xeon CPU Max series is the optimal choice for developing and deploying HPC and AI workloads leveraging the most widely adopted AI frameworks and libraries,” said Ogi Brkic, vice president and general manager of Intel’s Data Center AI Solutions product line. “The system’s unique heterogeneity will enable OSC’s engineers, researchers and scientists to take full advantage of the more than 2x memory bandwidth performance it delivers.”