
Intel, Dell, NVIDIA, OSC Unveil Next-Generation HPC Cluster Plans
▲Intel-Ohio Supercomputer Center New HPC Cluster (Photo: Intel)
AI and machine learning are essential for solving complex research problems in science, engineering, and biomedicine. As the efficacy of these technologies continues to be proven, their use is also increasing in academic fields such as agricultural science, architecture, and sociology.
On the 23rd, Intel announced Cardinal, a cutting-edge high-performance computing (HPC) cluster built in collaboration with Dell Technologies, NVIDIA, and the Ohio Supercomputer Center (OSC).
Cardinal is designed to meet the region's growing demand for HPC resources, particularly in artificial intelligence (AI), across research, education, and industrial innovation.
The Cardinal cluster's hardware is built to handle growing AI workloads. In both capability and capacity, the new cluster is a substantial upgrade over the Owens Cluster it replaces, which launched in 2016.
The Cardinal cluster is a heterogeneous system built on Dell PowerEdge servers and Intel® Xeon® CPU Max Series processors with high-bandwidth memory (HBM). This combination provides a foundation for efficiently running memory-intensive HPC and AI workloads while promoting programmability, portability, and ecosystem adoption.
Key specifications include:
△756 Max Series CPU 9470 processors, providing 39,312 CPU cores in total
△128 gigabytes (GB) of HBM2e and 512 GB of DDR5 memory per node
△A single software stack and the traditional x86 programming model
With this configuration, the cluster more than doubles OSC's processing power, addresses a wide range of use cases, and supports easy adoption and deployment.
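The announced total core count can be cross-checked against Intel's published per-socket figure for the Xeon CPU Max 9470 (52 cores per processor). A minimal sanity check, assuming that spec:

```python
# Sanity check on the announced 39,312-core total, assuming the
# Intel Xeon CPU Max 9470's published 52 cores per processor.
processors = 756
cores_per_processor = 52  # Xeon Max 9470 spec (assumption from Intel ARK)
total_cores = processors * cores_per_processor
print(total_cores)  # 39312, matching the announced total
```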
Other key specifications include:
△32 GPU nodes, each with 104 cores, 1 terabyte (TB) of memory, and four NVIDIA Hopper architecture-based H100 Tensor Core GPUs with 94 GB of HBM2e each, interconnected by four NVLink connections
△500 petaflops of peak AI performance (FP8 Tensor Cores, including sparsity) for large-scale AI-based scientific applications
△Low-latency NVIDIA Quantum-2 InfiniBand networking at 400 Gbps
△16 nodes, each with 104 cores, 128 GB of HBM2e, and 2 TB of DDR5 memory, for handling large symmetric multiprocessing (SMP) style workloads
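The quoted 500-petaflop figure is roughly the aggregate FP8 peak of the GPU partition. A rough cross-check, assuming NVIDIA's published H100 SXM peak of about 3,958 FP8 teraflops with sparsity (an assumption not stated in the article):

```python
# Rough cross-check of the ~500 PF figure, assuming NVIDIA's published
# H100 SXM peak of ~3,958 FP8 TFLOPS with sparsity (assumed value).
gpu_nodes = 32
gpus_per_node = 4
fp8_tflops_per_gpu = 3958  # H100 SXM datasheet figure (assumption)
total_petaflops = gpu_nodes * gpus_per_node * fp8_tflops_per_gpu / 1000
print(total_petaflops)  # ~506.6 PF, consistent with the quoted ~500 PF
```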
“The Intel Xeon CPU Max series is the optimal choice for developing and deploying HPC and AI workloads leveraging the most widely adopted AI frameworks and libraries,” said Ogi Brkic, vice president and general manager of Intel’s Data Center AI Solutions product line. “The system’s unique heterogeneity will enable OSC’s engineers, researchers and scientists to take full advantage of the more than 2x memory bandwidth performance it delivers.”