▲A scene from Panesia CEO Jeong Myeong-su’s keynote speech at the Semiconductor Engineering Society event / (Photo: Panesia)
CXL Core Product Family Roadmap and Development Status Announcement
Presenting a CXL-based data center for large-scale AI acceleration
Recently, large-scale machine learning-based services such as ChatGPT have come into wide use. As the companies operating these services competitively scale up their models and data to achieve higher performance, the memory capacity required in data centers has also grown rapidly. Big tech companies are accordingly looking for solutions in CXL (Compute Express Link), a memory interconnect technology.
CXL semiconductor fabless startup Panesia recently presented the development status and roadmap of its CXL core product line at the main keynote of the Semiconductor Engineering Society. Panesia attracted attention by announcing that it plans to produce CXL 3.1 switch SoC chips and provide them to customers in the second half of next year.
Built on design assets (IP) developed entirely with domestic technology, the CXL 3.1 switch SoC is expected to be used in the design of next-generation data center architectures suited to LLMs and other large-scale AI applications.
CXL enables cost-effective memory expansion by configuring a unified memory space. Panesia explains that performance also holds up because the series of operations needed to manage the expanded memory resources is executed in hardware-accelerated form inside the CXL controller.
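To make this concrete: on Linux systems, CXL-attached expansion memory is commonly exposed to software as an additional (often CPU-less) NUMA node, so an application can place data on it with ordinary NUMA calls while the processor accesses it through regular loads and stores. The following is a minimal sketch under that assumption; the node number and buffer size are hypothetical and not tied to any Panesia product.

/* Minimal sketch: placing a buffer on a CXL-attached memory node.
 * Assumes Linux exposes the expansion memory as NUMA node 1 (hypothetical).
 * Build with: gcc cxl_alloc.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA API not available\n");
        return 1;
    }

    const int cxl_node = 1;          /* assumed CXL expansion node */
    const size_t size = 1UL << 30;   /* 1 GiB working buffer */

    /* Pages are physically backed by the CXL node, but to the program this
     * is ordinary cacheable memory in the unified address space. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return 1;
    }

    memset(buf, 0, size);            /* touch the pages so they are faulted in */
    printf("placed %zu bytes on NUMA node %d\n", size, cxl_node);

    numa_free(buf, size);
    return 0;
}

Unmodified applications can also benefit whenever the kernel itself allocates or migrates pages onto such a node, for example under memory-tiering policies.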

▲Panesia CEO Jeong Myeong-su unveils the CXL 3.1 controller chip that has completed the silicon process / (Photo: Panesia)

In his main keynote speech at the Semiconductor Engineering Society, Panesia CEO Jeong Myeong-su announced the development status and roadmap for Panesia's core CXL product line, the CXL Switch SoC and CXL IP.
The switch supports all features defined in the latest CXL 3.1 standard while also being designed to interoperate seamlessly with devices built to older revisions of the standard. Its high scalability is reported to be a distinguishing feature: by cascading multiple switches across several tiers or arranging them in a fabric topology, it can connect hundreds of devices mounted across multiple servers.
Because Panesia has secured IP for all of the CXL protocols, including CXL.mem, CXL.cache, and CXL.io, the company explained that the switch's versatility is another differentiator: the devices that can be attached to it, whether memory, AI accelerators, or GPUs, are not limited to any particular type.
These features make Panesia's CXL switch ideal for connecting a variety of devices installed in actual enterprise data centers, and thus show promise for adoption in next-generation data center architectures that accelerate large-scale AI applications such as ChatGPT.
Machine learning services operated by data center operators and hyperscalers such as Microsoft now combine many types of models and computations with differing characteristics, and the device configuration best suited to each model and computation is different.
For example, the latest version of ChatGPT uses an LLM together with a vector database (vector DB) for accurate information processing. Executing the LLM efficiently requires multiple GPUs, while storing a large vector database requires multiple memory and storage devices.
It is therefore impossible to build a single server configuration that perfectly satisfies the needs of such a wide range of models and operations.
“What Panesia proposes is to configure servers that each gather one type of system device in one place, such as a server built out of GPUs and a server built out of memory, and to connect these servers with CXL into a single integrated system,” said CEO Jeong. “Each pool of devices then focuses on the parts of the application it handles well, which makes it possible to process a variety of models and computations efficiently,” he explained.
“To achieve this, the CXL switch must be able to connect various types of devices and connect multiple servers into a single system, and Panesia’s versatile and highly scalable switches can be used here,” he added.
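How software might target such a pooled layout can also be sketched (purely an illustration, not Panesia's software stack): if the memory-pool servers behind a CXL switch appear to a host as remote NUMA nodes, a capacity-bound structure such as a vector index can be bound to those nodes while latency-sensitive data stays in local DRAM. The node numbers and sizes below are assumptions.

/* Hypothetical sketch: binding a capacity-bound buffer (e.g. a vector index)
 * to CXL memory-pool nodes while leaving other data in local DRAM.
 * Assumes the pooled memory appears as NUMA nodes 2 and 3.
 * Build with: gcc place_index.c -lnuma */
#include <numa.h>
#include <numaif.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA API not available\n");
        return 1;
    }

    const size_t index_size = 4UL << 30;   /* 4 GiB vector index (example) */

    /* Anonymous mapping; physical placement is decided on first touch. */
    void *index = mmap(NULL, index_size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (index == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Restrict this range to the assumed CXL pool nodes (2 and 3);
     * everything else in the process keeps using local DRAM. */
    struct bitmask *pool = numa_allocate_nodemask();
    numa_bitmask_setbit(pool, 2);
    numa_bitmask_setbit(pool, 3);
    if (mbind(index, index_size, MPOL_BIND, pool->maskp, pool->size + 1, 0) != 0) {
        perror("mbind");
        return 1;
    }

    printf("vector index range bound to CXL pool nodes 2-3\n");
    numa_free_nodemask(pool);
    munmap(index, index_size);
    return 0;
}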
As a technology for building next-generation facilities such as AI data centers, the CXL solution is drawing attention over how widely it will be adopted in the market.