
▲PIM (Processing In Memory) Intelligent Semiconductor Technology Seminar
KCIA Holds PIM Intelligent Semiconductor Technology Seminar
NPU+PIM, Simultaneously responding to high computing demands and bandwidth
Ecosystem expansion needed amid lack of optimized PIM software
As demand for AI semiconductors increases, the semiconductor industry's future technology agenda is focusing on AI. PIM semiconductors are emerging as a solution to overcome the limitations of AI computation on existing GPUs.
The PIM (Processing In Memory) Intelligent Semiconductor Technology Seminar was held on the 31st at Yangjae El Tower. Hosted by the Korea Computing Industry Association (KCIA), the event brought together PIM-related industry, academia, and research personnel to share technologies and current issues.
Choi Gyu-hyun, a senior researcher at the Korea Electronics Technology Institute (KETI), who presented on the PIM semiconductor technology trend, said, “PIM should be utilized as main memory, and an integrated system solution of NPU and PIM is needed to accelerate large language models (LLMs),” and argued that various research activities and expansion of the ecosystem are necessary for this.
LLMs pose a problem in that they require high computing power for Summarization and high bandwidth for Generation, making it difficult for a single device (accelerator) to satisfy both at once. “While GPUs satisfy high computing demands, they are not very efficient in terms of memory bandwidth,” said Choi. “NPUs have the same problem and require the use of HBM.”
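The compute-versus-bandwidth split Choi describes can be seen in a back-of-envelope arithmetic-intensity calculation. The sketch below is not from the seminar; the layer size and token counts are illustrative assumptions, but they show why batched Summarization (prefill) reuses weights heavily while token-by-token Generation (decode) streams the whole weight matrix for a single matrix-vector product.

```python
# Illustrative sketch: arithmetic intensity (FLOPs per byte of memory
# traffic) for the two LLM inference phases. All sizes are assumptions.

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte read from memory."""
    return flops / bytes_moved

# Toy transformer projection: a d x d weight matrix in fp16 (2 bytes).
d = 4096
weight_bytes = d * d * 2          # weights read once from memory

# Summarization (prefill): n prompt tokens reuse the same weights.
n = 2048
prefill_flops = 2 * n * d * d     # n token-vectors through a d x d GEMM
prefill_ai = arithmetic_intensity(prefill_flops, weight_bytes)

# Generation (decode): one token per step, a single matrix-vector
# product, so every weight byte supports only ~1 multiply-add.
decode_flops = 2 * d * d
decode_ai = arithmetic_intensity(decode_flops, weight_bytes)

print(f"prefill intensity: {prefill_ai:.0f} FLOPs/byte")  # ~2048
print(f"decode  intensity: {decode_ai:.0f} FLOPs/byte")   # ~1
```

With an intensity of roughly one FLOP per byte, decode throughput is capped by memory bandwidth rather than compute, which is the gap PIM targets.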

▲Choi Gyu-hyun, Senior Researcher, Korea Electronics Technology Institute (KETI)
PIM satisfies bandwidth requirements by exploiting internal memory bandwidth, but process limitations prevent it from packing in many operators, so it struggles to meet the high computing requirements of Summarization. A solution that combines the respective strengths of the NPU and PIM is therefore needed.
Researcher Choi presented three platforms for utilizing PIM: △accelerator △accelerator memory △main memory. He emphasized that using PIM as main memory in a system where the NPU is designed together with the main processor (CPU) yields two advantages: △eliminating data-transfer overhead △optimal acceleration of both Summarization and Generation.
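The NPU+PIM integration Choi advocates amounts to phase-aware dispatch: route the compute-bound phase to the NPU and the bandwidth-bound phase to PIM. The sketch below is a hypothetical illustration of that routing policy; the names and classes are assumptions, not any vendor's API.

```python
# Hypothetical sketch of the NPU+PIM split described in the article:
# compute-bound Summarization (prefill) goes to the NPU, bandwidth-bound
# Generation (decode) goes to PIM-enabled memory.

from enum import Enum

class Phase(Enum):
    SUMMARIZATION = "prefill"   # compute-bound: batched GEMMs
    GENERATION = "decode"       # bandwidth-bound: matrix-vector ops

def select_unit(phase: Phase) -> str:
    """Pick the better-suited unit for each LLM inference phase."""
    if phase is Phase.SUMMARIZATION:
        return "NPU"   # high FLOPs, weights reused across prompt tokens
    return "PIM"       # in-memory compute exploits internal bandwidth

for phase in Phase:
    print(f"{phase.name} -> {select_unit(phase)}")
```

Placing PIM in main memory, as Choi suggests, lets both units work on the same data without the copy step a discrete accelerator would need.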
However, difficulties are expected in producing PIM hardware and software. Researcher Choi stated, “Programming must match the specifications of each PIM manufacturer and each PIM product, but writing a program that accounts for this and applying it to a real environment is difficult, which hurts usability.” He predicted, “This issue can be resolved through libraries that run on a PIM-specific compiler and deep learning framework.”
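The software problem Choi raises, vendor-specific kernels fragmenting the programming model, is commonly handled with a dispatch layer and a portable fallback. The sketch below is a minimal illustration of that pattern, assuming a hypothetical kernel registry; none of the names correspond to a real PIM toolchain.

```python
# Hedged sketch of a framework-level dispatch layer for PIM: a registry
# that a vendor runtime could populate, with a CPU fallback so the same
# model code runs whether or not a PIM kernel is available.

import numpy as np

# Hypothetical registry a PIM-aware compiler/runtime might fill in,
# e.g. PIM_KERNELS["gemv"] = vendor_specific_gemv
PIM_KERNELS = {}

def gemv(matrix, vector):
    """Matrix-vector product: vendor PIM kernel if registered, else CPU."""
    kernel = PIM_KERNELS.get("gemv")
    if kernel is not None:
        return kernel(matrix, vector)   # offload to PIM
    return matrix @ vector              # portable NumPy fallback

A = np.eye(3)
x = np.array([1.0, 2.0, 3.0])
y = gemv(A, x)   # no kernel registered, so this takes the CPU path
```

Hiding the vendor differences behind one call site like this is what lets a deep learning framework adopt PIM without rewriting every model, which is the ecosystem expansion the article calls for.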
Korea is one of only three countries in the world with a commercialized super-large language model and is positioned to lead the artificial intelligence era. R&D across industry, academia, and research institutes is also actively underway to take the lead in cutting-edge AI hardware sectors such as semiconductor innovation.
SK Hynix is currently ahead of Samsung in HBM technology and is locking in customers centered on Nvidia. In September, it unveiled the AiMX prototype, an AiM-based accelerator dedicated to generative AI, claiming 10 times lower latency and 20% of the power consumption of GPUs on generative AI workloads.
Samsung, meanwhile, is introducing an AI acceleration solution that integrates PIM into HBM, and attention is focused on whether an HBM3-based PIM product will be released this year.