
▲Collaboration between Microchip subsidiary SST and Intelligent Hardware Korea (Image: Microchip)
Based on Microchip's memBrain NVM in-memory computing technology
Collaboration with IHWK to develop an SoC for neurotechnology devices
Microchip is creating synergy through collaboration with its subsidiaries SST and Intelligent Hardware Korea to develop a computing platform that supports edge AI acceleration.
Microchip Technology Inc. (NASDAQ: MCHP) announced today that, through its subsidiary Silicon Storage Technology (SST), it is supporting development of the platform by providing an evaluation system for its SuperFlash memBrain neuromorphic memory solution.
The solution is based on nonvolatile memory (NVM) built on Microchip's industry-proven SuperFlash technology and is optimized, via an analog in-memory computing approach, to perform the vector-matrix multiplication (VMM) operations at the core of neural networks.
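For reference, the VMM that memBrain accelerates in analog is the same operation a fully connected neural-network layer performs digitally. The sketch below (illustrative only; the shapes and values are assumptions, not Microchip's API) shows that computation with NumPy:

```python
import numpy as np

# Illustrative digital equivalent of the VMM that memBrain performs in
# analog inside the memory array. Shapes are arbitrary example values.
rng = np.random.default_rng(0)

weights = rng.standard_normal((784, 128))   # synapse weights (held on-chip in memBrain)
activations = rng.standard_normal(784)      # input activation vector

# One VMM: each output neuron is a weighted sum over all inputs.
outputs = activations @ weights
print(outputs.shape)  # (128,)
```

In an analog in-memory scheme, each multiply-accumulate happens in place where the weight is stored, so the weight matrix never has to be streamed through a memory bus.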
Intelligent Hardware Korea (IHWK) is using Microchip's memBrain technology evaluation kit to demonstrate the power efficiency of a neuromorphic computing platform that executes inference algorithms at the edge. Through this work, IHWK ultimately aims to develop an ultra-low-power analog processing unit (APU) for applications such as generative AI models, autonomous vehicles, medical diagnosis, voice processing, security/surveillance, and commercial drones.
As artificial intelligence (AI) computing and related inference algorithms are rapidly introduced to the network edge, IHWK is developing a neuromorphic computing platform for neurotechnology devices and field-programmable neuromorphic devices.
Neural models currently used for edge inference can require 50 million or more synapses (weights) just for processing. A purely digital solution must fetch these weights from off-chip DRAM, and the bandwidth this demands can become a bottleneck that degrades overall neural-computing performance.
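A back-of-envelope calculation makes the scale of that bottleneck concrete. The weight count comes from the article; the 8-bit quantization and 30-inferences-per-second workload are assumptions chosen purely for illustration:

```python
# Rough DRAM-traffic estimate for a purely digital edge solution that
# re-reads all weights from off-chip memory on every inference.
synapses = 50_000_000          # weight count cited in the article
bytes_per_weight = 1           # assumption: 8-bit quantized weights
inferences_per_second = 30     # assumption: e.g. a 30 fps vision workload

traffic_bytes_per_s = synapses * bytes_per_weight * inferences_per_second
print(f"{traffic_bytes_per_s / 1e9:.1f} GB/s of DRAM traffic")  # 1.5 GB/s
```

Storing the weights in on-chip floating-gate cells, as memBrain does, removes this recurring transfer entirely along with the energy it costs.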
In contrast, the memBrain solution stores synapses (weights) in on-chip floating gates in an ultra-low-power, sub-threshold mode, while performing computations using the same memory cells, resulting in significant improvements in power efficiency and system latency. Compared to traditional digital DSP and SRAM/DRAM-based approaches, Microchip’s solution reduces power usage per inference decision by 10-20x, while significantly reducing overall costs.
IHWK is collaborating with the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon and Yonsei University in Seoul to develop and design APU devices. The final APU is expected to operate in the 20-80 teraOPS-per-watt range, optimizing the algorithms required to perform inference workloads and resulting in the best performance of any in-memory compute solution designed for use in battery-powered devices.
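The cited 20-80 teraOPS-per-watt figure can be restated as energy per operation, which is the metric that determines battery life. This conversion is simple unit arithmetic, not a claim about the final APU:

```python
# Convert teraOPS/W efficiency into energy per operation.
# 1 W sustained at X teraOPS means 1 / (X * 1e12) joules per op.
for tops_per_watt in (20, 80):
    joules_per_op = 1.0 / (tops_per_watt * 1e12)
    print(f"{tops_per_watt} TOPS/W -> {joules_per_op * 1e15:.2f} fJ per operation")
# 20 TOPS/W -> 50.00 fJ per operation
# 80 TOPS/W -> 12.50 fJ per operation
```

At tens of femtojoules per operation, the energy budget per multiply-accumulate is orders of magnitude below what a conventional DSP-plus-DRAM pipeline spends once off-chip transfers are counted.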
“Microchip’s memBrain in-memory computing technology uses proven NVM instead of off-chip memory solutions to eliminate the large data communication bottleneck that occurs when performing AI processing at the network edge, solving common AI processing challenges,” said Mark Reiten, vice president of SST’s licensing business unit. “We are pleased to collaborate with IHWK, an early adopter customer, and its university partners to demonstrate our technological capabilities in neural processing and contribute to the advancement of Korea’s AI industry.”
“Korea is one of the regions that is playing an important role in the development of AI semiconductors,” said Sanghoon Yoon, CEO of IHWK. “Experts in non-volatile and emerging memory fields selected Microchip’s memBrain product, based on its industry-proven NVM technology, as the optimal choice for developing in-memory computing systems.”
Because neural models are stored permanently within the memBrain solution's processing elements, the platform supports instant-on operation for real-time neural network processing. Leveraging the non-volatility of SuperFlash memory's floating-gate cells, IHWK is setting new benchmarks for machine learning inference on low-power edge computing devices running advanced machine learning models.