
[Innovation Focus] “MathWorks provides AI model compression… ST, TI, and NXP boards can utilize ultra-small neural networks”

Published 2025.04.08 16:58


▲Donghoon Yeo, an engineer at MathWorks Korea, giving a presentation at MATLAB Expo 2025 Korea

Verification and network-compression deployment strategy for embedded AI announced
Improved collaboration efficiency between algorithm developers' AI modeling and embedded development
AI modeling challenges compounded by cost, performance, and memory optimization

As embedding AI models in small devices becomes more prevalent, communication and collaboration between algorithm developers and embedded software developers grows more complex, and tedious verification and testing cycles are repeated. MathWorks has presented solutions and demos that simplify this development process and promote the digitalization of engineering systems.

On April 8, MathWorks held MATLAB EXPO 2025 KOREA at COEX in Samseong-dong, Seoul. More than 1,500 domestic and international technology experts and MATLAB and Simulink customers attended the expo to see the latest engineering technologies and trends.

On this day, main track 1 featured sessions on the topic of ‘Algorithm Development and AI.’ Donghoon Yeo, an engineer at MathWorks Korea, presented ‘Deployment Strategy through Verification and Network Compression for Embedded AI,’ explaining a strategy to improve collaboration efficiency between algorithm modeling and embedded software development for small devices.

Drawing on his experience developing neural networks for small devices, Yeo pointed out that manually running tests to meet system requirements is extremely burdensome.

“When creating a model, it is cumbersome to build and combine thousands of candidate models and test every combination,” he said, emphasizing that “systematic management of system requirements is necessary.” He also shared insights on using MATLAB to resolve the problems that arise as the embedded process grows more complex and feedback from the embedded development team is passed back to the algorithm development team.

Engineering systems are huge, complex structures that require a platform for integrated testing, and AI in particular is emerging as a key to solving domain problems, such as virtual sensors.

MATLAB solutions support systems engineering by digitizing everything from neural network design to simulation, code generation, and testing. Embedded devices in particular face development challenges, such as having to run AI models on the cheapest possible board, with limited compute performance and memory capacity.

To address this, MathWorks provides AI model compression, supporting solutions that shrink neural networks through quantization, projection, and pruning to reduce memory usage and increase inference performance. These can be used to deploy ultra-small neural networks on MCUs, with support for boards from ST, TI, NXP, and others.
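Two of the techniques mentioned, quantization and pruning, can be illustrated with a small conceptual sketch. This is not MathWorks code and uses Python with NumPy rather than MATLAB; it only shows why these steps shrink a model's memory footprint, using a toy float32 weight matrix.

```python
# Conceptual sketch (not MathWorks code) of two compression techniques
# mentioned in the talk: int8 quantization and magnitude pruning.
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization: float32 weights -> int8 + scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def prune_by_magnitude(w, sparsity=0.5):
    """Unstructured pruning: zero out the smallest-magnitude weights."""
    k = int(w.size * sparsity)
    threshold = np.sort(np.abs(w), axis=None)[k]
    return np.where(np.abs(w) < threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)  # toy layer weights

q, scale = quantize_int8(w)
w_pruned = prune_by_magnitude(w, sparsity=0.5)

print(w.nbytes)  # 16384 bytes as float32
print(q.nbytes)  # 4096 bytes as int8 -- a 4x reduction
print(np.mean(w_pruned == 0.0))  # ~0.5 of the weights removed
```

Quantization cuts storage by a fixed factor (float32 to int8 is 4x) at the cost of bounded rounding error, while pruned weights can be skipped at inference or stored sparsely; projection (low-rank factorization of weight matrices) is a third lever the sketch omits.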


▲Processor types that support automatic code generation


Yeo emphasized, “TensorFlow Lite is limited in that it cannot be installed on devices that do not support the library,” adding, “The biggest feature of MATLAB is that it can generate code even when the library cannot be used, such as on NXP or STM MCUs, and it provides functions such as various debugging systems for PIL testing and simulation.”