▲STM Edge AI Processing Webinar (Image: ST, e4ds EEWebinar)
Developer-Friendly STM Edge AI Processing: STM32 Cube.AI and NanoEdge AI Studio
“ST MCU-based AI solution and ecosystem support”
As artificial intelligence technology has become more advanced and widespread, the demand for on-device AI processing that does not use cloud resources is rapidly increasing.
STMicroelectronics recently presented the STM Edge AI Processing webinar through the e4ds EEWebinar platform.
In the webinar, Hyunsoo Moon, Manager at STMicroelectronics, introduced ST’s solutions and ecosystem for edge AI processing on STM32 MCUs.
MCUs are often deployed on what are called edge platforms, at the end points of individual devices and systems. There they acquire sensor data and transmit it to a host, for example, or control LEDs and motors.
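For context, a typical endpoint firmware loop of this kind might look like the minimal sketch below. It assumes an STM32 HAL project in which the ADC handle hadc1 and UART handle huart2 have already been configured (for example by STM32CubeMX-generated code); the handle names, the PA5 LED pin, and the threshold are illustrative assumptions, not part of the webinar content.

```c
#include "stm32f4xx_hal.h"   /* assumption: an STM32F4 target; other families use their own HAL header */
#include <stdint.h>
#include <stdio.h>

extern ADC_HandleTypeDef  hadc1;   /* assumed: ADC handle initialized by CubeMX-generated code */
extern UART_HandleTypeDef huart2;  /* assumed: UART handle connected to the host */

void edge_endpoint_loop(void)
{
    char msg[32];

    for (;;) {
        /* Acquire one sensor sample from the ADC. */
        HAL_ADC_Start(&hadc1);
        HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY);
        uint32_t sample = HAL_ADC_GetValue(&hadc1);

        /* Transmit the reading to the host over UART. */
        int len = snprintf(msg, sizeof(msg), "sensor=%lu\r\n", (unsigned long)sample);
        HAL_UART_Transmit(&huart2, (uint8_t *)msg, (uint16_t)len, 100);

        /* Simple local control: drive an indicator LED from a threshold (PA5 is an assumption). */
        HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5,
                          sample > 2048 ? GPIO_PIN_SET : GPIO_PIN_RESET);

        HAL_Delay(100);
    }
}
```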
“Embedded applications will collect more and more data, which will increase the demand for data-centric analysis and data-driven processing,” Moon said. “With the proliferation of IoT devices, more data will be collected, and processing and analyzing it in real time with edge AI enables a wide range of applications such as smart cities, smart homes, and industrial automation,” he explained.
Accordingly, ST continues to release and support developer-friendly solutions that enable edge AI processing on the STM32 MCU platform. These include STM32 Cube.AI and NanoEdge AI Studio.
▲Edge AI tools available in the STM32 family and applicable applications (Image: ST, e4ds EEWebinar)
ST’s deep-learning-based AI development flow is divided into △data preparation △data science △porting the deep learning model to the MCU. Through the two tool solutions mentioned above, ST converts deep learning models into C code-format models.
STM32 Cube.AI converts AI models created with general-purpose machine learning tools such as TensorFlow Lite, Keras, and ONNX into C code models.
This allows △anomaly detection △sensing △audio △vision applications, among others, to be implemented on STM32 MCUs. In addition, the X-LINUX-AI software package provides Linux-based AI examples for the STM32 MPU product family.
It also provides AI model optimization functions. “STM32 Cube.AI provides various examples on GitHub under the name Model Zoo,” Moon said. “The Model Zoo includes vision, audio, and sensing examples, and you can use these models as they are, or use the training scripts with your own training data to create new models.”
Developers can use TensorFlow Lite and Keras models directly with STM32 Cube.AI, but models from frameworks such as PyTorch, MATLAB, and scikit-learn must first be converted to the ONNX format.
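For illustration, the C model that STM32 Cube.AI generates is typically driven from application code along the following lines. This is a minimal sketch assuming a model exported under the default name “network”; the exact identifiers (the ai_network_* functions and AI_NETWORK_* macros) are defined in the network.h and network_data.h files that the tool generates and vary with the model name and tool version, so treat them as assumptions to be checked against the generated headers.

```c
#include "network.h"        /* generated by STM32 Cube.AI for a model named "network" (assumed) */
#include "network_data.h"   /* generated weight and activation descriptors */

static ai_handle net = AI_HANDLE_NULL;
static ai_u8    activations[AI_NETWORK_DATA_ACTIVATIONS_SIZE];  /* scratch RAM used during inference */
static ai_float in_data[AI_NETWORK_IN_1_SIZE];                   /* application input buffer */
static ai_float out_data[AI_NETWORK_OUT_1_SIZE];                 /* application output buffer */

int run_inference(void)
{
    ai_buffer inputs[]  = AI_NETWORK_IN;   /* I/O descriptors emitted by the tool */
    ai_buffer outputs[] = AI_NETWORK_OUT;

    /* Instantiate the network, then bind the weights and the activation buffer. */
    ai_network_create(&net, AI_NETWORK_DATA_CONFIG);
    const ai_network_params params = AI_NETWORK_PARAMS_INIT(
        AI_NETWORK_DATA_WEIGHTS(ai_network_data_weights_get()),
        AI_NETWORK_DATA_ACTIVATIONS(activations));
    ai_network_init(net, &params);

    /* Point the descriptors at the application buffers and run one inference. */
    inputs[0].data  = AI_HANDLE_PTR(in_data);
    outputs[0].data = AI_HANDLE_PTR(out_data);
    return ai_network_run(net, inputs, outputs);  /* returns the number of batches processed */
}
```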
STM32 Cube.AI supports △graph optimization △quantized model support △memory optimization.
In particular, the memory optimizer can split model data between internal and external memory to improve memory usage efficiency, so devices with small internal flash can place AI model parameters in external memory. Frequently used parameters can also be placed in SRAM or TCM memory close to the core to reduce latency.
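As a rough illustration of this kind of placement, with a GCC-based embedded toolchain parameter arrays can be pinned to specific memory regions through linker sections. The section names ".ext_flash" and ".dtcm" below are assumptions that have to match the project’s linker script (for example the one generated by STM32CubeIDE), and the array sizes are placeholders.

```c
#include <stdint.h>

/* Large, rarely changing parameters: keep them in external flash
 * (assumes the linker script maps a ".ext_flash" section to the external memory). */
__attribute__((section(".ext_flash")))
static const int8_t conv_weights[4608] = { 0 /* placeholder: real quantized values come from the converted model */ };

/* Frequently accessed parameters: place them in DTCM close to the core to cut latency
 * (assumes a ".dtcm" section in the linker script; data can be copied here at startup). */
__attribute__((section(".dtcm")))
static int8_t fc_weights_hot[640];
```

In STM32 Cube.AI itself this split is configured from the tool rather than written by hand; the attributes above only show the underlying mechanism.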
▲A scene from the NanoEdge AI demo video (Image: ST, e4ds EEWebinar)
The advantage of NanoEdge AI Studio is that it automatically performs data preprocessing, model creation, and conversion of the AI model to C code, so even developers without AI modeling experience can easily create AI models.
Developers can create an AI model simply by providing a training dataset to NanoEdge AI Studio, without having to understand general-purpose machine learning tools such as TensorFlow Lite, Keras, or ONNX, which is expected to shorten development time.
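The library that NanoEdge AI Studio produces is plain C and is typically driven in two phases, in-field learning followed by detection. The sketch below illustrates that flow for an anomaly-detection library; the neai_* function names, the DATA_INPUT_USER and AXIS_NUMBER sizes, the threshold, and the acquire_signal() helper are assumptions to be checked against the NanoEdgeAI.h header generated for a specific project.

```c
#include <stdint.h>
#include "NanoEdgeAI.h"   /* header emitted by NanoEdge AI Studio for this project (assumed name) */

#define LEARN_ITERATIONS 100   /* hypothetical number of in-field learning passes */

/* One frame of sensor data; the frame size is defined by the generated header. */
static float sample_buffer[DATA_INPUT_USER * AXIS_NUMBER];

extern void acquire_signal(float *buf);   /* hypothetical application-side sensor acquisition */

void anomaly_detection_task(void)
{
    uint8_t similarity = 0;   /* 0..100: how close the signal is to the learned "normal" state */

    neai_anomalydetection_init();

    /* Phase 1, in-field learning: show the library what normal behavior looks like. */
    for (int i = 0; i < LEARN_ITERATIONS; i++) {
        acquire_signal(sample_buffer);
        neai_anomalydetection_learn(sample_buffer);
    }

    /* Phase 2, detection: low similarity scores flag an anomaly. */
    for (;;) {
        acquire_signal(sample_buffer);
        neai_anomalydetection_detect(sample_buffer, &similarity);
        if (similarity < 80) {
            /* report the anomaly to the application */
        }
    }
}
```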
“ST is well aware of the demand for edge AI processing in the current embedded market,” Moon said. “We are putting a lot of effort into providing a wide range of AI solutions and ecosystem support based on ST MCUs.”