Edge AI: Executing AI Computation at the Device Level… Low Cost, Low Latency
Achieving efficiency across industries including AI speakers, autonomous vehicles, and industrial IoT
ST's 'NanoEdge AI Studio' Provides Easy ML Library Development
With the advent of the fourth industrial revolution, artificial intelligence (AI) that learns and reasons like humans is attracting attention across industries. AI technology is being widely applied in major sectors such as automotive, finance, healthcare, and education. Recently, governments and companies have been focusing on developing AI semiconductors to enhance competitiveness.
Until now, AI has typically worked by sending information collected from devices to a central cloud server for analysis and then returning the results to the devices. However, even as AI attracts attention and grows rapidly, adopting and operating it is by no means easy.
High costs, such as hiring specialists and building the high-performance hardware systems required for AI processing, are a major hurdle for most companies. Sensitive data makes it difficult to outsource the work, and without in-house expertise it is also hard to secure quality data.
■ Edge AI Concept This is where the concept of 'Edge AI' comes in. 'Edge AI' refers to performing AI computation at the device level: edge computing executes AI algorithms directly on the hardware device, using the data the device itself generates.
Because the device can collect and process information itself, without a round trip to a remote cloud server, edge AI offers cost reduction, fast processing with low latency, real-time information extraction, and enhanced security.
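The local-first pattern described above can be sketched in a few lines. This is a generic illustration only; the function names and the simple threshold rule are hypothetical, not from any vendor SDK. The idea is that the device runs a cheap check on every reading and forwards only the readings that need further attention, instead of streaming everything to the cloud.

```python
# Illustrative sketch of edge AI's "process locally, upload selectively" idea.
# The names and threshold rule here are hypothetical stand-ins for a real
# on-device model.

def local_inference(reading: float, threshold: float = 10.0) -> bool:
    """Tiny stand-in for an on-device model: flag readings above a threshold."""
    return reading > threshold

def process_readings(readings):
    """Return only the readings that would actually be sent to the cloud."""
    return [r for r in readings if local_inference(r)]

# Five sensor readings arrive; only the two flagged ones leave the device.
uploads = process_readings([1.2, 3.4, 15.0, 2.2, 42.0])
```

On real hardware the check would be a compiled inference routine, but the traffic-reduction logic is the same: most data never leaves the device, which is where the latency, cost, and privacy benefits come from.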
AI is gradually expanding from the current cloud-based approach to edge devices such as automobiles and home appliances. Customized services for smart devices are expected to be further strengthened as devices directly analyze data generated in daily life, such as users' eating and exercise habits.
For example, smart cameras with edge AI process captured images locally through deep learning, reducing the load on cloud servers. Smart speakers with edge AI, such as Alexa devices, can handle some tasks locally on the speaker itself using machine learning and deep learning models.
In addition, autonomous vehicles are effectively a collection of edge AI systems that process data immediately within the vehicle, enabling real-time operation. Smart factories achieve automation and efficiency by applying edge AI to process data at high speed, including robot control and the collection and analysis of equipment data.
■ Major Edge AI Semiconductor Companies AI has so far developed as cloud-based AI centered on software vendors, but recently, on-device AI centered on hardware vendors has been gaining attention.
Nvidia is leading the way in expanding edge AI. In addition to selling its own edge product line, Nvidia has sought to bring its CUDA GPU technology to the mobile chip architectures designed by Arm, an effort tied to its proposed (and ultimately abandoned) acquisition of the company.
The NVIDIA® Jetson Nano™ developer kit delivers compute performance to run modern AI workloads in an ultra-small, low-power, low-cost form factor.
Samsung Electronics has said it is focusing on research into deep learning algorithms and 'on-device AI'. Earlier this year, it announced that it had significantly enhanced AI performance in the Exynos 2200: the neural processing unit (NPU) more than doubled its computational performance compared to its predecessor, strengthening on-device AI that runs on smart devices without going through a cloud server.
Intel has added vision processing units (VPUs) and accelerators to its edge AI hardware lineup alongside its CPUs. It has also introduced software tools for each layer; in particular, the OpenVINO toolkit supports its flagship Movidius hardware.
Texas Instruments (TI) introduced the 'Sitara AM62 processor', which helps extend edge AI processing to more applications. The product makes edge AI more accessible while reducing power consumption, making it suitable for dual-screen displays and small HMI applications.
TI says the product can cut power consumption by up to 50 percent compared with competing parts in industrial applications, and that it is particularly well suited to designing small, portable devices.
NXP offers a portfolio of ML-capable MCX microcontrollers (MCUs) that let users accelerate inference at the edge, delivering machine learning performance up to 30x faster than a CPU core alone.
NXP said the MCX portfolio maximizes software reuse through a unified software suite and provides a variety of options to help developers focus on the device that best suits their application needs.
Infineon has expanded its AURIX MCU family to support e-mobility, ADAS, automotive E/E architecture and popular AI applications. The TC4x family features a new PPU (Parallel Processing Unit), a SIMD vector DSP (Digital Signal Processor) that meets the needs of various AI topologies.
Because the scalable TC4x family shares a common software architecture, it reduces platform software costs. Infineon is intensively supporting the AURIX TC4x ecosystem to ensure short time-to-market and ease of use.
STMicroelectronics’ NanoEdge AI Studio is a new machine learning (ML) technology that allows developers to create the most appropriate ML library for their project with minimal data in a user-friendly environment. Applications include connected devices, home appliances, and industrial automation.
NanoEdge AI Studio helps improve industrial processes, optimize maintenance costs, reduce latency, and strengthen information security. It easily creates machine learning libraries for anomaly detection, outlier detection, classification, and regression that can run directly on any STM32 MCU, and users can complete every step without specialized knowledge.
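The anomaly-detection workflow such tools automate follows a learn-then-detect pattern: first fit a baseline from known-normal signals, then flag signals that deviate from it. The sketch below illustrates that pattern only; the class name and the mean/standard-deviation rule are hypothetical stand-ins, not the algorithm NanoEdge AI Studio actually generates (its output is an optimized C library for STM32).

```python
import statistics

class AnomalyDetector:
    """Hypothetical learn-then-detect anomaly detector, for illustration only.
    A tool-generated library would use its own model, not this simple rule."""

    def __init__(self, k: float = 3.0):
        self.k = k          # how many standard deviations count as anomalous
        self.mean = 0.0
        self.std = 1.0

    def learn(self, normal_samples):
        """Learning phase: fit a baseline from known-normal signals."""
        self.mean = statistics.fmean(normal_samples)
        self.std = statistics.pstdev(normal_samples) or 1.0

    def detect(self, sample: float) -> bool:
        """Detection phase: flag samples far from the learned baseline."""
        return abs(sample - self.mean) > self.k * self.std

# Learn from normal vibration levels, then classify new readings.
det = AnomalyDetector()
det.learn([10.0, 10.2, 9.8, 10.1, 9.9])
```

The key property, and the reason such workflows suit condition monitoring, is that the learning phase can run on the device itself against live sensor data rather than requiring a labeled dataset prepared in advance.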
Recently, ST began supporting smart sensors with embedded Intelligent Sensor Processing Units (ISPUs). This feature extends the tool’s capabilities to implement on-device AI models that detect anomalies inside intelligent sensors.
NanoEdge AI Studio users can also distribute the inference workload across multiple devices in the system to achieve low power consumption. Always-on sensors with built-in ISPUs perform event detection at ultra-low power, waking up the MCU only when the sensor detects an anomaly.
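The wake-on-event pattern described above can be sketched as follows. This is a conceptual illustration with hypothetical names (real ISPU firmware is written in C and runs inside the sensor package): an always-on, ultra-cheap check stands in for the sensor-side model, and a heavier routine stands in for the work the MCU performs only after being woken.

```python
# Conceptual sketch of the sensor-wakes-MCU pattern. All names and thresholds
# are hypothetical; this only illustrates how the workload is split.

def sensor_event_check(sample: float, threshold: float) -> bool:
    """Ultra-low-power, always-on check (the part an ISPU would run)."""
    return abs(sample) > threshold

def mcu_classify(sample: float) -> str:
    """Heavier analysis that runs only after the MCU is woken up."""
    return "shock" if abs(sample) > 5.0 else "vibration"

def run_pipeline(samples, threshold: float = 2.0):
    """The MCU 'wakes' only for samples the sensor flags; the rest are ignored."""
    events = []
    for s in samples:
        if sensor_event_check(s, threshold):   # interrupt raised by the sensor
            events.append(mcu_classify(s))     # MCU wakes and processes
    return events
```

Splitting the workload this way is what keeps average power low: the expensive path runs only on the small fraction of samples that pass the always-on check.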
Edge AI is significant as a means of achieving high efficiency across industries. It is expected to expand further, raising overall standards and providing a wider variety of functions to users and businesses.