LG Innotek's Gumi Plant 4 operates on Intel® Core™ and Xeon® processors and Arc™ GPUs.
Intel and LG Innotek are partnering to accelerate the rollout of AI-based smart factories.
Intel announced on the 26th that it has been collaborating with LG Innotek since 2024 to build an inspection system that combines Intel CPU/GPU hardware with its AI software toolkit, improving production efficiency and quality control at the same time.
The AI inspection system introduced at LG Innotek's Gumi Plant 4 operates based on Intel® Core™ and Xeon® processors and Arc™ GPUs.
Data generated during production is streamed in real time to a PC equipped with an Intel Core CPU, whose integrated GPU handles defect analysis.
High-load tasks such as high-resolution image processing and multi-algorithm execution are offloaded to a discrete Intel Arc GPU. The accumulated data is then transferred to an Intel Xeon-based server and used for pre-training.
In particular, Intel's OpenVINO™ software toolkit facilitates integration with existing deep learning environments, significantly reducing the burden of code rewriting when replacing GPUs.
OpenVINO is an open-source toolkit that makes it easy to deploy AI models across a variety of hardware environments, ensuring both development speed and flexibility.
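To illustrate the kind of portability described above, here is a minimal sketch of device-fallback logic written around OpenVINO-style device name strings ("CPU", "GPU" for the integrated GPU, "GPU.1" for a second, discrete card). The function name and the preference order are assumptions for illustration, not LG Innotek's actual code; in a real OpenVINO pipeline, the chosen string would simply be passed to the runtime when compiling the model, which is why swapping GPUs requires so little rewriting.

```python
def select_device(available_devices):
    """Pick an inference target from OpenVINO-style device names.

    Prefers a discrete GPU (e.g. an Arc card, exposed as "GPU.1" when an
    integrated GPU is also present), then the integrated GPU, then the CPU.
    The preference order is a hypothetical policy for illustration.
    """
    if "GPU.1" in available_devices:   # discrete GPU, if the system has one
        return "GPU.1"
    if "GPU" in available_devices:     # integrated GPU
        return "GPU"
    return "CPU"                       # always-available fallback


# Example: a PC with an Intel Core CPU, integrated graphics, and an Arc card
print(select_device(["CPU", "GPU", "GPU.1"]))  # → GPU.1
print(select_device(["CPU"]))                  # → CPU
```

Because the rest of the inference code only sees a device string, retargeting from one GPU to another is a one-line change rather than a rewrite.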
This collaboration is also attracting attention in terms of cost-effectiveness.
Intel Arc GPUs offer high cost efficiency compared with similarly performing products, which let LG Innotek significantly reduce the cost of building its inspection systems.
These savings are laying the foundation for future expansion into domestic and international production lines.
LG Innotek first applied Intel's AI vision inspection solution to its mobile camera module production line last year, and is expanding its application to the FC-BGA (flip-chip ball grid array) production process this year.
LG Innotek is also considering running pre-training workloads on Intel® Gaudi® AI accelerators in the future.
Additionally, LG Innotek engineers are increasingly using Intel Xeon CPUs to retrain deep learning models when processes change or raw materials are replaced.
Xeon CPUs accelerate deep learning training and inference through parallel compute and the built-in Intel® Advanced Matrix Extensions (Intel® AMX) accelerator, allowing fine-tuning tasks to run efficiently without a separate GPU.
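The CPU-only fine-tuning pattern described above can be sketched in PyTorch, which dispatches bfloat16 matrix operations to the AMX tile units on 4th-generation (and later) Xeon processors. The tiny model, data shapes, and hyperparameters below are placeholders for illustration; the same script runs, more slowly, on any x86 CPU.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a retrained defect classifier head.
model = nn.Linear(128, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dummy batch standing in for newly collected process data.
x = torch.randn(32, 128)
y = torch.randint(0, 10, (32,))

# Autocast to bfloat16 on CPU: on AMX-capable Xeons, PyTorch routes
# these matmuls to the AMX accelerator; no GPU is involved.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = nn.functional.cross_entropy(model(x), y)

loss.backward()
optimizer.step()
print(f"fine-tuning step done, loss = {loss.item():.4f}")
```

The point of the sketch is that the training loop is ordinary PyTorch: opting into AMX acceleration is just the bfloat16 autocast context, so no GPU-specific code needs to be maintained.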