Limitations on the Expansion of Super-Large AI Models, Lightweighting and Modularization Are Necessary - e4ds news

[IT Insight] “Limitations on the Expansion of Super-Large AI Models, Lightweighting and Modularization Are Necessary”

Article published: Aug. 23, 2023, 16:46

2023 Artificial Intelligence Graduate School Symposium Held
AI model commercialization in mind… Emphasis on cost-effective research

Experts have argued that research on modularization and lightweighting of not only ultra-large AI models but also medium-sized AI models is necessary.

The Ministry of Science and ICT held the '2023 Artificial Intelligence Graduate School Symposium' at the COEX Grand Ballroom in Samseong-dong over two days, from the 17th to the 18th. The event featured lectures by experts from industry, academia, and research institutes, and explored the direction of AI development through exchanges on AI research among doctoral students.

▲2023 Artificial Intelligence Graduate School Symposium Panel Discussion

At the opening ceremony, an award ceremony was held for the winning teams of the 2023 Artificial Intelligence Innovation Challenge. The competition, sponsored by seven companies including KT and Deepnoid, evaluated problem-solving skills on industrial AI issues using real field data. Out of a total of 373 teams and 944 participants, Yonsei University's DeepText team won first place in the graduate-school division, receiving the Minister of Science and ICT Award and prize money of 10 million won.

A panel discussion was held in the afternoon, where domestic AI experts offered research strategies and advice for the era of hyper-large-scale AI. Panelists included Professor Seo Min-jun (KAIST), Professor Choi Seong-jun (Korea University), Director Shin Im (KETI Artificial Intelligence Research Center), and Director Yoon Sang-doo (Naver Cloud AI Lab).

The main issues discussed that day were the size and research direction of super-large AI models. Recently, super-large AI has been developing in the direction of reducing model size while maintaining comparable performance.


KAIST Professor Seo Min-jun

Professor Seo Min-jun of KAIST posed the question, "Is increasing the size of a super-large model the answer?" According to Professor Seo, an Nvidia H100 consumes roughly 1 kW of power, while a typical nuclear power plant produces approximately 1 GW. This means a single power plant can run only about one million H100s; in other words, there are physical and environmental limits to the exponential expansion of AI models.
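The figures above are the speaker's round numbers, not official specifications, but the back-of-envelope arithmetic behind the "one million" claim can be checked directly:

```python
# Back-of-envelope check of the power figures quoted in the talk.
# These are the speaker's round numbers, not official hardware specs.
H100_POWER_W = 1_000            # ~1 kW per Nvidia H100, as quoted
PLANT_OUTPUT_W = 1_000_000_000  # ~1 GW for a typical nuclear plant

# Maximum number of H100s one plant's entire output could power
max_gpus = PLANT_OUTPUT_W // H100_POWER_W
print(f"{max_gpus:,}")  # 1,000,000 -> about one million GPUs per plant
```

Even before accounting for cooling and networking overhead, the ratio caps a single plant at roughly a million such GPUs, which is the basis of the argument for environmental limits.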

Professor Seo said, "Like the differences between English and Korean, research on the transformation or connection between modalities within a model is becoming more important," adding, "Modularization is important because there is a limit to expanding model size."

Director Yoon Sang-doo of Naver Cloud AI Lab also said, "Training a large model does not mean that training a small model is meaningless," adding, "What matters is not the size of the model, but exploring what can be discovered within it."

Director Shin also said, "Although APIs like ChatGPT show good performance, small and medium-sized businesses have difficulty operating them," and argued, "Researchers should keep commercialization in mind and devise ways to study mid-sized models and lightweighting methods at a cost reasonable for each service."

The discussion that day also covered Korea's response measures as global AI market competition intensifies. There was broad agreement that it is time to discuss the division of roles among academia, industry, and research institutes, given the view that research and business priorities should be coordinated at the national level.

Director Yoon said, "Although each company's interests differ, it is difficult to accomplish this with internal company resources alone, so let's find a way to share incentives appropriately." Opportunities for industry-academia cooperation are also expected around differences such as LLM safety issues and operational concerns.

The researchers suggested reducing trial-and-error costs by formulating sound hypotheses, so that research can be conducted as efficiently as possible on a limited budget.

The experts also offered generous encouragement to the students attending the event as they pursue their studies in the era of super-large AI. They emphasized the ability not only to use existing technologies but also to understand them deeply and transform them, and gave advice on securing one's own competitiveness in the AI era, such as strategies for building a distinctive skill set.


▲2023 Artificial Intelligence Graduate School Symposium Booth Exhibition

Meanwhile, at the event, the Artificial Intelligence Graduate Schools, the Artificial Intelligence Convergence Innovation Graduate Schools, and companies in the AI industry jointly operated exhibition booths and an AI experience zone.

Minister of Science and ICT Lee Jong-ho urged, "Through this symposium, academia and industry should share and disseminate their achievements to strengthen solidarity among AI graduate schools and cooperation between universities and businesses," adding, "In the era of hyper-large-scale AI, we will continue working so that the AI Graduate Schools and the AI Convergence Innovation Graduate Schools can take the lead in fostering world-class AI researchers by combining the capabilities of industry and academia."