
Ministry of Science and ICT announces strategy to realize trustworthy human-centered AI

Published 2021.05.14 09:05

Ministry of Science and ICT aims for AI that everyone can trust and enjoy
3 major strategies and 10 practical tasks announced for realizing trustworthy AI
Step-by-step implementation planned through 2025



While AI is rapidly spreading across industry and society and driving innovation, unexpected social issues and concerns have also emerged, such as the controversy over the AI chatbot Iruda (Jan. 2021), the deepfake video of former US President Obama (Jul. 2018), and MIT's development of a "psychopath" AI (Jun. 2018).

The Ministry of Science and ICT announced its 'Trustworthy AI Realization Strategy' for human-centered AI at the 22nd plenary meeting of the 4th Industrial Revolution Committee on the 13th. The strategy will be implemented in stages through 2025 via 3 major strategies, covering technology, systems, and ethics, and 10 implementation tasks.

Minister of Science and ICT Choi Ki-young said, "The Iruda incident has given our society a lot of homework on how to address the reliability of AI." He continued, "We will clarify the standards for securing trust in AI so that companies and researchers do not face confusion while developing AI products and services, and so that the public does not suffer harm as a result. We will also prepare support measures so that small and medium-sized companies do not struggle to comply with the reliability standards."

[1] Creating a trustworthy AI implementation environment

① Establishing a trust-assurance system for each stage of AI product and service implementation - Based on the "development-verification-certification" stages through which AI products and services are implemented in the private sector, the ministry will present and support trust-assurance standards and methodologies that companies, developers, and third parties can consult. Accordingly, development guidebooks, verification systems, and private autonomous certification will be promoted.

② Supporting AI reliability in the private sector - The ministry will operate a platform that provides integrated support for "data acquisition-algorithm learning-verification" in AI implementation, so that even startups with limited technical and financial resources can systematically secure reliability. To this end, it plans to add functions such as level-by-level analysis of trust attributes under the verification system and real-environment testing to the "AI Hub" platform, which currently provides training data and computing resources.
▲ Planned configuration of the one-stop support platform for "data acquisition-algorithm learning-verification" [Image = Ministry of Science and ICT]

③ Developing core AI-reliability technologies - The ministry will add functions that allow already deployed AI systems to explain their judgment criteria, and will promote the development of technologies for AI explainability, fairness, and robustness so that AI can diagnose and remove legal, institutional, and ethical biases on its own. Three related projects, scheduled to run from next year through 2026, have passed the preliminary feasibility study.
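To make the "fairness" property above concrete, one widely used verification metric (not named in the announcement, and chosen here purely as an illustration) is the demographic parity difference: the gap in positive-prediction rates between groups. A minimal sketch, with hypothetical example data:

```python
def demographic_parity_difference(y_pred, groups):
    """Gap between the highest and lowest positive-prediction
    rates across groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions for two demographic groups
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A positive rate = 0.75, group B = 0.25, so the gap is 0.5
print(demographic_parity_difference(y_pred, groups))  # → 0.5
```

A verification platform such as the one described would typically compute metrics of this kind over held-out data and flag models whose gap exceeds an agreed threshold.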

[2] Establishing a foundation for safe use of AI

④ Improving the reliability of AI training data - Together with the private sector, the government will establish standard criteria, such as verification indicators for ensuring reliability, that the public and private sectors must commonly follow when producing AI training data. In addition, the 'Data Dam' project promoted under the Digital New Deal plans to improve quality by preemptively applying reliability considerations, such as legal and regulatory compliance, throughout the data-construction process.

⑤ Securing trust in high-risk AI - The ministry plans to define a category of high-risk AI that could threaten the safety or basic rights of citizens and to require that users be 'notified' that such AI is in use before a service is provided. Beyond notification, it will comprehensively review in the mid to long term the institutionalization of a right to 'refuse' AI-based services, an 'explanation of results' covering the basis of AI judgments, and a right to 'raise objections', taking into account global legislative trends, industrial impact, social consensus and acceptance, and technological feasibility.

⑥ Implementing AI impact assessment - To comprehensively and systematically analyze and respond to AI's impact on citizens' lives, the ministry plans to introduce the social impact assessment stipulated in Article 56 of the Framework Act on Intelligent Informatization. The impact of AI will be analyzed against criteria such as safety and transparency, and the results will inform future AI-related policies and technical and management measures.

⑦ Improving systems to increase AI trust - Among the tasks identified last year through the 'AI Law, System, and Regulation Improvement Roadmap', the ministry will improve related systems, including △creating an industry-led, autonomous environment for algorithm management and supervision to secure AI trust and protect users' lives, △ensuring fairness and transparency of platform algorithms, △establishing algorithm-disclosure standards that protect trade secrets, and △defining criteria for high-risk technologies.

[3] Spreading healthy AI awareness throughout society

⑧ Strengthening AI ethics education - A general curriculum for AI ethics education will be prepared, covering social and humanistic perspectives and the practical application of ethical standards. Based on it, customized ethics education will be developed and delivered for researchers, developers, and the general public.

⑨ Preparing and distributing checklists by audience - As concrete guidance on the AI ethics standards, checklists reflecting trends in technological development will be developed and distributed so that researchers, developers, users, and others can self-check whether they are complying with the ethics standards in their work and daily lives.

⑩ Operating an ethics policy platform - A venue will be provided for various members of society, including academia, business, and the public, to discuss AI ethics, gather opinions, and debate future directions.
Reporter Lee Su-min