Detailed safety measures to be introduced and implemented flexibly in consideration of changes in AI technology
Emphasis on the role of AI companies and chief privacy officers (CPOs) in AI development
Government-level standards have been released to safely process ‘public data on the Internet’ used in the development of generative artificial intelligence (AI) models.
The Personal Information Protection Commission (Chairman Koh Hak-soo) has prepared and published the ‘Guide on Processing Publicly Disclosed Personal Information for Artificial Intelligence (AI) Development and Services’ so that the publicly available data essential for AI development can be processed lawfully and safely within the current personal information regulatory framework.
The guide first clarifies that publicly disclosed personal information can be used for artificial intelligence (AI) training and service development on the basis of the ‘legitimate interest’ provision of Article 15 of the Personal Information Protection Act.
For the ‘legitimate interest’ provision to apply, three requirements must be met: the legitimacy of the purpose of AI development, the necessity of processing the publicly disclosed personal information, and a concrete balancing of interests.
The Commission also used the guide to present specific technical and managerial safeguards, as well as measures to protect the rights of data subjects, that AI companies can consider when processing publicly disclosed personal information on the basis of ‘legitimate interest.’
However, in consideration of the rapid pace of change in AI technology, the detailed safety measures can be introduced and implemented flexibly, and AI companies are not required to implement all of them. Instead, each company can select and implement the ‘optimal combination of safety measures’ suited to its own circumstances, weighing the benefits of the various safeguards presented in the guide against side effects such as degraded AI performance or bias, as well as the level of technological maturity.
To help companies determine this ‘optimal combination,’ the Commission has also provided real examples of safety measures implemented by major large language model (LLM) operators, identified through the preliminary AI inspections conducted in March 2024.
In addition, when the Commission periodically detects domains (URLs) where significant personally identifiable information is exposed and notifies AI companies, it recommends that the companies exclude those domains (URLs) from training data collection.
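For illustration only (this sketch is not part of the guide), the following Python snippet shows one way an AI company might apply such a Commission-provided blocklist, dropping flagged domains and their subdomains from a crawl seed list before training data is collected; the domain names, file handling, and function names here are hypothetical.

```python
# Illustrative sketch: filter crawl seed URLs against a blocklist of domains
# flagged for exposed personal information. All names below are hypothetical.
from urllib.parse import urlparse


def is_blocked(url: str, blocked_domains: set[str]) -> bool:
    """Return True if the URL's host is a flagged domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in blocked_domains)


def filter_seed_urls(urls: list[str], blocked_domains: set[str]) -> list[str]:
    """Drop URLs on flagged domains before they enter the crawl queue."""
    return [u for u in urls if not is_blocked(u, blocked_domains)]


if __name__ == "__main__":
    # In practice, the blocklist would come from the Commission's periodic notices.
    blocked = {"exposed-board.example"}
    seeds = [
        "https://exposed-board.example/post/123",
        "https://news.example.org/article/456",
    ]
    print(filter_seed_urls(seeds, blocked))  # only the news URL remains
```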
Lastly, the guide emphasizes the role of AI companies and their chief privacy officers (CPOs) in processing training data for AI development.
It recommends that companies voluntarily organize and operate a ‘(tentatively named) AI privacy organization’ centered on the CPO, and that they assess whether the standards in the guide have been met and prepare and retain documentation supporting that assessment.
Companies are also expected to periodically monitor risk factors such as major technological changes, including improvements in AI performance, and concerns about personal information infringement, and to prepare plans for taking prompt legal measures in the event of a breach such as the leakage or exposure of personal information.
The guide will be updated continuously to reflect the future enactment and revision of personal information-related laws, developments in AI technology, and overseas regulatory trends.
Personal Information Protection Commission Chairman Koh Hak-soo said, “Although AI technology is advancing rapidly, whether the learning of publicly available data, which is key to AI development, is lawful and safe under the Personal Information Protection Act has remained a blank space,” and added, “Through this guide, we hope that companies will establish AI development and data processing practices that the public can trust, and that the best practices accumulated in the process will continue to be reflected in the guide.”