Seeking ways to protect copyright of AI content creators
Generative AI Copyright Guidelines Published in December Last Year
As content created with generative AI proliferates, social problems such as copyright infringement by AI creations and the generation of illegal content are coming to the fore. Accordingly, preemptive and specific legislative responses are being called for to achieve two goals at once: advancing AI technology and protecting the rights of creators and consumers.
The 'National Assembly Public Hearing on Mandatory Made-by-AI Labeling', hosted by the National Assembly Culture, Sports and Tourism Committee and organized by the Korea Music Copyright Association, was held on the 30th at the National Assembly Members' Hall. At the hearing, Professor Lee Dae-hee of Korea University Law School gave a presentation on the need to revise Article 26 of the Content Industry Promotion Act.
Since the launch of ChatGPT, anyone has been able to easily create content using generative AI tools such as DALL·E and Midjourney. However, because generative AI produces its results by learning from large amounts of existing data, various problems have been raised, including infringement of creators' intellectual property rights.
■ Need for legislation on labeling AI products emerges

Controversy over creative works in artistic fields such as video production, art, webtoons, and music, long considered the exclusive domain of humans, was anticipated with the advent of generative AI. Since the emergence of ChatGPT, opinions have been divided: AI technology can serve as a useful auxiliary tool, but it can also be used to replace the role of the creator.
In many jurisdictions, including the US, the EU, and France, bills requiring the labeling of AI products have been proposed, and in Korea a partial amendment to the Content Industry Promotion Act has been introduced. Professor Lee Dae-hee said, "The amendment to the Content Industry Promotion Act is significant in that it imposes rules and responsibilities on the content ecosystem built around generative AI."
Last year, a boycott movement against AI webtoons took place on Naver Webtoon's "Challenge Comics" board. Kwon Hyuk-joo, president of the Korea Webtoon Writers Association, who attended the discussion that day, said, "If the 'TDM (text and data mining)' exemption provision is passed, AI will learn from data without the original creator's permission and use it for commercial purposes, leading to a loss of creative motivation and economic harm for creators."
In addition to copyright disputes over the training process and output of generative AI, the need for effective legislation has emerged in response to recent illegal AI content such as deepfakes, fake news, and AI cover songs.
For example, synthetic pornographic images of American pop star Taylor Swift were recently spread on the social media platform X. The images are believed to have been created using generative AI. As AI technology becomes more accessible, cases of pornographic abuse using 'deepfake' technology, which synthesizes a specific person's face onto another photo or video, are increasing.
Professor Lee Dae-hee said, "If content is not labeled as an AI product, users will mistake it for human creation when purchasing, consuming, and using it. This makes it difficult to distinguish humans from AI, and leads to the spread of fake news and to privacy infringement through the unauthorized use of other people's names or voices."
■ Ministry of Culture, Sports and Tourism publishes generative AI copyright guide

A key difficulty in labeling AI products is the ambiguity of the line between AI products and human creations. Attorney Seunghee Kang of Gangnam Law Firm argued, "In practice, the criteria for judging content as AI-made are ambiguous, so we must be careful." For example, she explained, it is unclear whether AI labeling should apply to tasks such as taking a picture with a smartphone that uses AI to set the focus in real time.
Attorney Kang said, "If we require even content in which AI played only a minor or non-essential role to be labeled as AI-based, it could actually cause contraction and confusion in the industry," and argued, "To ensure consistency in the AI regulatory system, labeling rules should be aligned with the AI Basic Act currently under discussion."
Legislating labeling for AI products requires consideration of the following: △the appropriate scope of the labeling obligation, △labeling of AI-based modification and editing, △labeling methods and content, △identification of participants across the AI life cycle, △differentiation of labeling methods by medium, △labeling technology, and △means of ensuring compliance with the labeling obligation.
At the end of December last year, the Ministry of Culture, Sports and Tourism and the Korea Copyright Commission announced guidelines called the ‘Generative AI Copyright Guidelines.’ This is a summary of what each stakeholder should know about copyright in the process of creating generative AI output.
Generative AI technology is divided into two stages: △collecting and processing necessary data to train AI, and △using the trained AI to create output.
Datasets of text, images, and other material used for AI training may give rise to copyright infringement, and infringement may also occur if the output produced by a trained AI model is similar to an existing work.
Under the current Copyright Act, there is no explicit limitation on copyright covering the use of copyrighted works for AI training. The fair use clause is therefore at issue; since the applicability of fair use to AI training remains unclear, the guidelines recommend providing appropriate compensation to copyright holders and securing legal rights of use when using copyrighted works as training data.
In addition, AI businesses are advised to build technical safeguards into the design of AI neural networks to avoid producing output similar to existing works, and to assign responsibilities clearly. The guidelines also note that care should be taken when fine-tuning models in the future.
Copyright holders can state in their terms and conditions that they do not want their content used for AI training, or they can apply the Robots Exclusion Standard (robots.txt). Media websites have recently been taking such measures.
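As an illustration of the robots.txt approach, a site can disallow crawlers that collect data for AI training by naming their user agents. The sketch below uses real, publicly documented crawler tokens (GPTBot for OpenAI, Google-Extended for Google's AI training, CCBot for Common Crawl); the exact list a site should block depends on which crawlers it wants to opt out of, and compliance is voluntary on the crawler's side.

```text
# robots.txt — opt out of AI-training crawlers while allowing normal indexing

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (e.g. regular search engine bots) remain allowed
User-agent: *
Allow: /
```

Note that robots.txt is a request, not an enforcement mechanism: well-behaved crawlers honor it, but it does not technically prevent access, which is why the guidelines also recommend stating restrictions in the site's terms and conditions.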