UNIST Develops Text-Driven Relighting AI Model 'Text2Relight'
An artificial intelligence model has been developed that can apply lighting effects to photos and videos from text input alone. Without complex editing tools, the colors of photos and videos can now be easily adjusted to capture the mood of phrases such as 'hot chicken' and 'cold blue light'.
Professor Seungryeol Baek's research team at UNIST's Graduate School of Artificial Intelligence has developed an AI model called 'Text2Relight' that applies lighting effects based on creative text prompts.
The technology adjusts the color and lighting of photos and videos simply by entering text, making it easy to convey emotional impressions such as "hot chicken" or "cold blue light" without complex editing tools.
This research was conducted in collaboration with Adobe and was accepted at the AAAI Conference on Artificial Intelligence (AAAI 2025), one of the three major artificial intelligence conferences.
The research team will present the technology at the conference, which opens in Philadelphia, USA, on the 25th.
A strength of the Text2Relight model is that it can express a wide range of lighting characteristics, including color, brightness, and emotional mood, from creative natural-language text.
It can also adjust the color tones of the subject and background simultaneously without distorting the original image.
Existing text-based image-editing AI models were not specialized for lighting data, so they often distorted the original image or offered only limited lighting control; Text2Relight addresses these problems.
The research team developed the technology by building a large-scale synthetic dataset that lets the AI learn the correlation between creative text and lighting.
Lighting data was generated using ChatGPT and a text-based diffusion model, and the large-scale synthetic dataset was built to cover diverse lighting conditions by applying one-light-at-a-time (OLAT) and lighting-transfer techniques.
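To illustrate the OLAT idea mentioned above: an image under an arbitrary lighting condition can be approximated as a weighted sum of basis images, each lit by a single light source. The tiny arrays and weights below are illustrative stand-ins, not the paper's actual data or model; a minimal sketch only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Basis: three hypothetical OLAT "images" (each lit by one light), 4x4 grayscale.
olat_basis = rng.random((3, 4, 4))

def relight(weights, basis):
    """Combine OLAT basis images with per-light weights to simulate new lighting."""
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (w * basis).sum(axis=0)

# A "warm" text prompt might, hypothetically, map to strong weight on light 0,
# while a "cool" prompt emphasizes light 2.
warm = relight([0.8, 0.15, 0.05], olat_basis)
cool = relight([0.05, 0.15, 0.8], olat_basis)

print(warm.shape)  # → (4, 4)
```

Because relighting is linear in the basis, a one-hot weight vector simply recovers the corresponding single-light image, which is what makes OLAT data convenient for synthesizing many lighting conditions.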
The model was also trained on auxiliary tasks, such as shadow removal and light-position adjustment, to enhance visual consistency and lighting realism.
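Auxiliary tasks like those above are commonly folded into training as extra loss terms alongside the main objective. The sketch below shows that generic multi-task pattern under assumed loss values and an assumed weighting scheme; it is not the paper's actual objective.

```python
def total_loss(relight_loss, aux_losses, aux_weight=0.1):
    """Multi-task objective: main relighting loss plus down-weighted auxiliary terms
    (e.g. shadow removal, light-position adjustment). Weights are illustrative."""
    return relight_loss + aux_weight * sum(aux_losses)

# Hypothetical per-batch loss values for the main task and two auxiliary tasks.
loss = total_loss(relight_loss=1.0, aux_losses=[0.5, 0.3])
print(loss)
```

Down-weighting the auxiliary terms keeps them from dominating the main relighting objective while still encouraging consistent shadows and light placement.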
Professor Seungryeol Baek explained, "Text2Relight has great potential in the content field, from shortening photo- and video-editing time to increasing immersion in virtual and augmented reality."
This study was conducted with the support of Adobe and the Ministry of Science and ICT, with researcher Junwook Cha of UNIST's Graduate School of Artificial Intelligence participating as the first author.