Section 01
Introduction: Yonsei University's Multimodal AI Digital Human Project Explores New Paradigms of Human-Computer Interaction
The Multimodal AI Digital Human Project at Yonsei University's Data Science Laboratory is dedicated to building an intelligent digital human system that can simultaneously understand and generate text, speech, and visual content, exploring new paradigms for next-generation human-computer interaction. The project focuses on fourth-generation multimodal fusion digital human technology, seeking to move beyond the limitations of text-only interaction toward more natural, human-like communication between people and machines.