Section 01
SignMotion-LLM Project Guide: Exploration of Generating Sign Language Movements with Large Language Models
The SignMotion-LLM project tackles the problem of automatically converting text or speech into natural, fluent sign language motion. It uses VQ-VAE to tokenize sign language motion data into discrete codes, laying the groundwork for training large language models that can generate sign language. The project builds on the SMPL-X parametric human body model and the SignAvatars dataset, exploring the application of multimodal AI to accessibility technology and representing a cutting-edge direction in sign language synthesis.
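To make the tokenization idea concrete, the sketch below shows the core lookup step of a VQ-VAE quantizer: each pose frame produced by an encoder is mapped to the index of its nearest codebook vector, turning continuous motion into discrete tokens an LLM can consume. This is an illustrative toy (NumPy instead of a deep-learning framework, made-up frame and codebook sizes), not the project's actual implementation.

```python
import numpy as np

def quantize_motion(frames, codebook):
    """Map each pose frame to the index of its nearest codebook entry.

    frames:   (T, D) array of pose features (toy stand-in for encoder output)
    codebook: (K, D) array of learned code vectors
    Returns (tokens, recon): discrete token ids and their quantized vectors.
    """
    # Squared Euclidean distance between every frame and every code, via broadcasting.
    d = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
    tokens = d.argmin(axis=1)   # (T,) discrete motion tokens
    recon = codebook[tokens]    # (T, D) quantized approximation of the frames
    return tokens, recon

# Toy demo: K=8 codes, D=4 features; frames are noisy copies of known codes.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))
frames = codebook[[2, 5, 5, 0]] + 0.01 * rng.normal(size=(4, 4))
tokens, recon = quantize_motion(frames, codebook)
print(tokens.tolist())  # recovers the code ids the frames were built from
```

In a full VQ-VAE the codebook is learned jointly with the encoder and decoder (with a straight-through gradient through the argmin), but the lookup itself is exactly this nearest-neighbor step; the resulting token sequence is what an LLM would be trained to predict.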