Zing Forum


SignMotion-LLM: Research Exploration of Generating Sign Language Movements Using Large Language Models

This article introduces the SignMotion-LLM project, which tokenizes sign language movement data using VQ-VAE technology, laying the foundation for training large language models capable of generating sign language.

Tags: sign language generation · large language models · VQ-VAE · movement tokenization · SMPL-X · SignAvatars · How2Sign · multimodal AI · accessibility technology
Published 2026-03-30 08:45 · Recent activity 2026-03-30 08:52 · Estimated read: 6 min

Section 01

SignMotion-LLM Project Guide: Exploration of Generating Sign Language Movements with Large Language Models

The SignMotion-LLM project aims to solve the problem of automatically converting text or speech into natural, fluent sign language movements. It tokenizes sign language movement data with a VQ-VAE, laying the foundation for training large language models that can generate sign language. Built on the SMPL-X human body model and the SignAvatars dataset, the project explores the application of multimodal AI to accessibility technology and represents a cutting-edge direction in sign language synthesis.


Section 02

Project Background and Core Objectives

Sign language is an important communication method for the hearing-impaired community, but traditional rule-based or template-based methods struggle to capture the complex grammar and subtle expressive differences of sign language. The core objective of the SignMotion-LLM project is to build a system that converts text input into continuous sign language movement sequences, using a phased strategy: first, tokenize sign language movement data with a VQ-VAE (converting continuous sequences into discrete tokens), then use those tokens to train or fine-tune large language models.
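The discretization step in this phased strategy can be pictured with a toy sketch (illustrative only, not the project's code): encoded movement frames are snapped to their nearest entry in a learned codebook, and the resulting integer token ids are what an LLM can be trained on alongside text tokens. The codebook size and latent dimension here are arbitrary assumptions.

```python
import numpy as np

# Toy sketch of the quantization idea behind the phased strategy:
# continuous per-frame latents -> discrete token ids via nearest-neighbour
# lookup in a codebook. Sizes (512 codes, 8-dim latents) are assumptions.
rng = np.random.default_rng(0)

codebook = rng.normal(size=(512, 8))        # 512 learned codes, 8-dim each
motion_latents = rng.normal(size=(120, 8))  # 120 frames of encoded movement

# Squared distance from every frame latent to every codebook entry,
# then pick the closest code: one discrete token per frame.
dists = ((motion_latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
tokens = dists.argmin(axis=1)               # shape (120,), ints in [0, 512)

print(tokens.shape, int(tokens.min()) >= 0, int(tokens.max()) < 512)
# -> (120,) True True
```

These token sequences can then be interleaved with text tokens, turning movement generation into a next-token prediction problem for the LLM.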


Section 03

Technical Route: Movement Tokenization and LLM Training Foundation

The project's technical architecture centers on movement tokenization, using SMPL-X-format human movement sequences from the How2Sign subset of the SignAvatars dataset. The VQ-VAE compresses movement sequences into discrete tokens via an encoder and reconstructs them back into movements via a decoder, enabling high-dimensional continuous data to be processed by LLMs. The experimental notebooks cover various directions: Notebook 02 compares the impact of optimizers (Muon vs. AdamW) and expansion designs; Notebook 07 explores training with normalized 6D SMPL-X features; Notebook 11 compares differences between model outputs and real data.
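The encoder–quantizer–decoder loop described above can be sketched as a minimal PyTorch module. This is a hedged illustration, not the project's actual architecture: the layer sizes, frame dimension, and codebook size are assumptions, and a real VQ-VAE would add codebook/commitment losses during training. The straight-through trick lets gradients flow through the non-differentiable nearest-code lookup.

```python
import torch
import torch.nn as nn

# Minimal VQ-VAE sketch (assumed sizes, not the project's architecture).
# Encoder maps a movement frame to a latent, the quantizer snaps it to the
# nearest codebook entry (the discrete token), the decoder reconstructs.
class TinyMotionVQVAE(nn.Module):
    def __init__(self, frame_dim=330, latent_dim=64, num_codes=1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.codebook = nn.Embedding(num_codes, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, frame_dim))

    def forward(self, x):
        z = self.encoder(x)                       # (T, latent_dim)
        d = torch.cdist(z, self.codebook.weight)  # distance to every code
        tokens = d.argmin(dim=1)                  # discrete token ids, (T,)
        zq = self.codebook(tokens)                # quantized latents
        zq = z + (zq - z).detach()                # straight-through estimator
        return self.decoder(zq), tokens

model = TinyMotionVQVAE()
frames = torch.randn(24, 330)   # 24 frames of flattened pose params (assumed dim)
recon, tokens = model(frames)
print(recon.shape, tokens.shape)  # torch.Size([24, 330]) torch.Size([24])
```

At inference time only the token sequence is handed to the LLM; the decoder turns generated tokens back into continuous movement frames.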


Section 04

Experimental Design and Evaluation Result Analysis

The project uses MPJPE (Mean Per Joint Position Error, which penalizes both pose and global position errors) and MPJPE-PA (Procrustes-Aligned MPJPE, which scores pose quality after rigid alignment) as evaluation metrics. Configurations show significant performance differences: the three-stream architecture with a 1024-entry codebook achieves an MPJPE of 35.839 mm and an MPJPE-PA of 58.151 mm at 24 FPS, while the joint-tokenization spatiotemporal VQ-VAE performs substantially better, reaching an MPJPE of 13.682 mm and an MPJPE-PA of 7.680 mm, indicating that architecture and training strategy play a decisive role in performance.
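The two metrics can be written down concretely. The sketch below follows the standard definitions (it is not the project's evaluation code): MPJPE averages per-joint Euclidean error directly, while PA-MPJPE first finds the best similarity transform (rotation, translation, scale) via orthogonal Procrustes analysis, so a pose that is merely displaced or rotated in space still scores well. The 55-joint toy data is an assumption, roughly matching SMPL-X's joint count.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: penalizes pose AND global placement.
    pred, gt: (J, 3) joint positions, here in millimetres."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: similarity-align pred to gt first, so only
    pose quality is scored, not global rotation/translation/scale."""
    p, g = pred - pred.mean(0), gt - gt.mean(0)   # center both point sets
    U, S, Vt = np.linalg.svd(p.T @ g)
    if np.linalg.det(U @ Vt) < 0:                 # guard against reflections
        Vt[-1] *= -1
        S[-1] *= -1
    R = U @ Vt                                    # optimal rotation
    scale = S.sum() / (p ** 2).sum()              # optimal uniform scale
    return mpjpe(scale * p @ R + gt.mean(0), gt)

# Toy check: a rigidly rotated and shifted pose has a large MPJPE but a
# near-zero PA-MPJPE, since alignment removes the rigid transform.
rng = np.random.default_rng(1)
gt = rng.normal(size=(55, 3)) * 100               # 55 joints, mm scale (toy data)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
pred = gt @ Rz.T + np.array([50.0, 0.0, 0.0])
print(mpjpe(pred, gt) > 10, pa_mpjpe(pred, gt) < 1e-6)  # -> True True
```

This also explains why the two numbers can diverge for a given model: a low MPJPE-PA with a higher MPJPE means the generated poses are good but globally misplaced, whereas both being high points to genuinely wrong poses.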


Section 05

Dataset and Toolchain Details

The project relies on SignAvatars (providing SMPL-X parameterized human body data) and How2Sign (providing RGB videos and text metadata). Experiments run in Jupyter Notebooks, with results stored in the artifacts directory (charts, CSVs, videos, etc.). The environment requires CUDA-enabled PyTorch plus libraries such as smplx and imageio; a dedicated startup command helps WSL users avoid CUDA initialization issues.


Section 06

Research Significance and Future Outlook

Social significance: if successful, the project can give the hearing-impaired community a more natural communication tool and help break down language barriers. Academic value: it explores a new paradigm for combining continuous movement data with LLMs, one that could also apply to fields such as dance synthesis. Current challenges: improving the naturalness and fluency of movements, handling the complexity of sign language grammar and semantics, and achieving real-time generation. Future direction: a unified multimodal large language model combining visual, text, and movement data.