Zing Forum

Clinical Application of Multimodal Deep Learning in Speech Assessment for Cleft Lip and Palate Patients

A multimodal deep learning study integrating audio, facial video, fluoroscopic imaging, and clinical variables enables automated detection of compensatory articulation and hypernasality in cleft lip and palate patients, providing an objective auxiliary tool for clinical speech assessment.

Tags: multimodal deep learning · cleft lip and palate · speech assessment · compensatory articulation · hypernasality · clinical AI · medical imaging · X-ray fluoroscopy
Published 2026-04-16 13:59 · Recent activity 2026-04-16 14:53 · Estimated read: 7 min
Section 01

Introduction

This study integrates audio, facial video, fluoroscopic imaging, and clinical variables to construct a multimodal deep learning model, enabling automated detection of compensatory articulation and hypernasality in cleft lip and palate patients. It aims to provide an objective auxiliary tool for clinical speech assessment and address the limitations of traditional subjective evaluation.

Section 02

Research Background and Clinical Needs

Cleft lip and palate is a common congenital craniofacial malformation, with a global incidence of approximately 1 in 700 newborns. After surgery, patients often face speech disorders such as compensatory articulation and hypernasality. Traditional assessment relies on subjective evaluation by speech pathologists, which suffers from inter-rater variability, poor repeatability, dependence on scarce specialist resources, and difficulty of quantification. By integrating multi-source information, multimodal deep learning can learn multi-dimensional patterns that are hard for human raters to capture, providing an objective and consistent assessment solution.

Section 03

Research Methods

Dataset Construction: Retrospective analysis of data from 34 Korean patients after cleft palate repair, comprising 1254 word-level samples of 30 target words, with simultaneously collected audio-visual recordings, fluoroscopic images, and clinical variables (gender, Veau classification, cleft width, age at initial repair).

Model Architecture: Modular design comprising an audio encoder (convolution + Transformer), a video encoder (3D CNN), a videofluoroscopy (VFS) image encoder (temporal model), and a tabular-data encoder (embedding + fully connected layers), with seven modality-combination strategies explored.
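The modular idea — one encoder per modality, with a fusion step that concatenates whichever modality features are available — can be sketched in plain Python. All names and the trivial "encoders" below are illustrative stand-ins, not the study's actual implementation, which presumably uses a deep learning framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Sequence

# Each encoder maps raw modality input to a fixed-length feature vector.
Encoder = Callable[[Sequence[float]], List[float]]

def mean_pool_encoder(dim: int) -> Encoder:
    """Stand-in for a learned encoder: pools the input to `dim` features."""
    def encode(x: Sequence[float]) -> List[float]:
        avg = sum(x) / len(x)
        return [avg] * dim
    return encode

@dataclass
class MultimodalModel:
    encoders: Dict[str, Encoder]  # e.g. audio, video, vfs, tabular

    def fuse(self, inputs: Dict[str, Sequence[float]]) -> List[float]:
        # Concatenate features only for the modalities supplied, so every
        # modality-combination strategy reuses the same per-modality encoders.
        feats: List[float] = []
        for name, enc in self.encoders.items():
            if name in inputs:
                feats.extend(enc(inputs[name]))
        return feats

model = MultimodalModel({
    "audio": mean_pool_encoder(4),
    "video": mean_pool_encoder(4),
    "vfs": mean_pool_encoder(2),
    "tabular": mean_pool_encoder(2),
})
fused = model.fuse({"audio": [0.1, 0.3], "video": [0.2, 0.4]})
print(len(fused))  # audio (4) + video (4) features = 8
```

The design choice this mirrors is that dropping a modality only shortens the fused vector; the remaining encoders are untouched.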

Training and Evaluation: Patient-level 5-fold cross-validation was used, with AUROC and AUPRC as evaluation metrics.
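"Patient-level" splitting means folds are formed over patient IDs rather than individual word samples, so recordings from one patient never appear in both the training and validation sides of the same fold. A minimal stdlib sketch (illustrative, not the study's code):

```python
import random
from collections import defaultdict

def patient_level_folds(samples, n_folds=5, seed=0):
    """samples: list of (patient_id, item) tuples.
    Returns n_folds (train, val) splits of whole samples,
    partitioned by patient to avoid leakage across splits."""
    by_patient = defaultdict(list)
    for smp in samples:
        by_patient[smp[0]].append(smp)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    splits = []
    for i in range(n_folds):
        # Every i-th patient (after shuffling) forms the validation fold.
        val_patients = set(patients[i::n_folds])
        train = [s for p in patients if p not in val_patients
                 for s in by_patient[p]]
        val = [s for p in val_patients for s in by_patient[p]]
        splits.append((train, val))
    return splits

# Toy data shaped like the study: 34 patients, a few word samples each.
data = [(pid, f"word_{i}") for pid in range(34) for i in range(3)]
splits = patient_level_folds(data)
print(len(splits))  # 5
```

A naive sample-level shuffle would let a patient's other recordings leak into training, inflating validation scores.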

Section 04

Research Results and Evidence

Compensatory Articulation Detection: The audio + video + VFS combination achieved the best AUROC (0.76), with the video modality contributing most; the model focused on lip movements and jaw positions.

Hypernasality Detection: The full-modality combination achieved the best AUROC (0.67), with VFS and clinical variables contributing significantly, reflecting the anatomical basis of hypernasality.

Modality Contribution: Grad-CAM and attention visualization show that compensatory articulation detection relies mainly on video, while hypernasality detection depends on VFS and clinical variables.
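For readers unfamiliar with the reported metric: AUROC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one (ties count half), so 0.5 is chance level and 1.0 is perfect ranking. A small pure-Python sketch of the computation:

```python
def auroc(labels, scores):
    """Area under the ROC curve via its rank (Mann-Whitney) formulation.

    labels: 1 for positive (e.g. hypernasality present), 0 for negative.
    scores: model outputs, higher = more likely positive.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties = 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([1, 1, 0, 0], [0.9, 0.6, 0.4, 0.2]))  # 1.0: perfect ranking
```

On this scale, the study's 0.76 for compensatory articulation means roughly three out of four positive/negative pairs are ranked correctly.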

Section 05

Technical Innovations

1. Clinically oriented design: Modality selection mirrors actual clinical assessment methods, giving the approach translational value.

2. Patient-level cross-validation: Avoids data leakage and reflects generalization ability more accurately.

3. Systematic modality ablation: Testing all combinations reveals the essential characteristics of different speech abnormalities.
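If the seven combination strategies correspond to the non-empty subsets of the three primary modalities (audio, video, VFS) — an assumption on our part, since the article does not spell out how the seven were formed — the ablation grid can be enumerated directly:

```python
from itertools import combinations

# Hypothetical reconstruction: non-empty subsets of three modalities
# give exactly seven combination strategies (2**3 - 1).
modalities = ["audio", "video", "vfs"]
combos = [list(c)
          for r in range(1, len(modalities) + 1)
          for c in combinations(modalities, r)]
print(len(combos))  # 7
```

Such an exhaustive grid is what lets an ablation study attribute each detection task to the modalities it genuinely needs, rather than testing one hand-picked combination.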
Section 06

Open-Source Implementation Details

Code Structure: Includes main.py (training and evaluation), model.py (model architecture), dataset.py (data loading), etc.

Pretrained Weights: Provides pretrained weights for 7 modal combinations.

Usage Examples: Supports training commands for audio-only modality, full modality, etc.

Visualization Tools: Can generate ROC/PRC curves, VFS ablation analysis, and Grad-CAM attention visualization.

Section 07

Clinical Significance and Future Prospects

Immediate Value: Provides an objective reference for speech pathologists, reducing inter-rater differences; serves as a teaching tool; quantifies and tracks treatment outcomes.

Future Directions: Multi-center large-scale validation; real-time assessment system; multi-language expansion; application to other craniofacial malformations or neurogenic speech disorders.

Section 08

Research Summary

This study demonstrates the potential of multimodal deep learning in speech assessment for cleft lip and palate patients, providing an objective detection tool by integrating multi-source information. Key finding: compensatory articulation is a functional compensation best captured by video, while hypernasality is anatomically grounded and relies on imaging and clinical factors. This distinction informs model design and offers a basis for clinical assessment workflows. The approach is expected to become a standard tool in the comprehensive treatment of cleft lip and palate.