Zing Forum

AsyncCosyVoice: Practice of Asynchronous Transformation for the CosyVoice Speech Synthesis Engine

This article introduces an open-source project that applies vLLM's AsyncLLMEngine to the asynchronous transformation of the CosyVoice speech synthesis engine. It details first-packet latency optimization, streaming inference strategies, and production environment deployment solutions, providing references for the engineering implementation of speech synthesis services.

Tags: speech synthesis · CosyVoice · vLLM · AsyncLLMEngine · TTS · asynchronous inference · streaming generation · first-packet latency · large-model deployment
Published 2026-03-31 06:14 · Recent activity 2026-03-31 06:19 · Estimated read: 5 min

Section 01

AsyncCosyVoice Project Guide: Core Practices of CosyVoice's Asynchronous Transformation

AsyncCosyVoice is an open-source project that uses vLLM's AsyncLLMEngine to make the CosyVoice speech synthesis engine's inference asynchronous. It addresses the high response latency and low resource utilization of the native synchronous inference mode in high-concurrency scenarios. Through first-packet latency optimization, streaming inference strategies, and production deployment solutions, the project provides a reference for the engineering implementation of speech synthesis services.
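The project's own engine code is not reproduced here, but the request-level asynchronous pattern that AsyncLLMEngine enables can be illustrated with a minimal asyncio sketch. All names below (`synthesize`, the request tuples) are illustrative stand-ins, not the project's actual API:

```python
import asyncio

async def synthesize(request_id: str, text: str) -> str:
    # Illustrative stand-in for an async TTS call: while one request
    # awaits GPU work, the event loop can serve other requests instead
    # of blocking, which is what raises utilization under load.
    await asyncio.sleep(0.01)  # simulated inference latency
    return f"{request_id}:audio({len(text)} chars)"

async def main() -> list[str]:
    # Requests are scheduled concurrently rather than queued serially.
    requests = [("req-1", "Hello"), ("req-2", "How are you?"), ("req-3", "Bye")]
    return await asyncio.gather(*(synthesize(rid, txt) for rid, txt in requests))

results = asyncio.run(main())
```

In the real project, each HTTP request would map to one such coroutine, and vLLM's continuous batching merges the concurrently active requests into shared GPU batches.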


Section 02

CosyVoice Background and Issues with the Native Architecture

CosyVoice is an open-source large speech synthesis model from Alibaba's Tongyi Laboratory, supporting multiple modes such as text-to-speech and voice cloning. Its core architecture is Transformer-based. The native synchronous inference mode suffers from high first-packet latency in interactive scenarios and limited resource utilization under high concurrency.


Section 03

Asynchronous Transformation and Key Optimization Methods

The core transformation is the introduction of vLLM's AsyncLLMEngine, which enables request-level asynchronous processing and continuous batching to improve GPU parallel utilization. First-packet latency optimization uses the Token Hop strategy: the first streaming chunk uses a smaller hop length (15 is recommended), and subsequent chunks return to a hop length of 25 to ensure quality. Engineering optimizations include standardized instruction input, audio caching to avoid repeated IO, HTTP service layer support for voice_id registration, and OpenAI-compatible APIs.
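The Token Hop idea described above can be sketched as a generator that emits streaming chunk boundaries: a short first hop so the first audio packet ships sooner, then a longer steady-state hop for quality. The hop values mirror the article's recommendation (15, then 25); the function itself is a minimal illustration, not the project's implementation:

```python
def token_hop_chunks(total_tokens: int, first_hop: int = 15, hop: int = 25):
    """Yield (start, end) token-index ranges for streaming synthesis chunks.

    A smaller first hop reduces first-packet latency; subsequent chunks
    use the larger steady-state hop to preserve synthesis quality.
    """
    start = 0
    step = first_hop
    while start < total_tokens:
        end = min(start + step, total_tokens)
        yield (start, end)
        start = end
        step = hop  # after the first chunk, switch to the steady-state hop

chunks = list(token_hop_chunks(70))
# first chunk covers 15 tokens, later chunks 25 each (the last may be shorter)
```

Each yielded range would be fed to the vocoder stage as soon as its tokens are available, so the client starts receiving audio after only 15 tokens rather than a full 25-token chunk.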


Section 04

Performance Test Data and Analysis

Tests on an RTX 4090: after warm-up, first-packet latency at single concurrency is under 200 ms, and average latency at 8 concurrent requests is 514 ms. In formal tests, average latency at single concurrency is 197 ms, the success rate at 8 concurrent requests is 100%, and the Real-Time Factor (RTF) ranges from 0.07 to 0.42, meeting real-time interaction requirements.
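For readers unfamiliar with the metric: Real-Time Factor is synthesis wall-clock time divided by the duration of audio produced, so any value below 1.0 means the engine generates audio faster than it plays back. A quick check, with example numbers plugged in for illustration:

```python
def rtf(synthesis_seconds: float, audio_seconds: float) -> float:
    """Real-Time Factor: wall-clock synthesis time / produced audio duration."""
    return synthesis_seconds / audio_seconds

# e.g. 0.7 s of compute producing 10 s of audio gives RTF 0.07,
# while 4.2 s of compute for the same clip gives RTF 0.42 --
# both comfortably below the real-time threshold of 1.0.
low = round(rtf(0.7, 10.0), 2)
high = round(rtf(4.2, 10.0), 2)
```

The reported 0.07–0.42 range therefore leaves headroom even at the slow end.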


Section 05

Deployment Guide and Application Scenarios

Deployment steps: Recursively clone the project and its submodules, create a Python 3.10 conda environment, install specified dependencies, and download the model from Hugging Face or Modelscope. To start the service, you can specify the model path and port. Application scenarios include real-time dialogue systems (low latency), high-concurrency voice services (improved throughput), and edge device deployment (optimization ideas are applicable).
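The steps above might look roughly like the following; the repository URL, environment name, model path, and launch flags are placeholders, so check the project README for the actual values:

```shell
# Clone with submodules (CosyVoice is typically vendored as a submodule)
git clone --recursive https://github.com/<owner>/AsyncCosyVoice.git
cd AsyncCosyVoice

# Python 3.10 conda environment, as the article specifies
conda create -n asynccosyvoice python=3.10 -y
conda activate asynccosyvoice
pip install -r requirements.txt

# Download the model from Hugging Face or ModelScope, then start the
# service with a model path and port (flag names are illustrative)
python server.py --model_path ./pretrained_models/CosyVoice3 --port 8000
```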


Section 06

Project Limitations and Future Improvement Directions

Limitations: only CosyVoice 3.0 is supported (2.0 cannot run); upstream timbre issues are not fixed; ONNX optimization showed no significant improvement and is not included; registered voice_ids become invalid after a restart. Future improvements could track upstream updates, add a voice persistence layer, and so on.


Section 07

Project Value and Open-Source Contribution Summary

AsyncCosyVoice transforms cutting-edge technology into a production-ready solution, solves CosyVoice's performance bottlenecks, and provides a reusable engineering path. The project has detailed documentation and complete test data, making it suitable for secondary development and providing an important reference for speech synthesis implementation.