Zing Forum


Single-machine Multi-model GPU Inference Server: Unified Deployment Solution for Qwen + Whisper + TimesFM

This project provides a solution for running Qwen 3.5 (conversation + vision), Whisper (speech transcription), and TimesFM 2.5 (time-series prediction) together on a single Tesla P40 GPU, achieving efficient GPU utilization through an intelligent load/unload mechanism.

llm · inference · gpu · qwen · whisper · timesfm · docker · multimodal
Published 2026-04-06 12:37 · Recent activity 2026-04-06 12:56 · Estimated read 7 min

Section 01

Main Floor: Introduction to the Core Solution of Single-machine Multi-model GPU Inference Server

This project runs Qwen 3.5 (conversation + vision), Whisper (speech transcription), and TimesFM 2.5 (time-series prediction) together on a single Tesla P40 GPU. The core is an "on-demand loading, idle unloading" mechanism that makes efficient use of GPU resources: when idle, GPU power draw falls to roughly 12 W, and all models are deployed in a single Docker container.


Section 02

Project Background and Overview

llm-inference-server is a unified multi-model GPU inference server that supports four AI models: Qwen 3.5 9B (general conversation), Qwen 3.5 0.8B (lightweight multimodal), Whisper large-v3-turbo (speech transcription), and TimesFM 2.5 (time-series prediction). The core design concept is "on-demand loading, idle unloading": models are loaded only when needed and automatically unloaded after an idle timeout, so GPU power consumption stays low when there are no tasks.


Section 03

Architecture Design Details

The system uses a single-port routing architecture, exposing services on HTTP port 8088, with server.py (pure Python) acting as the internal router. server.py runs continuously, listening for requests and starting the corresponding model subprocess; a model is shut down to release GPU memory once it has been idle longer than IDLE_TIMEOUT (default 300 seconds). Because server.py itself imports no GPU libraries, the GPU sits in the P8 state (~12 W) when all models are idle, which suits long-running deployments with infrequent calls.
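The idle-unloading logic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the project's actual server.py: the real router spawns and kills model subprocesses, which are stubbed out here so the timeout bookkeeping is clear.

```python
import time

IDLE_TIMEOUT = 300  # seconds; matches the documented default


class ModelManager:
    """Tracks per-model last-use times and decides when to unload.

    Sketch only: load/unload are stubbed where the real server would
    start or terminate a model subprocess.
    """

    def __init__(self, idle_timeout=IDLE_TIMEOUT, clock=time.monotonic):
        self.idle_timeout = idle_timeout
        self.clock = clock          # injectable for testing
        self.loaded = {}            # model name -> last-use timestamp

    def touch(self, model):
        """Called on each request: load the model if needed, refresh its timestamp."""
        if model not in self.loaded:
            pass  # real code: start the model subprocess here
        self.loaded[model] = self.clock()

    def reap_idle(self):
        """Unload every model idle longer than the timeout; return their names."""
        now = self.clock()
        expired = [m for m, t in self.loaded.items()
                   if now - t > self.idle_timeout]
        for m in expired:
            del self.loaded[m]  # real code: terminate the subprocess here
        return expired
```

A background loop calling reap_idle() every few seconds is enough to return the GPU to its ~12 W idle state once all models have timed out.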


Section 04

Model Resource Usage

Memory and power consumption in different states:

State                    Memory Usage   Power Consumption   GPU State
All idle                 ~200 MiB       12 W                P8
Qwen 9B only             ~10.5 GB       55 W                P0
Qwen 0.8B only           ~1.5 GB        55 W                P0
Whisper only             ~2.5 GB        55 W                P0
TimesFM only             ~6.5 GB        55 W                P0
All four models loaded   ~18.9 GB       60 W                P0
After idle timeout       ~200 MiB       12 W                P8
Tesla P40 (24GB memory) can load all models simultaneously with about 5GB buffer remaining.
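The figures above can be checked on a running system with nvidia-smi. The helper below is an illustrative sketch that queries memory, power draw, and performance state in CSV form; the query fields are standard nvidia-smi options, but the function names are this example's own.

```python
import subprocess


def parse_gpu_stats(csv_line):
    """Parse one CSV line from nvidia-smi (nounits format) into
    (memory_used_mib, power_draw_w, pstate)."""
    mem, power, pstate = [field.strip() for field in csv_line.split(",")]
    return int(mem), float(power), pstate


def query_gpu():
    """Ask nvidia-smi for the current memory, power, and performance state."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=memory.used,power.draw,pstate",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_stats(out.strip().splitlines()[0])
```

With all models idle you would expect something close to (200, 12.0, "P8") on the P40 described here.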

Section 05

API Interfaces and Usage Instructions

Supports multiple API endpoints:

  • Conversation completion (Qwen9B): POST /v1/chat/completions
  • Audio transcription (Whisper): POST /v1/audio/transcriptions
  • Multimodal transcription (Qwen0.8B): POST /v1/transcribe
  • Time-series prediction (TimesFM): POST /v1/forecast
  • Health check: GET /health

The server exposes OpenAI-compatible APIs, so it can be called with the OpenAI SDK. Note: the Qwen models use chain-of-thought reasoning by default, so it is recommended to set max_tokens to 300-500 to avoid responses being truncated mid-reasoning.
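A minimal call against the chat endpoint can be made with the standard library alone. The endpoint path and port come from the article; the model name "qwen9b" is a placeholder (the server's actual model identifier is not stated), and max_tokens defaults to 400, inside the recommended 300-500 range.

```python
import json
import urllib.request


def build_chat_payload(prompt, max_tokens=400, model="qwen9b"):
    """Assemble an OpenAI-style chat-completion request body.

    "qwen9b" is a placeholder model name, not confirmed by the article.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # leave headroom for chain-of-thought output
    }


def chat(prompt, host="http://localhost:8088"):
    """POST the payload to the server's documented chat endpoint."""
    req = urllib.request.Request(
        host + "/v1/chat/completions",
        data=json.dumps(build_chat_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The same payload shape works through the OpenAI SDK by pointing its base_url at http://localhost:8088/v1.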


Section 06

Deployment Preparation and Steps

Hardware requirements: NVIDIA GPU (≥20 GB memory; Tesla P40 tested), a CPU supporting the Ivy Bridge instruction set, CUDA driver 13.0+, and Docker + NVIDIA Container Toolkit.

Model download: Qwen 3.5 9B, Qwen 3.5 0.8B (including the visual projection), and Whisper large-v3-turbo must be downloaded separately; TimesFM is downloaded automatically on first use.

Deployment steps: docker compose build (about 15-20 minutes on the first run), then docker compose up -d. IDLE_TIMEOUT (default 300 seconds) and START_TIMEOUT (default 120 seconds) can be configured via the .env file.
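A sketch of what the .env file might contain: only IDLE_TIMEOUT and START_TIMEOUT are named in the article, and the values shown are the documented defaults.

```
# .env — the two variables the article documents, with their defaults
IDLE_TIMEOUT=300     # seconds a model may sit idle before being unloaded
START_TIMEOUT=120    # seconds to wait for a model subprocess to come up
```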


Section 07

Analysis of Technical Highlights

  1. Optimized llama.cpp build: uses the TurboQuant branch, which supports KV-cache quantization, reducing memory usage while maintaining quality.
  2. Old-hardware optimization: built for Ivy Bridge CPUs (no AVX2/FMA), so older servers can run it efficiently.
  3. PyTorch version selection: TimesFM depends on PyTorch 2.4.1, the last version supporting the Pascal architecture (sm_61), which keeps it compatible with the Tesla P40.
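The PyTorch constraint in point 3 can be checked mechanically. The helper below is an illustrative sketch, not project code; the 2.4.1 cutoff encodes the article's claim about Pascal (sm_61) support rather than anything derived from PyTorch itself.

```python
def supports_pascal(torch_version):
    """True if this PyTorch version still ships Pascal (sm_61) kernels,
    per the article's claim that 2.4.1 is the last such release."""
    # Compare only the numeric release part (e.g. "2.4.1+cu121" -> (2, 4, 1)).
    release = torch_version.split("+")[0]
    parts = tuple(int(p) for p in release.split("."))
    return parts <= (2, 4, 1)
```

Pinning torch==2.4.1 in the container image is what keeps TimesFM runnable on the P40.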

Section 08

Applicable Scenarios and Summary

Applicable scenarios: Edge AI deployment (single server with multiple models, low energy consumption), private AI infrastructure (local operation without cloud APIs), multimodal applications (unified backend supporting text/speech/image/time-series), cost-sensitive environments (maximizing hardware utilization). Summary: This project demonstrates a practical multi-model deployment mode. Through intelligent resource management and a unified routing layer, it achieves production-ready multimodal AI services on a single GPU, suitable for local/private cloud deployment needs.