Zing Forum

Reading

LLM Inference Benchmark on RTX 5090: A Practical Guide for Local Deployment

This article delves into the open-source LLM inference benchmark project by patrickwhelan-uk, which systematically tests the inference performance of multiple mainstream large language models on the NVIDIA RTX 5090 graphics card. It covers key metrics such as generation speed, first-token latency, VRAM usage, and power consumption, providing data support for local AI deployment.

Tags: LLM benchmark · RTX 5090 · local deployment · quantization · inference performance · llama.cpp · Ollama
Published 2026-04-01 07:39 · Recent activity 2026-04-01 07:48 · Estimated read 7 min

Section 01

[Introduction] LLM Inference Benchmark on RTX 5090: Core Summary of the Practical Guide for Local Deployment

This article introduces the open-source LLM inference benchmark project by Patrick Whelan, which systematically tests the inference performance of multiple mainstream large language models on the NVIDIA RTX 5090 graphics card. It covers key metrics such as generation speed, first-token latency, VRAM usage, and power consumption, aiming to provide data support for local AI deployment and help developers solve core issues in hardware selection and model configuration.


Section 02

Project Background and Objectives

Local LLM deployment offers advantages like data privacy, low latency, and controllable costs, but the complexity of hardware selection and model configuration often deters people. The goal of this project is to provide clear and comparable actual measurement data to help engineers make informed decisions. The tests focus on metrics critical to real-world applications: tokens per second, first-token time, VRAM usage, and power consumption, using a systematic and reproducible approach.


Section 03

Test Hardware and Methodology

The test hardware is the NVIDIA RTX 5090 (32GB GDDR7 VRAM, consumer flagship), with future plans to expand to Apple Silicon M-series. Each test is run 3 times, with the average value taken and standard deviation reported. Core metrics include:

  • Generation speed (decoding phase speed, affects interactive experience)
  • Prefill speed (speed of processing input prompts, important for long contexts)
  • First-token time (latency from prompt submission to first output token)
  • Peak VRAM usage (determines whether the hardware can support the model)
  • Power consumption (sampled via nvidia-smi, used for efficiency comparison and thermal design)
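To make the power metric concrete, the sketch below parses watt readings of the kind `nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits` emits when polled during a generation run, then reduces them to mean and peak draw. The sample values are hypothetical placeholders, not data from the project.

```python
import statistics

def parse_power_samples(raw: str) -> list[float]:
    """Parse watt readings (one per line) from output like that of
    `nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits`."""
    return [float(line) for line in raw.strip().splitlines() if line.strip()]

# Hypothetical samples, as if captured once per second during decoding.
raw = "447.12\n512.30\n498.75\n505.40\n"
samples = parse_power_samples(raw)
print(f"mean draw: {statistics.mean(samples):.1f} W")  # average over the run
print(f"peak draw: {max(samples):.1f} W")              # worst case for thermals
```

Mean draw is the figure used for efficiency comparisons (tokens per joule), while peak draw is what thermal design has to accommodate.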

Section 04

Test Models and Quantization Levels

The tests cover mainstream open-source models from 7B to 70B parameters:

| Model | Parameter Count | Tested Quantization Levels |
| --- | --- | --- |
| Llama 3.1 8B Instruct | 8B | Q4_K_M, Q5_K_M, Q8_0, F16 |
| Llama 3.1 70B Instruct | 70B | Q4_K_M |
| Mistral 7B Instruct v0.3 | 7B | Q4_K_M, Q8_0 |
| Qwen 2.5 7B Instruct | 7B | Q4_K_M, Q8_0 |
| DeepSeek-R1 Distill Llama 8B | 8B | Q4_K_M, Q8_0 |
| Phi-4 | 14B | Q4_K_M, Q8_0 |

Characteristics of quantization levels:

  • Q4_K_M: 4-bit k-quant, medium quality; balances speed and output quality
  • Q5_K_M: 5-bit k-quant, medium quality; slightly better than Q4
  • Q8_0: 8-bit quantization, close to native quality but with higher VRAM usage
  • F16: Half-precision floating point, the quality baseline (when VRAM allows)
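A rough weight-only VRAM estimate follows directly from bits per weight times parameter count. The bits-per-weight figures below are approximate community estimates for GGUF k-quants (which store some tensors at higher precision), not numbers published by the benchmark project:

```python
# Approximate effective bits per weight for common GGUF quantization levels.
# These are rough community estimates, not values from the benchmark.
BITS_PER_WEIGHT = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "Q8_0": 8.5, "F16": 16.0}

def weight_gib(params_b: float, quant: str) -> float:
    """Weight footprint in GiB for `params_b` billion parameters.
    Excludes KV cache and activations, so actual peak VRAM is higher."""
    bits = params_b * 1e9 * BITS_PER_WEIGHT[quant]
    return bits / 8 / 2**30

for quant in BITS_PER_WEIGHT:
    print(f"8B @ {quant}: ~{weight_gib(8, quant):.1f} GiB")
```

By the same arithmetic, a 70B model at Q4_K_M needs roughly 39 GiB for weights alone, which would exceed the card's 32 GB and suggests why only Q4_K_M was tested at that size, likely with partial CPU offload.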

Section 05

Test Tools and Engines

Multiple inference engines are used for comparison:

  1. llama.cpp: Low-overhead measurement via llama-bench, serving as the main benchmark tool
  2. Ollama: API timing to measure end-to-end performance (including API overhead), representing a popular local deployment solution
  3. vLLM: Production-grade service framework, testing planned

Comparing multiple engines ensures the results reflect real deployment performance rather than the overhead profile of a single tool.
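The gap between raw engine timing (llama-bench) and end-to-end API timing (Ollama) reduces to a couple of simple formulas over recorded timestamps. The function names and timestamps below are illustrative, not from the project:

```python
def ttft_ms(t_submit: float, t_first_token: float) -> float:
    """Time to first token in milliseconds: prompt submission to first output."""
    return (t_first_token - t_submit) * 1000.0

def decode_tps(t_first_token: float, t_done: float, n_tokens: int) -> float:
    """Decode-phase tokens/sec: tokens after the first, over decode time.
    Excluding the first token keeps prefill out of the generation-speed figure."""
    return (n_tokens - 1) / (t_done - t_first_token)

# Hypothetical timestamps (seconds) for a 128-token completion.
print(f"TTFT: {ttft_ms(0.0, 0.085):.1f} ms")
print(f"decode: {decode_tps(0.085, 1.2, 128):.1f} tok/s")
```

Measured around an HTTP streaming API, these same formulas fold network and serialization overhead into TTFT, which is exactly the end-to-end effect the Ollama comparison is meant to capture.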


Section 06

Practical Significance and Application Recommendations

Key insights for local deployment developers:

  1. Quantization selection requires balancing quality and speed: Q4_K_M is sufficient for most scenarios; Q8_0 offers higher quality but requires more VRAM
  2. First-token time is critical for interactive applications: Even if generation speed is fast, excessive TTFT (Time To First Token) leads to perceived latency; prompt length and prefill strategies can be optimized
  3. Power consumption is important for long-running services: The RTX 5090 has strong performance, but full-load power draw and heat dissipation need to be included in deployment planning
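Recommendation 2 can be quantified: for short interactive replies, total wall-clock time is dominated by TTFT, not generation speed. The figures below are hypothetical, chosen only to illustrate the effect:

```python
def reply_time(ttft_s: float, n_tokens: int, tps: float) -> float:
    """Wall-clock time for a full reply: first-token latency plus decode time."""
    return ttft_s + n_tokens / tps

# Hypothetical: a short 20-token reply at identical decode speed.
print(f"slow prefill: {reply_time(0.9, 20, 120.0):.2f} s")
print(f"fast prefill: {reply_time(0.1, 20, 120.0):.2f} s")
```

At the same 120 tok/s, cutting TTFT from 0.9 s to 0.1 s shortens a short reply by roughly 4x, which is why prompt length and prefill strategy matter more than raw decode speed for chat-style workloads.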

Section 07

Community Contribution and Conclusion

The project welcomes contributions from other hardware platforms, which need to follow the unified configuration: 512-token prompt length, 128-token generation length, and 3 runs to ensure comparable results.
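The aggregation a contribution needs is small: three runs under the fixed 512/128 configuration, reported as mean and sample standard deviation. A minimal sketch, with hypothetical run results:

```python
import statistics

# The project's comparability rules: fixed prompt/generation lengths, 3 runs.
PROMPT_TOKENS, GEN_TOKENS, RUNS = 512, 128, 3

# Hypothetical decode speeds (tok/s) from three runs of one configuration.
runs_tps = [142.1, 140.8, 141.5]
assert len(runs_tps) == RUNS

mean = statistics.mean(runs_tps)
stdev = statistics.stdev(runs_tps)  # sample std dev, as reported per run set
print(f"{mean:.1f} ± {stdev:.1f} tok/s")
```

Reporting the standard deviation alongside the mean lets readers judge whether a difference between two hardware platforms exceeds run-to-run noise.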

Conclusion: As the demand for local LLM deployment grows, systematic benchmarking is an important reference for hardware selection and model optimization. This project not only provides actual measurement data for the RTX 5090 but also establishes a reproducible and extensible testing methodology, making it a practical resource for those concerned with the efficiency of local AI deployment.