Zing Forum


Comprehensive Benchmark Test for Local Large Language Models: Comparison of Local Inference Performance of 8 Open-Source Models

A comprehensive benchmark test on 8 open-source large language models, fully evaluating their performance in local inference scenarios to provide references for local deployment selection.

Tags: Local Deployment · Large Language Models · Benchmark Testing · Open-Source Models · Llama · Mistral · Inference Optimization · Quantization
Published 2026-05-12 22:15 · Recent activity 2026-05-12 22:29 · Estimated read: 6 min

Section 01

Comprehensive Benchmark Test for Local Large Language Models: Guide to Performance Comparison of 8 Open-Source Models

This test benchmarks the local inference performance of 8 mainstream open-source large language models across core dimensions such as inference speed, resource usage, and task performance, aiming to provide objective, reproducible reference data for local deployment selection. The tested models span the Llama series, the Mistral series, Qwen2.5, Phi-3, Gemma, and CodeLlama. All tests run on consumer-grade hardware (an NVIDIA RTX 4090) with the llama.cpp framework and a unified Q4_K_M quantization configuration; multi-scenario task capabilities are evaluated and targeted recommendations are provided.


Section 02

Research Background and Motivation

As large language models grow exponentially in scale, local deployment has become a popular choice thanks to data privacy, offline availability, and reduced API costs. It also faces challenges: hardware limitations, inference efficiency, capability trade-offs, and confusion over which model to pick. The local-llm-benchmarks project addresses this selection difficulty by systematically testing 8 mainstream open-source models as a reference for local deployment.


Section 03

Overview of Tested Models

This test covers 8 mainstream open-source models:

  • Llama series: Llama 3 8B (Meta's latest, balancing capability and efficiency), Llama 2 7B/13B (mature ecosystem);
  • Mistral series: Mistral 7B (sliding-window attention, strong performance for its size), Mixtral 8x7B (MoE architecture, balancing capability and efficiency);
  • Others: Qwen2.5 (excellent multilingual/code capabilities), Phi-3 (strong performance for a small model), Gemma (safety-focused and multilingual), CodeLlama (optimized for code).
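The "balancing capability and efficiency" claim for Mixtral 8x7B comes from its sparse MoE design: each token is routed through only 2 of the 8 feed-forward experts, so the parameters touched per token are far fewer than the total. A minimal sketch of that arithmetic, using rough public estimates (not exact model specs) for the expert and shared parameter counts:

```python
def moe_active_params(expert_params_b, n_experts, top_k, shared_params_b):
    """Total vs. per-token-active parameters (in billions) for a sparse MoE model."""
    total = shared_params_b + n_experts * expert_params_b
    active = shared_params_b + top_k * expert_params_b
    return total, active

# Mixtral 8x7B routes each token through 2 of 8 experts; the per-expert and
# shared (attention/embedding) sizes below are rough estimates for illustration.
total, active = moe_active_params(expert_params_b=5.63, n_experts=8,
                                  top_k=2, shared_params_b=1.63)
print(f"total ~{total:.1f}B parameters, ~{active:.1f}B active per token")
```

This is why Mixtral can approach larger-model quality while its per-token compute resembles a ~13B dense model, even though its memory footprint (see the results below) is that of the full ~47B parameters.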

Section 04

Testing Methodology

  • Hardware environment: NVIDIA RTX 4090 (24 GB VRAM), a high-end CPU, 64 GB DDR5 RAM, NVMe SSD;
  • Inference framework: llama.cpp with a unified Q4_K_M quantization and batch size = 1 to simulate interactive scenarios;
  • Evaluation dimensions: inference performance (tokens/s, first-token latency, memory usage), task capability (general Q&A, code generation, etc.), and stability (results averaged over multiple runs for reliability).
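The speed metrics above can be sketched as a simple timing harness. This is a minimal illustration, not the project's actual harness: `generate` is a hypothetical stand-in for any backend that yields tokens one at a time (llama-cpp-python's streaming mode has this shape), and the stub generator exists only so the script runs end to end.

```python
import statistics
import time

def benchmark(generate, prompt, runs=5):
    """Measure mean tokens/s and mean first-token latency over several runs.

    `generate(prompt)` is assumed to yield tokens one at a time; averaging
    over `runs` mirrors the article's "multiple averages" for stability.
    """
    speeds, ttfts = [], []
    for _ in range(runs):
        start = time.perf_counter()
        first_token_at = None
        n_tokens = 0
        for _tok in generate(prompt):
            if first_token_at is None:
                first_token_at = time.perf_counter() - start
            n_tokens += 1
        elapsed = time.perf_counter() - start
        speeds.append(n_tokens / elapsed)
        ttfts.append(first_token_at)
    return statistics.mean(speeds), statistics.mean(ttfts)

# Stub standing in for a real model backend, for demonstration only.
def fake_generate(prompt):
    for i in range(32):
        time.sleep(0.001)  # pretend each token takes ~1 ms
        yield f"tok{i}"

tps, ttft = benchmark(fake_generate, "Hello")
print(f"~{tps:.0f} tokens/s, first token after {ttft * 1000:.1f} ms")
```

With a real backend, only `fake_generate` changes; the measurement loop stays the same, which is what makes results across the 8 models comparable.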


Section 05

Core Test Results

  • Inference speed: Mistral 7B, Phi-3, and Llama 3 8B lead; Qwen2.5 and Gemma sit in the middle; Mixtral 8x7B and Llama 2 13B consume the most resources;
  • Memory usage: lightweight (<6 GB: Phi-3, Mistral 7B), mainstream (6-10 GB: Llama 3 8B, etc.), large models (>15 GB: Mixtral 8x7B);
  • Task capability: strong all-round performance (Llama 3 8B, Mistral 7B), code expertise (CodeLlama, Qwen2.5), multilingual strength (Qwen2.5, Gemma), creative dialogue (Llama 3 8B, Phi-3).
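The memory buckets follow roughly from parameter count times bits per weight. A back-of-the-envelope sketch, assuming Q4_K_M averages about 4.5 bits per weight (a commonly cited figure for this mixed-precision format) and using approximate parameter counts; this estimates weight storage only, so runtime usage is higher once the KV cache and compute buffers are added:

```python
def q4km_size_gb(params_billion, bits_per_weight=4.5):
    """Rough Q4_K_M weight size in GB: parameters * bits / 8."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Parameter counts below are approximate public figures, for illustration.
for name, params_b in [("Phi-3 mini", 3.8), ("Mistral 7B", 7.2),
                       ("Llama 3 8B", 8.0), ("Llama 2 13B", 13.0),
                       ("Mixtral 8x7B", 46.7)]:
    print(f"{name:12s} ~{q4km_size_gb(params_b):5.1f} GB of weights")
```

The estimate already separates the tiers cleanly: the 7B-8B class lands near 4-5 GB of weights (landing in the 6-10 GB bucket once runtime overhead is counted), while Mixtral 8x7B needs over 25 GB for weights alone, explaining its >15 GB placement and why it strains a 24 GB card.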


Section 06

Selection Recommendations and Practical Guidance

  • Scenario recommendations: for daily dialogue, Llama 3 8B or Mistral 7B; for programming, CodeLlama or Qwen2.5; for low-resource environments, Phi-3; for multilingual use, Qwen2.5; for maximum capability, Mixtral 8x7B (if the hardware allows);
  • Hardware recommendations: 8 GB VRAM (Phi-3 or heavily quantized 7B models), 16 GB (7B-8B models), 24 GB (the full range of tested models);
  • Optimization tips: use a balanced quantization level (Q4_K_M), reduce the context length to what the task needs, and increase the batch size in non-interactive scenarios.
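The context-length tip matters because the KV cache grows linearly with context: each layer stores a key and a value vector per position. A sketch of the standard formula (2 tensors × layers × context × KV heads × head dimension × bytes per element), using illustrative Llama-3-8B-like dimensions (32 layers, 8 KV heads from grouped-query attention, head dimension 128, fp16 cache), which are assumptions here rather than quoted specs:

```python
def kv_cache_gb(n_layers, n_ctx, n_kv_heads, head_dim, bytes_per_elt=2):
    """KV cache size in GB: 2 tensors (K and V) per layer, one per position."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elt / 1e9

# Llama-3-8B-like shape, fp16 cache; figures are illustrative.
for ctx in (2048, 8192):
    print(f"ctx={ctx:5d}: {kv_cache_gb(32, ctx, 8, 128):.2f} GB of KV cache")
```

Quadrupling the context quadruples the cache, so trimming an 8K context to 2K frees several hundred MB on this shape; on a 7B-13B dense model without grouped-query attention the savings are several times larger still.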


Section 07

Limitations, Future Outlook, and Conclusion

  • Limitations: tested only on an RTX 4090, fixed Q4 quantization, a limited task set, and results tied to specific model versions;
  • Future work: broader hardware coverage, long-term version tracking, community contributions, and real-application testing;
  • Conclusion: there is no perfect model; choose based on your needs. Llama 3 8B and Mistral 7B are the first choice for most users, Qwen2.5 and Phi-3 cover specialized needs, and the outlook for local deployment is positive.