Zing Forum

tps.sh: A Performance Benchmarking Tool for Local and Cloud Large Language Models

This article introduces an open-source tool called tps.sh, which benchmarks the performance of local and cloud large language models on Apple Silicon devices. By measuring tokens per second (TPS), output quality, and cost, it helps users choose the most suitable model solution.

Tags: LLM, benchmark, tokens per second, Apple Silicon, Ollama, Claude API, performance testing
Published 2026-03-29 13:13 · Recent activity 2026-03-29 13:21 · Estimated read: 5 min
Section 01

tps.sh: Guide to Performance Benchmarking Tool for Local and Cloud LLMs

tps.sh is an open-source performance testing tool for large language models, developed primarily for Apple Silicon Macs with support for Windows. By measuring tokens per second (TPS), output quality, and cost, it helps users compare locally deployed models (e.g., via Ollama) against cloud API services (e.g., Claude) and choose the most suitable solution. Its benchmark suite covers 147 tests across 7 models and 21 sample prompts, lowering the technical barrier for LLM performance evaluation.
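The core metric here, tokens per second, is simply the number of tokens a model generates divided by the elapsed wall-clock time. The article does not show tps.sh's internals, so the helper below is a hypothetical sketch of that calculation, not the tool's actual code:

```shell
#!/bin/sh
# Hypothetical sketch of the TPS metric (assumption: tps.sh's real
# implementation is not shown in the article).
# Takes a token count and elapsed seconds, prints integer tokens/second.
tps() {
  tokens=$1
  elapsed_s=$2
  echo $((tokens / elapsed_s))
}

# Example: 300 tokens generated in 6 seconds -> 50 tokens/second.
tps 300 6
```

In a real run, the elapsed time would be measured around the model invocation (e.g., an `ollama run` call for a local model, or an HTTP request for a cloud API), and the token count taken from the model's response.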

Section 02

Design Background and Comparison Scenarios of tps.sh

tps.sh was designed to help users choose between two mainstream LLM solutions: (1) local deployment of open-source models via Ollama on Apple Silicon machines, and (2) the cloud-based Claude API service. The tool is developed and tested primarily on Apple Silicon Macs, with Windows also supported, and aims to quantify the performance differences between these solutions under different hardware and network environments.

Section 03

Testing Methods and System Requirements of tps.sh

Testing scale: 147 independent tests across 7 models and 21 text prompts covering multiple scenarios, with measurement dimensions including TPS, output quality, and cost-effectiveness. System requirements (Windows): version 10 or later, 8 GB RAM, a 2 GHz processor, 500 MB of storage, and an internet connection; Apple Silicon is the primary development and testing platform. Installation: download the latest release from GitHub, unzip/install it, and run the tool from the command line. Testing process: load the 7 models, run each of the 21 prompts, repeat over multiple rounds to ensure stability, record TPS, and generate a comparison report.
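The testing process above (models × prompts × rounds, averaged into a report) can be sketched as a simple benchmark loop. Model names, prompt labels, and the fixed placeholder measurement are illustrative only; a real run would time an actual model invocation:

```shell
#!/bin/sh
# Minimal sketch of the benchmark loop described above (assumption:
# tps.sh's real loop differs; all names below are placeholders).
run_benchmark() {
  models="model-a model-b"     # stands in for the 7 models
  prompts="p1 p2 p3"           # stands in for the 21 prompts
  rounds=2                     # multiple rounds for stability

  for model in $models; do
    total=0
    n=0
    for prompt in $prompts; do
      r=0
      while [ "$r" -lt "$rounds" ]; do
        # Placeholder measurement: a real run would invoke the model
        # with $prompt, time it, and compute tokens/elapsed seconds.
        tps=50
        total=$((total + tps))
        n=$((n + 1))
        r=$((r + 1))
      done
    done
    # Per-model average over all prompts and rounds, as in the report.
    echo "$model avg TPS: $((total / n))"
  done
}

run_benchmark
```

Averaging over every prompt and round, as here, is what gives the per-model figures that end up in the comparison report.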

Section 04

Test Result Presentation of tps.sh

Test results are presented in text form, including each model's average TPS, a comparison between local models and cloud APIs, latency distribution, and stability indicators. These data give users a clear picture of the inference speed and performance differences between models, providing an objective basis for choosing a solution.

Section 05

Usage Suggestions and Local Model Configuration of tps.sh

Local model configuration: install the target model software, ensure it can be invoked from the command line, edit the configuration file to point to the model command or API endpoint, and re-run the tests (configuration examples are included in the download package). Best practices: close resource-intensive programs, use a stable network, review the prompt list, update regularly, and back up custom configurations. Troubleshooting: check system updates, script execution permissions, and configuration-file correctness, or visit the project's GitHub issues for help.
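The article does not show tps.sh's configuration format, only that the file points at a model command or API endpoint. As a purely hypothetical illustration, one entry might look like this (all keys and values are assumptions, not documented):

```shell
# Hypothetical configuration sketch -- tps.sh's real format is not shown
# in the article; see the examples bundled in the download package.

MODEL_NAME="llama3"              # label used in the benchmark report
MODEL_CMD="ollama run llama3"    # command-line invocation to benchmark

# For a cloud model, an API endpoint would be configured instead, e.g.:
# API_ENDPOINT="https://api.anthropic.com/v1/messages"
```

After editing the configuration, re-running the tool would pick up the new model entry, per the re-test step above.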

Section 06

Application Scenarios and Practical Value of tps.sh

Target users: technical decision-makers (evaluating cost-effectiveness), developers (choosing models for projects), researchers (quantifying inference performance), and ordinary users (understanding LLM performance without programming). Practical value: through data-driven decisions, the tool helps users avoid over- or under-provisioning, balance privacy against convenience, optimize cost structure, improve user experience, and select an appropriate model deployment solution.