EvalSense: NHS England's Open-Source LLM System Evaluation Framework Supporting Multiple Evaluation Methods and Models

EvalSense is a systematic large language model (LLM) evaluation framework developed by the NHS England Data Science Team, focusing on the evaluation of open-ended generation tasks. It supports multiple model providers, advanced evaluation methods (such as G-Eval, QAGS, BERTScore), and offers an interactive web interface and meta-evaluation tools to help developers select the most suitable evaluation methods for their use cases.

Tags: LLM evaluation · large language models · NHS England · G-Eval · BERTScore · open-source framework · model evaluation · healthcare AI
Published 2026-04-12 05:35 · Recent activity 2026-04-12 05:59 · Estimated read 6 min

Section 01

EvalSense: NHS England's Open-Source LLM Evaluation Framework Overview

EvalSense is an open-source LLM evaluation framework developed by NHS England's data science team, focused on open-ended generation tasks. It supports multiple model providers (local and cloud-based), integrates advanced evaluation methods (G-Eval, QAGS, BERTScore, ROUGE), and offers an interactive web interface plus meta-evaluation tools to help developers select the most suitable evaluation methods for their use cases.


Section 02

Background: The Need for Systematic LLM Evaluation

With LLMs widely used across industries, accurate performance evaluation has become a key challenge. Traditional metrics like accuracy or perplexity are insufficient for open-ended tasks (e.g., medical consultation, customer service). In high-risk fields like healthcare, evaluation accuracy directly impacts patient safety and decision quality. NHS England developed EvalSense to address this pain point, providing a systematic, repeatable, and scalable LLM evaluation solution.


Section 03

Model Support & Efficient Execution Engine

EvalSense supports various local and cloud-based model providers: local models (Ollama, Hugging Face Transformers, vLLM) and cloud APIs (OpenAI, Anthropic Claude). Its execution engine features smart experiment scheduling for local models, async parallel calls for remote APIs, and comprehensive logging of all key evaluation information (model parameters, prompts, outputs, results, metadata) in machine-readable format.
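The async, parallel-call pattern described above can be sketched as follows. This is a conceptual illustration of bounded-concurrency calls with machine-readable logging, not EvalSense's actual API; `fake_model_call` is a stand-in for a real provider request (OpenAI, Anthropic, a local server, etc.).

```python
import asyncio
import json

async def fake_model_call(prompt: str) -> str:
    """Stand-in for a remote provider request (hypothetical)."""
    await asyncio.sleep(0.01)  # simulate network latency
    return f"response to: {prompt}"

async def run_batch(prompts, max_concurrency=4):
    sem = asyncio.Semaphore(max_concurrency)  # cap parallel requests
    records = []

    async def one(prompt):
        async with sem:
            output = await fake_model_call(prompt)
        # log each call as a machine-readable record, as the text describes
        records.append({"prompt": prompt, "output": output, "model": "stub"})

    await asyncio.gather(*(one(p) for p in prompts))
    return records

records = asyncio.run(run_batch([f"q{i}" for i in range(8)]))
print(json.dumps(records[0]))
```

Capping concurrency with a semaphore keeps a large batch from overwhelming rate-limited APIs while still running requests in parallel.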


Section 04

Advanced Evaluation Methods Integrated in EvalSense

EvalSense integrates multiple cutting-edge evaluation methods: 1. G-Eval, which prompts an LLM to generate scores, enabling nuanced qualitative judgment; 2. QAGS, which checks factual consistency via question generation and answering, suited to summarization and dialogue tasks; 3. BERTScore, which measures semantic similarity using pre-trained model embeddings; 4. ROUGE, a recall-oriented text-overlap metric for summarization tasks.
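To make the simplest of these metrics concrete, here is a minimal sketch of ROUGE-1 recall (unigram overlap with the reference). A real evaluation would use a maintained implementation such as the `rouge-score` package; this only illustrates the idea.

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams found in the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # count each reference unigram at most as often as it appears in both texts
    overlap = sum(min(c, cand_counts[w]) for w, c in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

score = rouge1_recall("the patient reports chest pain", "patient has chest pain")
# 3 of the 5 reference unigrams appear in the candidate -> 0.6
```

Being recall-based, ROUGE rewards covering the reference's content but says nothing about fluency or factuality, which is why it is typically paired with methods like G-Eval or QAGS.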


Section 05

Modular Architecture & Extensibility

EvalSense adopts a modular design with core components (evaluators, model interfaces, data pipeline) that can be used independently or replaced. It supports easy extensibility: adding new evaluation metrics, integrating new model providers, customizing data loading logic, and implementing domain-specific evaluation logic.
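A modular design like the one described implies a pluggable evaluator interface, which might look roughly like this. The names here (`Evaluator`, `ExactMatch`, `evaluate`) are illustrative, not EvalSense's actual classes.

```python
from typing import Protocol

class Evaluator(Protocol):
    """Anything with a name and a score method can be plugged in."""
    name: str
    def score(self, reference: str, candidate: str) -> float: ...

class ExactMatch:
    """A trivial drop-in evaluator: 1.0 iff the texts match exactly."""
    name = "exact_match"
    def score(self, reference: str, candidate: str) -> float:
        return 1.0 if reference.strip() == candidate.strip() else 0.0

def evaluate(evaluators: list[Evaluator], reference: str, candidate: str):
    # run every registered evaluator; any conforming object can be swapped in
    return {e.name: e.score(reference, candidate) for e in evaluators}

results = evaluate([ExactMatch()], "yes", "yes ")
```

A domain-specific metric (say, one that checks clinical terminology) would slot in the same way, which is the extensibility the section describes.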


Section 06

Practical Application Scenarios

EvalSense has proven value in real-world scenarios: 1. medical dialogue evaluation (reliable quality assessment on the ACI-Bench dataset, covering diagnostic accuracy and communication effectiveness); 2. meta-evaluation (using perturbed data to verify the reliability of evaluation methods); 3. model comparison (parallel execution enables large-scale comparisons that inform model selection).
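The perturbation-based meta-evaluation idea can be sketched simply: corrupt a reference text and check that a candidate metric scores the corrupted version no higher than the faithful one. The metric below is a plain token-overlap stand-in, not one of EvalSense's built-ins.

```python
import random

def token_overlap(reference: str, candidate: str) -> float:
    """Fraction of reference tokens that also appear in the candidate."""
    ref, cand = set(reference.split()), set(candidate.split())
    return len(ref & cand) / max(len(ref), 1)

def perturb(text: str, rng: random.Random) -> str:
    # drop roughly half the tokens to simulate an unfaithful summary
    return " ".join(t for t in text.split() if rng.random() > 0.5)

rng = random.Random(0)
reference = "patient presents with fever cough and fatigue for three days"
corrupted = perturb(reference, rng)

good = token_overlap(reference, reference)  # faithful text: maximal score
bad = token_overlap(reference, corrupted)   # perturbed text: should drop
assert good >= bad  # a reliable metric separates the two
```

A metric that fails to rank the faithful text above known-corrupted variants is a poor choice for the task, which is exactly what meta-evaluation is meant to surface.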


Section 07

Quick Start & Community Contribution

Installation: run pip install evalsense for the core library, or pip install "evalsense[webui]" to include the interactive features; start the web UI with evalsense webui. EvalSense is open source under the MIT license, and community contributions are welcome (new evaluation methods, model support, documentation, bug fixes). The project is maintained by Adam Dejl, with feedback via GitHub Issues.
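The installation commands given above, collected for convenience:

```shell
pip install evalsense              # core library
pip install "evalsense[webui]"    # with the interactive web UI extra
evalsense webui                    # launch the web interface
```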


Section 08

Summary & Future Outlook

EvalSense represents a significant advancement in LLM evaluation tooling, establishing a systematic process from method selection to result analysis. It is a valuable tool for organizations deploying LLMs, especially in fields with stringent quality requirements like healthcare, and NHS England's work here sets a benchmark for the industry. Future evolution is expected to cover more evaluation needs and model types, making the framework a useful starting point for teams building their own evaluation systems.