Zing Forum


FUSE: Ensemble Verifiers Without Labeled Data, Achieve Zero-Shot Test-Time Scaling

FUSE proposes a fully unsupervised verifier ensembling method that improves verification quality without any ground-truth annotations. It matches or even outperforms semi-supervised methods on benchmarks such as GPQA Diamond and Humanity's Last Exam.

Tags: verifier ensembles · unsupervised learning · test-time scaling · large language models · spectral algorithms · zero-shot learning
Published 2026-04-21 01:40 · Recent activity 2026-04-21 13:25 · Estimated read 5 min

Section 01

Introduction: FUSE, a New Method for Verifier Ensembling Without Labeled Data

FUSE proposes a fully unsupervised verifier ensembling method that improves verification quality without any ground-truth annotations. By controlling the conditional dependencies between verifiers and using spectral algorithms to achieve zero-shot ensembling, it matches or even outperforms semi-supervised methods on benchmarks such as GPQA Diamond and Humanity's Last Exam, providing a more flexible and cost-effective verification solution for the training and deployment of large language models (LLMs).


Section 02

Background and Challenges of Verifier Ensembling

As the capabilities of large language models (LLMs) improve, verifying the correctness of model outputs has become a central problem. However, obtaining ground-truth annotations is time-consuming and costly. Traditional verifier ensembling methods rely on labeled data to calibrate weights, so in unlabeled settings there is no direct way to determine how reliable each verifier is, which makes ensembling difficult.


Section 03

Core Principles and Features of FUSE

FUSE (Fully Unsupervised Score Ensembling) improves the performance of spectral algorithms in unsupervised settings by controlling the conditional dependencies between verifiers. Its features include: (1) no annotation requirement; (2) applicability to many types of verifiers; (3) flexible ensembling of any number of verifiers; (4) theoretical guarantees based on spectral algorithms.
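The paper's exact algorithm is not reproduced here, but the classic spectral trick it builds on can be sketched: when verifiers are conditionally independent given correctness, the leading eigenvector of the covariance of their verdicts is (approximately) proportional to their reliabilities, so accuracy-aware weights can be recovered with zero labels. Everything below (verifier accuracies, the simulated data) is an illustrative assumption, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: m verifiers emit binary verdicts (+1/-1) on n answers.
# True labels are used only to simulate the verdicts; the spectral step
# below never sees them.
n, accs = 2000, [0.9, 0.8, 0.7, 0.65, 0.6]
truth = rng.choice([-1, 1], size=n)
votes = np.array([np.where(rng.random(n) < a, truth, -truth) for a in accs])

# Spectral step: with conditionally independent verifiers, the off-diagonal
# of the verdict covariance is rank one, and its leading eigenvector is
# roughly proportional to each verifier's (shifted) balanced accuracy.
# For brevity we use the full covariance; the leading eigenvector still
# ranks verifiers by reliability in this setting.
cov = np.cov(votes)
v = np.linalg.eigh(cov)[1][:, -1]     # eigh sorts ascending; take the last
v = v if v.sum() >= 0 else -v         # fix sign: most verifiers beat chance

# Label-free ensemble: weighted majority vote with the spectral weights.
ensemble = np.sign(v @ votes)

print("estimated reliability ranking:", np.argsort(-v))
print("ensemble accuracy:", (ensemble == truth).mean())
```

The ensemble typically beats the best single verifier because the weights concentrate on the more reliable verifiers without ever seeing a label.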


Section 04

Experimental Results of FUSE

FUSE has been validated as effective across diverse benchmarks: it consistently improves performance on traditional academic benchmarks such as GPQA Diamond; it generalizes to frontier, unsaturated benchmarks such as Humanity's Last Exam; and the fully unlabeled FUSE often matches or outperforms semi-supervised methods that require partial annotations.


Section 05

Application Scenarios and Value of FUSE

The zero-shot nature of FUSE suits multiple scenarios: real-time verification during reinforcement-learning fine-tuning (RLHF/RLAIF); test-time scaling to improve output quality; rapid deployment in new domains without annotations; and cost-sensitive applications that need to avoid annotation costs.
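The test-time scaling scenario above can be sketched as a best-of-n loop: sample several candidate answers, score each with every verifier, and keep the candidate the weighted ensemble likes best. The `generate` and `verifier_scores` functions below are stand-in stubs, and the fixed weights stand in for the unsupervised spectral estimate; none of these names come from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def generate(prompt, n):
    # Stub for an LLM sampler: returns n candidate answers for the prompt.
    return [f"candidate-{i}" for i in range(n)]

def verifier_scores(candidates):
    # Stub: each of 3 verifiers returns a score in [0, 1] per candidate.
    return rng.random((3, len(candidates)))

def best_of_n(prompt, n, weights):
    # Test-time scaling: more samples + an ensembled verifier = better picks.
    candidates = generate(prompt, n)
    scores = verifier_scores(candidates)   # shape (num_verifiers, n)
    ensemble = weights @ scores            # weighted ensemble score per candidate
    return candidates[int(np.argmax(ensemble))]

# In FUSE these weights would come from the unsupervised spectral step;
# they are fixed here purely for illustration.
weights = np.array([0.5, 0.3, 0.2])
print(best_of_n("What is 2 + 2?", n=8, weights=weights))
```

Because the weights require no labels, the same loop can be dropped into a new domain immediately, which is exactly the rapid-deployment case described above.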


Section 06

Technical Contributions and Limitations of FUSE

Technical contributions: (1) the first realization of high-quality verifier ensembling with zero annotations; (2) revealing the key impact of the verifiers' dependency structure on ensemble performance; (3) extending spectral algorithms to unsupervised settings; (4) empirical validation across multiple benchmarks. Limitations: improvements are limited when verifier quality is too low; the method relies on specific conditional-dependence assumptions; and its theoretical limits remain to be explored.
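Why the dependency structure matters so much can be shown with a small simulation (an illustration of the general failure mode, not an experiment from the paper): duplicating one mediocre verifier makes its two copies agree perfectly, and a naive spectral estimate mistakes that agreement for reliability, outweighing a genuinely stronger verifier.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated ground truth and two verifiers of different quality.
n = 5000
truth = rng.choice([-1, 1], size=n)

def noisy(acc):
    # A verifier that agrees with the truth with probability `acc`.
    return np.where(rng.random(n) < acc, truth, -truth)

strong = noisy(0.90)
weak = noisy(0.65)
votes = np.array([strong, weak, weak.copy()])  # weak verifier duplicated

# Naive spectral weighting, as if the three verifiers were independent.
cov = np.cov(votes)
v = np.linalg.eigh(cov)[1][:, -1]
v = v if v.sum() >= 0 else -v

# The perfectly correlated weak copies capture most of the weight, even
# though the strong verifier is far more accurate. This is the failure
# mode that motivates controlling conditional dependencies between
# verifiers, as FUSE does.
print("spectral weights (strong, weak, weak-copy):", v.round(2))
```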


Section 07

Future Directions and Conclusion

Future directions include adaptive learning of the dependency structure, combination with active learning, and multi-modal verification. By applying conditional-dependency control and spectral algorithms, FUSE achieves fully unsupervised verifier ensembling, providing a more flexible and cost-effective verification solution for LLMs with significant practical value.