# FUSE: Integrating Verifiers Without Labeled Data for Zero-Shot Test-Time Scaling

> FUSE is a fully unsupervised verifier-ensembling method that improves verification quality without any ground-truth annotations, matching or even outperforming semi-supervised methods on benchmarks such as GPQA Diamond and Humanity's Last Exam.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-04-20T17:40:33.000Z
- Last activity: 2026-04-21T05:25:31.640Z
- Popularity: 135.3
- Keywords: verifier ensembling, unsupervised learning, test-time scaling, large language models, spectral algorithms, zero-shot learning
- Page link: https://www.zingnex.cn/en/forum/thread/fuse
- Canonical: https://www.zingnex.cn/forum/thread/fuse
- Markdown source: floors_fallback

---

## Introduction: FUSE, a New Method for Verifier Ensembling Without Labeled Data

FUSE is a fully unsupervised verifier-ensembling method that improves verification quality without any ground-truth annotations. By accounting for the conditional dependencies between verifiers and applying spectral algorithms, it achieves zero-shot ensembling that matches or even outperforms semi-supervised methods on benchmarks such as GPQA Diamond and Humanity's Last Exam, providing a more flexible and cost-effective verification solution for the training and deployment of large language models (LLMs).

## Background and Challenges of Verifier Ensembling

As the capabilities of large language models (LLMs) improve, verifying the correctness of model outputs has become a core issue. However, obtaining ground-truth annotations is time-consuming and costly. Traditional ensembling methods rely on labeled data to calibrate verifier weights; without labels, there is no direct way to estimate each verifier's reliability, which makes principled ensembling difficult.

## Core Principles and Features of FUSE

FUSE (Fully Unsupervised Score Ensembling) adapts spectral algorithms to the unsupervised setting by controlling for the conditional dependencies between verifiers. Its key features:

1. Requires no annotations;
2. Applies to diverse types of verifiers;
3. Flexibly ensembles any number of verifiers;
4. Comes with theoretical guarantees grounded in spectral algorithms.
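The spectral idea can be sketched concretely. The snippet below is a minimal illustration of unsupervised reliability estimation in the spirit of spectral ensembling, not FUSE's actual algorithm: it assumes binary verifier verdicts and approximate conditional independence given the true label, under which the off-diagonal of the verifier covariance matrix is close to rank-1, so the leading eigenvector orders verifiers by reliability without any labels.

```python
import numpy as np

def spectral_weights(scores):
    """Estimate verifier reliability weights without labels.

    scores: (n_samples, n_verifiers) array of {0, 1} verdicts.
    Under approximate conditional independence given the true label,
    cov(i, j) ~ v_i * v_j for i != j, where v_i grows with verifier
    i's accuracy. The leading eigenvector of the covariance matrix
    therefore recovers the relative reliabilities up to scale.
    """
    X = scores - scores.mean(axis=0)   # center each verifier's verdicts
    C = (X.T @ X) / len(X)             # empirical covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    v = eigvecs[:, -1]                 # leading eigenvector (eigh: ascending)
    if v.sum() < 0:                    # resolve sign ambiguity, assuming
        v = -v                         # most verifiers beat chance
    return np.clip(v, 0.0, None)       # anti-correlated verifiers get weight 0

def ensemble_score(scores, weights):
    """Weighted average of verifier verdicts for each sample."""
    return scores @ weights / weights.sum()
```

On simulated verifiers with accuracies 0.9, 0.75, and 0.6, the estimated weights rank the verifiers correctly even though no label was ever seen, which is the core property the spectral guarantee formalizes.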

## Experimental Verification Results of FUSE

FUSE has been validated across diverse benchmarks: it consistently improves performance on traditional academic benchmarks such as GPQA Diamond; it generalizes to frontier, unsaturated benchmarks such as Humanity's Last Exam; and the fully unlabeled FUSE often matches or outperforms semi-supervised methods that require partial annotations.

## Application Scenarios and Value of FUSE

FUSE's zero-shot nature suits several scenarios: real-time verification during reinforcement learning fine-tuning (RLHF/RLAIF); test-time scaling to improve output quality; rapid deployment in new domains where no annotations exist; and cost-sensitive applications that need to avoid annotation costs.
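In the test-time scaling scenario, fused verifier scores can drive best-of-N selection over sampled candidates. This is a minimal sketch under stated assumptions: the reliability `weights` are taken as given (e.g. from an unsupervised spectral estimate), and `best_of_n` is a hypothetical helper, not part of any published FUSE API.

```python
import numpy as np

def best_of_n(candidates, verifier_scores, weights):
    """Select the candidate with the highest fused verifier score.

    candidates:      list of N model outputs for one prompt.
    verifier_scores: (N, n_verifiers) array; entry [k, i] is
                     verifier i's score for candidate k.
    weights:         per-verifier reliability weights, estimated
                     without labels.
    """
    fused = verifier_scores @ weights / weights.sum()
    return candidates[int(np.argmax(fused))], fused

# Usage: three candidates scored by two verifiers, the first
# verifier weighted twice as heavily.
candidates = ["a", "b", "c"]
scores = np.array([[0.2, 0.1],
                   [0.9, 0.8],
                   [0.5, 0.4]])
best, fused = best_of_n(candidates, scores, np.array([2.0, 1.0]))
```

Because the weights come from an unlabeled estimate, the same selection loop can be deployed in a new domain immediately, which is what makes the zero-shot property practically useful.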

## Technical Contributions and Limitations of FUSE

Technical contributions:

1. First demonstration of high-quality verifier ensembling with zero annotations;
2. Identification of verifier dependency structure as a key driver of ensemble performance;
3. Extension of spectral algorithms to the unsupervised setting;
4. Empirical validation across multiple benchmarks.

Limitations: gains shrink when the underlying verifiers are too weak; the method relies on specific conditional-dependence assumptions; and its theoretical limits remain to be characterized.

## Future Directions and Conclusion

Future directions include adaptive learning of dependency structures, combination with active learning, and multi-modal verification. By combining conditional dependency control with spectral algorithms, FUSE achieves fully unsupervised verifier ensembling, offering LLMs a more flexible and cost-effective verification solution of clear practical value.
