Zing Forum


C-voting: A Confidence Voting Test-Time Scaling Strategy Without Explicit Energy Functions

This paper proposes the C-voting strategy, which achieves test-time scaling for recurrent neural networks via a confidence voting mechanism, enhancing inference task performance without the need for explicit energy functions.

Tags: test-time scaling · recurrent neural networks · confidence voting · reasoning models · Sudoku solving · maze solving
Published 2026-04-15 14:10 · Recent activity 2026-04-16 10:50 · Estimated read 6 min

Section 01

[Introduction] C-voting: A Confidence Voting Test-Time Scaling Strategy Without Explicit Energy Functions

This paper proposes the C-voting strategy, which achieves test-time scaling for recurrent neural networks via a confidence voting mechanism, enhancing inference task performance without explicit energy functions. This strategy addresses the limitation of existing test-time scaling methods that rely on energy functions, and has wide applicability—it can be applied to various recurrent architectures and inference tasks such as Sudoku solving and maze navigation.

2

Section 02

Background: Recurrent Inference Models and Current State of Test-Time Scaling

Neural network models with latent recurrent processing have gained attention in recent years. Their key feature is the recursive application of the same layer to latent states, making them ideal for performing inference tasks. Such models support Test-Time Scaling—improving performance by increasing computation during the testing phase without additional training. Typical examples include:

  • Hierarchical Reasoning Model (HRM): achieves deep reasoning by increasing the number of recurrent steps
  • Artificial Kuramoto Oscillatory Neurons (AKOrN): uses oscillatory dynamics for reasoning

These models have been successfully applied to tasks such as Sudoku solving, maze navigation, and AGI benchmark tests.
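The recurrence described above can be sketched in a few lines. This is an illustrative toy, not either paper's architecture: the weight-tied tanh update, the variable names, and the spectral-norm scaling are all assumptions made so the iteration settles to a fixed point.

```python
import numpy as np

def recurrent_reasoning(x, W, steps):
    """Recursively apply the same weight-tied layer to a latent state.

    Increasing `steps` at test time buys more computation with no
    retraining -- the basic form of test-time scaling these models
    support. The tanh update is an illustrative assumption.
    """
    h = np.zeros_like(x)
    for _ in range(steps):
        h = np.tanh(W @ h + x)  # same W every step: one layer, reused
    return h

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(4, 4))
W /= 2.0 * np.linalg.norm(W, 2)  # contraction, so iterates converge

shallow = recurrent_reasoning(x, W, steps=4)
deep = recurrent_reasoning(x, W, steps=64)
# with 64 steps, `deep` has numerically reached the fixed point h* = tanh(W h* + x)
```

Running more steps costs only inference-time compute, which is what makes "scaling" here a pure test-time knob.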

Section 03

Limitations of Existing Strategies: Dependence on Energy Functions

Existing test-time scaling strategies (e.g., energy-based voting) are effective but have a critical limitation: they require the model to have an explicit energy function. This greatly restricts their applicability, as many recurrent models do not have explicit energy functions.
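To make the dependency concrete, here is a toy sketch of energy-based voting. Everything here is hypothetical (the `energy_fn` and candidate states are invented for illustration); the point is that the scheme cannot even be written down without an explicit energy function.

```python
import numpy as np

def energy_based_voting(candidates, energy_fn):
    """Return the candidate with the lowest energy.

    The whole scheme hinges on `energy_fn` -- exactly the explicit
    energy function that many recurrent models lack.
    """
    energies = [energy_fn(c) for c in candidates]
    return candidates[int(np.argmin(energies))]

# Toy energy: squared distance from an all-ones "solved" state.
energy = lambda c: float(np.sum((c - 1.0) ** 2))
cands = [np.zeros(3), np.ones(3), np.full(3, 0.5)]
best = energy_based_voting(cands, energy)  # picks np.ones(3)
```

For a model without such an `energy_fn`, this selection rule is simply undefined, which is the gap C-voting fills.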

4

Section 04

C-voting Strategy: Detailed Explanation of the Confidence Voting Mechanism

The research team proposes C-voting (Confidence-Based Voting), a test-time scaling strategy designed specifically for recurrent models with multiple latent candidate trajectories. Core mechanisms:

  1. Multi-candidate initialization: initialize the latent states with random draws to generate multiple candidate trajectories
  2. Confidence evaluation: compute the average of the top-1 probabilities over each candidate's predictions
  3. Best-candidate selection: select the candidate with the highest confidence as the final output

This method uses the model's own prediction confidence directly as the selection criterion, with no need for an additional energy function.
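The three steps above can be sketched as follows. The array shapes, the Sudoku-style dimensions, and the `c_voting` name are illustrative assumptions, not the paper's code; the selection rule (mean top-1 probability, then argmax over candidates) follows the mechanism described.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def c_voting(logits_per_candidate):
    """Pick the candidate whose mean top-1 probability is highest.

    logits_per_candidate: (K, N, C) array -- K candidate trajectories,
    N output positions (e.g. Sudoku cells), C classes per position.
    """
    probs = softmax(logits_per_candidate, axis=-1)  # (K, N, C)
    top1 = probs.max(axis=-1)                       # (K, N) top-1 prob per position
    confidence = top1.mean(axis=-1)                 # (K,) average confidence
    best = int(np.argmax(confidence))
    return best, probs[best].argmax(axis=-1)        # winner index + its predictions

rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 81, 9))  # 8 candidates, 81 cells, 9 digits
logits[3] *= 5.0                      # sharpen candidate 3's distributions
winner, preds = c_voting(logits)      # winner == 3: sharper softmax => higher top-1
```

Note that nothing here requires an energy function: only the model's output probability distributions are consumed.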

Section 05

Experimental Evidence: Performance of C-voting

Experimental results show significant advantages of C-voting:

Sudoku Hard Problems

  • On Sudoku-hard, it achieves an accuracy 4.9% higher than energy-based voting

Sudoku Extreme and Maze Tasks

When combined with the attention-based recurrent model ItrSA++:

  • Sudoku-extreme accuracy reaches 95.2%, far exceeding HRM's 55.0%
  • Maze task accuracy reaches 78.6%, better than HRM's 74.5%

Section 06

Conclusion: Core Advantage of C-voting—Universality

The most important advantage of C-voting is its wide applicability. Since it does not rely on explicit energy functions, it can be applied to:

  • Various recurrent neural network architectures
  • Models without energy functions
  • Any model that can output probability distributions

This significantly lowers the barrier to deploying test-time scaling strategies.

Section 07

Future Directions: Exploration of C-voting Extensions

The success of C-voting shows that test-time scaling does not require complex energy function design—simple confidence metrics can outperform complex methods. This provides new ideas for strategy design:

  • Explore other confidence-based selection mechanisms
  • Study the relationship between confidence and reasoning depth
  • Extend C-voting to more types of inference tasks