Zing Forum

SeLaR: Selective Latent Reasoning in Large Language Models

SeLaR, an ACL 2026 accepted paper, proposes a selective latent reasoning method that enables large models to intelligently decide when to perform deep reasoning, balancing performance and efficiency.

Tags: selective reasoning · latent reasoning · chain of thought · ACL 2026 · model efficiency · metacognition · large language models
Published 2026-04-10 12:37 · Recent activity 2026-04-10 12:54 · Estimated read 7 min

Section 01

SeLaR: Introduction to Selective Latent Reasoning in Large Language Models

SeLaR, an ACL 2026 accepted paper, proposes a selective latent reasoning method that allows large models to intelligently decide when to perform deep reasoning, balancing performance and efficiency. This method introduces a meta-decision mechanism to separate reasoning decisions from content, improves efficiency through latent space reasoning, and ensures accuracy for complex problems, bringing new insights to the LLM reasoning paradigm.


Section 02

Cost and Necessity of Reasoning: Problem Formulation

Cost and Necessity of Reasoning

The reasoning ability of large language models is key to solving complex problems, but deep reasoning requires generating many intermediate steps (a chain of thought), which significantly increases computational cost and response latency. Overthinking simple queries not only wastes resources but can also introduce errors. The core question: can models learn selective reasoning, thinking deeply only when truly needed and responding quickly to simple problems?
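The cost argument above can be made concrete with a back-of-envelope calculation. All numbers below (token counts, fraction of simple queries) are hypothetical, chosen only to illustrate the trade-off, not taken from the paper:

```python
# Illustrative cost model: compare always emitting a long chain of thought
# against skipping deep reasoning for the fraction of queries that are simple.

def expected_tokens(p_simple: float, short_tokens: int, long_tokens: int) -> float:
    """Expected tokens per query when simple queries get a short direct answer."""
    return p_simple * short_tokens + (1.0 - p_simple) * long_tokens

always_deep = expected_tokens(0.0, 50, 800)  # every query gets a long chain of thought
selective = expected_tokens(0.7, 50, 800)    # 70% of queries answered directly

savings = 1.0 - selective / always_deep
print(f"selective: {selective:.0f} tokens/query, saving {savings:.0%}")
# → selective: 275 tokens/query, saving 66%
```

Under these assumed numbers, skipping reasoning on 70% of queries cuts the expected token budget by roughly two thirds, which is why a cheap upfront decision can pay for itself.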


Section 03

Core of SeLaR Method and Technical Implementation

SeLaR: Selective Latent Reasoning

SeLaR introduces a meta-decision mechanism to evaluate whether a problem requires deep reasoning before generating reasoning steps. Its core innovation is latent reasoning: implicit reasoning in the model's latent representation space, which is compact, flexible, and learnable. The architecture consists of two components:

  • Selector: a lightweight module that quickly evaluates problem complexity to decide whether to activate the reasoner;
  • Latent Reasoner: when activated, performs multi-step reasoning in the latent space and passes the results onward.
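The two-component design can be sketched as follows. This is a minimal toy illustration of the idea, not the paper's implementation: the selector is stood in for by a random linear probe with a sigmoid gate, and the latent reasoner by a few residual refinement steps on the hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden size (illustrative)

# Stand-in for the lightweight selector: a linear probe on the pooled
# hidden state that scores whether deep reasoning should be activated.
W_sel = rng.normal(scale=0.1, size=(d,))

def selector(h_pooled: np.ndarray, threshold: float = 0.5) -> bool:
    score = 1.0 / (1.0 + np.exp(-h_pooled @ W_sel))  # sigmoid gate in [0, 1]
    return bool(score > threshold)

# Stand-in for the latent reasoner: a few recurrent refinement steps applied
# to the latent state instead of emitting explicit chain-of-thought tokens.
W_r = rng.normal(scale=0.1, size=(d, d))

def latent_reasoner(h: np.ndarray, steps: int = 4) -> np.ndarray:
    for _ in range(steps):
        h = np.tanh(W_r @ h) + h  # residual latent update
    return h

def forward(h: np.ndarray) -> np.ndarray:
    """Route through the latent reasoner only when the selector fires."""
    return latent_reasoner(h) if selector(h) else h
```

The key structural point the sketch captures is that the selector's cost is one cheap dot product, while the reasoner's cost scales with the number of latent steps, so skipping it on easy inputs saves real compute.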


Section 04

Training Strategy and Optimization of SeLaR

Training Strategy and Optimization

SeLaR uses curriculum learning-style training:

  1. Initial stage: Encourage extensive use of the reasoner to build a foundation of reasoning ability;
  2. Mid stage: Introduce efficiency constraints and penalize unnecessary reasoning activations;
  3. Late stage: Fine-tune the selector's decision boundary to optimize the Pareto frontier of accuracy and efficiency.

This progressive schedule balances reliance on reasoning against efficiency.
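The three-stage schedule can be expressed as a penalty weight on reasoner activations that ramps up over training. The stage boundaries and weight values below are hypothetical placeholders, not figures from the paper:

```python
# Sketch of a curriculum-style efficiency penalty: the weight on reasoner
# activations grows across the three training stages described above.

def efficiency_weight(step: int, total_steps: int) -> float:
    """Penalty weight on reasoner activations at a given training step."""
    frac = step / total_steps
    if frac < 1 / 3:    # initial stage: free use of the reasoner
        return 0.0
    elif frac < 2 / 3:  # mid stage: start penalizing unnecessary activations
        return 0.5
    else:               # late stage: tighten the accuracy/efficiency trade-off
        return 1.0

def total_loss(task_loss: float, activation_rate: float,
               step: int, total_steps: int) -> float:
    """Task loss plus a scheduled penalty on how often the reasoner fires."""
    return task_loss + efficiency_weight(step, total_steps) * activation_rate
```

Because the penalty is zero at first, the model learns to reason before it learns to abstain, which matches the order of the stages above.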


Section 05

Experimental Results: Balance Between Accuracy and Efficiency

Experimental Results and Performance Analysis

SeLaR was evaluated on mathematics (GSM8K, MATH), logic (LogiQA), and commonsense (CommonsenseQA) benchmarks:

  • Accuracy: No significant decline compared to the full reasoning baseline, with improvements on some datasets;
  • Efficiency: Average reasoning steps reduced by 40-60%, lowering computational cost and latency;
  • Adaptability: The selector frequently skips reasoning on simple tasks while activating it at a high rate on complex ones.

Section 06

Implications of SeLaR for Reasoning Paradigms and Conclusions

Implications for Reasoning Paradigms

SeLaR's contributions include:

  1. Metacognitive ability: the model gains awareness of and control over its own thinking process;
  2. Balance between efficiency and quality: an intelligent selection mechanism achieves both at once;
  3. Value of latent space: implicit reasoning in latent space is shown to be efficient.

Conclusion

SeLaR moves LLM reasoning from a one-size-fits-all approach toward an adaptive strategy, offering ideas for practical and sustainable AI systems, and is expected to inspire further research on selective computation and metacognitive AI.


Section 07

Limitations and Future Research Directions

Limitations and Future Directions

Limitations: the current selector relies on surface features of the input and struggles to judge the structure of complex problems; training also requires a large amount of labeled data indicating when reasoning is needed.

Future directions: develop methods that learn selection strategies automatically from feedback, and explore multimodal scenarios, where reasoning costs are even higher.


Section 08

SeLaR Open Source and Community Resources

Open Source and Community

SeLaR's code and pre-trained models have been open-sourced on GitHub. Developers can:

  • Fine-tune and evaluate SeLaR on their own tasks;
  • Explore selector architectures and training strategies;
  • Integrate into existing reasoning systems.
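As a rough picture of what integration might look like, here is a toy gating wrapper. Every name in it (`score_complexity`, `generate`, `answer`) is a placeholder invented for illustration, not the actual SeLaR API; consult the repository for the real interface:

```python
# Hypothetical integration sketch: wrapping an existing generation function
# with a selector gate. All function names here are placeholders.

def score_complexity(prompt: str) -> float:
    """Toy stand-in for a learned selector: longer prompts look harder."""
    return min(1.0, len(prompt.split()) / 50)

def generate(prompt: str, deep_reasoning: bool) -> str:
    """Placeholder generator that records which path was taken."""
    mode = "latent-reasoning" if deep_reasoning else "direct"
    return f"[{mode}] answer to: {prompt}"

def answer(prompt: str, threshold: float = 0.5) -> str:
    """Route the prompt through deep reasoning only above the threshold."""
    return generate(prompt, deep_reasoning=score_complexity(prompt) > threshold)

print(answer("What is 2 + 2?"))  # short prompt → direct path
```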

Open-sourcing promotes the method's widespread adoption and further development.