Zing Forum

LongCat-Flash-Prover: Meituan Open-Sources 560B Parameter MoE Model, Setting a New Record in Mathematical Formal Reasoning

Meituan's LongCat team released the open-source 560B parameter MoE model LongCat-Flash-Prover. Through native formal reasoning and reinforcement learning integrated with agent tools, it set a new record for open-source models on the Lean4 theorem proving benchmark.

Tags: LongCat-Flash-Prover · Meituan · MoE · formal reasoning · Lean4 · theorem proving · open-source model · mathematical reasoning · reinforcement learning · HisPO
Published 2026-04-02 14:25 · Last activity 2026-04-02 14:52 · Estimated read: 5 min

Section 01

Introduction: Meituan Open-Sources LongCat-Flash-Prover

Meituan's LongCat team has officially open-sourced LongCat-Flash-Prover, a 560B-parameter Mixture of Experts (MoE) model deeply optimized for mathematical formal reasoning. By combining a native formal reasoning paradigm, reinforcement learning with integrated agent tools, and the new HisPO training algorithm, the model sets a new record for open-source models on Lean4 theorem proving benchmarks, marking an important breakthrough in formal mathematical reasoning.

Section 02

Background: Core Position of Mathematical Reasoning and Limitations of Traditional Models

Mathematical reasoning ability is a core measure of a large model's intelligence. Traditional large language models handle mathematical problems through natural-language reasoning chains, which lack a strict logical verification mechanism and are prone to "hallucination." To address this, the LongCat team proposed the native formal reasoning paradigm, which treats formal reasoning as a basic capability of the model: without modifying the architecture, the model works directly in formal languages such as Lean4 to solve tasks.
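To make the contrast with natural-language reasoning concrete, here is a minimal Lean4 snippet (illustrative, not output from the model): every step is checked by Lean's kernel, so a "hallucinated" proof simply fails to compile.

```lean
-- A toy Lean 4 theorem: the kernel verifies the concrete arithmetic
-- claim via `decide`, so the result is machine-checked, not asserted.
theorem two_add_three : 2 + 3 = 5 := by decide

-- A general statement, closed with `Nat.add_comm` from Lean's core
-- library. An incorrect proof term would be rejected at compile time.
theorem add_comm_example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```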

Section 03

Core Technologies: Native Formal Reasoning and HisPO Algorithm

Native formal reasoning is decomposed into three agent capabilities: autoformalization (converting natural-language statements into verified formal statements), sketch generation (producing lemma-style proof sketches), and proving (completing full proofs or auxiliary lemmas), all enhanced by tool-integrated reasoning. The training framework runs an expert-iteration-style loop, synthesizing verifier-checked trajectory data. To stabilize reinforcement learning on the 560B-parameter MoE, the team proposes the HisPO algorithm, which smooths gradient discrepancies through a hierarchical clipping strategy and adds theorem-consistency checks to prevent reward hacking.
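The three-stage agent loop described above can be sketched as follows. This is a hypothetical outline, not Meituan's actual API: every function name (`autoformalize`, `sketch`, `prove`, `lean_check`) is a placeholder standing in for LLM calls and a Lean4 verifier.

```python
# Hypothetical sketch of the three-stage proving loop: autoformalize,
# sketch, prove, with a verifier as the final arbiter of correctness.
# All helpers are stubs, not the model's real interface.

def autoformalize(nl_statement: str) -> str:
    """Convert a natural-language statement into a Lean4 theorem stub."""
    return f"theorem goal : {nl_statement} := by sorry"

def sketch(formal_stmt: str) -> list[str]:
    """Break the goal into lemma-style subgoals (here: a single step)."""
    return [formal_stmt]

def prove(subgoal: str) -> str:
    """Attempt a complete proof for one subgoal."""
    return subgoal.replace("sorry", "decide")

def lean_check(proof: str) -> bool:
    """Stand-in for the Lean4 kernel; accepts only sorry-free proofs."""
    return "sorry" not in proof

def prove_statement(nl_statement: str) -> bool:
    formal = autoformalize(nl_statement)
    proofs = [prove(g) for g in sketch(formal)]
    # The verifier, not the model, decides whether the proof counts.
    return all(lean_check(p) for p in proofs)

print(prove_statement("2 + 3 = 5"))  # True: every subgoal verified
```

The design point this illustrates is that reward comes from the external verifier, which is what makes the synthesized trajectories "verifiable" training data.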

Section 04

Benchmark Results: New Record for Open-Source Models

LongCat-Flash-Prover performed strongly on authoritative benchmarks: its autoformalization Pass@8 is the best among open-source models; for theorem proving, it reached a 97.1% pass rate on MiniF2F-Test with only 72 attempts per problem, solved 70.8% of ProverBench, and solved 41.5% of PutnamBench (Putnam competition-level problems), significantly outperforming existing open-source baselines.
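Pass@k figures like those above are conventionally computed with the unbiased estimator (n samples generated, c of them correct, budget k); a minimal sketch, assuming that convention is used here:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator.

    Probability that at least one of k samples drawn without
    replacement from n generated attempts (c correct) is correct.
    """
    if n - c < k:
        return 1.0  # too few failures to fill k slots: success is certain
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 72 attempts on one problem, 1 correct, reporting budget k=8
print(round(pass_at_k(72, 1, 8), 4))  # → 0.1111
```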

Section 05

Technical Details and Deployment Information

The model is based on a 560B-parameter MoE architecture, with an optimized chat template supporting tool declarations, interleaved thinking, and reasoning retention. It has been adapted to both SGLang and vLLM, with deployment guides provided. The weights are released under the MIT license, permitting free use and modification.
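Since both vLLM and SGLang expose an OpenAI-compatible API once the model is served, a request can be built like the sketch below. The model id and prompt are illustrative placeholders, not the official identifiers from the release.

```python
import json

# Illustrative request body for a vLLM/SGLang OpenAI-compatible
# endpoint (POST /v1/chat/completions). The model id is a placeholder,
# not necessarily the official repository name.
payload = {
    "model": "meituan-longcat/LongCat-Flash-Prover",  # placeholder id
    "messages": [
        {"role": "system",
         "content": "You are a Lean4 theorem prover."},
        {"role": "user",
         "content": "Prove: theorem t (a b : Nat) : a + b = b + a"},
    ],
    "max_tokens": 2048,
    "temperature": 1.0,
}
body = json.dumps(payload)
print(len(json.loads(body)["messages"]))  # 2
```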

Section 06

Research Significance and Future Outlook

This model demonstrates the effectiveness of large-scale MoE architectures for formal reasoning. The native formalization paradigm opens a new direction for AI-assisted mathematical research, and the open-source release encourages collaboration across the field. Future AI systems are expected to play a growing role in mathematical discovery, program verification, scientific computing, and related areas, and LongCat-Flash-Prover is a key step in that direction.