Zing Forum

Reading

Can Large Language Models Deobfuscate Binary Code? A Systematic Analysis and the BinDeObfBench Benchmark

This paper systematically evaluates the performance of LLMs on binary deobfuscation tasks using the BinDeObfBench benchmark, finding that reasoning ability and domain expertise matter more than model size, and that task-specific fine-tuning outperforms general pre-training.

Binary Deobfuscation · Large Language Models · Reverse Engineering · Software Security · BinDeObfBench · Code Obfuscation · Supervised Fine-Tuning · Reasoning Models
Published 2026-04-09 18:56 · Recent activity 2026-04-10 09:48 · Estimated read 6 min

Section 01

[Introduction] Systematic Analysis of Large Language Models for Binary Deobfuscation and the BinDeObfBench Benchmark

This paper systematically evaluates the performance of Large Language Models (LLMs) on binary deobfuscation by constructing the BinDeObfBench benchmark. Key findings: reasoning ability and domain expertise matter more than model size; supervised fine-tuning (SFT) on deobfuscation tasks outperforms general pre-training; and models with reasoning capabilities are more robust under heavy obfuscation and generalize better across architectures. The release of BinDeObfBench provides a standardized evaluation foundation for research on LLM-assisted deobfuscation.


Section 02

Background: Challenges in Binary Deobfuscation and Gaps in Existing Research

Binary deobfuscation is a core challenge in reverse engineering for software security. Traditional methods (rule-based rewriting, pattern matching, symbolic execution) struggle against newer obfuscation techniques. The success of LLMs at code understanding and related tasks raises the question of whether they can tackle deobfuscation, but existing research has limitations: it focuses on specific obfuscation types or model architectures without systematic comparison, and existing benchmarks cover too few scenarios to reflect real-world capability boundaries.


Section 03

Methodology: Construction of the BinDeObfBench Comprehensive Evaluation Benchmark

BinDeObfBench is the first comprehensive benchmark for LLM-based binary deobfuscation. Its design features: 1. Multi-stage obfuscation coverage: pre-compilation (source level), compile time (compiler-IR level), and post-compilation (binary level); 2. Cross-architecture and cross-optimization coverage: instruction sets including x86, ARM, and RISC-V, and optimization levels O0 through O3, ensuring broadly applicable evaluation.
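A minimal sketch of how such a configuration matrix might be enumerated when generating benchmark samples. The axis values mirror the paper's description (x86/ARM/RISC-V, O0–O3, three obfuscation stages), but the function and field names are illustrative assumptions, not part of BinDeObfBench itself:

```python
from itertools import product

# Axes described in the paper; exact naming here is an assumption.
ARCHS = ["x86_64", "arm64", "riscv64"]
OPT_LEVELS = ["O0", "O1", "O2", "O3"]
STAGES = ["source", "ir", "binary"]  # pre-, mid-, and post-compilation obfuscation

def build_matrix(archs=ARCHS, opts=OPT_LEVELS, stages=STAGES):
    """Enumerate every (architecture, optimization, stage) variant of one sample."""
    return [
        {"arch": a, "opt": o, "stage": s}
        for a, o, s in product(archs, opts, stages)
    ]

configs = build_matrix()
# 3 architectures x 4 optimization levels x 3 obfuscation stages = 36 variants
assert len(configs) == 36
```

Enumerating the full cross product up front makes it easy to check that every sample is evaluated under identical conditions across architectures and optimization levels.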


Section 04

Key Findings: Reasoning Ability and Domain Fine-Tuning Are Critical, Size Is Not a Determining Factor

The experimental evaluation yielded four key findings: 1. Reasoning ability beats size: medium-sized but well-trained models can outperform untrained large models; 2. Task-specific fine-tuning wins: SFT models consistently outperform general pre-trained models; 3. Reasoning models are robust: chain-of-thought models perform better under heavy obfuscation and generalize well across architectures; 4. In-context learning effects vary: it brings significant gains for standard models but limited gains for reasoning models.
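Comparing models on findings like these requires a scoring function between a model's deobfuscated output and the reference pseudocode. The paper's actual metrics are not reproduced here; the sketch below uses a whitespace-insensitive, line-level similarity (stdlib `difflib`) purely as an assumed stand-in:

```python
import difflib

def normalize(code: str) -> list[str]:
    """Crude normalization: strip per-line whitespace and drop blank lines."""
    return [ln.strip() for ln in code.splitlines() if ln.strip()]

def similarity(predicted: str, reference: str) -> float:
    """Line-level similarity in [0, 1] between a model's deobfuscated
    output and the reference pseudocode."""
    sm = difflib.SequenceMatcher(a=normalize(predicted), b=normalize(reference))
    return sm.ratio()

ref = "int add(int a, int b) {\n  return a + b;\n}"
pred = "int add(int a, int b) {\n    return a + b;\n}"
assert similarity(pred, ref) == 1.0  # indentation differences are ignored
```

A real evaluation would aggregate such scores per obfuscation type, architecture, and optimization level to surface the robustness and generalization differences described above.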


Section 05

Conclusions and Practical Recommendations: Importance of Domain Training and Reasoning Ability

Practical recommendations based on the findings: 1. Prioritize domain-specific training: fine-tuning on deobfuscation datasets requires extra annotation, but the performance gains are significant; 2. Cultivate reasoning ability: use chain-of-thought data, multi-step reasoning supervision, and similar techniques to strengthen model reasoning; 3. Establish a continuous evaluation mechanism: regularly track the performance of new models and techniques with BinDeObfBench to keep pace with evolving obfuscation.
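The domain-specific training recommended above presupposes an SFT dataset of obfuscated/clean pairs. A sketch of one plausible JSONL record layout is shown below; the field names and the sample pseudocode are assumptions for illustration, not the paper's actual schema:

```python
import json

# Hypothetical SFT record: obfuscated decompiler pseudocode paired with a
# clean reference, plus metadata for per-configuration evaluation.
record = {
    "instruction": "Deobfuscate the following decompiled function.",
    "input": "int v1(int a1){int v2=a1^0;return v2+0;}",  # obfuscated pseudocode
    "output": "int identity(int a) { return a; }",         # clean target
    "meta": {"arch": "x86_64", "opt": "O2", "obfuscation": "mba"},
}

# One record per line, ready to append to a JSONL training file.
line = json.dumps(record)
parsed = json.loads(line)
assert parsed["meta"]["arch"] == "x86_64"
```

Keeping the obfuscation type and build configuration in a metadata field lets the same corpus drive both fine-tuning and the per-axis tracking that continuous evaluation requires.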


Section 06

Limitations and Future Directions: Shortcomings of BinDeObfBench and Subsequent Research Directions

Limitations of the current benchmark: it focuses mainly on pseudocode-level deobfuscation and does not address recovery of the original source code, and it excludes dynamic obfuscation (such as runtime self-modifying code). Future directions: extend to more targets (GPU kernels, embedded firmware); add dynamic-analysis dimensions; explore human-machine collaboration models; and develop more efficient fine-tuning strategies to reduce the cost of domain adaptation.