
ACE: An Intelligent Context Compression Scheme for Multi-turn Agentic LLM Reasoning

ACE solves the window saturation problem in long-context reasoning by compressing tool outputs while retaining key information through a content-aware scoring mechanism.

LLM context compression · Agentic reasoning · tool calling · context management · multi-turn dialogue
Published 2026-05-15 01:14 · Recent activity 2026-05-15 01:21 · Estimated read 5 min

Section 01

ACE: An Intelligent Context Compression Scheme for Multi-turn Agentic LLM Reasoning (Introduction)

ACE (Attention-Weighted Context Eviction) is an intelligent context compression scheme designed to address the long-context window saturation problem in multi-turn Agentic LLM reasoning. Through a content-aware, line-level scoring mechanism it retains key information (such as tool call JSON, error messages, and file paths), removes redundant content, and improves context utilization while maintaining task accuracy.


Section 02

Background: Dilemma of Long-Context Reasoning

In multi-turn Agentic tasks, LLMs frequently call tools to obtain information (reading files, executing commands, searching the web), and the large volume of text these tools return quickly fills the context window. Traditional solutions use simple head/tail truncation, but blindly discarding content by position easily loses critical error messages, file paths, or task framework information.


Section 03

Core Methods of ACE: Content-Aware Scoring and Compression Process

The core idea of ACE is intelligent compression based on content importance, rather than discarding old content in chronological order. Its line-level scoring system assigns each line a score from 0 to 1 by content type (e.g., tool call JSON: 1.0, error messages: 0.95, file paths: 0.90). The compression process, sketched in code below, runs in five steps: check whether the total character count exceeds the budget; identify candidate messages; score them line by line; retain the highest-scoring lines up to the target ratio (always keeping the first and last lines); and replace the removed spans with explicit omission markers.
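To make the scoring-and-retention loop concrete, here is a minimal, self-contained Python sketch. The regex patterns, score weights, and the keep_ratio default are illustrative stand-ins derived from the categories above, not the package's actual implementation:

```python
import json
import re

def score_line(line: str) -> float:
    """Assign a 0-1 importance score to one line of tool output.
    Patterns and weights are illustrative, mirroring the categories above."""
    stripped = line.strip()
    if stripped.startswith("{"):
        try:
            json.loads(stripped)       # intact tool-call JSON: top priority
            return 1.0
        except ValueError:
            pass
    if re.search(r"\b(error|exception|traceback)\b", stripped, re.IGNORECASE):
        return 0.95                    # error messages
    if re.search(r"(/[\w.-]+){2,}", stripped):
        return 0.90                    # file paths
    if not stripped:
        return 0.10                    # blank lines are cheap to drop
    return 0.30                        # boilerplate and everything else

def compress_block(text: str, keep_ratio: float = 0.5) -> str:
    """Keep the highest-scoring lines up to keep_ratio, always preserving
    the first and last lines, and mark omitted spans explicitly."""
    lines = text.splitlines()
    if len(lines) <= 2:
        return text
    budget = max(2, int(len(lines) * keep_ratio))
    middle = sorted(range(1, len(lines) - 1),
                    key=lambda i: score_line(lines[i]), reverse=True)
    keep = {0, len(lines) - 1} | set(middle[:budget - 2])
    out, omitted = [], 0
    for i, line in enumerate(lines):
        if i in keep:
            if omitted:
                out.append(f"... [{omitted} lines omitted] ...")
            omitted = 0
            out.append(line)
        else:
            omitted += 1
    return "\n".join(out)
```

On a typical tool dump, this keeps the JSON payload, tracebacks, and paths while log boilerplate collapses into a single omission marker.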


Section 04

Experimental Validation and Comparison with Traditional Truncation

Experimental Results: On the SWE-bench Lite test subset, with the Qwen3-Next-80B model and an 8,000-character budget: no compression → 0% accuracy; KV truncation → 20% accuracy while evicting 23,407 characters; ACE → 20% accuracy while evicting only 10,132 characters (a 57% reduction in evicted text), with fewer agent rounds on average (5.4 vs. 6.0).

Comparison with Traditional Truncation:

| Feature | Head/Tail Truncation | ACE |
|---|---|---|
| Selection criterion | Position (oldest first) | Content importance score |
| Error message retention | No (if within truncated range) | Yes (score 0.95) |
| File path retention | No | Yes (score 0.90) |
| Task framework retention | No | Yes (score 0.85) |
| Boilerplate content removal | Accidental | Intentional (score 0.30) |
| Omission visibility | Silent | Explicit marker |

Section 05

Practical Applications and Technical Implementation of ACE

Application Scenarios: Suitable for Agentic systems that need to handle long contexts. Integration Method: run compression after each tool call and before the next LLM call, as in the sketch below.
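A minimal sketch of that placement in an agent loop follows. The llm_call and run_tool helpers, the message shape, and the reuse of compress_block from Section 03 are assumptions for illustration; the 8,000-character budget echoes the experiment above.

```python
CHAR_BUDGET = 8000   # mirrors the budget used in the experiment above

def agent_loop(messages, llm_call, run_tool, max_rounds=10):
    """Hypothetical agent loop: compress after each tool call,
    before the next LLM call. llm_call/run_tool are placeholders."""
    for _ in range(max_rounds):
        reply = llm_call(messages)            # assistant turn
        messages.append(reply)
        if reply.get("tool_call") is None:
            return reply                      # no tool requested: done
        result = run_tool(reply["tool_call"])
        messages.append({"role": "tool", "content": result})
        # Compression point: after the tool call, before the next LLM call.
        if sum(len(m.get("content", "")) for m in messages) > CHAR_BUDGET:
            for m in messages:
                if m.get("role") == "tool":   # tool outputs are the candidates
                    m["content"] = compress_block(m["content"])
    return None                               # round limit reached
```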

Technical Implementation: ACE is provided as a Python package that supports full message-list compression, single text-block compression, a line-by-line scoring API, and an ACECompressor class. After compression, key structures are retained and omitted content is explicitly marked. A hypothetical usage sketch follows.
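As a usage illustration only: the import path, constructor argument, and method names below are guesses at what such an API might look like, not the package's documented interface.

```python
from ace import ACECompressor   # import path assumed for illustration

long_tool_output = open("tool_output.txt").read()   # placeholder input
messages = [{"role": "tool", "content": long_tool_output}]

compressor = ACECompressor(char_budget=8000)        # kwarg name assumed

# 1. Full message-list compression (the agent-loop integration point)
messages = compressor.compress_messages(messages)

# 2. Single text-block compression
short_output = compressor.compress_text(long_tool_output)

# 3. Line-by-line scoring, useful for inspecting what would be kept
for line in long_tool_output.splitlines():
    print(f"{compressor.score_line(line):.2f}  {line}")
```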


Section 06

Value and Paradigm Shift of ACE

ACE achieves a paradigm shift from "passive truncation" to "active selection". It intelligently identifies and retains the most valuable information, improves context utilization efficiency, while maintaining the traceability and debuggability of Agentic tasks, providing a practical and efficient solution for building reliable long-running Agent systems.