# ACE: An Intelligent Context Compression Scheme for Multi-turn Agentic LLM Reasoning

> ACE solves the window saturation problem in long-context reasoning by compressing tool outputs while retaining key information through a content-aware scoring mechanism.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-14T17:14:05.000Z
- Last activity: 2026-05-14T17:21:27.430Z
- Heat: 137.9
- Keywords: LLM, context compression, agentic reasoning, tool calling, context management, multi-turn dialogue
- Page link: https://www.zingnex.cn/en/forum/thread/ace-agentic-llm
- Canonical: https://www.zingnex.cn/forum/thread/ace-agentic-llm
- Markdown source: floors_fallback

---

## Introduction

ACE (Attention-Weighted Context Eviction) is an intelligent context compression scheme designed to address context-window saturation in multi-turn Agentic LLM reasoning. Through a content-aware, line-level scoring mechanism it retains key information (tool-call JSON, error messages, file paths), removes redundant content, and improves context-utilization efficiency while maintaining task accuracy.

## Background: Dilemma of Long-Context Reasoning

In multi-turn Agentic tasks, LLMs frequently call tools to obtain information (reading files, executing commands, searching the web). The large volume of text returned by these tools quickly fills the context window. Traditional solutions apply simple head/tail truncation, but discarding content by position alone easily loses critical error messages, file paths, or task-framing information.

## Core Methods of ACE: Content-Aware Scoring and Compression Process

The core idea of ACE is intelligent compression based on content importance, rather than discarding old content in chronological order. Its line-level scoring system assigns each line a score from 0 to 1 by content type (e.g., tool-call JSON: 1.0; error messages: 0.95; file paths: 0.90). The compression process:

1. Check whether the total character count exceeds the budget.
2. Identify candidate messages for compression.
3. Score each candidate line by line.
4. Retain high-scoring lines up to the target ratio, always keeping the first and last lines.
5. Mark omitted spans transparently with explicit omission markers.
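To make the scoring step concrete, here is a minimal sketch of a line-level scorer. The category scores (1.0, 0.95, 0.90, 0.85, 0.30) come from the post; the detection heuristics (regexes, JSON parsing, heading matching) are illustrative assumptions, not the real ACE implementation.

```python
import json
import re

def score_line(line: str) -> float:
    """Map a line of tool output to a content-importance score in [0, 1].

    Sketch only: category scores follow the post, detection rules are
    illustrative stand-ins for ACE's real classifier.
    """
    stripped = line.strip()
    if not stripped:
        return 0.0
    try:
        # Tool-call JSON objects/arrays: highest priority (1.0).
        if isinstance(json.loads(stripped), (dict, list)):
            return 1.0
    except ValueError:
        pass
    if re.search(r"\b(error|exception|traceback|failed)\b", stripped, re.I):
        return 0.95  # error messages
    if re.search(r"(?:^|\s)(?:/[\w.\-]+){2,}", stripped):
        return 0.90  # file paths (two or more slash-separated components)
    if re.match(r"#{1,6}\s|\d+\.\s", stripped):
        return 0.85  # task framing (headings, numbered steps) — an assumption
    return 0.30      # boilerplate and everything else in this sketch
```

A compressor can then rank a message's lines by these scores and keep the top fraction, which is what the process steps above describe.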

## Experimental Validation and Comparison with Traditional Truncation

**Experimental Results**: On a SWE-bench Lite test subset, with the Qwen3-Next-80B model and an 8,000-character budget: no compression achieved 0% accuracy; KV truncation achieved 20% accuracy while evicting 23,407 characters; ACE also achieved 20% accuracy while evicting only 10,132 characters (a 57% reduction), and ACE agents needed fewer rounds on average (5.4 vs. 6.0).

**Comparison with Traditional Truncation**:
| Feature | Head/Tail Truncation | ACE |
|---------|----------------------|-----|
| Selection Criterion | Position (oldest first) | Content importance score |
| Error Message Retention | No (if within truncated range) | Yes (score: 0.95) |
| File Path Retention | No | Yes (score: 0.90) |
| Task Framework Retention | No | Yes (score: 0.85) |
| Boilerplate Content Removal | Accidental | Intentional (score: 0.30) |
| Omission Visibility | Silent | Explicit marker |

## Practical Applications and Technical Implementation of ACE

**Application Scenarios**: Suitable for any Agentic system that must handle long contexts. Integration point: run compression after each tool call and before the next LLM call.
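The integration point above can be sketched as an agent loop with a compression hook. Everything here is hypothetical scaffolding (`llm_call`, `run_tool`, and `compress_messages` are placeholders, not the real package API); only the hook position and the 8,000-character budget come from the post.

```python
CHAR_BUDGET = 8000  # same character budget used in the post's experiments

def total_chars(messages):
    """Total content length of a message list, in characters."""
    return sum(len(m["content"]) for m in messages)

def agent_step(messages, llm_call, run_tool, compress_messages):
    """One turn of a hypothetical agent loop with an ACE-style hook.

    llm_call(messages) -> reply with .text and .tool_call
    run_tool(tool_call) -> tool output string
    compress_messages(messages, budget) -> compressed message list
    """
    reply = llm_call(messages)
    messages.append({"role": "assistant", "content": reply.text})
    if reply.tool_call:
        result = run_tool(reply.tool_call)
        messages.append({"role": "tool", "content": result})
        # The hook: compress after the tool call, before the next LLM call.
        if total_chars(messages) > CHAR_BUDGET:
            messages = compress_messages(messages, budget=CHAR_BUDGET)
    return messages
```

Placing the hook here means the model never sees an over-budget context, while turns without tool calls pay no compression cost.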

**Technical Implementation**: Provides a Python package that supports full message list compression, single text block compression, line-by-line scoring API, and the ACECompressor class. After compression, key structures are retained and omitted content is explicitly marked.
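The interfaces described above might look like the following. This is a sketch of a plausible `ACECompressor` shape, not the package's actual API: the method names, signatures, and the trivial stand-in scorer are all assumptions; only the feature list (message-list compression, text-block compression, first/last-line retention, explicit omission markers) comes from the post.

```python
class ACECompressor:
    """Hypothetical interface mirroring the features described in the post."""

    def __init__(self, char_budget: int = 8000, keep_ratio: float = 0.5):
        self.char_budget = char_budget
        self.keep_ratio = keep_ratio

    def score_line(self, line: str) -> float:
        # Trivial stand-in for the content-aware scorer (JSON 1.0, errors 0.95, ...).
        return 0.95 if "error" in line.lower() else 0.30

    def compress_text(self, text: str) -> str:
        """Compress a single text block, keeping first/last and high-score lines."""
        lines = text.splitlines()
        if len(lines) <= 2:
            return text
        keep = max(2, int(len(lines) * self.keep_ratio))
        # Rank lines: first/last always win, then by score.
        ranked = sorted(
            range(len(lines)),
            key=lambda i: (i in (0, len(lines) - 1), self.score_line(lines[i])),
            reverse=True,
        )[:keep]
        out, prev = [], -1
        for i in sorted(ranked):
            if i != prev + 1:  # gap: make the omission explicit
                out.append(f"... [{i - prev - 1} lines omitted] ...")
            out.append(lines[i])
            prev = i
        return "\n".join(out)

    def compress_messages(self, messages: list) -> list:
        """Compress a full message list only when it exceeds the budget."""
        if sum(len(m["content"]) for m in messages) <= self.char_budget:
            return messages
        return [{**m, "content": self.compress_text(m["content"])}
                for m in messages]
```

The omission markers keep the compression auditable: a reader of the transcript can see exactly where, and how much, content was dropped.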

## Value and Paradigm Shift of ACE

ACE achieves a paradigm shift from "passive truncation" to "active selection". It intelligently identifies and retains the most valuable information and improves context-utilization efficiency while preserving the traceability and debuggability of Agentic tasks, offering a practical, efficient solution for building reliable long-running Agent systems.
