Section 01
DefensiveKV: Addressing the Vulnerability of KV Cache Eviction in LLM Inference
DefensiveKV is the official implementation of an ICLR 2026 paper. It proposes a systematic solution to the vulnerability of KV cache eviction strategies in large language model (LLM) inference, significantly improving the stability of long-context reasoning. This thread introduces the background, method, experimental results, and practical value in separate posts.
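For context, KV cache eviction means discarding some cached key/value entries to stay within a memory budget. Below is a minimal sketch of a generic score-based eviction policy, the kind of strategy whose fragility the paper targets; this is an illustration only, not DefensiveKV's method, and the function name, scoring rule, and toy data are assumptions.

```python
import numpy as np

def evict_kv_cache(keys, values, attn_scores, budget):
    """Generic illustration: keep only the `budget` cached entries
    with the highest cumulative attention scores, evict the rest.
    (Not the paper's method -- a common baseline policy.)"""
    # Indices of the top-`budget` entries, restored to original order
    # so positional structure of the cache is preserved.
    keep = np.sort(np.argsort(attn_scores)[-budget:])
    return keys[keep], values[keep], keep

# Toy cache: 6 cached tokens, head dimension 4 (made-up data).
rng = np.random.default_rng(0)
keys = rng.standard_normal((6, 4))
values = rng.standard_normal((6, 4))
scores = np.array([0.9, 0.1, 0.5, 0.05, 0.8, 0.2])

k, v, kept = evict_kv_cache(keys, values, scores, budget=3)
print(kept)     # [0 2 4] -- the three highest-scoring positions
print(k.shape)  # (3, 4)
```

The vulnerability such strategies share is that entries judged unimportant now may be critical for a later query; once evicted, their information is unrecoverable, which can destabilize long-context reasoning.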