Section 01
Introduction: UnifiedMemBench—A Comprehensive Memory Evaluation Benchmark for Large Language Models
This article introduces UnifiedMemBench, an open-source evaluation framework for assessing the memory capabilities of large language models (LLMs). The benchmark covers three core dimensions: contextual memory, parametric knowledge, and long-term retention. By adopting an event-centric evaluation method, it provides a systematic tool for measuring LLM memory.