Zing Forum


ARPM: Heterogeneous Temporal Memory Governance Framework Enables Long-Term Personality Consistency in LLMs

ARPM maintains long-term personality consistency in high-noise environments by separating static knowledge memory from dynamic dialogue experience memory, and integrating technologies such as vector retrieval, BM25, RRF fusion, and dual-temporal reordering.

Tags: long-term dialogue · personality consistency · memory governance · vector retrieval · BM25 · RRF fusion · temporal reordering · evidence validation
Published 2026-05-14 21:12 · Recent activity 2026-05-15 10:23 · Estimated read 6 min

Section 01

ARPM: Heterogeneous Temporal Memory Governance Framework Enables Long-Term Personality Consistency in LLMs

ARPM (Heterogeneous Temporal Memory Governance Framework) addresses issues like fact loss, timeline confusion, and personality drift in long-term LLM dialogues by separating static knowledge memory from dynamic dialogue experience memory, and integrating vector retrieval, BM25, RRF fusion, dual-temporal reordering, and controlled analysis protocols. It maintains long-term personality consistency in high-noise environments. This framework treats personality continuity as a traceable, auditable, and transferable governance issue, breaking through the limitations of existing solutions at the system level.
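The separation described above can be sketched as a data structure. This is a minimal illustration, not the paper's implementation; all class and method names (`StaticFact`, `EpisodicEvent`, `DualTrackMemory`) are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class StaticFact:
    """Slow-changing knowledge: user profile entries, stated preferences."""
    key: str
    value: str
    updated_at: datetime
    source: str  # provenance, so corrections can be audited

@dataclass
class EpisodicEvent:
    """Append-only dialogue experience: what was said, when, with what sentiment."""
    text: str
    timestamp: datetime
    sentiment: float = 0.0

@dataclass
class DualTrackMemory:
    static_track: dict = field(default_factory=dict)
    episodic_track: list = field(default_factory=list)

    def upsert_fact(self, fact: StaticFact) -> None:
        # Static memory is overwritten in place (latest audited value wins) ...
        self.static_track[fact.key] = fact

    def append_event(self, event: EpisodicEvent) -> None:
        # ... while episodic memory is append-only, preserving the timeline.
        self.episodic_track.append(event)

    def export_static(self) -> dict:
        # Exporting only the static track is what would enable
        # cross-model personality transfer.
        return {k: f.value for k, f in self.static_track.items()}
```

The key design point the sketch captures is that the two tracks have different update semantics: facts are corrected in place with provenance, while experience accumulates chronologically.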


Section 02

Core Challenges of Long-Term Dialogue and Limitations of Existing Solutions

Large language models suffer fact loss, timeline confusion, personality drift, and declining stability in long-term dialogues; these problems are especially severe with high-noise knowledge bases, periodic context clearing, and cross-model switching. Existing solutions fall into two categories: fine-tuning personality into the model weights (hard to adapt to dynamic settings) and relying on ultra-long contexts (limited by window length and attention dilution); neither effectively solves the long-term consistency problem.


Section 03

Core Design of ARPM: Dual-Track Memory Architecture and Multi-Layer Retrieval Fusion

Dual-Track Memory Architecture: separates static knowledge memory (user profiles, preferences, and other factual information) from dynamic dialogue experience memory (interaction history, emotional changes, etc.), each with its own retrieval and update mechanisms, and supports audited corrections and cross-model personality transfer.

Multi-Layer Retrieval Fusion: the base layer uses vector retrieval to capture semantic similarity, supplemented by BM25 for exact matching; results are fused via RRF and then pass through dual-temporal reordering (time decay plus temporal evidence reading) to maintain dialogue coherence.
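The fusion-and-reorder step can be sketched concretely. RRF (Reciprocal Rank Fusion) is a standard formula, `score(d) = Σ 1/(k + rank)`; the time-decay rerank below is a simplified illustration of the "time decay" half of dual-temporal reordering, with hypothetical function names and a half-life parameter not specified in the article:

```python
from datetime import datetime

def rrf_fuse(rankings: list, k: int = 60) -> list:
    """Reciprocal Rank Fusion: each list contributes 1 / (k + rank) per document."""
    scores: dict = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def time_decay_rerank(fused: list, timestamps: dict,
                      now: datetime, half_life_days: float = 30.0) -> list:
    """Rerank the fused list by position score times exponential recency decay."""
    def score(pos: int, doc_id: str) -> float:
        age_days = (now - timestamps[doc_id]).total_seconds() / 86400.0
        recency = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
        return (1.0 / (1 + pos)) * recency            # 1, 1/2, 1/3, ... by position
    scored = sorted(((score(pos, d), d) for pos, d in enumerate(fused)), reverse=True)
    return [d for _, d in scored]
```

Usage would be: feed the vector-retrieval ranking and the BM25 ranking into `rrf_fuse`, then pass the fused list through `time_decay_rerank` so recent dialogue evidence outranks stale matches of equal relevance.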


Section 04

Validation of ARPM Effectiveness: Experimental Data and Extreme Scenario Testing

Controlled Analysis Protocol: validates each piece of evidence for relevance, timeliness, consistency, and credibility, and binds evidence to responses.

Experiments (with manual review) show that recall in high-noise environments improves significantly: from 54% to 100% at a 1:5 noise ratio, and from 44% to 80% at a 1:200+ noise ratio.

Ablation Experiments: disabling dialogue-history retrieval reduces accuracy from 100% to 66.7%, and disabling BM25 reduces it to 80%, demonstrating that both components are necessary.

Extreme Scenarios: semantic, boundary, and personality consistency hold under a 5.1-million-character noise base plus context clearing plus multi-model switching, though performance degrades when protocol compliance is weak.
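A validation gate over the four checks could look like the following. This is a hedged sketch: the thresholds, the `Evidence` fields, and the keyword-overlap test for "consistency" are all illustrative assumptions, not the protocol's actual criteria:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Evidence:
    text: str
    relevance: float      # e.g. retrieval similarity score in [0, 1] (assumed)
    timestamp: datetime
    source_trust: float   # credibility weight assigned to the source (assumed)

def passes_protocol(ev: Evidence, claim_keywords: set, now: datetime,
                    max_age: timedelta = timedelta(days=365),
                    min_relevance: float = 0.5,
                    min_trust: float = 0.6) -> bool:
    """Gate a piece of evidence on the four checks named in the article:
    relevance, timeliness, consistency (here: keyword overlap with the
    claim, a stand-in), and credibility. Only passing evidence would be
    bound to the response."""
    relevant = ev.relevance >= min_relevance
    timely = (now - ev.timestamp) <= max_age
    consistent = bool(claim_keywords & set(ev.text.lower().split()))
    credible = ev.source_trust >= min_trust
    return relevant and timely and consistent and credible
```

Binding evidence to responses then means a response may only cite items for which this gate returned true, which is what makes failures traceable to a specific check.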


Section 05

Value and Limitations of ARPM

White-Box Evaluation: the modular design supports independent evaluation of each component's contribution, enabling quick localization of issues to the retrieval, validation, or update stage and reducing maintenance costs.

Limitations: system performance degrades when protocol compliance is weak; actual deployment requires correct configuration and coordination of all components.
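White-box evaluation of this kind is typically driven by per-component toggles. A minimal sketch, with hypothetical flag names matching the components the ablation experiments disable:

```python
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    """Feature flags for each separable component of the pipeline."""
    use_vector: bool = True
    use_bm25: bool = True
    use_history: bool = True     # dynamic dialogue-experience retrieval
    use_time_decay: bool = True

def ablations() -> list:
    """The full pipeline plus one config per disabled component,
    so each component's contribution can be measured in isolation."""
    return [PipelineConfig(),
            PipelineConfig(use_bm25=False),
            PipelineConfig(use_history=False),
            PipelineConfig(use_time_decay=False)]
```

Running the same benchmark over each config is what localizes a quality drop to a single stage, as in the reported BM25 and dialogue-history ablations.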


Section 06

Application Prospects and Insights of ARPM

ARPM offers a technical reference for long-term companion AI (mental health, educational counseling, personal assistants, etc.) and shows that personality consistency requires combining system architecture with governance processes. By externalizing memory management as an auditable component, it becomes possible to build more transparent, controllable, and maintainable AI systems, which matters for deploying LLMs in critical domains.