# ARPM: Heterogeneous Temporal Memory Governance Framework Enables Long-Term Personality Consistency in LLMs

> ARPM maintains long-term personality consistency in high-noise environments by separating static knowledge memory from dynamic dialogue experience memory, and integrating technologies such as vector retrieval, BM25, RRF fusion, and dual-temporal reordering.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-14T13:12:49.000Z
- Last activity: 2026-05-15T02:23:46.943Z
- Popularity: 128.8
- Keywords: long-term dialogue, personality consistency, memory governance, vector retrieval, BM25, RRF fusion, temporal reordering, evidence validation
- Page link: https://www.zingnex.cn/en/forum/thread/arpm-llm
- Canonical: https://www.zingnex.cn/forum/thread/arpm-llm
- Markdown source: floors_fallback

---

ARPM (Heterogeneous Temporal Memory Governance Framework) addresses fact loss, timeline confusion, and personality drift in long-term LLM dialogues by separating static knowledge memory from dynamic dialogue experience memory, and by combining vector retrieval, BM25, RRF fusion, dual-temporal reordering, and a controlled analysis protocol to maintain long-term personality consistency in high-noise environments. The framework treats personality continuity as a traceable, auditable, and transferable governance problem, moving beyond the limitations of existing solutions at the system level.
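The static/dynamic separation described above can be sketched as a minimal dual-track store. All class and method names here are illustrative assumptions, not ARPM's actual interfaces:

```python
# Minimal sketch of a dual-track memory store: a static knowledge track
# (auditable, overwritable facts) and a dynamic experience track
# (append-only dialogue log). Names are assumptions, not ARPM's API.
from dataclasses import dataclass, field

@dataclass
class StaticFact:
    key: str            # e.g. "preferred_language"
    value: str
    source: str         # provenance, enabling audit corrections

@dataclass
class DialogueEvent:
    turn: int
    text: str
    timestamp: float    # drives time-decay reordering downstream

@dataclass
class DualTrackMemory:
    static: dict = field(default_factory=dict)    # key -> StaticFact
    dynamic: list = field(default_factory=list)   # ordered DialogueEvents

    def assert_fact(self, fact: StaticFact) -> None:
        """Static track: idempotent upsert, so an audit can overwrite a bad fact."""
        self.static[fact.key] = fact

    def log_event(self, event: DialogueEvent) -> None:
        """Dynamic track: append-only experience log."""
        self.dynamic.append(event)

    def export_profile(self) -> dict:
        """Cross-model personality transfer: serialize the static track."""
        return {k: f.value for k, f in self.static.items()}
```

Keeping the two tracks in separate containers is what makes the different update rules possible: facts are corrected in place under audit, while experience is only ever appended and later filtered by retrieval.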

## Core Challenges of Long-Term Dialogue and Limitations of Existing Solutions

Large language models suffer from fact loss, timeline confusion, personality drift, and decreased stability in long-term dialogues, problems that are particularly severe with high-noise knowledge bases, periodic context clearing, and cross-model switching. Existing solutions fall into two categories: fine-tuning to encode personality (hard to adapt to dynamic settings) and relying on ultra-long contexts (limited by context-length ceilings and attention dilution); neither effectively solves the long-term consistency problem.

## Core Design of ARPM: Dual-Track Memory Architecture and Multi-Layer Retrieval Fusion

**Dual-Track Memory Architecture**: Separates static knowledge memory (user profiles, preferences, and other factual information) from dynamic dialogue experience memory (interaction history, emotional changes, etc.). Each track supports its own retrieval and update mechanisms, audit corrections, and cross-model personality transfer.

**Multi-Layer Retrieval Fusion**: A base layer of vector retrieval captures semantic similarity, supplemented by BM25 for exact term matching; the candidate lists are fused via Reciprocal Rank Fusion (RRF) and then pass through dual-temporal reordering (time decay plus temporally ordered evidence reading) to preserve dialogue coherence.
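A minimal sketch of the fusion-and-reordering pipeline described above, assuming standard RRF scoring and a simple half-life time decay; function names, the `k=60` constant, and the decay parameters are illustrative, not ARPM's actual configuration:

```python
# Sketch of multi-layer retrieval fusion: two ranked lists (e.g. vector
# search and BM25) are merged with Reciprocal Rank Fusion, then the fused
# order is reweighted by recency. Parameters here are assumptions.

def rrf_fuse(rankings, k=60):
    """RRF: score(d) = sum over lists of 1 / (k + rank_of_d_in_list)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

def time_decay_rerank(fused, timestamps, now, half_life=86400.0):
    """Time-decay step: weight fused position by an exponential decay in age."""
    scores = {}
    for pos, doc_id in enumerate(fused):
        positional = 1.0 / (1 + pos)                  # weight from fused rank
        age = now - timestamps[doc_id]                # seconds since stored
        scores[doc_id] = positional * 0.5 ** (age / half_life)
    return sorted(fused, key=scores.get, reverse=True)
```

RRF is a natural fit here because it fuses heterogeneous retrievers (dense vectors, BM25) using only ranks, so their incomparable raw scores never need calibration.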

## Validation of ARPM Effectiveness: Experimental Data and Extreme Scenario Testing

**Controlled Analysis Protocol**: Validates each piece of retrieved evidence for relevance, timeliness, consistency, and credibility, and binds the validated evidence to the response it supports. Experiments with manual review show that recall in high-noise environments improves significantly (from 54% to 100% at a 1:5 noise ratio, and from 44% to 80% at a 1:200+ noise ratio).

**Ablation Experiments**: Disabling dialogue-history retrieval drops accuracy from 100% to 66.7%, and disabling BM25 drops it to 80%, confirming that both components are necessary.

**Extreme Scenarios**: The system maintains semantic, boundary, and personality consistency under a 5.1-million-character noise corpus combined with context clearing and multi-model switching, though performance degrades when protocol compliance is weak.
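The four evidence gates of the controlled analysis protocol could be sketched as follows; the thresholds, field names, and the toy consistency check are all assumptions for illustration, not ARPM's actual rules:

```python
# Sketch of a controlled analysis protocol: evidence must pass four gates
# (relevance, timeliness, consistency, credibility) before being bound to
# a response. All thresholds and fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Evidence:
    text: str
    relevance: float      # retrieval score in [0, 1]
    age_days: float
    source_trust: float   # credibility in [0, 1]

def validate(ev, contradictions=(), max_age_days=365,
             min_relevance=0.5, min_trust=0.6):
    """Return True only if all four gates pass."""
    if ev.relevance < min_relevance:                  # relevance gate
        return False
    if ev.age_days > max_age_days:                    # timeliness gate
        return False
    if any(p in ev.text for p in contradictions):     # consistency gate (toy)
        return False
    if ev.source_trust < min_trust:                   # credibility gate
        return False
    return True

def bind(response_text, evidence_list, contradictions=()):
    """Bind only validated evidence to the response, making it auditable."""
    cited = [e for e in evidence_list if validate(e, contradictions)]
    return {"response": response_text, "evidence": [e.text for e in cited]}
```

Binding the surviving evidence to the response is what makes each answer traceable: an auditor can inspect exactly which memories a reply relied on.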

## Value and Limitations of ARPM

**White-Box Evaluation**: The modular design supports independent evaluation of each component's contribution, making it possible to quickly localize a fault to the retrieval, validation, or update stage and reducing maintenance costs.

**Limitations**: System performance degrades when protocol compliance is weak; real deployments must ensure the components are correctly configured and coordinated.

## Application Prospects and Insights of ARPM

ARPM provides a technical reference for long-term companion AI (mental health support, educational counseling, personal assistants, etc.) and suggests that personality consistency requires combining system architecture with governance processes. By externalizing memory management as an auditable component, it becomes possible to build more transparent, controllable, and maintainable AI systems, which matters for deploying LLMs in high-stakes domains.
