Current large language models (LLMs) and AI assistants exhibit impressive language understanding and generation in conversation, but their handling of "memory" often remains superficial. Most systems simply store dialogue content as raw text or vector embeddings and rely on semantic similarity matching at retrieval time. This "keyword matching" style of retrieval has a fundamental flaw: it can only find content that is similar in "what was said", while missing the emotional and meaning-laden layer of "why it matters".
Imagine you tell your AI assistant a memory about your grandmother. Months later, when you mention "summer" or "old house", you hope the AI can recall that conversation about your grandmother, even if those exact words never appeared at the time. Traditional vector retrieval struggles to achieve this because it lacks explicit modeling of emotional importance, memory triggers, and identity associations.
This is the problem the EDM (Emotional Data Metadata) specification aims to solve. EDM does not replace traditional vector retrieval; instead, it adds a "meaning layer": a structured metadata layer that explicitly encodes emotional weight, recall triggers, and identity cues.
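As a rough illustration of what such a "meaning layer" might look like, here is a minimal sketch in Python. The field names (`emotional_weight`, `triggers`, `identity_tags`) and the `recall` helper are hypothetical choices for this example, not taken from the EDM specification itself:

```python
from dataclasses import dataclass, field

@dataclass
class EDMRecord:
    """A hypothetical EDM-style metadata record attached to a stored memory."""
    text: str
    emotional_weight: float                              # 0.0-1.0: how much this memory matters
    triggers: list[str] = field(default_factory=list)    # cues that should surface the memory
    identity_tags: list[str] = field(default_factory=list)  # people/relationships involved

def recall(records: list[EDMRecord], cue: str, min_weight: float = 0.5) -> list[EDMRecord]:
    """Return records whose triggers match the cue and whose weight passes the threshold."""
    cue = cue.lower()
    return [
        r for r in records
        if r.emotional_weight >= min_weight
        and any(cue == t.lower() for t in r.triggers)
    ]

memories = [
    EDMRecord(
        text="Summers spent at grandmother's old house.",
        emotional_weight=0.9,
        triggers=["summer", "old house"],
        identity_tags=["grandmother"],
    ),
    EDMRecord(
        text="Bought a new laptop last week.",
        emotional_weight=0.2,
        triggers=["laptop"],
    ),
]

# The cue "summer" surfaces the grandmother memory even though the stored text
# is not semantically close to the word alone, because the trigger is explicit.
hits = recall(memories, "summer")
print(hits[0].identity_tags)
```

Note that the explicit `triggers` list is what lets "summer" resolve to the grandmother memory without any embedding similarity; in a real system this layer would sit alongside vector retrieval, not replace it.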