Section 01
Introduction: The Factual Preference Alignment Framework, a New Path to Addressing LLM Hallucinations
The Factual Preference Alignment framework, open-sourced by the Vector Institute, focuses on preserving the factual accuracy of large language models during preference optimization, offering a systematic approach to mitigating model "hallucinations." The framework builds factual alignment directly into the core training process, supports open-source collaboration, and helps teams construct more reliable AI systems.
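To make the idea of "factual alignment during preference optimization" concrete, here is a minimal sketch of one plausible shape such an objective could take: a DPO-style preference loss whose update is scaled by a factuality score for the chosen response. The function name, the `factuality_score` input, and the `gamma` weighting are illustrative assumptions, not the framework's actual API or objective.

```python
import math

def factual_preference_loss(logp_chosen, logp_rejected,
                            ref_logp_chosen, ref_logp_rejected,
                            factuality_score, beta=0.1, gamma=1.0):
    """Illustrative DPO-style loss with a hypothetical factuality weight.

    logp_* are policy log-probabilities of the chosen/rejected responses;
    ref_logp_* are the frozen reference model's log-probabilities.
    factuality_score in [0, 1] rates how well the chosen response is
    grounded in verifiable facts; here it simply scales the loss so that
    factually supported preferences drive larger gradient updates.
    (Sketch only; the real framework may combine signals differently.)
    """
    # Standard DPO preference margin between chosen and rejected responses
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Hypothetical factuality weighting of the per-pair loss
    weight = 1.0 + gamma * factuality_score
    # Negative log-sigmoid of the margin, scaled by the factuality weight
    return -weight * math.log(1.0 / (1.0 + math.exp(-margin)))
```

The key design point this sketch illustrates is that factuality enters the training signal itself, rather than being bolted on as a post-hoc filter: pairs where the preferred answer is well-grounded contribute more strongly to the policy update.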