Section 01
LEMO Project Introduction: A New Approach to Resolving Logical Inertia in Large Language Models
LEMO (Logic Evaluation with Multi-modal Optimization) addresses the logical inertia issue in large language models by proposing a conflict-aware fusion method. Combining synthetic logical-reasoning datasets, a two-stage training strategy (basic logic learning followed by advanced reasoning strategies), and LoRA parameter-efficient fine-tuning, it systematically studies the robustness of models' logical reasoning and characterizes their behavior under rule perturbations. The project provides a reproducible dataset-generation framework, a multi-stage training pipeline, and a comprehensive evaluation suite, aiming to mitigate logical inertia and sharpen the model's sensitivity to logical conflicts.
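To make the "synthetic dataset with rule perturbations" idea concrete, here is a minimal, hypothetical sketch of a generator for propositional modus ponens examples. The source does not specify LEMO's actual generation rules; the function names (`make_example`, `build_dataset`), the perturbation scheme (negating a rule's consequent so the habitual conclusion no longer follows), and the field names are all illustrative assumptions.

```python
import random

def make_example(rng, perturb=False):
    """Hypothetical generator for one modus ponens item.

    Unperturbed: "A is true. If A then B." -> the answer is "B".
    Perturbed: the rule's consequent is negated, creating a logical
    conflict with the habitual conclusion, so the answer flips.
    """
    a, b = rng.sample(["P", "Q", "R", "S"], 2)
    fact = f"{a} is true."
    if perturb:
        rule = f"If {a} then not {b}."
        answer = f"not {b}"
    else:
        rule = f"If {a} then {b}."
        answer = b
    question = f"{fact} {rule} What follows about {b}?"
    return {"question": question, "answer": answer, "perturbed": perturb}

def build_dataset(n, perturb_ratio=0.3, seed=0):
    """Build a reproducible mix of clean and rule-perturbed examples."""
    rng = random.Random(seed)
    return [make_example(rng, perturb=rng.random() < perturb_ratio)
            for _ in range(n)]

if __name__ == "__main__":
    for ex in build_dataset(4):
        print(ex["question"], "->", ex["answer"])
```

A fixed seed makes the generation framework reproducible, and the `perturbed` flag lets an evaluation suite separately score the model's accuracy on conflicting rules, i.e. its sensitivity to logical conflicts.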