Section 01
Introduction: Rules.txt—Debugging LLM Thought Processes with a Rationalist Rule Set
Rules.txt is a rationalist rule set designed for both large language models (LLMs) and humans. Its core goal is to counter the "moral performativity" common in LLMs, such as empty moralizing on sensitive topics and gaslighting behavior after making mistakes. To that end, it aims to promote rational dialogue, reduce idealism and moral evasion, and provide a mechanism for auditing a model's internal reasoning and detecting bias. The project's positioning is explicit: it is not a full jailbreak tool, it is not a one-size-fits-all solution, it does not guarantee authenticity, it requires active user participation, and the more capable the model, the more benefit it derives from the rule set.