Section 01
Introduction: Overview of the Toxic Reasoning Models Safety Research Project
Toxic Reasoning Models is an open-source research project initiated by researcher sfschouten that investigates safety issues in reasoning models such as OpenAI o1/o3 and DeepSeek-R1. It aims to identify and mitigate the risk of these models generating toxic content, contributing to the broader development of AI safety and ethics.