Section 01
Rt-LRM: Introduction to the Red Teaming Framework for Large Reasoning Models
The Rt-LRM (Red Teaming Large Reasoning Models) project was jointly launched by East China Normal University, Tsinghua University Shenzhen International Graduate School, and other institutions. It provides a comprehensive red-teaming toolkit for large reasoning models, covering three key dimensions: authenticity, security, and efficiency. The toolkit helps researchers systematically evaluate model behavior in adversarial scenarios.