Section 01
AI Red Team Playground: Guide to the Interactive Experimental Platform for LLM Security Testing
AI Red Team Playground is an interactive experimental platform for red team security testing of large language models (LLMs), aiming to systematically evaluate the security boundaries of LLMs. The platform covers various testing scenarios such as prompt injection, jailbreak attacks, data leakage, and adversarial sample generation, helping developers, researchers, and learners explore LLM security risks and accumulate defense experience.