Section 01
Spectra: Introduction to the Privacy and Security Auditing Tool for Large Language Models
Spectra is an open-source privacy and security auditing tool designed specifically for Large Language Models (LLMs). It systematically detects security and privacy risks in models, such as PII leakage, verbatim memorization of training data, and susceptibility to membership inference attacks. Its core value lies in supporting enterprise compliance audits, model selection, red-team testing, and academic research, helping organizations protect user privacy and data security while still benefiting from LLM capabilities.
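To make one of these risk categories concrete: a basic verbatim-memorization check compares model output against the training corpus and flags word sequences that match exactly. The sketch below is a minimal, illustrative example of that idea using word-level n-gram overlap; it is not Spectra's actual implementation, and the function names and threshold choice (n = 5) are assumptions for illustration only.

```python
from typing import List, Set, Tuple


def ngrams(tokens: List[str], n: int) -> Set[Tuple[str, ...]]:
    """Return the set of word-level n-grams in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def verbatim_overlap(generated: str, corpus: List[str], n: int = 5) -> Set[Tuple[str, ...]]:
    """Flag n-grams of the model output that also appear verbatim in any
    training document -- a simple proxy for memorization/leakage risk."""
    gen_grams = ngrams(generated.split(), n)
    corpus_grams: Set[Tuple[str, ...]] = set()
    for doc in corpus:
        corpus_grams |= ngrams(doc.split(), n)
    return gen_grams & corpus_grams


# Hypothetical toy data: a training snippet containing PII-like content.
corpus = ["alice's phone number is 555 0100 please call", "the quick brown fox"]
output = "her contact: alice's phone number is 555 0100 please"
hits = verbatim_overlap(output, corpus, n=5)
print(len(hits) > 0)  # any hit signals a verbatim repetition of training text
```

A real auditing tool would operate on model tokenizer IDs rather than whitespace-split words and would use longer n-grams or suffix-array matching at scale, but the overlap test captures the core signal.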