Section 01
PrivAwareBench: A Benchmark Framework for Evaluating LLMs' Proactive Privacy Awareness Capabilities
PrivAwareBench is a benchmark framework specifically designed to evaluate the proactive privacy awareness capabilities of large language models (LLMs), focusing on the models' ability to identify and warn users of potential privacy risks in daily conversations. It aims to drive AI from passive privacy defense to proactive prevention, providing an evaluation tool for building a safer and more trustworthy AI ecosystem.