Section 01
OmicsBench: A Benchmark for Distinguishing Multi-Omics Reasoning from Shortcut Learning in Large Models
OmicsBench is a benchmark developed by the SeedScientist team that evaluates whether large language models genuinely reason over multi-omics data rather than rely on surface pattern matching. Its goal is to help researchers determine whether a model has true biological reasoning capability, and to avoid the scientific missteps that pseudo-reasoning can cause. The benchmark detects shortcut learning through strategies such as adversarial sample design, multi-omics integration tasks, and interpretability evaluation, making it significant for the biomedical AI field.
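One of the strategies above, adversarial sample design, can be illustrated with a minimal sketch: score a model on original items and on paired adversarial rewrites, and treat a large accuracy drop as evidence of shortcut learning. The function names, toy results, and the 0.2 gap threshold below are illustrative assumptions, not part of OmicsBench itself.

```python
# Hypothetical sketch: flag shortcut learning via the accuracy gap
# between original items and their adversarial counterparts.

def accuracy(results):
    """Fraction of correct answers in a list of booleans."""
    return sum(results) / len(results)

def shortcut_learning_flag(original, adversarial, max_gap=0.2):
    """Return (gap, flagged): a large accuracy drop on adversarial
    pairs suggests surface pattern matching, not biological reasoning.
    The max_gap threshold is an assumed value for illustration."""
    gap = accuracy(original) - accuracy(adversarial)
    return gap, gap > max_gap

# Toy results: True means the model answered the item correctly.
original_items    = [True, True, True, True, False]     # 0.8 accuracy
adversarial_items = [True, False, False, False, False]  # 0.2 accuracy

gap, flagged = shortcut_learning_flag(original_items, adversarial_items)
print(f"accuracy gap = {gap:.2f}, shortcut suspected = {flagged}")
```

A model that truly reasons over the underlying biology should show a small gap, since adversarial rewrites preserve the biological content while disrupting surface cues.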