Section 01
CheXOne: A Visual-Language Foundation Model for Chest X-Rays with Reasoning Capabilities
CheXOne is a visual-language model for chest X-ray interpretation developed by the AIMI Lab at Stanford University. Its core features are explicit reasoning capabilities and reinforcement learning optimization via GRPO (Group Relative Policy Optimization). In over 50% of evaluated cases, its generated reports match or exceed the quality of those written by resident physicians. The model aims to address the shortage of radiologists, improve the interpretability of AI diagnoses, and provide auxiliary support for clinical practice.
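To make the mention of GRPO concrete: GRPO optimizes a policy by sampling a group of candidate outputs per prompt (here, candidate reports per X-ray) and normalizing each reward against the group's mean and standard deviation to obtain a relative advantage. The sketch below illustrates only this group-normalization step; the function name, reward values, and surrounding setup are illustrative assumptions, not code from CheXOne.

```python
def group_relative_advantages(rewards):
    """Normalize a group of scalar rewards to zero mean, unit variance.

    This is the group-relative advantage computation at the heart of GRPO:
    each candidate's reward is scored relative to its sampling group,
    removing the need for a separate learned value (critic) network.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    eps = 1e-8  # avoid division by zero when all rewards in the group are equal
    return [(r - mean) / (std + eps) for r in rewards]


# Hypothetical rewards for four candidate reports generated for one X-ray
rewards = [0.2, 0.8, 0.5, 0.5]
advantages = group_relative_advantages(rewards)
print([round(a, 3) for a in advantages])
```

Candidates rewarded above the group mean receive positive advantages and are reinforced; those below are suppressed. In a full GRPO loop these advantages would weight the policy-gradient update, typically with a clipped objective and a KL penalty toward a reference model.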