Section 01
[Introduction] Core Summary: An Empirical Study of Few-Shot Learning with Large Language Models for Biomedical Named Entity Recognition
This paper systematically evaluates 18 models spanning 9 architecture families to characterize the few-shot learning behavior of large language models (LLMs) on Biomedical Named Entity Recognition (BioNER) tasks. Key findings: models at the 8B-parameter scale offer the best trade-off between efficiency and effectiveness; chemical entities are recognized more accurately than disease entities; and in-context learning exhibits a saturation effect, where adding too many demonstration examples can actually degrade performance.
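To make the in-context learning setup concrete, below is a minimal Python sketch of how a few-shot BioNER prompt with k demonstrations might be assembled. The sentences, entity labels, and function names here are illustrative assumptions, not the paper's actual prompts or data.

```python
# Hypothetical demonstration pool: (sentence, [(entity span, entity type)]).
# These examples are invented for illustration, not drawn from the paper.
FEW_SHOT_EXAMPLES = [
    ("Aspirin reduces fever.", [("Aspirin", "Chemical")]),
    ("Patients with diabetes were excluded.", [("diabetes", "Disease")]),
]


def build_prompt(query: str, examples, k: int) -> str:
    """Format k demonstration examples followed by the query sentence.

    The saturation finding suggests larger k does not always help, so a
    caller would tune k rather than simply maximizing it.
    """
    lines = ["Extract chemical and disease entities from each sentence."]
    for sentence, entities in examples[:k]:
        tagged = "; ".join(f"{text} [{label}]" for text, label in entities)
        lines.append(f"Sentence: {sentence}\nEntities: {tagged}")
    # The query sentence ends the prompt; the model completes "Entities:".
    lines.append(f"Sentence: {query}\nEntities:")
    return "\n\n".join(lines)


prompt = build_prompt("Metformin is used to treat type 2 diabetes.",
                      FEW_SHOT_EXAMPLES, k=2)
print(prompt)
```

In practice the saturation effect reported above would be probed by sweeping k (e.g. 0, 1, 5, 10, 20) and measuring F1 on a held-out BioNER test set.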