Section 01
Small Models, Big Wisdom: How Qwen3-1.7B Breaks Through the 'Reasoning Gap' in Vietnamese Mathematical Reasoning
A groundbreaking study examines the potential and challenges of small language models (SLMs) on non-English reasoning tasks, taking Qwen3-1.7B as its subject. By constructing a Vietnamese elementary-school math dataset, Vi-S1K, and an evaluation benchmark, Vi-Elementary-Bench, the authors find that supervised fine-tuning (SFT) can unlock the model's latent reasoning capabilities, while complex agent frameworks (such as ReAct) instead impose a cognitive burden. The results point to a new path for edge AI to perform complex reasoning.
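Concretely, SFT on a dataset like Vi-S1K pairs each problem with a full reasoning trace as the training target, so the model learns to emit its steps rather than just a final answer. A minimal sketch of shaping one such record (the field names and template here are illustrative assumptions, not the paper's actual schema):

```python
# Hypothetical sketch: shaping one Vi-S1K-style record into a prompt/completion
# pair for supervised fine-tuning. The field names ("question", "reasoning",
# "answer") and the template are assumptions -- the study's real schema may differ.

def to_sft_example(record: dict) -> dict:
    """Build a prompt/completion pair that keeps the reasoning trace in the
    target, so fine-tuning teaches the model to show its work."""
    prompt = (
        "Solve the following elementary math problem, showing your reasoning.\n"
        f"Problem: {record['question']}"
    )
    completion = f"{record['reasoning']}\nAnswer: {record['answer']}"
    return {"prompt": prompt, "completion": completion}

sample = {
    "question": "An có 12 quả táo, cho bạn 5 quả. Hỏi An còn mấy quả?",
    "reasoning": "An starts with 12 apples and gives away 5, so 12 - 5 = 7.",
    "answer": "7",
}
example = to_sft_example(sample)
print(example["completion"])
```

Records in this shape can be fed directly to a standard SFT trainer; the key design choice the study highlights is that the reasoning trace lives in the completion, not the prompt.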