Zing Forum


Practical Exploration of Small Language Models in Vietnamese Financial Numerical Reasoning

In-depth Analysis of the ViNumQA Dataset Study: Exploring Optimization Strategies for Numerical Reasoning Capabilities of Small Language Models in Low-Resource Scenarios

Tags: Small Language Models · Vietnamese Numerical Reasoning · Financial AI · Low-Resource Languages · Prompt Engineering
Published 2026-03-30 02:11 · Recent activity 2026-03-30 02:22 · Estimated read: 6 min

Section 01

[Introduction] Practical Exploration of Small Language Models in Vietnamese Financial Numerical Reasoning

This article examines research on Small Language Models (SLMs) for Vietnamese financial numerical reasoning tasks. The study constructs the ViNumQA dataset as an evaluation benchmark and verifies the practical value of small models in low-resource scenarios through prompt engineering (e.g., Chain-of-Thought), a self-assessment mechanism, and domain-specific optimization strategies. Results show that an optimized 7B-parameter model achieves satisfactory numerical reasoning capability, offering a reference for AI applications in resource-constrained environments.


Section 02

Research Background and Challenges of Low-Resource Languages

Vietnam, as a fast-growing economy in Southeast Asia, is accelerating its financial digital transformation. However, Vietnamese financial text processing faces multiple challenges: language characteristics that differ significantly from English, unique numerical expressions, highly specialized domain terminology, and a scarcity of annotated datasets. This study explores the feasibility of small language models in this setting.


Section 03

ViNumQA Dataset: A Vietnamese Financial QA Benchmark

The research team constructed the ViNumQA dataset, which features diverse sources (financial reports, news, etc.), rich question types (direct extraction, comparative calculation, etc.), difficulty stratification, and professional annotation. The included reasoning types are direct extraction, arithmetic operations, comparative analysis, trend inference, and multi-step reasoning.
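The article does not show an actual dataset entry. As a purely hypothetical illustration of how the annotated reasoning types and difficulty stratification might be encoded, a ViNumQA-style record could look like the following (field names and the Vietnamese example are assumptions of this sketch, not the dataset's published schema):

```python
# Hypothetical ViNumQA-style record; field names and values are illustrative
# assumptions, not the dataset's actual schema.
sample = {
    "context": "Doanh thu quý 1 đạt 1.200 tỷ đồng, quý 2 đạt 1.500 tỷ đồng.",
    # "Q1 revenue reached 1,200 billion dong; Q2 reached 1,500 billion dong."
    "question": "Doanh thu quý 2 cao hơn quý 1 bao nhiêu tỷ đồng?",
    # "By how many billion dong does Q2 revenue exceed Q1?"
    "answer": 300,
    "reasoning_type": "arithmetic",  # one of the five types in the study
    "difficulty": "medium",          # difficulty stratification
    "source": "financial_report",    # financial reports, news, etc.
}

# The five reasoning types listed in the study:
REASONING_TYPES = {"extraction", "arithmetic", "comparison", "trend", "multi_step"}
assert sample["reasoning_type"] in REASONING_TYPES
```

The `reasoning_type` label is what enables the per-type error analysis reported later in the article.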


Section 04

Selection Strategy for Small Language Models

Advantages of choosing small models (1B-7B parameters): low deployment cost, fast inference speed, flexible customization, and privacy protection. The models evaluated in experiments include PhoGPT (Vietnamese-optimized), SeaLLM (Southeast Asian multilingual), Qwen-Chat, and Llama-2-Chat.


Section 05

Core Technical Methods

1. Prompt Engineering Optimization: zero-shot (testing baseline capability), few-shot (3-5 examples are optimal), and Chain-of-Thought (guiding step-by-step reasoning to improve accuracy).

2. Self-Assessment Mechanism: verifying answer plausibility, cross-validation, confidence estimation, and triggering manual review for low-confidence results.

3. Domain-Specific Optimization: building a financial terminology dictionary (term mapping, abbreviation expansion, etc.) and numerical normalization (handling Vietnamese-specific numerical expressions).
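The study's normalization code is not published. A minimal sketch of the numerical normalization step, assuming the common Vietnamese conventions of "." as thousands separator, "," as decimal mark, and scale words such as "nghìn" (thousand), "triệu" (million), and "tỷ" (billion):

```python
import re

# Scale words common in Vietnamese financial text. This dictionary is an
# assumption of the sketch; the study's actual terminology dictionary is
# not shown in the article.
SCALES = {"nghìn": 1_000, "triệu": 1_000_000, "tỷ": 1_000_000_000}

def normalize_number(text: str) -> float:
    """Convert a Vietnamese numeric expression (e.g. '3,5 triệu') to a float."""
    m = re.search(r"([\d.]+(?:,\d+)?)\s*(nghìn|triệu|tỷ)?", text)
    if not m:
        raise ValueError(f"no number found in {text!r}")
    digits, scale = m.groups()
    # In Vietnamese, '.' groups thousands and ',' marks the decimal point.
    value = float(digits.replace(".", "").replace(",", "."))
    return value * SCALES.get(scale, 1)
```

For example, `normalize_number("1.200 tỷ đồng")` yields 1.2 trillion and `normalize_number("3,5 triệu")` yields 3.5 million; a naive English-locale parser would misread both.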

Section 06

Experimental Results and Analysis

Key findings: Chain-of-Thought improves accuracy by 15-20%; 7B models with optimization approach the performance of 13B models; self-assessment further enhances accuracy; performance is significantly improved after domain fine-tuning. Error classification: Numerical positioning (35%), calculation errors (28%), unit confusion (18%), etc. Cross-model comparison:

Model        Zero-shot   Few-shot   CoT   CoT + Self-assessment
PhoGPT-4B       42%         51%      58%          63%
SeaLLM-7B       45%         54%      61%          66%
Qwen-7B         48%         56%      64%          68%
Llama-2-7B      38%         47%      52%          57%

Section 07

Practical Insights and Future Directions

Practical insights: data first (high-quality datasets are the foundation); prompt engineering provides strong leverage; optimizing small models is more practical than scaling up; self-assessment enhances reliability.

Financial applications: automated financial report analysis, investment research assistance, compliance checks, and customer service.

Limitations: small dataset size, limited model generalization, and difficulty with complex reasoning.

Future directions: expanding the dataset, multimodal fusion, real-time learning, and cross-language transfer.