Rain: A Complete Practice of Building a 100M-Parameter Chinese Large Language Model from Scratch

Rain is an open-source, end-to-end training project for a 100M-parameter Chinese Decoder-only large language model, covering the entire workflow from Tokenizer construction and pre-training through supervised fine-tuning (SFT) and GRPO reinforcement learning to evaluation and inference deployment.

Tags: Large Language Models · LLM Training · Transformer · PyTorch · Chinese NLP · GRPO · Reinforcement Learning · Open-Source Project
Published 2026-05-07 17:12 · Recent activity 2026-05-07 17:19 · Estimated read 8 min

Section 01

Rain Project Guide: A Complete Practice of Building a 100M-Parameter Chinese Large Language Model from Scratch

Rain is an open-source, end-to-end training project for a 100M-parameter Chinese Decoder-only large language model, covering the entire workflow from Tokenizer construction and pre-training through supervised fine-tuning (SFT) and GRPO reinforcement learning to evaluation and inference deployment. The project is implemented purely with PyTorch (no high-level framework wrappers), giving developers a learning platform for deeply understanding how LLMs work and for bridging theoretical knowledge with engineering practice.


Section 02

Project Background and Significance

In today's era of rapidly developing large language model technology, most developers only have access to pre-trained model APIs or weight files. To truly understand how LLMs work, one needs to dive deep into every stage of the training process. The Rain project was born from this need: with a parameter scale of 100 million (0.1B), it covers the complete workflow of industrial-grade LLM development, and because it is implemented purely with PyTorch (without high-level frameworks such as Hugging Face Transformers), learners can master how each component works, making it an excellent platform for deeply understanding the Transformer architecture and large-model training techniques.


Section 03

Technical Architecture and Training Workflow

Architecture Design

The model adopts the classic Decoder-only Transformer architecture; its core components include:

  • Tokenizer: BPE tokenizer optimized for Chinese, improving Chinese encoding efficiency
  • Model Structure: Multi-head self-attention, feed-forward network, residual connections with layer normalization, Rotary Position Embedding (RoPE), and a causal mask (a minimal PyTorch sketch of one decoder block follows this list)
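
To make these components concrete, here is a minimal PyTorch sketch of a single pre-norm decoder block with RoPE and a causal mask. The dimensions and names (d_model, n_heads, d_ff) are illustrative defaults, not the project's actual configuration.

import math
import torch
import torch.nn as nn

def apply_rope(x, base=10000.0):
    # x: (batch, n_heads, seq_len, head_dim); rotate channel pairs by
    # position-dependent angles (Rotary Position Embedding, rotate-half form).
    b, h, t, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(half, device=x.device).float() / half)
    angles = torch.arange(t, device=x.device).float()[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()          # (t, half), broadcast over batch/heads
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

class DecoderBlock(nn.Module):
    """Pre-norm decoder block: causal self-attention with RoPE + feed-forward."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.proj = nn.Linear(d_model, d_model, bias=False)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        b, t, d = x.shape
        # Causal multi-head self-attention sub-layer with a residual connection.
        h = self.norm1(x)
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, self.head_dim).transpose(1, 2) for z in (q, k, v))
        q, k = apply_rope(q), apply_rope(k)
        att = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        causal = torch.triu(torch.ones(t, t, device=x.device), diagonal=1).bool()
        att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, t, d)
        x = x + self.drop(self.proj(out))
        # Feed-forward sub-layer with its own residual connection.
        return x + self.drop(self.ff(self.norm2(x)))

A full model stacks several such blocks between a token-embedding layer and an output projection over the vocabulary; for example, DecoderBlock()(torch.randn(2, 16, 512)) returns a tensor of the same shape.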

Training Workflow

Training is divided into four stages (a minimal pre-training step is sketched after this list):

  1. Pre-training: Self-supervised learning on a large-scale unlabeled Chinese corpus to build the language foundation
  2. Supervised Fine-tuning (SFT): Fine-tuning on instruction-response data to give the model dialogue capabilities
  3. GRPO Reinforcement Learning: Group Relative Policy Optimization, with a reward model guiding the policy toward high-quality responses
  4. Evaluation and Inference: Perplexity, BLEU, and human evaluation, plus efficient deployment solutions
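
For reference, the pre-training stage reduces to next-token prediction with a cross-entropy loss. The sketch below assumes a model that maps token ids to logits; the function and variable names are illustrative, not the project's actual code.

import torch
import torch.nn.functional as F

def pretrain_step(model, batch, optimizer):
    """One self-supervised pre-training step: predict each next token.

    model: maps token ids (batch, seq_len) to logits (batch, seq_len, vocab_size).
    batch: LongTensor of token ids drawn from the unlabeled Chinese corpus.
    """
    inputs, targets = batch[:, :-1], batch[:, 1:]        # shift by one position
    logits = model(inputs)                               # (B, T-1, vocab_size)
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),             # flatten to token-level CE
        targets.reshape(-1),
    )
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # common stability measure
    optimizer.step()
    return loss.item()

The perplexity reported in the evaluation stage is simply the exponential of this averaged loss on a held-out corpus.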

Section 04

Core Technical Innovations

  1. Pure PyTorch Implementation: Fully based on native APIs, code is readable and controllable, facilitating architectural experiments and modifications
  2. End-to-End Complete Workflow: Covers data cleaning and preprocessing, Tokenizer training, distributed training, model export and quantization, inference service deployment
  3. Chinese Optimization: Chinese corpus filtering and cleaning, Chinese Tokenizer training (a minimal BPE merge loop is sketched below), Chinese evaluation benchmarks, and Chinese dialogue templates
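
As an illustration of the tokenizer-training step, here is a minimal character-level BPE merge loop. Starting from individual characters keeps Chinese characters atomic; this is a sketch of the algorithm only, not the project's actual tokenizer code.

from collections import Counter

def train_bpe(corpus, num_merges):
    """Learn BPE merges from raw text (illustrative sketch).

    corpus: list of strings; each is split into single characters, and the most
    frequent adjacent pair is repeatedly merged into a new token.
    """
    sequences = [list(text) for text in corpus]      # character-level start
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)             # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        new_sequences = []
        for seq in sequences:                        # apply the merge everywhere
            out, i = [], 0
            while i < len(seq):
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            new_sequences.append(out)
        sequences = new_sequences
    return merges

# Example: train_bpe(["大语言模型", "语言模型训练"], num_merges=3)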

Section 05

Practical Value and Application Scenarios

Educational Learning

  • Understand the mathematical principles of Transformer
  • Observe changes in training loss
  • Experiment with the impact of hyperparameters
  • Compare differences in architectural design

Research Experiments

  • Rapid verification of new architectures
  • Comparison of training algorithms
  • Ablation studies on data strategies
  • Exploration of the capability boundaries of small models

Engineering Reference

  • Best practices for training pipelines
  • Distributed training configuration solutions (see the DDP sketch after this list)
  • Model compression and deployment experience
  • Troubleshooting and debugging skills
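
As one example of a distributed-training configuration, the sketch below wraps a model and dataset for PyTorch DistributedDataParallel training. It assumes the script is launched with torchrun (which sets RANK/LOCAL_RANK/WORLD_SIZE in the environment); the function and variable names are placeholders, not the project's actual setup.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def setup_ddp(model, dataset, batch_size=32):
    """Initialize the process group and shard the data across ranks."""
    dist.init_process_group(backend="nccl")          # reads env vars set by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    model = DDP(model.cuda(local_rank), device_ids=[local_rank])
    sampler = DistributedSampler(dataset)            # each rank sees its own shard
    loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler)
    return model, loader, sampler

A run would then be launched with, for example, torchrun --nproc_per_node=4 train.py (the script name is hypothetical), calling sampler.set_epoch(epoch) at the start of each epoch to reshuffle shards.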

Section 06

Training Experience and Insights

  • Scale and Quality: Even a 100M-parameter model can exhibit strong capabilities when trained on high-quality data; data quality is no less important than model scale
  • Training Stability: Loss curves of small models are smoother, making it easier to observe training dynamics
  • Chinese Characteristics: The characteristics of Chinese characters have unique requirements for Tokenizer and model design; directly using English solutions leads to poor results
  • RLHF Challenges: GRPO is more stable than PPO, but reward-model design and training still require extensive experimental tuning (a sketch of the group-relative advantage computation follows this list)
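
To show why GRPO can drop PPO's learned value function, here is a compact sketch of its group-relative advantage and clipped policy loss. It is simplified to per-response quantities and omits the KL penalty to a reference model; tensor shapes and names are illustrative.

import torch

def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages (illustrative sketch).

    rewards: (num_prompts, group_size) reward-model scores for group_size
    sampled responses per prompt. Normalizing within each group provides a
    baseline without training a separate value network.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def grpo_policy_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """PPO-style clipped surrogate using group-relative advantages.

    logp_new / logp_old: summed log-probabilities of each sampled response under
    the current and old policies, shape (num_prompts, group_size).
    """
    ratio = (logp_new - logp_old).exp()              # importance ratio per response
    unclipped = ratio * advantages
    clipped = ratio.clamp(1 - clip_eps, 1 + clip_eps) * advantages
    return -torch.minimum(unclipped, clipped).mean()

Because the baseline is the within-group mean reward, no separate critic has to be trained, which removes one common source of PPO instability.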

Section 07

Future Outlook and Conclusion

Future Directions

  • Scale expansion: Gradually increase the parameter scale
  • Multimodal fusion: Integrate image and audio processing
  • Tool usage: Integrate external tool calls
  • Long context: Expand the context window

Conclusion

Rain proves that building large language models is not out of reach. Through systematic learning and practice, developers can gain a deep understanding of this technology. The project is open-sourced on GitHub, and everyone is welcome to learn from it and contribute. Understanding the underlying principles is more valuable in the long run than simply calling APIs. Rain is a bridge between theory and practice, helping developers find their footing in the era of large models.