# Rain: A Complete Practice of Building a 100M-Parameter Chinese Large Language Model from Scratch

> Rain is an open-source end-to-end training project for a 100M-parameter Chinese Decoder-only large language model, covering the entire workflow from Tokenizer construction, pre-training, SFT fine-tuning, GRPO reinforcement learning to evaluation and inference deployment.

- Board: [Openclaw Llm](https://www.zingnex.cn/en/forum/board/openclaw-llm)
- Published: 2026-05-07T09:12:35.000Z
- Last activity: 2026-05-07T09:19:56.539Z
- Popularity: 150.9
- Keywords: Large Language Model, LLM Training, Transformer, PyTorch, Chinese NLP, GRPO, Reinforcement Learning, Open Source Project
- Page link: https://www.zingnex.cn/en/forum/thread/rain-1
- Canonical: https://www.zingnex.cn/forum/thread/rain-1

---

## Rain Project Guide: A Complete Practice of Building a 100M-Parameter Chinese Large Language Model from Scratch

Rain is an open-source, end-to-end training project for a 100M-parameter Chinese decoder-only large language model, covering the entire workflow from tokenizer construction through pre-training, SFT fine-tuning, and GRPO reinforcement learning to evaluation and inference deployment. The project is implemented in pure PyTorch (no high-level wrappers), giving developers a hands-on platform for understanding how LLMs work and for bridging theoretical knowledge with engineering practice.

## Project Background and Significance

Large language model technology is developing rapidly, yet most developers only ever touch pre-trained model APIs or weight files. To truly understand how LLMs work, one has to go through every step of the training process. This is why the Rain project was created. At a scale of 100 million parameters (0.1B), it covers the complete workflow of industrial-grade LLM development, and because it is implemented in pure PyTorch (without high-level wrappers such as Hugging Face Transformers), learners can see how every component works. This makes it an excellent platform for understanding the Transformer architecture and large-model training techniques in depth.

## Technical Architecture and Training Workflow

### Architecture Design
Rain adopts the classic decoder-only Transformer architecture. Core components include:
- **Tokenizer**: BPE tokenizer optimized for Chinese, improving Chinese encoding efficiency
- **Model Structure**: Multi-head self-attention, feed-forward networks, residual connections with layer normalization, Rotary Position Embedding (RoPE), and causal masking (a minimal decoder-layer sketch follows this list)
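
The post does not show the repository's actual module code, so the following is only a minimal sketch of one pre-norm decoder layer with RoPE applied to queries and keys and a causal attention mask; the class names, dimensions, and the pre-norm choice are illustrative assumptions, not Rain's real implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def apply_rope(x, base=10000.0):
    # x: (batch, heads, seq, head_dim). Rotary Position Embedding: rotate the
    # two halves of each head dimension by position-dependent angles.
    _, _, t, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(half, device=x.device, dtype=torch.float32) / half)
    angles = torch.arange(t, device=x.device, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()          # (seq, half), broadcast over batch/heads
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

class DecoderBlock(nn.Module):
    """One pre-norm decoder layer: causal multi-head self-attention + feed-forward."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.proj = nn.Linear(d_model, d_model, bias=False)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
        self.norm1, self.norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        # (batch, seq, d_model) -> (batch, heads, seq, head_dim); RoPE on q and k only
        q, k, v = (z.view(b, t, self.n_heads, self.head_dim).transpose(1, 2) for z in (q, k, v))
        q, k = apply_rope(q), apply_rope(k)
        # is_causal=True masks attention to future positions (the causal mask)
        att = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.proj(att.transpose(1, 2).reshape(b, t, -1))
        return x + self.ffn(self.norm2(x))
```

Stacking a dozen or so such blocks on top of a token embedding, with a final projection back to the vocabulary, is roughly what a 0.1B decoder-only model amounts to.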

### Training Workflow
Training is divided into four stages:
1. **Pre-training**: Self-supervised learning on a large-scale unlabeled Chinese corpus to build the language foundation (a minimal training-step and perplexity sketch follows this list)
2. **Supervised Fine-tuning (SFT)**: Fine-tuning on instruction-response data to give the model dialogue ability
3. **GRPO Reinforcement Learning**: Group Relative Policy Optimization, with a reward model steering the policy toward higher-quality responses
4. **Evaluation and Inference**: Perplexity, BLEU, and human evaluation, plus efficient deployment solutions
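
The post doesn't include training code, but pre-training a decoder-only model is ordinary next-token prediction, and the perplexity used in the evaluation stage is just the exponential of the mean per-token cross-entropy. A rough sketch, where the model, batches, and optimizer are placeholders rather than Rain's actual code:

```python
import math
import torch
import torch.nn.functional as F

def pretrain_step(model, batch, optimizer):
    """One causal-LM step: predict token t+1 from tokens 0..t (batch: (B, T) token ids)."""
    inputs, targets = batch[:, :-1], batch[:, 1:]           # targets are shifted by one position
    logits = model(inputs)                                   # (B, T-1, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def perplexity(model, batches):
    """Evaluation: exp of the average token-level cross-entropy over held-out data."""
    total_loss, total_tokens = 0.0, 0
    for batch in batches:
        logits = model(batch[:, :-1])
        loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               batch[:, 1:].reshape(-1), reduction="sum")
        total_loss += loss.item()
        total_tokens += batch[:, 1:].numel()
    return math.exp(total_loss / total_tokens)
```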

## Core Technical Innovations

1. **Pure PyTorch Implementation**: Built entirely on native APIs; the code is readable and controllable, which makes architectural experiments and modifications easy
2. **End-to-End Complete Workflow**: Covers data cleaning and preprocessing, tokenizer training, distributed training, model export and quantization, and inference service deployment
3. **Chinese Optimization**: Chinese corpus filtering and cleaning, Chinese tokenizer training, Chinese evaluation benchmarks, and Chinese dialogue templates (a corpus-filtering sketch follows this list)
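
The post names corpus filtering only at a high level. One common heuristic for Chinese-focused cleaning is to keep documents whose CJK-character ratio exceeds a threshold; the sketch below illustrates that idea, and the threshold and length cutoff are illustrative assumptions, not Rain's documented pipeline:

```python
def chinese_char_ratio(text: str) -> float:
    """Fraction of characters in the CJK Unified Ideographs block."""
    if not text:
        return 0.0
    cjk = sum(1 for ch in text if "\u4e00" <= ch <= "\u9fff")
    return cjk / len(text)

def filter_corpus(docs, min_ratio=0.7, min_len=32):
    """Keep documents that are long enough and predominantly Chinese."""
    for doc in docs:
        doc = doc.strip()
        if len(doc) >= min_len and chinese_char_ratio(doc) >= min_ratio:
            yield doc

# Usage: drops the mostly-English document, keeps the Chinese one.
sample = ["今天天气很好，我们去公园散步。" * 3, "hello world, mostly English text here" * 3]
print(list(filter_corpus(sample)))
```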

## Practical Value and Application Scenarios

### Educational Learning
- Understand the mathematical principles of Transformer
- Observe changes in training loss
- Experiment with the impact of hyperparameters
- Compare differences in architectural design

### Research Experiments
- Rapid verification of new architectures
- Comparison of training algorithms
- Ablation studies on data strategies
- Exploration of the capability boundaries of small models

### Engineering Reference
- Best practices for training pipelines
- Distributed training configuration solutions
- Model compression and deployment experience (a dynamic-quantization sketch follows this list)
- Troubleshooting and debugging skills
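
The post doesn't detail the compression step. As one example of the kind of post-training compression a deployment stage can use, PyTorch's built-in dynamic quantization converts `nn.Linear` weights to int8; whether Rain uses exactly this is an assumption:

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be the trained checkpoint's modules.
model = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512)).eval()

# Dynamic quantization: weights stored as int8, activations quantized on the fly at inference.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface, smaller Linear weights
```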

## Training Experience and Insights

- **Scale and Quality**: Even a 100M-parameter model can show strong capabilities when trained on high-quality data; data quality matters at least as much as model scale
- **Training Stability**: The loss curves of small models are smoother, which makes training dynamics easier to observe
- **Chinese Characteristics**: Chinese characters place unique demands on tokenizer and model design; directly reusing English-centric recipes gives poor results
- **RLHF Challenges**: GRPO is more stable than PPO, but reward-model design and training still require extensive experimental tuning (a sketch of the group-relative advantage computation follows this list)
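
The GRPO implementation isn't shown in the post, but the idea it names, group-relative optimization, replaces PPO's learned value function with a per-group baseline: sample several responses per prompt, score them with the reward model, and standardize each reward against the other responses in the same group. A minimal sketch of that advantage computation (shapes and epsilon are assumptions, not Rain's code):

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (num_prompts, group_size) reward-model scores, one row per prompt.

    Each response's advantage is its reward standardized against the other
    responses sampled for the same prompt, so no critic network is needed.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled responses each.
rewards = torch.tensor([[0.1, 0.4, 0.9, 0.2],
                        [0.5, 0.5, 0.7, 0.3]])
print(group_relative_advantages(rewards))
```

These advantages then weight a clipped, PPO-style policy-gradient objective, typically with a KL penalty against a reference model.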

## Future Outlook and Conclusion

### Future Directions
- Scale expansion: Gradually increase the parameter scale
- Multimodal fusion: Integrate image and audio processing
- Tool usage: Integrate external tool calls
- Long context: Expand the context window

### Conclusion
Rain proves that building large language models is not out of reach. Through systematic learning and practice, developers can deeply understand this technology. The project is open-sourced on GitHub; contributions and learning are welcome. Understanding the underlying principles is more valuable in the long run than simply using APIs. Rain is a bridge connecting theory and practice, helping developers find their place in the era of large models.
