Section 01
MyLLM: Introduction to a Transparent Practical Framework for Building LLMs from Scratch
MyLLM is an education-oriented, research-friendly large language model framework that addresses the "black-box dependency" problem in the current LLM ecosystem: developers lean on high-level abstraction libraries while retaining only a superficial understanding of how Transformers work internally. The framework covers the complete workflow from tokenization and attention mechanisms through training to RLHF and inference, and adopts a three-layer progressive architecture (Notebooks, Modules, Core Framework). Its core values are transparency, modifiability, and research-friendliness, which make it well suited to learning and rapid experimentation; it is not designed for production environments.
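To give a flavor of the "from scratch" workflow described above, the attention step can be sketched as minimal scaled dot-product attention in pure Python. This is an illustrative sketch only; the function names and signatures here are assumptions for exposition, not MyLLM's actual API.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention over plain Python lists.

    Q, K, V are lists of vectors (lists of floats); each key/query
    vector has dimension d. Returns one output vector per query,
    a weighted average of the value vectors.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Weighted sum of value vectors
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

With a single key/value pair the softmax weight is 1, so the output is just that value vector, e.g. `attention([[1.0, 0.0]], [[1.0, 0.0]], [[3.0, 4.0]])` returns `[[3.0, 4.0]]`. A transparent implementation like this is exactly what a framework in this spirit exposes for inspection and modification, in contrast to calling an opaque library layer.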