Zing Forum


A Systematic Machine Learning Roadmap: From Traditional Algorithms to Large Language Models

This article introduces a structured machine learning study repository covering a complete knowledge system, from traditional machine learning and deep learning to cutting-edge fields such as LLM, RAG, and Agentic AI, giving learners a clear learning path.

Machine Learning · Deep Learning · Large Language Models · LLM · RAG · Learning Roadmap · Knowledge Graph · Embodied Intelligence · Reinforcement Learning
Published 2026-05-02 14:44 · Recent activity 2026-05-02 14:48 · Estimated read 6 min

Section 01

Introduction

In today's era of rapid AI development, learning machine learning systematically is a challenge for many developers. The open-source project wenyuexin/machine-learning introduced in this article provides a structured roadmap that takes learners step by step from basic algorithms to cutting-edge large language models and agent technologies, covering a complete knowledge system that spans traditional machine learning, deep learning, LLMs, RAG, and Agentic AI.


Section 02

Project Background and Design Philosophy

This learning-note repository aims to solve the problem of knowledge fragmentation in the machine learning field. It organizes content hierarchically (from basic theory to engineering practice), and its modular design allows each module to be studied independently while still forming an organic knowledge network, giving learners a clear path.


Section 03

Basic Method Layer: Core of Traditional Machine Learning

The bottom layer of the project focuses on traditional machine learning methods, a necessary foundation for understanding modern AI:

  • Supervised learning: Classification (SVM, decision tree), regression (linear regression, ridge regression)
  • Unsupervised learning: Clustering (K-means, DBSCAN), dimensionality reduction (PCA, t-SNE)
  • Semi-supervised/self-supervised learning: Semi-supervised self-training, co-training, and pseudo-labeling; self-supervised contrastive learning and masked prediction (the foundations of large-model pre-training)
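To make the clustering entry above concrete, here is a minimal sketch of K-means on 1-D data. It is purely illustrative (the function name and toy values are mine, not the repository's); a real project would reach for scikit-learn's `KMeans`.

```python
# Minimal 1-D K-means sketch (Lloyd's algorithm), illustrative only.

def kmeans_1d(points, centers, iters=10):
    """Repeatedly assign each point to its nearest center, then move
    each center to the mean of its assigned points."""
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            clusters[nearest].append(p)
        centers = [sum(ps) / len(ps) if ps else centers[c]
                   for c, ps in clusters.items()]
    return centers

points = [1.0, 1.2, 0.8, 9.0, 9.5, 10.0]
print(kmeans_1d(points, centers=[0.0, 5.0]))  # centers settle near 1.0 and 9.5
```

The same assign-then-update loop generalizes directly to higher dimensions by swapping the absolute difference for a Euclidean distance.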

Section 04

Deep Learning and Reinforcement Learning Module

The deep learning module covers basic architectures such as CNN, RNN, Transformer, as well as generative models like GAN, VAE, Diffusion; the reinforcement learning part extends from basic theory to policy optimization algorithms (PPO, TRPO), laying the foundation for understanding AI decision-making systems.
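As a taste of the policy-optimization algorithms mentioned above, the following is a minimal sketch of PPO's clipped surrogate objective for a single sample. The function name and toy values are illustrative assumptions, not code from the repository.

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantage, eps=0.2):
    """PPO clipped surrogate for one sample: take the minimum of the
    unclipped and clipped probability-ratio terms, negated as a loss."""
    ratio = math.exp(logp_new - logp_old)        # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + eps), 1 - eps) * advantage
    return -min(unclipped, clipped)

# Unchanged policy (ratio = 1): the loss is just the negated advantage.
print(ppo_clip_loss(0.0, 0.0, 2.0))
# Ratio 1.5 with eps=0.2: the positive-advantage term is clipped at 1.2.
print(ppo_clip_loss(math.log(1.5), 0.0, 1.0))
```

The clipping is what keeps each policy update close to the old policy, which is PPO's practical simplification of TRPO's trust-region constraint.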


Section 05

Core Content of Large Language Models (LLM)

The LLM module is a highlight:

  • Basics: Transformer architecture, attention mechanism
  • Mainstream open-source models: Technical reports of GPT series, LLaMA series, Qwen series, DeepSeek series, Mistral series, Gemma series
  • Post-training: Alignment technologies such as Supervised Fine-tuning (SFT), Direct Preference Optimization (DPO), and RLHF
  • Inference usage: Prompt engineering, decoding strategies (essential for practical applications)
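The decoding strategies listed above can be sketched in a few lines. This toy example (function names and logit values are mine, for illustration) shows greedy decoding and temperature scaling over a small vocabulary.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the
    distribution, higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(logits):
    """Greedy decoding: always pick the highest-logit token."""
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [2.0, 1.0, 0.1]                  # toy logits for a 3-token vocabulary
print(greedy_decode(logits))              # index of the top token
print(softmax(logits, temperature=0.5))   # sharper than temperature=1.0
```

In practice, sampling from the temperature-scaled distribution (rather than always taking the argmax) trades determinism for diversity in generated text.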

Section 06

Cutting-edge Application Layer: RAG, Agentic AI, and Embodied Intelligence

The project keeps up with cutting-edge trends:

  • RAG: Combining external knowledge bases to mitigate model hallucinations
  • Agentic AI: Autonomous systems driven by LLM
  • Embodied Intelligence: AI implementation in the physical world
  • World Model: Environment modeling and prediction (direction of general AI)
  • Training infrastructure: Engineering practices such as distributed training, memory optimization
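The retrieval step at the heart of RAG can be sketched as a cosine-similarity search over an embedding store. Everything here is a toy assumption: the 3-d vectors stand in for a real embedding model, and the function names are mine.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=1):
    """Rank stored (text, embedding) pairs by similarity to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy "embeddings"; a real system would embed documents with a trained model.
store = [
    ("Transformers use self-attention.", [0.9, 0.1, 0.0]),
    ("K-means clusters unlabeled data.", [0.0, 0.2, 0.9]),
]
print(retrieve([1.0, 0.0, 0.1], store))  # the nearest document is returned
```

The retrieved passages are then prepended to the LLM prompt, grounding the answer in external knowledge rather than the model's parameters alone.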

Section 07

Computer Vision and Multimodal Module

  • CV module: From traditional methods (SIFT, HOG, Canny) to deep learning, covering image classification, object detection (YOLO, Faster R-CNN), segmentation (UNet, SAM), pose estimation, face recognition, OCR, object tracking, 3D vision (NeRF), video understanding
  • Multimodal large models: Vision-Language Models (VLM), Audio-Language Models, Video-Language Models, Full-modal models (Any2Any)
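A small, self-contained example from the object-detection topic above: Intersection-over-Union (IoU), the standard overlap metric used to evaluate detectors such as YOLO and Faster R-CNN. The box coordinates below are toy values.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)   # 0 if boxes don't overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

Detectors typically count a prediction as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.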

Section 08

Learning Suggestions and Summary Outlook

  • Learning suggestions: Beginners should start with traditional ML and move on to deep learning once the basics are solid; those with a foundation can focus directly on cutting-edge modules such as LLM and RAG. Each module combines theory, implementation, and paper reading to form a learning loop, with recommended materials and books.
  • Summary: The repository's value lies in its systematic and forward-looking coverage, spanning traditional ML through cutting-edge fields such as LLM, Agentic AI, and embodied intelligence, making it a high-quality resource for serious AI learners.
  • Outlook: These structured resources help developers build a solid knowledge system and stay technically competitive.