Section 01
llm_sim: Observable LLM Internal Behavior Simulator (Introduction)
llm_sim is a Python project built for educational use. It simulates the end-to-end reasoning pipeline of a large language model through a modular architecture (prompt construction, tokenization, a reasoning agent, tool calling, and token-by-token generation) and emits a JSON execution trace for visualization. Its core value is making the otherwise opaque, black-box reasoning process of an LLM transparent, which makes it useful for teaching, for building debugging intuition, or for studying LLM architecture.
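The staged pipeline described above can be sketched as a minimal toy in Python. Everything below is illustrative: the function names (`build_prompt`, `tokenize`, `generate`, `run`), the trace schema, and the toy whitespace tokenizer are assumptions for demonstration, not llm_sim's actual API. The point is the shape: each stage appends a JSON-serializable entry to an execution trace, which is what makes the process observable.

```python
import json

# Hypothetical sketch of a staged, traced pipeline in the spirit of llm_sim.
# Real stage names, signatures, and trace fields in the project may differ.

def build_prompt(user_input: str) -> str:
    # Prompt construction stage: wrap raw input in a simple template.
    return f"System: You are a helpful assistant.\nUser: {user_input}"

def tokenize(prompt: str) -> list[str]:
    # Tokenization stage: a toy whitespace split stands in for a real tokenizer.
    return prompt.split()

def generate(tokens: list[str], max_new_tokens: int = 3) -> list[str]:
    # Token-by-token generation stage: a toy "model" that echoes the
    # last few prompt tokens, one per step.
    out = ["<answer>"]
    for tok in tokens[-max_new_tokens:]:
        out.append(tok)
    return out

def run(user_input: str) -> tuple[list[str], str]:
    trace = []  # JSON-serializable execution trace, one entry per stage
    prompt = build_prompt(user_input)
    trace.append({"stage": "prompt", "output": prompt})
    tokens = tokenize(prompt)
    trace.append({"stage": "tokenize", "n_tokens": len(tokens)})
    output = generate(tokens)
    trace.append({"stage": "generate", "output": output})
    # The trace is plain JSON, so any viewer or notebook can visualize it.
    return output, json.dumps(trace, indent=2)

output, trace_json = run("What is 2 + 2?")
print(trace_json)
```

Each stage only reads the previous stage's output and writes one trace entry, so adding or swapping a stage (e.g. a tool-calling step between tokenization and generation) does not disturb the rest of the pipeline.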