Zing Forum

AI Workflow: A Ticket-Driven Agent Software Development Workflow Derived from the MealPrep Project

AI Workflow is a systematic agent-driven software development process derived from a real-world project (MealPrep AI). It enables AI-assisted delivery of large-scale complex projects through ticket-level control, self-verification loops, and parallel agent mechanisms.

Tags: AI-assisted development · agent workflow · software development · ticket management · MealPrep AI · LLD · code review
Published 2026-05-12 06:43 · Recent activity 2026-05-12 09:29 · Estimated read 6 min
Section 01

[Introduction] AI Workflow: Core Analysis of Ticket-Driven Agent Software Development Workflow

AI Workflow is a systematic agent-driven development process derived from the MealPrep AI project. It enables AI-assisted delivery of large-scale complex projects through ticket-level control, self-verification loops, and parallel agent mechanisms. The core philosophy is "AI does the typing, humans do the thinking"—humans oversee architectural decisions, while AI converts detailed designs into high-quality code, addressing issues such as insufficient context windows for single AI agents and the difficulty in reviewing fully AI-generated code.

Section 02

Project Background and Origin

AI Workflow originated from the practice of the MealPrep AI project, which involved 70-90 backend tasks across four development waves. The team faced three challenges: a single AI agent's context window couldn't cover the entire project, fully manual coding was inefficient, and fully AI-generated approaches often produced unreviewable garbage code. Out of these tensions a new paradigm emerged: ticket-level task allocation, self-verification loops, standardized operation manuals, and humans acting as architects, reviewers, and merge coordinators.

Section 03

Core Philosophy and Architectural Principles

Core Philosophy: "AI does the typing, humans do the thinking". AI converts detailed designs into code, while humans steer the architectural direction. Key Principles: 1. Tickets must be "agent-friendly" (scope limited to 10-25 files); 2. Self-verification mechanism (agents must pass test suites like Maven's mvn verify before completion); 3. LLD (Detailed Design Document) as the single source of truth (tickets, agents, and reviews all rely on LLD; errors must first be fixed in LLD).
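The "agent-friendly ticket" constraint in principle 1 can be sketched as a simple pre-flight check. `Ticket` and `isAgentFriendly` are hypothetical names for illustration, not part of the published playbook:

```java
import java.util.List;

public class TicketScope {
    // Illustrative ticket: an id plus the files the agent is allowed to touch.
    record Ticket(String id, List<String> filesInScope) {}

    // The playbook's constraint is that a ticket should touch roughly
    // 10-25 files so it fits one agent's context window. This sketch
    // enforces only the hard upper bound and flags oversized tickets
    // as candidates for splitting.
    static boolean isAgentFriendly(Ticket t) {
        return !t.filesInScope().isEmpty() && t.filesInScope().size() <= 25;
    }

    public static void main(String[] args) {
        Ticket small = new Ticket("MP-101",
                List.of("UserService.java", "UserController.java"));
        Ticket huge = new Ticket("MP-102",
                java.util.Collections.nCopies(40, "File.java"));
        System.out.println(isAgentFriendly(small)); // true
        System.out.println(isAgentFriendly(huge));  // false: split before assigning
    }
}
```

A check like this would run before handing a ticket to an agent, turning the playbook's scope rule into an automatic gate rather than a convention reviewers must remember.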

Section 04

Detailed Explanation of Core Workflow Components

The repository includes five core directories: 1. Playbook (Rule Manual: defines hard constraints such as ticket definitions, self-verification requirements, and review standards); 2. Templates (Templates for tickets, agent prompts, and style guides to ensure clear tasks, optimal instructions, and consistent code); 3. Conventions (Practical patterns: crystallization of experience like verification loops and parallel agent coordination); 4. Decisions (ADR-formatted decision logs that record design choices and their reasons, e.g., splitting tickets to improve delivery speed); 5. Starter-kits (Tech stack scaffolding: mainly Spring Boot, including infrastructure like build configurations and CI pipelines).
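The five directories can be pictured as a repository layout. Only the five top-level names come from the workflow itself; the annotations are paraphrases of the descriptions above:

```
ai-workflow/
├── playbook/       # hard constraints: ticket definition, self-verification, review standards
├── templates/      # ticket, agent-prompt, and style-guide templates
├── conventions/    # practical patterns: verification loops, parallel-agent coordination
├── decisions/      # ADR-format decision log with design choices and their reasons
└── starter-kits/   # tech-stack scaffolding (Spring Boot build config, CI pipelines)
```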

Section 05

Application Process and Performance Metrics

The application process has six phases: 1. HLD & LLD (AI-assisted design); 2. Codebase initialization (using starter-kits or manual setup); 3. Ticket writing (decompose the LLD into tasks per the templates); 4. Agent execution (generate instructions; agents read tickets/LLD/playbook, write code, and self-verify); 5. Human review and merge (check reports and code diffs, merge after CI passes); 6. Playbook iteration (refine conventions and templates). Performance: after tuning, processing a ticket takes 30-60 minutes (with 5-10 minutes of human intervention); parallel agents improve throughput, but each batch may incur 5-15 minutes of merge-conflict resolution.
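Phase 4's self-verification can be sketched as a retry loop around the build gate. `runUntilGreen` and the stubbed verifier are illustrative names, standing in for an agent re-running `mvn verify` after each fix:

```java
import java.util.function.IntSupplier;

public class SelfVerifyLoop {
    // Returns true if verification passed within maxAttempts.
    // The verifier stands in for the exit code of a real gate such as `mvn verify`.
    static boolean runUntilGreen(IntSupplier verifier, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            int exitCode = verifier.getAsInt();
            if (exitCode == 0) {
                return true; // gate passed: ticket can go to human review
            }
            // In the real workflow the agent reads the failure report
            // and patches the code before the next attempt.
        }
        return false; // retry budget spent: escalate to a human
    }

    public static void main(String[] args) {
        // Simulated verifier: fails twice, then passes on the third run.
        int[] results = {1, 1, 0};
        int[] i = {0};
        boolean ok = runUntilGreen(() -> results[i[0]++], 5);
        System.out.println(ok ? "green" : "escalate"); // → green
    }
}
```

The retry budget matters for the metrics above: a loop that converges in a few attempts keeps a ticket inside the 30-60 minute window, while one that exhausts its budget is exactly the case that consumes the 5-10 minutes of human intervention.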

Section 06

Applicable Scenarios and Project Outlook

Applicable Scenarios: total workload exceeding 10 tickets or 50 hours; AI handles coding while humans control the architecture; strict testing requirements; willingness to invest time in writing detailed specifications. Not Applicable: one-off scripts and prototypes (the overhead is too high), codebases whose architecture has not yet settled, and pure research or experimental code. Current Status: the first wave (playbook + templates) was completed in May 2026; the original project delivered 9 production tickets, with a second wave under verification. Outlook: the workflow will continue to be updated as the project evolves, offering teams a practical reference framework for AI-assisted development at scale.
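The adoption criteria above can be condensed into a rough go/no-go check. The thresholds mirror the article; the class and parameter names are hypothetical, and the "or" between the ticket and hour thresholds is an assumption:

```java
public class WorkflowFit {
    // Rough adoption check distilled from the article's criteria:
    // enough total work, testing discipline, a settled architecture,
    // and willingness to write detailed specs up front.
    static boolean isGoodFit(int estimatedTickets, int estimatedHours,
                             boolean strictTesting,
                             boolean architectureSettled,
                             boolean willWriteSpecs) {
        boolean enoughWork = estimatedTickets > 10 || estimatedHours > 50;
        return enoughWork && strictTesting && architectureSettled && willWriteSpecs;
    }

    public static void main(String[] args) {
        // A one-off prototype: too little work, architecture still fluid.
        System.out.println(isGoodFit(3, 20, false, false, false)); // false
        // A multi-wave backend project like MealPrep AI.
        System.out.println(isGoodFit(70, 400, true, true, true));  // true
    }
}
```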