Phase 1: Evaluation and Research
The phase starts with evaluation: the AI proposes research questions to clarify scope and objectives. In the research phase, those questions are answered and evidence is collected. The research-proof mechanism requires every finding to be accompanied by a typed proof (a code location, a network source, or a code diff), which the Prover agent verifies; unverified claims cannot pass.
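The typed-proof idea can be sketched as follows. This is an illustrative model, not the system's actual API: the names `Proof`, `Finding`, and `verify_finding` are hypothetical stand-ins for the Prover agent's check.

```python
from dataclasses import dataclass, field

# The three proof types named above; a finding with no proof, or with a
# proof of an unknown type, is rejected. (Hypothetical sketch.)
ALLOWED_PROOF_TYPES = {"code_location", "network_source", "code_diff"}

@dataclass
class Proof:
    kind: str        # one of ALLOWED_PROOF_TYPES
    reference: str   # e.g. a file:line, a URL, or a diff hunk

@dataclass
class Finding:
    claim: str
    proofs: list = field(default_factory=list)

def verify_finding(finding: Finding) -> bool:
    """Stand-in for the Prover agent: accept a claim only if every
    piece of evidence is present and of a known type."""
    if not finding.proofs:
        return False
    return all(p.kind in ALLOWED_PROOF_TYPES for p in finding.proofs)

unproven = Finding(claim="The cache is never invalidated")
proven = Finding(
    claim="Token refresh lives in src/auth.py",
    proofs=[Proof(kind="code_location", reference="src/auth.py:42")],
)
print(verify_finding(unproven))  # False: unverified claims cannot pass
print(verify_finding(proven))    # True
```

The point of the typed proof is that the verifier can mechanically check each evidence kind (does the file:line exist, does the URL resolve, does the diff apply) rather than trusting free-text claims.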
Phase 2: Planning
Develop a detailed execution plan including architecture diagrams, task decomposition, and acceptance criteria. Execution is divided into multiple sub-phases, each defining the files that must be modified and the files that may be modified. Acceptance criteria are accepted or rejected by the user, and the server verifies test-related criteria upon submission.
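A plan with must/may file scopes and user-gated acceptance criteria might look like the sketch below. The schema and field names are assumptions made for illustration, not the system's actual format.

```python
# Hypothetical plan structure: one sub-phase with a "must" scope, a
# "may" scope, and acceptance criteria awaiting user accept/reject.
plan = {
    "sub_phases": [
        {
            "name": "add-rate-limiter",          # illustrative task name
            "must_modify": ["src/limiter.py"],   # files that must change
            "may_modify": ["src/config.py"],     # files allowed to change
            "acceptance_criteria": [
                {
                    "text": "requests over the limit return 429",
                    "kind": "test",      # test-related: server-verified
                    "status": "pending", # user accepts or rejects
                },
            ],
        }
    ]
}

def files_in_scope(sub_phase: dict) -> set:
    """Full set of files an agent may touch in this sub-phase."""
    return set(sub_phase["must_modify"]) | set(sub_phase["may_modify"])

print(files_in_scope(plan["sub_phases"][0]))
```

Separating "must" from "may" lets the execution phase distinguish incomplete work (a "must" file untouched) from a scope violation (a file outside both lists changed).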
Phase 3: Execution
Execution runs as an iterative loop: each sub-phase includes implementation, verification, repair, code review, and submission steps. Production code and test code are written by different agents. The scope-locking mechanism enforces the file-scope constraints; updating the scope or plan requires re-approval. Verification configuration files drive automatic code-quality checks.
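The scope-locking check can be sketched as a gate over a proposed change set, under the assumption that the locked scope is simply the approved set of file paths. All names here (`ScopeViolation`, `check_diff_against_scope`) are hypothetical.

```python
class ScopeViolation(Exception):
    """Raised when a change set touches files outside the locked scope."""

def check_diff_against_scope(changed_files, locked_scope):
    """Enforce the file-scope constraint on a proposed change set.
    Any out-of-scope edit is rejected; the agent must update the
    plan and obtain re-approval before retrying."""
    out_of_scope = set(changed_files) - set(locked_scope)
    if out_of_scope:
        raise ScopeViolation(
            f"Out-of-scope edits {sorted(out_of_scope)}; "
            "update the plan and obtain re-approval first."
        )

locked = frozenset({"src/limiter.py", "src/config.py"})
check_diff_against_scope(["src/limiter.py"], locked)   # in scope: passes
try:
    check_diff_against_scope(["src/other.py"], locked)
except ScopeViolation as exc:
    print(exc)  # out of scope: blocked
```

Running the check on every submission, rather than trusting the agent, is what makes the lock an enforcement mechanism instead of a convention.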
Phase 4: Review and Delivery
Blind review phase (4.0): the reviewer agent does not see the implementation details. Issue handling phase (4.1): identified issues are resolved. Final approval (4.2): delivery after user confirmation. The review system centrally manages feedback, and ReviewGuard blocks phase progression until the user has resolved all review items.
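The gating behavior described for ReviewGuard reduces to a simple invariant: the phase can advance only when every review item is resolved. The following is a minimal sketch of that invariant; the class and method names are illustrative, not the system's actual interface.

```python
class ReviewGuard:
    """Block phase progression while any review item is unresolved.
    (Hypothetical sketch of the gating logic, not the real component.)"""

    def __init__(self):
        self.items = {}  # item id -> resolved?

    def add_item(self, item_id: str) -> None:
        self.items[item_id] = False

    def resolve(self, item_id: str) -> None:
        self.items[item_id] = True

    def can_advance(self) -> bool:
        """True only when every tracked review item is resolved."""
        return all(self.items.values())

guard = ReviewGuard()
guard.add_item("R1")
print(guard.can_advance())  # False: one unresolved item blocks delivery
guard.resolve("R1")
print(guard.can_advance())  # True: final approval may proceed
```

Centralizing the items in one guard means no phase transition can bypass an open review comment, which is the property the delivery phase relies on.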