Zing Forum

Spec2Cov: An Agent Framework-Driven Automated Solution for Digital Hardware Coverage Closure

This article introduces the Spec2Cov agent framework, which automatically generates test stimuli via large language models to achieve end-to-end automation from design specifications to coverage closure, reaching 100% coverage on simple designs.

Tags: hardware verification, coverage closure, agent framework, large language model, automated testing, chip design, Spec2Cov
Published 2026-04-17 09:08 · Recent activity 2026-04-20 10:17 · Estimated read 6 min

Section 01

Spec2Cov: A Guide to the Agent Framework-Driven Automated Solution for Hardware Coverage Closure

Spec2Cov is an agent-framework-based automated solution for digital hardware coverage closure. It uses large language models to generate test stimuli automatically, delivering an end-to-end automated flow from design specification to coverage closure. The solution targets a long-standing pain point of hardware verification, where coverage closure is largely manual and slow; it achieves 100% coverage on simple designs and opens a new technical path for chip-verification automation.


Section 02

Pain Points in Hardware Verification and Opportunities Brought by Large Language Models

Hardware verification is one of the most challenging stages of chip design: the verification cycle typically consumes more than 50% of the overall development cycle, and the coverage-closure stage in particular is largely manual, slow, and labor-intensive. In recent years, large language models have shown strong code-generation capabilities; combined with external tools in agent workflows, they offer a new technical direction for automated coverage closure, and Spec2Cov grew out of exactly this idea.


Section 03

Spec2Cov Architecture and Key Technical Strategies

Core Architecture

Spec2Cov's core is a closed-loop feedback system that coordinates the interaction between a large language model and a hardware simulator: take the design specification as input → have the LLM generate initial test stimuli → compile and simulate → capture errors and feed them back for correction → parse the coverage report → iterate until the coverage target is met.
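The loop above can be sketched in a few lines of Python. This is a minimal illustration of the closed-loop structure only; the function names (`generate`, `simulate`) and the `SimResult` fields are assumptions for the sketch, not the paper's actual API.

```python
# Hypothetical sketch of the Spec2Cov closed loop. The callables passed in
# stand for the LLM call and the compile+simulate step; their names and the
# SimResult fields are illustrative assumptions, not the paper's interface.
from dataclasses import dataclass

@dataclass
class SimResult:
    ok: bool             # did compilation + simulation succeed?
    errors: list         # compiler/simulator error messages
    coverage: float      # fraction of coverage points hit (0.0-1.0)

def closure_loop(spec: str, target: float, max_iters: int,
                 generate, simulate) -> tuple[str, float]:
    """Iterate LLM generation against simulator feedback until the
    coverage target is met or the iteration budget runs out."""
    feedback = ""
    stimulus, best_cov = "", 0.0
    for _ in range(max_iters):
        stimulus = generate(spec, feedback)      # LLM generates test stimuli
        result: SimResult = simulate(stimulus)   # compile and simulate
        if not result.ok:
            # Structured error feedback drives a targeted correction.
            feedback = "fix these errors: " + "; ".join(result.errors)
            continue
        best_cov = max(best_cov, result.coverage)
        if result.coverage >= target:
            break
        # Describe coverage gaps so the LLM adjusts its test strategy.
        feedback = f"coverage is {result.coverage:.0%}; hit the remaining bins"
    return stimulus, best_cov
```

In practice `generate` would wrap an LLM API call and `simulate` would invoke an RTL simulator; the sketch only captures the control flow between them.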

Key Mechanisms

  • Intelligent error handling: compilation and simulation errors are classified and parsed, then fed back to the LLM as structured feedback for targeted corrections;
  • Coverage report parsing: multi-dimensional coverage data is converted into natural-language descriptions that guide the LLM in adjusting its test strategy.
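A rough Python sketch of these two feedback mechanisms follows. The error categories, regex patterns, and bin-map format are assumptions made for illustration; the paper does not specify them.

```python
# Illustrative sketch of the two feedback mechanisms. The error taxonomy
# and the coverage-report representation are assumptions, not taken from
# the Spec2Cov paper.
import re

def classify_error(message: str) -> str:
    """Bucket a raw simulator message into a coarse category so the
    prompt can tell the LLM *what kind* of fix is needed."""
    rules = [
        (r"syntax|unexpected token", "syntax"),
        (r"undeclared|not declared|unknown identifier", "undeclared-identifier"),
        (r"width|bit[- ]width|truncat", "width-mismatch"),
        (r"timeout|hang", "simulation-timeout"),
    ]
    for pattern, category in rules:
        if re.search(pattern, message, re.IGNORECASE):
            return category
    return "other"

def coverage_to_text(bins: dict) -> str:
    """Turn a bin-hit map into a natural-language summary for the LLM."""
    missed = [name for name, hit in bins.items() if not hit]
    hit_pct = 100 * (len(bins) - len(missed)) / len(bins)
    if not missed:
        return "All coverage bins are hit; closure achieved."
    return (f"{hit_pct:.0f}% of bins are hit. "
            f"Write stimuli that exercise: {', '.join(missed)}.")
```

The point of both helpers is the same: raw tool output is compressed into short, structured text that an LLM can act on, rather than being pasted into the prompt verbatim.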

No-Fine-Tuning Enhancement Strategies

Including context enhancement (injecting historical iteration information), coverage-guided prompt engineering, multi-turn dialogue generation strategies, and error pattern learning (maintaining a library of common errors to quickly apply solutions).
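These strategies come together at prompt-assembly time. The sketch below shows one plausible way to combine historical iteration context, coverage guidance, and an error-pattern library into a single prompt; the field names and the library contents are hypothetical.

```python
# Hypothetical prompt-assembly sketch for the no-fine-tuning strategies:
# context enhancement (recent history), coverage-guided prompting, and a
# common-error library. All field names and hints are illustrative.

ERROR_LIBRARY = {  # error category -> known fix hint (assumed content)
    "width-mismatch": "match signal widths explicitly, e.g. use 8'd0 not 0",
    "undeclared-identifier": "declare every signal before use",
}

def build_prompt(spec: str, history: list, missed_bins: list) -> str:
    """Assemble a coverage-guided prompt from the spec, recent
    iterations, and the library of common-error fixes."""
    parts = [f"Design spec:\n{spec}", ""]
    # Context enhancement: inject only the last few iterations to keep
    # the prompt short while preserving the feedback trail.
    for i, h in enumerate(history[-3:], 1):
        parts.append(f"Attempt {i}: coverage {h['coverage']:.0%}, "
                     f"errors: {h.get('errors', 'none')}")
        hint = ERROR_LIBRARY.get(h.get("error_class", ""))
        if hint:
            parts.append(f"Known fix: {hint}")  # error-pattern learning
    if missed_bins:
        # Coverage-guided prompting: point the LLM at uncovered bins.
        parts.append("Target the uncovered bins: " + ", ".join(missed_bins))
    parts.append("Generate an improved testbench.")
    return "\n".join(parts)
```

Because everything happens in the prompt, no model fine-tuning is required; the same base LLM is steered purely through richer context.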


Section 04

Experimental Evaluation Results of Spec2Cov

The research team evaluated Spec2Cov on 26 designs of different scales (including the CVDP benchmark suite):

  • Simple designs: Successfully achieved 100% coverage closure;
  • Complex designs: Reached a maximum coverage of 49%. Although not fully covered, the generated test stimuli provide a good starting point for manual verification and significantly shorten the verification time.

Section 05

Technical Significance and Industry Impact of Spec2Cov

Spec2Cov represents an important advancement in the field of hardware verification automation. By combining LLM's code generation capabilities with precise feedback from hardware simulation, it creates a new verification paradigm.

  • Chip industry: Improves verification efficiency and frees engineers to focus on creative strategy design and complex scenario analysis;
  • EDA ecosystem: Demonstrates the feasibility of AI-native verification tools, pointing toward a human-AI collaboration model in which AI handles the mechanical work while humans supervise and make the key decisions.

Section 06

Limitations and Future Improvement Directions

The current version still has room for improvement in coverage on complex designs. Future improvement directions include:

  • Introducing reinforcement learning to optimize test generation strategies;
  • Supporting more types of coverage metrics (e.g., functional coverage);
  • Extending to system-level verification scenarios.

At the same time, generated tests must remain interpretable and maintainable so that engineers can debug them.