Zing Forum

Making Website Content Discoverable and Computable by AI Models: In-Depth Analysis of Claude Code's model-discoverability Skill

Introduces the model-discoverability skill, a Claude Code skill that makes website content discoverable and computable by AI models such as ChatGPT and Claude through a four-layer architecture (llms.txt, Schema.org structured data, computation guidelines, and OpenAPI specifications).

Tags: Claude Code · AI discoverability · llms.txt · Schema.org · JSON-LD · OpenAPI · AI SEO · content strategy · Nimble Books
Published 2026-04-13 11:17 · Recent activity 2026-04-13 11:37 · Estimated read: 7 min

Section 01

Introduction: Claude Code's model-discoverability Skill Makes Website Content Discoverable and Computable by AI

This article provides an in-depth analysis of the model-discoverability skill for Claude Code, developed by Fred Zimmerman of Nimble Books LLC. The skill helps AI models such as ChatGPT and Claude discover, understand, and compute over website content through a four-layer architecture: llms.txt, Schema.org structured data, companion data with computation guidelines, and OpenAPI specifications. It also provides auditing, generation, and deployment functions to help websites meet the content-visibility demands of the AI era. Project link: https://github.com/fredzannarbor/model-discoverability-skill, License: MIT.

Section 02

Background: New Challenges for Website Visibility in the AI Era

With the rise of conversational AI models such as ChatGPT and Claude, the way users obtain information has shifted from traditional keyword search to conversational access. Whether AI models can find, understand, and use a website's content has become a new challenge. The model-discoverability skill aims to solve this problem: it is not only a technical tool but also a content strategy for the AI era.

Section 03

Four-Layer Architecture: A Complete Path from Discovery to Computation

The model-discoverability skill adopts a four-layer progressive architecture:

  1. llms.txt: Placed in the website root directory, it provides AI models with a navigation map (organization overview, content directory, etc.); unlike robots.txt, which issues crawler instructions, it describes the content itself;
  2. Schema.org JSON-LD: Structured data embedded in HTML pages that helps AI understand content semantics precisely. Pages with correct markup are three times more likely to appear in Google AI Overviews than unmarked pages;
  3. Companion Data + Computation Guidelines: Machine-readable data files (JSON/CSV) plus a _computation_guide field that guides AI through data analysis (comparison, sorting, calculating averages, etc.);
  4. OpenAPI Specifications: Describe data endpoints so AI models can access dynamic data programmatically.
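
As a sketch of Layer 2, the snippet below builds a minimal Schema.org Dataset block as JSON-LD, wrapped in the script tag that embeds it in a page. This is an illustrative example, not output of the skill itself; the dataset name, description, and URL are invented.

```python
import json

def make_jsonld_script(name: str, description: str, url: str) -> str:
    """Build a minimal Schema.org Dataset JSON-LD block (Layer 2),
    wrapped in the <script> tag that embeds it in an HTML page."""
    data = {
        "@context": "https://schema.org",
        "@type": "Dataset",
        "name": name,
        "description": description,
        "url": url,
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

print(make_jsonld_script(
    "Monthly sales by region",           # invented example dataset
    "Aggregated monthly sales figures.",
    "https://example.com/data/sales",
))
```

Richer markup (publisher, distribution, license) generally improves how precisely AI systems can interpret the page.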

Section 04

How to Use the Skill: Auditing, Generation, Deployment, and Data Addition

The skill provides multiple functions:

  • Audit Mode: Checks the website's AI discoverability (e.g., whether llms.txt exists and whether Schema markup is complete) and outputs a score from 0 to 8;
  • Generation Mode: Generates files for missing components (llms.txt, Schema JSON-LD, companion data, OpenAPI specifications);
  • Deployment Mode: Supports environments such as Apache, Nginx, and static hosting (Vercel/Netlify/S3);
  • Data Addition Mode: Accepts uploaded CSV/JSON files, automatically generates computation guidelines, multi-format files, and OpenAPI entries, and updates llms.txt.
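
The audit mode's 0-to-8 score suggests one point per component checked. The sketch below shows how such a scorer might work; the eight check names are hypothetical placeholders, since the skill's actual checklist is not reproduced in this article.

```python
# Hypothetical audit sketch: one point per passing check, eight checks
# total. The check names here are illustrative, not the skill's own.
CHECKS = [
    ("llms.txt present",            lambda site: "llms.txt" in site),
    ("Schema.org JSON-LD present",  lambda site: "jsonld" in site),
    ("companion data files",        lambda site: "companion_data" in site),
    ("computation guide",           lambda site: "_computation_guide" in site),
    ("OpenAPI spec",                lambda site: "openapi.json" in site),
    ("open license declared",       lambda site: "license" in site),
    ("download formats listed",     lambda site: "download_formats" in site),
    ("citation info",               lambda site: "citation" in site),
]

def audit(site: set) -> int:
    """Return a discoverability score from 0 to 8 for a site,
    modeled here as the set of components it provides."""
    return sum(1 for _name, check in CHECKS if check(site))
```

For example, a site providing only llms.txt and JSON-LD markup would score 2 of 8, with the generation mode then filling in the missing six components.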

Section 05

Computation Guidelines: The Bridge Connecting Static Data and AI Computation

The _computation_guide field is the skill's core innovation; its JSON structure includes:

  • description: Dataset description;
  • suggested_analyses: Types of analysis AI can perform (e.g., comparing dimension performance, sorting, calculating averages, etc.);
  • column_definitions: Column meanings, units, etc.;
  • download_formats: JSON/CSV download links;
  • citation: Citation method;
  • license: Terms of use.

AI models with code-execution capabilities can analyze the data directly based on these fields.
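
The six fields above can be assembled into a companion-data object. The helper below is a minimal sketch: the field names follow the list above, while the example dataset, columns, and URLs are invented.

```python
import json

def build_computation_guide(description, analyses, columns,
                            formats, citation, license_terms):
    """Assemble a _computation_guide object with the six fields
    described above, ready to embed in a companion JSON file."""
    return {
        "_computation_guide": {
            "description": description,
            "suggested_analyses": analyses,
            "column_definitions": columns,
            "download_formats": formats,
            "citation": citation,
            "license": license_terms,
        }
    }

guide = build_computation_guide(
    description="Monthly sales by region, 2024-2025",   # invented example
    analyses=["compare regions", "sort by revenue", "compute monthly averages"],
    columns={"region": "sales region name", "revenue": "monthly revenue in USD"},
    formats={"json": "/data/sales.json", "csv": "/data/sales.csv"},
    citation="Example Corp, Monthly Sales Dataset, 2025",
    license_terms="CC-BY-SA-4.0",
)
print(json.dumps(guide, indent=2))
```

An AI model that reads this object alongside the data file knows what the columns mean, which analyses are sensible, and under what terms it may reuse the results.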

Section 06

Best Practice Recommendations

Best practices for using this skill:

  1. Update llms.txt when publishing new content;
  2. Add computation guidelines to data-rich pages;
  3. Use open licenses like CC-BY-SA;
  4. Include suggested analyses;
  5. Keep Schema markup up to date;
  6. Test content discoverability with real AI queries.
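
For practice 6, a first manual check is to fetch the discovery files directly, then ask an AI model questions the content should answer. The helper below builds the URLs to fetch; the file locations (llms.txt and openapi.json at the site root) are assumptions based on the four-layer architecture, not confirmed paths used by the skill.

```python
from urllib.parse import urljoin

def discoverability_urls(base_url: str) -> list:
    """Build the URLs to fetch when manually checking a site's AI
    discoverability files. Root-level locations are assumed."""
    return [
        urljoin(base_url, "/llms.txt"),      # Layer 1: navigation map
        urljoin(base_url, "/openapi.json"),  # Layer 4: API description
    ]
```

For example, `discoverability_urls("https://example.com")` yields the two root-level files to request before testing with real AI queries.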

Section 07

Practical Significance and Conclusion

The model-discoverability skill represents a new paradigm for content publishing in the AI era: content must serve both humans and AI. It is especially valuable for scenarios such as data-driven research, product catalogs, knowledge bases, and open data projects. Its core idea is that in the AI era, website visibility depends not only on search engine rankings but also on whether AI can understand and use the content. The four-layer architecture provides a complete path from 'being discovered' to 'being used', making the skill an important content-strategy tool for the AI era.