example.com
Opaque. Agents will scrape HTML and hope.
- ✗ robots.txt present 5pt
Missing or errored
Fix: Add /robots.txt — at minimum `User-agent: *` + `Allow: /` + `Sitemap: https://example.com/sitemap.xml` (the Sitemap directive takes an absolute URL, not a path).
- ✗ robots.txt explicitly allows AI crawlers 10pt
No AI crawler is explicitly allow-listed
Fix: Add a User-agent block with `Allow: /` for each of GPTBot, OAI-SearchBot, ClaudeBot, Claude-Web, Claude-SearchBot, ChatGPT-User, and PerplexityBot. Most sites allow these implicitly, but an explicit allow is unambiguous — a named User-agent group overrides your `*` rules, so it survives a later blanket Disallow and tells each crawler it's wanted.
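A minimal robots.txt covering both checks above might look like the sketch below (crawler names taken from this report; the Sitemap URL must be absolute):

```text
User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# ...repeat for Claude-Web, Claude-SearchBot, ChatGPT-User

User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```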
- ✗ /llms.txt is live 15pt
Missing — the llmstxt.org convention for summarizing your site to agents.
Fix: Generate /llms.txt — a plain-text summary of what your site is + the most important URLs. llmstxt.org has the spec; EmDash + similar CMSes have one-line generators.
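Per the llmstxt.org convention, /llms.txt is markdown: an H1 title, a one-paragraph blockquote summary, then H2 sections of annotated links. A sketch with placeholder names and URLs:

```markdown
# Example Co

> Example Co builds widgets. This site hosts product docs, a blog, and a public API.

## Docs

- [Quickstart](https://example.com/docs/quickstart): install and make a first request
- [API reference](https://example.com/docs/api): endpoints and auth

## Blog

- [Launch post](https://example.com/blog/launch): what we shipped and why
```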
- ✗ /sitemap.xml with lastmod timestamps 8pt
Missing sitemap
Fix: Ship /sitemap.xml and give every <url> a <lastmod> — agents use it to decide re-fetch cadence.
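A minimal sitemap with <lastmod> on each entry (URLs and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2025-01-15</lastmod>
  </url>
  <url>
    <loc>https://example.com/blog/launch</loc>
    <lastmod>2025-01-10</lastmod>
  </url>
</urlset>
```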
- ✗ MCP server exposed (/api/mcp or /.well-known/mcp) 20pt
No MCP server detected at /api/mcp or /.well-known/mcp
Fix: Add a JSON-RPC 2.0 MCP server at /api/mcp exposing your site's content as typed tools (list_posts, search, get_post, etc.). The Model Context Protocol spec at modelcontextprotocol.io is short, with official SDKs for the major stacks.
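What an agent sees once the server is up — a JSON-RPC `tools/list` exchange, abridged (a real session begins with an `initialize` handshake; the `search` tool and its schema here are illustrative):

```text
→ {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

← {"jsonrpc": "2.0", "id": 1, "result": {"tools": [
     {"name": "search",
      "description": "Full-text search over posts",
      "inputSchema": {"type": "object",
                      "properties": {"query": {"type": "string"}},
                      "required": ["query"]}}
   ]}}
```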
- ✗ OpenGraph + Twitter meta on homepage 5pt
0 og:* tags, 0 twitter:* tags
Fix: Add og:title, og:description, og:image, og:url, twitter:card, twitter:title, twitter:description. One og:image per page (not duplicated).
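The full tag set for a homepage looks like this (content values are placeholders; og:image must be an absolute URL):

```html
<meta property="og:title" content="Example Co" />
<meta property="og:description" content="What the page is about, in one sentence." />
<meta property="og:image" content="https://example.com/og/home.png" />
<meta property="og:url" content="https://example.com/" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Example Co" />
<meta name="twitter:description" content="What the page is about, in one sentence." />
```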
- ✗ schema.org structured data 10pt
No JSON-LD structured data on homepage
Fix: Add JSON-LD blocks for WebSite + Organization + (for blogs) BlogPosting. Google's Rich Results Test validates each type.
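WebSite + Organization can ship as one JSON-LD block via `@graph` (names and URLs are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@graph": [
    {"@type": "WebSite", "name": "Example Co", "url": "https://example.com/"},
    {"@type": "Organization", "name": "Example Co",
     "url": "https://example.com/", "logo": "https://example.com/logo.png"}
  ]
}
</script>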
- ✗ RSS/Atom/JSON feeds 5pt
No RSS/Atom/JSON feed discovered on homepage or /rss.xml
Fix: Ship at minimum /rss.xml. Agent-era content consumers (readers, aggregators, summarizers) fall back to RSS when MCP isn't available.
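A minimal /rss.xml, with placeholder content; pair it with `<link rel="alternate" type="application/rss+xml" href="/rss.xml">` in the homepage <head> so the feed is discoverable:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Co</title>
    <link>https://example.com/</link>
    <description>Posts from Example Co</description>
    <item>
      <title>Launch post</title>
      <link>https://example.com/blog/launch</link>
      <pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```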
- ✗ x402 micropayments (optional, high-value) 8pt
No x402 challenges detected on /api — fine if you don't monetize via agent payments, but it's the easiest $10-$10k/mo new revenue surface.
Fix: If your API has agent buyers, wire x402 on one paid route. The middleware is ~20 lines and settles USDC on Base. Romy builds this as a $2.5k 1-week sprint: /services#x402-integration-sprint
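The mechanic: an unpaid request gets an HTTP 402 challenge describing how to pay, the agent retries with a payment header, the middleware verifies and settles. A sketch of the challenge body — field names approximate the published x402 scheme and should be checked against the spec; addresses are placeholders:

```text
HTTP/1.1 402 Payment Required
Content-Type: application/json

{
  "x402Version": 1,
  "accepts": [{
    "scheme": "exact",
    "network": "base",
    "maxAmountRequired": "10000",
    "asset": "0x...",
    "payTo": "0x...",
    "resource": "https://example.com/api/report"
  }]
}
```

Here `"10000"` would mean 0.01 USDC (USDC uses 6 decimals), `asset` is the USDC contract on Base, and `payTo` is your receiving address.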
- ✗ canonical URL on homepage 3pt
No <link rel=canonical> — agent crawlers dedupe differently than Google; explicit canonical helps both.
Fix: Add <link rel="canonical" href="..." /> to every page. One-line fix in most frameworks.
- ✗ security.txt 3pt
No /.well-known/security.txt — not agent-related but a trust signal bots + humans both check.
Fix: Add /.well-known/security.txt with a Contact: URL (or email). RFC 9116 is half a page.
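RFC 9116 requires both Contact and Expires; a complete minimal file (contact address is a placeholder):

```text
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00.000Z
```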
- ✗ HTTPS + HSTS 3pt
HTTPS but no HSTS header
Fix: Add Strict-Transport-Security header. Most hosts do this automatically when HTTPS is on.
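If your host doesn't set it, the header is one line — a common conservative value (a year, covering subdomains):

```text
Strict-Transport-Security: max-age=31536000; includeSubDomains
```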
- ✗ Machine-readable API spec (OpenAPI) 5pt
No OpenAPI spec at /openapi.json
Fix: If your site has a public API, ship /openapi.json. Agents use it for typed discovery the same way humans use docs pages.
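A valid-but-minimal /openapi.json to start from (title and path are placeholders; grow it one endpoint at a time):

```json
{
  "openapi": "3.1.0",
  "info": {"title": "Example Co API", "version": "1.0.0"},
  "paths": {
    "/posts": {
      "get": {
        "summary": "List posts",
        "responses": {"200": {"description": "OK"}}
      }
    }
  }
}
```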
Want the full report?
This free checker runs 13 probes. The paid Agent-Ready Audit is ~50 checks + manual review of your schema, content strategy, and agent discoverability — plus a PR with the highest-leverage change already implemented.
Book the full audit — $1.5k ($150 deposit) → Or ask a question first