# agentlint

```shell
npx skills add https://github.com/cjavdev/agent-lint --skill agentlint
```
## AgentLint
Audit websites for AI/agent-friendliness. Runs 17 rules across 5 categories, produces a 0-100 AgentScore, and guides remediation.
## Workflow
### Step 1: Run the CLI
```shell
npx @cjavdev/agent-lint <url> --agent
```
The `--agent` flag outputs a structured markdown report optimized for parsing. If the user wants raw JSON, use `--json` instead.
Common flags:

| Flag | Default | Description |
|---|---|---|
| `--max-depth <n>` | 3 | Maximum crawl depth |
| `--max-pages <n>` | 30 | Maximum pages to crawl |
| `--json` | off | Output as JSON |
| `--agent` | off | Output agent-friendly markdown |
| `--config <path>` | (none) | Path to config file |
Exit codes: 0 = no errors found, 1 = errors found, 2 = invalid input/system error.
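The exit codes above make the CLI easy to script. A minimal Python sketch (the exit-code meanings come from this document; the `subprocess` call assumes `npx` and network access are available):

```python
import subprocess

# Exit-code meanings documented by agent-lint.
EXIT_MEANINGS = {
    0: "no errors found",
    1: "errors found",
    2: "invalid input/system error",
}

def interpret_exit(code: int) -> str:
    """Return a human-readable meaning for an agent-lint exit code."""
    return EXIT_MEANINGS.get(code, f"unknown exit code {code}")

def run_audit(url: str) -> int:
    """Run the CLI in agent mode and return its exit code."""
    result = subprocess.run(
        ["npx", "@cjavdev/agent-lint", url, "--agent"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    return result.returncode
```

In CI, a nonzero return from `run_audit` can fail the build, with `interpret_exit` providing the log message.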
### Step 2: Parse Results
Extract from the CLI output:
- Score (0-100) and letter grade (A/B/C/D/F)
- Violations grouped by severity: errors, warnings, info
- Per-page details for page-specific violations
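A regex-based extraction sketch for these fields. The exact line formats (`AgentScore: N/100`, `Grade: X`, `- [severity] rule`) are assumptions, not the documented `--agent` output; adjust the patterns to match the real report:

```python
import re

def parse_report(report: str) -> dict:
    """Extract score, grade, and per-severity violation counts from an
    agent-lint markdown report. Line formats here are hypothetical."""
    score = re.search(r"AgentScore:\s*(\d+)/100", report)
    grade = re.search(r"Grade:\s*([A-F])", report)
    counts = {
        sev: len(re.findall(rf"^\s*-\s*\[{sev}\]", report, re.MULTILINE))
        for sev in ("error", "warning", "info")
    }
    return {
        "score": int(score.group(1)) if score else None,
        "grade": grade.group(1) if grade else None,
        "counts": counts,
    }
```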
### Step 3: Present Remediation Plan
Prioritize fixes by impact:
- Errors first (-10 pts each): these are the biggest score killers
- High-ROI warnings (-4 pts each): fix easy ones first (e.g., adding a sitemap vs. restructuring content)
- Info items (-1 pt each): nice-to-have improvements
For each violation, provide:
- What's wrong and why it matters
- Concrete fix steps (reference references/remediation-guide.md for detailed instructions)
- Expected score improvement
## Score Interpretation
| Grade | Score | Meaning |
|---|---|---|
| A | 90-100 | Excellent. Site is highly agent-friendly. |
| B | 80-89 | Good. Minor improvements possible. |
| C | 70-79 | Fair. Several gaps in agent-friendliness. |
| D | 60-69 | Poor. Significant barriers for AI agents. |
| F | 0-59 | Failing. Major issues across multiple categories. |
Scoring formula: Start at 100. Subtract 10 per error, 4 per warning, 1 per info. Clamped to 0-100.
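The formula and grade bands above translate directly into a small helper (the point values and cutoffs are taken straight from this document):

```python
def agent_score(errors: int, warnings: int, infos: int) -> int:
    """Start at 100; subtract 10 per error, 4 per warning, 1 per info;
    clamp the result to the 0-100 range."""
    return max(0, min(100, 100 - 10 * errors - 4 * warnings - infos))

def grade(score: int) -> str:
    """Map a score to its letter grade per the table above."""
    for letter, floor in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if score >= floor:
            return letter
    return "F"
```

For example, 1 error, 2 warnings, and 3 info items gives 100 - 10 - 8 - 3 = 79, grade C.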
## Rule Quick Reference
### Errors (-10 pts each)
| Rule ID | What It Checks |
|---|---|
| `transport/accept-markdown` | Returns markdown for `Accept: text/markdown` |
| `discoverability/llms-txt` | `/llms.txt` exists |
### Warnings (-4 pts each)
| Rule ID | What It Checks |
|---|---|
| `transport/content-type-valid` | Valid `Content-Type` header on responses |
| `transport/robots-txt` | `/robots.txt` exists (AI agent blocks are info) |
| `structure/heading-hierarchy` | H1 exists, no skipped heading levels |
| `structure/anchor-ids` | Headings have anchor IDs for deep linking |
| `tokens/page-token-count` | Page under 4,000 tokens (configurable) |
| `tokens/boilerplate-duplication` | Under 30% repeated nav/header/footer content |
| `agent/agent-usage-guide` | Pages mention AI/agent keywords |
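As an illustration of what `structure/heading-hierarchy` looks for, here is a simplified sketch operating on a page's ordered heading levels; the real rule presumably parses the DOM, and this function is not its actual implementation:

```python
def check_heading_hierarchy(levels: list[int]) -> list[str]:
    """Flag hierarchy problems in ordered heading levels, e.g. [1, 2, 3, 2].
    Simplified: an H1 must exist, and no level may be skipped going down."""
    problems = []
    if 1 not in levels:
        problems.append("no H1 found")
    prev = 0
    for level in levels:
        if prev and level > prev + 1:
            problems.append(f"skipped from h{prev} to h{level}")
        prev = level
    return problems
```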
### Info (-1 pt each)
| Rule ID | What It Checks |
|---|---|
| `structure/semantic-html` | Uses `<main>`, `<article>`, or `<section>` |
| `structure/meta-description` | Has `<meta name="description">` |
| `structure/lang-attribute` | `<html lang="...">` attribute present |
| `tokens/nav-ratio` | Nav tokens under 20% of page tokens |
| `agent/mcp-detect` | `/.well-known/mcp.json` exists |
| `discoverability/sitemap` | `/sitemap.xml` exists |
| `discoverability/openapi-detect` | OpenAPI spec at common paths |
| `discoverability/structured-data` | JSON-LD structured data present |
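Several of these info rules reduce to simple presence checks on the page markup. A rough string-level sketch (the real rules presumably parse the DOM; these regex checks are illustrative only):

```python
import re

def info_checks(html: str) -> dict:
    """Rough string-level versions of four info rules, keyed by rule ID."""
    return {
        "structure/semantic-html": bool(re.search(r"<(main|article|section)\b", html)),
        "structure/meta-description": '<meta name="description"' in html,
        "structure/lang-attribute": bool(re.search(r"<html[^>]*\blang=", html)),
        "discoverability/structured-data": '<script type="application/ld+json"' in html,
    }
```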
## Prioritization Logic
When presenting a remediation plan, order fixes by points recoverable per unit of effort:
Quick wins (fix first):
- `discoverability/llms-txt`: create a single file, recover 10 pts
- `structure/lang-attribute`: one-line HTML change, recover 1 pt
- `structure/meta-description`: add meta tags, recover 1 pt per page
- `discoverability/sitemap`: most frameworks auto-generate this
Medium effort:
- `transport/content-type-valid`: usually a server config fix
- `structure/heading-hierarchy`: HTML structure fixes
- `structure/anchor-ids`: add a rehype/markdown plugin
- `agent/agent-usage-guide`: write a dedicated docs page
- `transport/robots-txt`: create/update a text file
High effort, high impact:
- `transport/accept-markdown`: requires server-side content negotiation (10 pts)
- `tokens/page-token-count`: may require content restructuring
- `tokens/boilerplate-duplication`: requires template/layout changes
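"Points recoverable per unit of effort" can be sketched as a sort over the violations. The severity point values come from the scoring rules above; the numeric `effort` estimates are hypothetical and would need to be assigned per rule:

```python
# Point values per severity, from the documented scoring formula.
SEVERITY_POINTS = {"error": 10, "warning": 4, "info": 1}

def prioritize(violations: list) -> list:
    """Order violations by recoverable points divided by estimated effort,
    highest ratio first. Each dict needs 'severity' and a numeric 'effort'
    (1 = trivial, 3 = substantial); both fields here are illustrative."""
    return sorted(
        violations,
        key=lambda v: SEVERITY_POINTS[v["severity"]] / v["effort"],
        reverse=True,
    )
```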
## Configuration
Sites can customize behavior via `agent-lint.config.json`:
```json
{
  "maxDepth": 3,
  "maxPages": 30,
  "tokenThreshold": 4000,
  "ignorePatterns": ["/blog/*"],
  "rules": {
    "tokens/page-token-count": {
      "severity": "info",
      "ignorePaths": ["/docs/changelog"]
    }
  }
}
```
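Resolving a rule's effective severity after such an override could look like the following sketch; the default severity table and the merge order are assumptions based on the config shape shown above, not agent-lint's actual internals:

```python
# Hypothetical built-in defaults; the Warnings table above lists
# tokens/page-token-count as a warning out of the box.
DEFAULT_SEVERITY = {"tokens/page-token-count": "warning"}

def effective_severity(rule: str, config: dict) -> str:
    """Return a rule's severity after applying config-file overrides.
    Falls back to the built-in default, then to 'info'."""
    override = config.get("rules", {}).get(rule, {})
    return override.get("severity", DEFAULT_SEVERITY.get(rule, "info"))
```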
## Detailed Remediation
For step-by-step fix instructions with code examples for each rule (Nginx, Cloudflare Workers, Next.js, Express, static HTML), see `references/remediation-guide.md`.