editing-claude
npx skills add https://github.com/dawiddutoit/custom-claude --skill editing-claude
Validate and optimize CLAUDE.md files to maintain focus and effectiveness.
Purpose
This skill validates CLAUDE.md files against Anthropic’s best practices, detects quality issues (contradictions, redundancy, excessive length, emphasis overuse), and suggests safe automated fixes or extraction strategies. Based on comprehensive research into context engineering best practices.
Table of Contents
Core Sections
- When to Use – Explicit, implicit, and file type triggers
- What This Skill Does – 8-step workflow and health scoring
- Instructions – Complete implementation guide
- Step 1: Discover CLAUDE.md Files – Find global, project, and local files
- Step 2: Analyze Structure – Parse headings, measure sections, detect orphans
- Step 3: Analyze Emphasis Usage – Count MUST/NEVER/ALWAYS markers
- Step 4: Detect Redundancy – Intra-file and cross-file duplication
- Step 5: Detect Contradictions – Find conflicting directives
- Step 6: Identify Extraction Opportunities – Content for specialized docs
- Step 7: Validate Links – Check internal/external links
- Step 8: Generate Health Report – Compile comprehensive analysis
- Safe Automated Fixes – Fixes that can be applied automatically
- Fix 1: Remove Cross-File Redundancy – Replace duplication with references
- Fix 2: Fix Broken Links – Update links to correct paths
- Fix 3: Fix Orphaned Sections – Promote or nest sections
- Fix 4: Reduce Emphasis Density – Remove redundant emphasis
- Examples – Real-world usage scenarios
- Validation – Verify skill execution and outcomes
- Integration Points – Quality gates, git workflow, documentation reviews
Advanced Topics
- Supporting Files – Scripts, templates, and references
- Red Flags – Common mistakes to avoid
- Troubleshooting – Common issues and fixes
- Requirements – Python, bash tools, and dependencies
Supporting Resources
- Reference – Anthropic best practices, algorithms, validation checklist
- Examples – Quarterly review, redundancy fixes, extraction strategy
Quick Start
Analyze current CLAUDE.md:
Skill(command: "editing-claude")
Expected output: Health report (0-50 score), issues found (critical/warning/info), extraction opportunities, automated fixes available.
When to Use
Explicit Triggers:
- “Edit CLAUDE.md”
- “Optimize CLAUDE.md”
- “Check for contradictions in CLAUDE.md”
- “Validate documentation”
- “CLAUDE.md is too long”
Implicit Triggers:
- Before committing CLAUDE.md changes
- Document exceeds 200 lines (warn at 200, error at 500)
- Quarterly documentation reviews
- After adding new rules/sections
File Types:
- CLAUDE.md (project or global)
- .claude/*.md (specialized docs)
- *.CLAUDE.md (local overrides)
What This Skill Does
8-Step Workflow:
- Discovery – Find all CLAUDE.md files (global, project, local)
- Structure Analysis – Parse headings, measure sections, detect orphans
- Emphasis Analysis – Count MUST/NEVER/ALWAYS/MANDATORY/CRITICAL markers
- Redundancy Detection – Find duplicate content (intra-file and cross-file)
- Contradiction Detection – Find conflicting directives
- Extraction Opportunities – Identify content for specialized docs
- Link Validation – Verify all internal/external links
- Report & Recommendations – Generate health score with prioritized fixes
Health Score Components (50 points):
- Length: 10 points (100-200 lines = 10, >500 = 0)
- Emphasis Density: 10 points (<1% = 10, >2% = 0)
- Contradictions: 10 points (0 = 10, >3 = 0)
- Redundancy: 10 points (0 instances = 10, >5 = 0)
- Links: 5 points (all valid = 5)
- Structure: 5 points (no orphans = 5)
- Extraction Potential: 5 points (<20% = 5)
- Freshness: 5 points (updated <30 days = 5)
Instructions
Step 1: Discover CLAUDE.md Files
Find all CLAUDE.md files in hierarchy:
# Global CLAUDE.md
test -f ~/.claude/CLAUDE.md && echo "Global: ~/.claude/CLAUDE.md" || echo "No global"
# Project CLAUDE.md
test -f ./CLAUDE.md && echo "Project: ./CLAUDE.md" || echo "No project"
# Get line counts
wc -l ~/.claude/CLAUDE.md ./CLAUDE.md 2>/dev/null
Read the target CLAUDE.md file (usually project).
Step 2: Analyze Structure
Extract headings and calculate section lengths:
# Extract all headings with line numbers
grep -n "^#" ./CLAUDE.md
# Count sections
grep -c "^## " ./CLAUDE.md # H2 sections
grep -c "^### " ./CLAUDE.md # H3 subsections
Use Python script for detailed analysis:
python .claude/skills/editing-claude/scripts/analyze_structure.py ./CLAUDE.md
Detect issues:
- Document >200 lines (warning), >500 lines (error)
- Sections >100 lines (suggest extraction)
- Orphaned sections (H3 without H2 parent)
- Heading hierarchy skips (H1 → H3 without H2)
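The orphan and hierarchy-skip checks above reduce to a single pass over the headings. A minimal sketch (the real logic is in analyze_structure.py; the function name here is illustrative, and "orphan" is simplified to "H3 before any H2"):

```python
import re

def heading_issues(markdown: str) -> list[str]:
    """Flag H3 headings with no preceding H2, and level skips (e.g. H1 -> H3)."""
    issues = []
    prev_level = 0
    seen_h2 = False
    for lineno, line in enumerate(markdown.splitlines(), 1):
        m = re.match(r"^(#{1,6}) ", line)
        if not m:
            continue
        level = len(m.group(1))
        if level == 2:
            seen_h2 = True
        if level == 3 and not seen_h2:
            issues.append(f"line {lineno}: orphaned H3 (no H2 parent)")
        if prev_level and level > prev_level + 1:
            issues.append(f"line {lineno}: hierarchy skip H{prev_level} -> H{level}")
        prev_level = level
    return issues

doc = "# Title\n### Orphan\ntext\n## Section\n"
print(heading_issues(doc))  # flags the orphan and the H1 -> H3 skip
```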
Step 3: Analyze Emphasis Usage
Extract emphasis markers:
# Find all CRITICAL/MANDATORY/MUST/NEVER/ALWAYS instances
grep -n "\(CRITICAL\|MANDATORY\|MUST\|NEVER\|ALWAYS\)" ./CLAUDE.md
Calculate emphasis density:
python .claude/skills/editing-claude/scripts/analyze_emphasis.py ./CLAUDE.md
Thresholds:
- <1%: ✅ Optimal (restrained)
- 1-2%: ✅ Acceptable (within range)
- 2-5%: ⚠️ Warning (overuse)
- >5%: ❌ Error (nothing is critical)
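Emphasis density is just marker count over word count. A sketch of the calculation (analyze_emphasis.py is the authoritative version):

```python
import re

# Markers counted by the skill: CRITICAL/MANDATORY/MUST/NEVER/ALWAYS.
EMPHASIS = re.compile(r"\b(CRITICAL|MANDATORY|MUST|NEVER|ALWAYS)\b")

def emphasis_density(text: str) -> float:
    """Percentage of words that are emphasis markers."""
    words = text.split()
    if not words:
        return 0.0
    markers = len(EMPHASIS.findall(text))
    return 100.0 * markers / len(words)

sample = "You MUST run tests. NEVER skip linting. Prefer small commits."
print(f"{emphasis_density(sample):.1f}%")  # → 20.0%
```

A 200-line file at ~10 words per line has roughly 2000 words, so staying under 1% means at most ~20 markers in the whole document.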
Step 4: Detect Redundancy
Intra-file redundancy (same file):
Run semantic similarity analysis:
python .claude/skills/editing-claude/scripts/detect_redundancy.py ./CLAUDE.md
Distinguish redundancy from reinforcement:
- Redundancy: Exact duplication in same context → Fix
- Reinforcement: Same info in escalating contexts (catalog → section → checklist) → Keep
Cross-file redundancy (global vs project):
python .claude/skills/editing-claude/scripts/detect_redundancy.py ~/.claude/CLAUDE.md ./CLAUDE.md
Common redundant rules:
- MultiEdit requirement
- Testing philosophy
- Critical thinking principle
Fix: Replace project duplication with reference to global.
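Cross-file duplication can be approximated by intersecting normalized paragraphs. detect_redundancy.py uses semantic similarity; this exact-match sketch (function names are illustrative) only shows the shape of the check:

```python
def paragraphs(text: str) -> set[str]:
    """Normalized non-trivial paragraphs (lowercased, whitespace-collapsed)."""
    return {
        " ".join(p.split()).lower()
        for p in text.split("\n\n")
        if len(p.split()) > 5  # ignore trivial fragments
    }

def shared_paragraphs(global_md: str, project_md: str) -> set[str]:
    """Paragraphs duplicated verbatim across global and project files."""
    return paragraphs(global_md) & paragraphs(project_md)

g = "Use MultiEdit for batched edits to the same file whenever possible.\n\nOther rule."
p = "Intro.\n\nUse MultiEdit for batched edits to the same file whenever possible."
print(shared_paragraphs(g, p))  # the duplicated MultiEdit rule
```

Exact matching misses paraphrased duplication, which is why the skill's script works on semantic similarity instead.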
Step 5: Detect Contradictions
Run semantic contradiction analysis:
python .claude/skills/editing-claude/scripts/detect_contradictions.py ./CLAUDE.md
Contradiction patterns:
- Direct opposition: “ALWAYS use X” + “NEVER use X”
- Priority conflicts: “X is critical” + “X is optional”
- Conflicting delegation: Multiple agents for same task
- Tool preference conflicts: Skill vs direct tool (check if conditional)
Validation: Check if apparent contradictions are actually conditional fallbacks (not true contradictions).
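The direct-opposition pattern is the easiest to detect mechanically. A sketch (detect_contradictions.py adds the semantic and conditional checks; the regex here is a deliberately naive illustration):

```python
import re

def direct_oppositions(text: str) -> list[str]:
    """Find subjects that appear under both ALWAYS and NEVER directives."""
    always = set(re.findall(r"ALWAYS (?:use )?([\w-]+)", text))
    never = set(re.findall(r"NEVER (?:use )?([\w-]+)", text))
    return sorted(always & never)

doc = "ALWAYS use rebase for feature branches.\nNEVER use rebase on main."
print(direct_oppositions(doc))  # → ['rebase']
```

Note that this example flags 'rebase' even though the two rules are conditional on branch type, which is exactly the false positive the validation step above is meant to catch.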
Step 6: Identify Extraction Opportunities
Analyze each section:
python .claude/skills/editing-claude/scripts/identify_extractions.py ./CLAUDE.md
Extraction criteria:
- Section >50 lines
- Detailed how-to content (should be in guide)
- Low-frequency content (edge cases)
- Catalog-style content (lists of items)
Common extraction targets:
- Skills Catalog (if >40 lines) → .claude/docs/skills-index.md
- Workflow details → .claude/docs/workflow-guides.md
- Anti-pattern details → .claude/docs/enforcement-guide.md
- Agent reference → ../orchestrate-agents/references/dispatch.md (if not already there)
Keep in CLAUDE.md:
- 5 most critical items from each catalog
- Core principles (not implementation details)
- Links to specialized docs
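A minimal pass over H2 section lengths catches the >50-line criterion; identify_extractions.py layers the content-type heuristics on top. Sketch (function name illustrative):

```python
import re

def long_sections(markdown: str, limit: int = 50) -> list[tuple[str, int]]:
    """Return (H2 title, line count) for sections longer than `limit` lines."""
    sections, title, count = [], None, 0
    for line in markdown.splitlines():
        if re.match(r"^## ", line):
            if title and count > limit:
                sections.append((title, count))
            title, count = line[3:].strip(), 0
        elif title is not None:
            count += 1
    if title and count > limit:
        sections.append((title, count))
    return sections

doc = "## Short\nx\n## Skills Catalog\n" + "entry\n" * 60
print(long_sections(doc))  # → [('Skills Catalog', 60)]
```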
Step 7: Validate Links
Extract and validate all links:
python .claude/skills/editing-claude/scripts/validate_links.py ./CLAUDE.md
Check:
- Internal links (file exists)
- Relative paths (not absolute)
- External links (HTTP 200) – optional
- Link text descriptive (not “click here”)
Fix broken links automatically (safe operation).
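The four checks above can be sketched in a few lines; validate_links.py is the authoritative version, and the demo paths here are hypothetical:

```python
import re
from pathlib import Path

LINK = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")  # [text](target)

def check_links(markdown: str, base: Path) -> list[str]:
    """Flag non-descriptive link text, absolute paths,
    and internal links whose target file does not exist."""
    problems = []
    for text, target in LINK.findall(markdown):
        if text.strip().lower() in {"here", "click here", "link"}:
            problems.append(f"non-descriptive text: {text!r}")
        if target.startswith(("http://", "https://")):
            continue  # external links: the HTTP 200 check is optional
        if target.startswith("/"):
            problems.append(f"absolute path: {target}")
        elif not (base / target).exists():
            problems.append(f"missing file: {target}")
    return problems

md = "[guide](docs/missing.md) and [click here](https://example.com)"
print(check_links(md, Path(".")))
```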
Step 8: Generate Health Report
Compile all analyses into comprehensive report:
python .claude/skills/editing-claude/scripts/generate_report.py ./CLAUDE.md
Report format:
# CLAUDE.md Health Report
**Overall Score: X/50** 🟢 EXCELLENT | 🟡 GOOD | 🟠 NEEDS WORK | 🔴 CRITICAL
## Issues Found
### 🔴 Critical (Fix Immediately)
- [Issue with line numbers and fix]
### 🟠 High Priority (Fix Soon)
- [Issue with line numbers and fix]
### 🟡 Medium Priority (Improve)
- [Issue with line numbers and suggestion]
### 🟢 Low Priority (Optional)
- [Issue with line numbers and suggestion]
## Extraction Opportunities
- [Section] (X lines) → [target-file.md]
## Automated Fixes Available
- [ ] Fix 1 (safe, low-risk)
- [ ] Fix 2 (requires review)
## Recommendations
1. [Priority action]
2. [Next action]
3. [Future improvement]
Write report to .claude/artifacts/YYYY-MM-DD/reports/claude-md-health-report.md.
- Use the reports/ subfolder for health reports
- If multiple reports exist for the same topic, group them in a topic subfolder (e.g., reports/claude-md-optimization/)
Safe Automated Fixes
The following fixes can be applied automatically (with the --apply flag):
Fix 1: Remove Cross-File Redundancy
When: Same rule in global and project CLAUDE.md
Action: Replace project duplication with reference to global
# Before (project CLAUDE.md)
### Token Optimization (MultiEdit Required)
[13 lines of MultiEdit explanation]
# After
### Token Optimization
**Global Rule:** See [~/.claude/CLAUDE.md](~/.claude/CLAUDE.md) for MultiEdit requirement
Validation: Ensure global CLAUDE.md exists and contains the rule.
Fix 2: Fix Broken Links
When: Internal link points to non-existent file
Action: Update link to correct path OR remove if file truly missing
Validation: File exists at new path AND content matches expected.
Fix 3: Fix Orphaned Sections
When: H3 section without H2 parent
Action: Promote to H2 OR nest under appropriate existing H2
Validation: Requires semantic understanding – ASK USER for confirmation.
Fix 4: Reduce Emphasis Density (>2%)
When: Too many CRITICAL/MANDATORY markers
Action: Remove redundant emphasis (e.g., heading + body both have emphasis)
Validation: Meaning preserved, emphasis still clear.
Examples
Python Examples
- analyze_structure.py – Parse headings, measure sections, detect orphans
- analyze_emphasis.py – Extract MUST/NEVER/ALWAYS, calculate density
- validate_links.py – Link extraction, existence checking
- generate_report.py – Compile analyses into comprehensive report
Complete Walkthroughs
See examples/examples.md for:
- Example 1: Quarterly Review – Full analysis workflow
- Example 2: Fixing Redundancy – Cross-file duplication
- Example 3: Extraction Strategy – Skills Catalog extraction
- Example 4: Contradiction Resolution – Detecting conflicts
- Example 5: Length Optimization – 502 → 250 lines
- Example 6: Emergency Fix – Broken links and orphans
Supporting Files
- scripts/analyze_structure.py – Parse headings, measure sections, detect orphans
- scripts/analyze_emphasis.py – Extract MUST/NEVER/ALWAYS, calculate density
- scripts/validate_links.py – Link extraction, existence checking
- scripts/generate_report.py – Compile analyses into comprehensive report
- examples/examples.md – Comprehensive examples and use cases
- templates/extraction-template.md – Template for creating specialized docs
- templates/report-template.md – Template for health report
- references/reference.md – Technical depth on algorithms and best practices
Validation
Verify skill execution:
- Health score calculated: 0-50 range
- All 8 analyses run: Structure, emphasis, redundancy, contradictions, extraction, links, freshness, report
- Report generated: Saved to .claude/artifacts/YYYY-MM-DD/reports/
- No false positives: Reinforcement not flagged as redundancy
- Safe fixes only: No destructive operations without confirmation
Expected outcomes:
- ✅ Report shows health score matching manual assessment
- ✅ Extraction suggestions align with research (e.g., 191 lines identified in Phase 2)
- ✅ No false positives on current CLAUDE.md (0 contradictions expected)
- ✅ Automated fixes apply without breaking links
Integration Points
With Quality Gates:
- Run before committing CLAUDE.md changes
- Warn if health score <40 (FAIR threshold)
With Git Workflow:
- Pre-commit hook validates CLAUDE.md
- Blocks if health score <30 (POOR threshold)
With Documentation Reviews:
- Quarterly review using this skill
- Track health score over time
With Skill Creator:
- Detects when Skills Catalog grows >40 lines
- Recommends extraction to skills-index.md
Red Flags
Common mistakes when using this skill:
- ❌ Auto-apply extraction without review → Always get user approval first
- ❌ Flag reinforcement as redundancy → Check if same info appears in different contexts
- ❌ Flag conditional fallbacks as contradictions → Validate context
- ❌ Break links during fix → Verify target file exists AND content matches
- ❌ Remove emphasis from Core Rules → Some sections SHOULD have emphasis
- ❌ Ignore project-specific patterns → Adapt recommendations to project needs
- ❌ No rollback plan → Create git commit before applying fixes
- ❌ Overwhelming reports → Prioritize (critical → warning → info)
- ❌ No A/B testing → Measure Claude adherence before/after changes
- ❌ Assume extraction preserves effectiveness → Test impact on adherence
Troubleshooting
Issue: Script not found
- Fix: Ensure scripts/ directory exists with all Python files
- Check: ls .claude/skills/editing-claude/scripts/
Issue: False positive contradictions
- Fix: Check if statements are conditional (if-then logic)
- Manual review: Validate semantic analysis
Issue: Health score seems wrong
- Fix: Check scoring algorithm in scripts/generate_report.py
- Validate: Manual calculation vs automated
Issue: Extraction suggestions break document
- Fix: Keep 80% of high-frequency content in CLAUDE.md
- Test: Measure Claude adherence before/after extraction
Issue: Links break after fixes
- Fix: Validate all links after applying fixes
- Rollback: git checkout CLAUDE.md if needed
Requirements
Python 3.8+ with standard library:
- re (regex)
- os (file operations)
- sys (arguments)
- pathlib (path handling)
Bash tools:
- grep (pattern matching)
- wc (line counting)
- test (file existence)
Claude Code tools:
- Read, Write, Grep, Glob, Bash, Edit, MultiEdit
Optional:
- git (rollback, history analysis)
- markdownlint (markdown validation)
See Also
- reference.md – Anthropic best practices, algorithms, validation checklist
- examples.md – Real-world usage scenarios
- Anthropic Context Engineering Best Practices – Official guidance
- ../manage-session-workspace/references/session-workspace-guidelines.md – Where to save reports
- research artifacts – Full research findings