Skill Factory
Autonomous skill creation – just tell me what you need, I’ll handle everything.
What This Does
You request a skill; I deliver a production-ready one with guaranteed quality (score >= 8.0/10).
No decision-making required. No tool selection. No quality checking. Just results.
Anthropic’s Official Best Practices
For comprehensive guidance on creating effective skills, see:
- references/overview.md – Complete overview of Agent Skills architecture, progressive disclosure, and how Skills work across different platforms (API, Claude Code, Agent SDK, claude.ai)
- references/quickstart.md – Quick tutorial on using pre-built Agent Skills in the Claude API with practical code examples
- references/best-practices.md – Detailed authoring best practices including core principles, skill structure, progressive disclosure patterns, workflows, evaluation strategies, and common patterns
- references/anthropic-best-practices.md – Quality scoring system (10/10 criteria) used by skill-factory
These references provide Anthropic’s official guidance and are consulted during the quality assurance phase.
Usage
Simply describe the skill you need:
"Create a skill for Anchor development with latest docs and best practices"
"Create a React skill from react.dev with comprehensive examples"
"Create a skill for Solana transaction debugging workflows"
"Create a skill for writing technical documentation following company standards"
I will automatically:
- Analyze your request
- Select the optimal creation method
- Create the skill
- Run quality assurance loops (until score >= 8.0)
- Test with automated scenarios
- Deliver a ready-to-use skill with stats
What You Get
anchor-development skill ready!
Quality Score: 8.9/10 (Excellent)
Lines: 412 (using progressive disclosure)
Coverage: 247 documentation pages
Examples: 68 code samples
Test Pass Rate: 100% (15/15 scenarios)
Location: ~/.claude/skills/anchor-development/
Zip: ~/Downloads/anchor-development.zip
Try it: "How do I create an Anchor program?"
How It Works (Behind the Scenes)
Phase 1: Request Analysis (Automatic)
I analyze your request to determine:
Source Detection:
- Documentation URL/mention? → Automated scraping path
- "Latest docs", "current version"? → Automated path
- GitHub repository mention? → Automated path
- PDF/manual path? → Automated path
- Custom workflow/process description? → Manual TDD path
- Both documentation AND custom needs? → Hybrid path
Quality Requirements Extraction:
- "Best practices" → Enforce quality gates
- "Latest version" → Scrape current docs
- "Examples" → Ensure code samples are included
- "Comprehensive" → Verify coverage completeness
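The two detection tables above can be sketched as simple keyword matching. The signal patterns and requirement names below are illustrative assumptions, not the skill-factory's actual internals:

```python
import re

# Hypothetical keyword tables mirroring the routing described above.
PATH_SIGNALS = {
    "automated": [r"https?://", r"latest docs", r"current version", r"github", r"\.pdf"],
    "manual_tdd": [r"workflow", r"process"],
}
REQUIREMENT_SIGNALS = {
    "best practices": "enforce_quality_gates",
    "latest": "scrape_current_docs",
    "examples": "require_code_samples",
    "comprehensive": "verify_coverage",
}

def analyze_request(text: str) -> dict:
    """Classify a request into a creation path and extract quality requirements."""
    lowered = text.lower()
    automated = any(re.search(p, lowered) for p in PATH_SIGNALS["automated"])
    manual = any(re.search(p, lowered) for p in PATH_SIGNALS["manual_tdd"])
    if automated and manual:
        path = "hybrid"
    elif automated:
        path = "automated"
    else:
        path = "manual_tdd"  # default when no scrapable source is detected
    requirements = [req for kw, req in REQUIREMENT_SIGNALS.items() if kw in lowered]
    return {"path": path, "requirements": requirements}
```

A request mentioning both a docs URL and a custom workflow lands on the hybrid path, matching the table above.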
Phase 2: Execution (Automatic)
Path A: Documentation-Based (Skill_Seekers)
Detected: Documentation source available
Method: Automated scraping with quality enhancement
Steps I take:
1. Check Skill_Seekers installation (install if needed)
2. Configure scraping parameters automatically
3. Run scraping with optimal settings
4. Monitor progress
5. Initial quality check
6. If score < 8.0: Run enhancement loop
7. Re-score until >= 8.0
8. Test with auto-generated scenarios
9. Package and deliver
Path B: Custom Workflows (Manual TDD)
Detected: Custom workflow/process
Method: Test-Driven Documentation (obra methodology)
Steps I take:
1. Create pressure test scenarios
2. Run baseline (without skill)
3. Document agent behavior
4. Write minimal skill addressing baseline
5. Test with skill present
6. Identify rationalizations/gaps
7. Close loopholes
8. Iterate until bulletproof
9. Package and deliver
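The nine steps above form a RED → GREEN → REFACTOR cycle. A schematic of that loop, with placeholder callables standing in for the real test-runner and skill-writer (these names are illustrative, not skill-factory APIs):

```python
def tdd_skill_loop(scenarios, write_skill, run_agent, max_rounds=10):
    """Iterate until every pressure-test scenario passes with the skill present.

    write_skill(gaps) drafts or revises the skill to close the given gaps;
    run_agent(scenario, skill) returns True if the agent behaves correctly.
    """
    skill = write_skill(gaps=[])            # minimal first draft (GREEN)
    for _ in range(max_rounds):
        failures = [s for s in scenarios if not run_agent(s, skill)]
        if not failures:                    # bulletproof: no loopholes left
            return skill
        skill = write_skill(gaps=failures)  # close identified loopholes (REFACTOR)
    raise RuntimeError("scenarios still failing after max_rounds")
```

The baseline run (step 2) corresponds to calling run_agent before any skill exists, which is why the first draft targets only the gaps the baseline exposed.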
Path C: Hybrid
Detected: Documentation + custom requirements
Method: Scrape then enhance
Steps I take:
1. Scrape documentation (Path A)
2. Identify gaps vs requirements
3. Fill gaps with TDD approach (Path B)
4. Unify and test as whole
5. Quality loop until >= 8.0
6. Package and deliver
Phase 3: Quality Assurance Loop (Automatic)
I enforce Anthropic best practices:
while quality_score < 8.0:
    issues = analyze_against_anthropic_guidelines(skill)
    if "vague_description" in issues:
        improve_description_specificity()
    if "missing_examples" in issues:
        extract_or_generate_examples()
    if "too_long" in issues:
        apply_progressive_disclosure()
    if "poor_structure" in issues:
        reorganize_content()
    quality_score = rescore()
Quality Criteria (Anthropic Best Practices):
- Description: Specific, clear, includes when_to_use
- Conciseness: <500 lines OR progressive disclosure
- Examples: Concrete code samples, not abstract
- Structure: Well-organized, clear sections
- Name: Follows conventions (lowercase, hyphens, descriptive)
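A toy scorer for the five criteria above might look like this. The individual checks and equal weighting are assumptions for illustration; the actual rubric lives in references/anthropic-best-practices.md:

```python
def score_skill(skill: dict) -> float:
    """Score a skill's metadata on a 0-10 scale against five criteria."""
    checks = [
        # Description: specific and says when to use it
        len(skill.get("description", "")) > 40 and "when" in skill["description"].lower(),
        # Conciseness: short, or split via progressive disclosure
        skill.get("line_count", 0) < 500 or skill.get("progressive_disclosure", False),
        # Examples: a handful of concrete code samples
        skill.get("example_count", 0) >= 3,
        # Structure: has named sections
        bool(skill.get("sections")),
        # Name: lowercase, hyphenated, no spaces
        skill.get("name", "").islower() and " " not in skill.get("name", ""),
    ]
    return round(10 * sum(checks) / len(checks), 1)
```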
Important: The quality assurance process consults references/best-practices.md for Anthropic’s complete authoring guidelines and references/anthropic-best-practices.md for the 10-point scoring criteria.
Phase 4: Testing (Automatic)
I generate and run test scenarios:
# Auto-generate test cases from skill content
test_cases = extract_key_topics(skill)
for topic in test_cases:
    query = f"How do I {topic}?"

    # Test WITHOUT skill (baseline)
    baseline = run_query_without_skill(query)

    # Test WITH skill
    with_skill = run_query_with_skill(query)

    # Verify improvement
    if not is_better(with_skill, baseline):
        identify_gap()
        enhance_skill()
        retest()
Phase 5: Delivery (Automatic)
Package skill:
- Create skill directory structure
- Generate SKILL.md with frontmatter
- Create reference files (if using progressive disclosure)
- Add examples directory
- Create .zip for easy upload
- Install to ~/.claude/skills/ (if desired)
- Generate summary statistics
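The packaging steps above can be sketched in a few lines. The directory layout and frontmatter fields (name, description) are assumptions based on the stats shown earlier, not a definitive spec:

```python
import zipfile
from pathlib import Path

def package_skill(name: str, body: str, description: str, out_dir: Path) -> Path:
    """Create the skill directory, write SKILL.md with frontmatter, and zip it."""
    skill_dir = out_dir / name
    (skill_dir / "examples").mkdir(parents=True, exist_ok=True)
    frontmatter = f"---\nname: {name}\ndescription: {description}\n---\n\n"
    (skill_dir / "SKILL.md").write_text(frontmatter + body)
    zip_path = out_dir / f"{name}.zip"
    with zipfile.ZipFile(zip_path, "w") as zf:
        for f in skill_dir.rglob("*"):
            if f.is_file():
                zf.write(f, f.relative_to(out_dir))  # keep paths relative in the zip
    return zip_path
```

Installing to ~/.claude/skills/ then amounts to passing that directory as out_dir (or copying the skill directory there).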
Progress Reporting
You’ll see real-time progress:
Analyzing request...
  Detected: Documentation-based (docs.rs/anchor-lang)
  Requirements: Latest version, best practices, examples
Creating skill...
  Scraping docs.rs/anchor-lang... (2 min)
  Extracting 247 pages...
  Organizing content...
Quality check: 7.4/10
Issues found:
  - Description too generic (fixing...)
  - Missing examples in 4 sections (adding...)
  - Some outdated patterns (updating...)
Enhancing skill...
  Description improved
  Examples added
  Patterns updated
Quality check: 8.9/10
Testing...
  15/15 scenarios passing
anchor-development skill ready!
Dependencies
Required:
- Python 3.10+ (for quality scripts)
- bash (for automation scripts)
Optional (auto-installed when needed):
- Skill_Seekers (for documentation scraping)
  - Prompts for installation on the first documentation-based request
  - One-command setup: scripts/install-skill-seekers.sh
Configuration
Environment variables (optional):
# Skill_Seekers installation path
export SKILL_SEEKERS_PATH="$HOME/Skill_Seekers"
# Output directory for created skills
export SKILL_OUTPUT_DIR="$HOME/.claude/skills"
# Minimum quality score (default: 8.0)
export MIN_QUALITY_SCORE="8.0"
# Auto-install dependencies (default: prompt)
export AUTO_INSTALL_DEPS="true"
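A script might read the optional configuration above like this; the defaults mirror the documented ones, and the dict keys are illustrative:

```python
import os
from pathlib import Path

def load_config() -> dict:
    """Read skill-factory settings from the environment, with documented defaults."""
    return {
        "skill_seekers_path": Path(os.environ.get("SKILL_SEEKERS_PATH",
                                                  Path.home() / "Skill_Seekers")),
        "output_dir": Path(os.environ.get("SKILL_OUTPUT_DIR",
                                          Path.home() / ".claude" / "skills")),
        "min_quality_score": float(os.environ.get("MIN_QUALITY_SCORE", "8.0")),
        "auto_install_deps": os.environ.get("AUTO_INSTALL_DEPS", "prompt"),
    }
```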
Examples
Example 1: Documentation Skill
User: "Create a React skill from react.dev"
Agent:
Analyzing... Detected: Documentation-based
Scraping react.dev... done (3 min)
Quality: 7.8 → 8.6
Testing: 12/12 passing
react-development skill ready (8.6/10)
Example 2: Custom Workflow Skill
User: "Create a skill for debugging Solana transaction failures"
Agent:
Analyzing... Detected: Custom workflow (no docs to scrape)
Using TDD methodology...
RED: Testing baseline...
GREEN: Writing skill...
REFACTOR: Closing loopholes...
Quality: 8.3
solana-transaction-debugging skill ready (8.3/10)
Example 3: Hybrid Skill
User: "Create an Anchor skill with docs plus custom debugging workflows"
Agent:
Analyzing... Detected: Hybrid (docs + custom)
Scraping docs.rs/anchor-lang... done (2 min)
Adding custom debugging workflows...
Integrating and testing...
Quality: 8.9
anchor-development skill ready (8.9/10)
Quality Guarantee
Every skill delivered by skill-factory:
- Scores >= 8.0/10 on Anthropic best practices
- Has concrete examples (not abstract)
- Follows structure conventions
- Tested with auto-generated scenarios
- Ready to use immediately
If quality is below 8.0, I keep working until it reaches 8.0 or higher.
Troubleshooting
Skill_Seekers installation fails:
# Manual installation
git clone https://github.com/yusufkaraaslan/Skill_Seekers ~/Skill_Seekers
cd ~/Skill_Seekers
pip install -r requirements.txt
# Or use installation script
~/Projects/claude-skills/skill-factory/skill/scripts/install-skill-seekers.sh
Quality score stuck below 8.0:
- I’ll report what’s blocking and suggest manual review
- Check references/anthropic-best-practices.md for criteria
- Run manual enhancement if needed
Want to understand methodology:
- See references/obra-tdd-methodology.md (testing approach)
- See references/anthropic-best-practices.md (quality criteria)
- See references/skill-seekers-integration.md (automation details)
Reference Files
Anthropic Official Documentation:
- references/overview.md – Agent Skills architecture, progressive disclosure, and platform details
- references/quickstart.md – Quick tutorial on using pre-built Agent Skills in the Claude API
- references/best-practices.md – Comprehensive authoring guidelines from Anthropic
- references/anthropic-best-practices.md – Quality scoring system (10/10 criteria)
Skill Factory Implementation Details:
- references/obra-tdd-methodology.md – Full TDD testing approach
- references/skill-seekers-integration.md – Automation documentation
- references/request-analysis.md – How requests are parsed
- references/quality-loops.md – Enhancement algorithms
Scripts Reference
Available helper scripts in scripts/ directory:
- check-skill-seekers.sh – Check if Skill_Seekers is installed
- install-skill-seekers.sh – One-command Skill_Seekers setup
- quality-check.py – Score any skill against Anthropic best practices
Usage examples:
# Check Skill_Seekers installation
./scripts/check-skill-seekers.sh
# Install Skill_Seekers
./scripts/install-skill-seekers.sh
# Quality check a skill
python3 ./scripts/quality-check.py /path/to/skill/SKILL.md
Philosophy
You don’t want to:
- Navigate decision trees
- Choose between tools
- Check quality manually
- Test with subagents yourself
- Wonder if output is good
You want to:
- Describe what you need
- Get high-quality result
- Start using immediately
That’s what skill-factory delivers.
Credits
Built on top of excellent tools:
- Skill_Seekers – Documentation scraping
- obra/superpowers-skills – TDD methodology
- Anthropic skill-creator – Best practices
Skill-factory orchestrates these tools with automatic quality assurance and testing.
Just tell me what skill you need. I’ll handle the rest.