website-to-skill-folder
npx skills add https://github.com/chirag2653/website-to-skill-folder --skill website-to-skill-folder
Website-to-Skill Pipeline
Runs scripts/pipeline.py to crawl a website and produce an installable skill folder.
The script handles everything; there is no need to read it.
1. Locate the Skill
SKILL_DIR="$HOME/.agents/skills/website-to-skill-folder"
[ -d "$SKILL_DIR" ] || SKILL_DIR=$(find "$HOME/.agents/skills" "$HOME/.claude/skills" \
-maxdepth 2 -type d -name "website-to-skill-folder" 2>/dev/null | head -1)
echo "Skill dir: $SKILL_DIR"
2. Pre-flight Checks
Run these in order before the pipeline. Fix anything missing before proceeding; do not skip any check.
Python (required: 3.8+)
python --version 2>&1 || python3 --version 2>&1
- If `python` is Python 2 or not found, use `python3` for all commands below.
- If neither is found: tell the user to install Python 3.8+ and stop.
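The version check and the fallback can be folded into one snippet that picks a working interpreter and verifies it is 3.8+ (a sketch; the `$PY` variable is a convenience introduced here, not part of the pipeline):

```shell
# Prefer python3; fall back to python if python3 is absent (sketch)
PY=python3
command -v python3 >/dev/null 2>&1 || PY=python

# Fail loudly if the chosen interpreter is older than 3.8
"$PY" -c 'import sys; assert sys.version_info >= (3, 8), "need Python 3.8+"'
echo "Using $PY"
```

`$PY` can then stand in for `python` in the remaining commands.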
Python packages (one-time)
python -c "import requests, pydantic, tenacity; print('OK')" 2>&1
If ModuleNotFoundError: install and retry before proceeding.
pip install requests pydantic tenacity
Firecrawl API key
python -c "import os; print('set' if os.environ.get('FIRECRAWL_API_KEY') else 'missing')"
If missing: stop and ask the user for their Firecrawl API key.
Tell them: “A Firecrawl API key is needed to crawl websites. Get a free key at https://firecrawl.dev; no credit card required.”
Once they provide it, use it inline in the run command below (Step 3).
3. Run the Pipeline
Inline both the skill path and the API key so the command is self-contained:
FIRECRAWL_API_KEY="fc-their_key_here" python "$HOME/.agents/skills/website-to-skill-folder/scripts/pipeline.py" https://example.com
If the skill was installed to a different location (check Step 1 output), substitute that path:
FIRECRAWL_API_KEY="fc-their_key_here" python "$SKILL_DIR/scripts/pipeline.py" https://example.com
Options:
| Flag | Purpose |
|---|---|
| `--description "..."` | One-line site description for the generated SKILL.md |
| `--max-pages 100` | Limit pages scraped; directly controls Firecrawl credit cost |
| `--skip-scrape` | Reassemble from cache with zero API calls |
| `--force-refresh` | Ignore cache and re-scrape all pages |
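A first run capped at 50 pages might look like this (a sketch; the key and domain are placeholders, and `$SKILL_DIR` comes from Step 1):

```shell
FIRECRAWL_API_KEY="fc-their_key_here" python "$SKILL_DIR/scripts/pipeline.py" \
  https://example.com \
  --description "Example Corp documentation site" \
  --max-pages 50
```

A later `--skip-scrape` run can then rebuild the skill folder from that cached crawl without spending further credits.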
4. Install the Output Skill
When the pipeline finishes, it prints the exact install command with the real absolute path to the skill folder it just built. It looks like this (path will differ on your machine):
Install / update skill in agents:
Claude Code:
npx skills add "/Users/yourname/path/to/output/example-com-website-search-skill" -g -y -a claude-code
All agents:
npx skills add "/Users/yourname/path/to/output/example-com-website-search-skill" -g -y
Before running: Show the user the command printed by the pipeline and ask:
“The skill folder is ready. Shall I install it now so your agents can search [domain] offline?”
Only run the install command after the user confirms. Use the exact path from the pipeline output, not the example path shown above.
If npx fails
npx skills requires Node.js. Check:
node --version 2>&1
If not found: tell the user “Node.js is required to install the skill. Download it from https://nodejs.org (LTS version); it includes npx.” Once they install it, re-run the install command.
After installing, the user’s agents can answer questions about the website offline. Re-run the pipeline and re-run the install command any time to pick up new pages.
Cost
1 Firecrawl credit (map) + ~5 credits per page scraped. Example: a 100-page site costs about 501 credits. Incremental re-runs only pay for new or changed pages.
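The estimate follows from a simple formula, 1 + 5 × pages, which this sketch computes with shell arithmetic:

```shell
# Estimate Firecrawl credits: 1 map credit + ~5 per scraped page (sketch)
pages=100
credits=$((1 + 5 * pages))
echo "$credits credits"   # 501 credits
```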