chunky
npx skills add https://github.com/ethandaya/chunky --skill chunky
Chunky
Spec-first, chunk-based feature shipping for coding agents.
When this skill is activated, follow the three phases below in order. Each phase produces concrete artifacts in the target repo. Do not skip phases.
Assumptions
- CWD is always the target repo root.
- Scripts and assets live in the skill directory: the directory containing this `SKILL.md` file. Derive the skill directory from this file's path and use it when running scripts (e.g., if this file is at `/home/user/.agents/skills/chunky/SKILL.md`, the skill directory is `/home/user/.agents/skills/chunky`).
- `jq` is required for chunk/task context resolution and wave planning. Only `resolve-context.sh --mode plan` degrades gracefully without it.
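The derivation is mechanical. A minimal sketch (the install path below is a hypothetical example, not a required location):

```shell
# Given the absolute path of this SKILL.md (hypothetical example path),
# the skill directory is simply its parent directory.
SKILL_MD="/home/user/.agents/skills/chunky/SKILL.md"
SKILL_DIR="$(dirname "$SKILL_MD")"
echo "$SKILL_DIR"   # -> /home/user/.agents/skills/chunky

# Warn early if jq is missing, since most scripts need it.
command -v jq >/dev/null 2>&1 \
  || echo "warning: jq not found; only 'resolve-context.sh --mode plan' degrades gracefully" >&2
```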
Phase 1: Design
Goal: Produce a single spec document the rest of the workflow depends on.
- Create `docs/SPEC.md` in the target repo.
- The spec must include these sections:
  - Problem / Goal: what we're building and why.
  - Non-goals: what is explicitly out of scope.
  - Acceptance criteria: global conditions for the feature to be considered done.
  - Constraints: technical, security, or compatibility requirements.
  - Verification approach: how we prove it works (test commands, manual steps).
- Stop. Before moving to Phase 2, confirm: every acceptance criterion is testable, constraints don’t contradict goals, and the verification approach can actually prove the criteria are met.
Phase 2: Plan
Goal: Break the spec into independently executable chunks with routing metadata.
Step 1: Create llms-map.json
Use assets/llms-map.template.json as a starting point. Populate:
- `schema_version`: use `"1.0.0"`.
- `updated`: today's date in `YYYY-MM-DD` format.
- `baseline_read_order`: files an agent should read when planning (at minimum `docs/SPEC.md`).
- `sub_agent_context.always_read`: files loaded for every chunk (at minimum `docs/SPEC.md`).
- `context_budgets`: max files and bytes per mode. Keep chunk budgets small.
- `verification.per_chunk`: commands every chunk must pass after implementation.
- `chunks`: one entry per chunk. Each chunk requires:
  - `title`: short name.
  - `target`: the directory or package this chunk modifies.
  - `depends_on`: array of chunk IDs that must be completed first (empty array if none).
  - `docs`: files the agent needs to read for this chunk.
  - `capsule`: path to the chunk capsule file (e.g. `docs/chunks/P1-C1.md`).
  - `complexity`: `"S"`, `"M"`, or `"L"`.
Optional fields: `knowledge_packs`, `preflight`, `task_router`, `orchestrator`, `phases`, `freshness`. See `assets/llms-map.schema.json` for the full schema.
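Putting the required fields together, a minimal map might look like the sketch below. The exact nesting (whether `chunks` is keyed by chunk ID, the budget field names, the sample date and commands) is illustrative only; `assets/llms-map.template.json` and `assets/llms-map.schema.json` are authoritative.

```json
{
  "schema_version": "1.0.0",
  "updated": "2025-01-15",
  "baseline_read_order": ["docs/SPEC.md"],
  "sub_agent_context": { "always_read": ["docs/SPEC.md"] },
  "context_budgets": { "chunk": { "max_files": 6, "max_bytes": 40000 } },
  "verification": { "per_chunk": ["npm test"] },
  "chunks": {
    "P1-C1": {
      "title": "User model",
      "target": "src/models",
      "depends_on": [],
      "docs": ["docs/SPEC.md", "docs/chunks/P1-C1.md"],
      "capsule": "docs/chunks/P1-C1.md",
      "complexity": "S"
    }
  }
}
```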
Step 2: Write chunk capsules
For each chunk in llms-map.json, create docs/chunks/<CHUNK_ID>.md using the template in references/schema-and-templates.md. Each capsule must include what to build, acceptance criteria, file ownership, and verification commands.
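The real template lives in `references/schema-and-templates.md` and may use different headings; the sketch below only illustrates the four required elements for a hypothetical chunk `P1-C1`.

```markdown
# P1-C1: User model (illustrative example)

## What to build
A `User` record type and its persistence layer in `src/models`.

## Acceptance criteria
- Users can be created and fetched by ID.

## File ownership
- src/models/user.* (no other chunk edits these files)

## Verification
- npm test (placeholder command)
```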
Step 3: Create llms.txt
Create `llms.txt` in the target repo root. Use `assets/llms.txt.template` as a starting point. It must include:
- What this repo/feature is.
- Start here: `docs/SPEC.md`.
- Chunk navigation: `llms-map.json` and `docs/chunks/`.
- Verification commands that must pass.
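A hedged sketch of the resulting file (`assets/llms.txt.template` is authoritative; the repo name and commands are placeholders):

```text
# myrepo (hypothetical)

Payments feature, built spec-first with Chunky.

Start here: docs/SPEC.md
Chunk navigation: llms-map.json and docs/chunks/
Verification: npm test && npm run lint   (placeholder commands)
```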
Step 4: Register knowledge packs
If any chunk depends on external documentation (library docs, API references, llms.txt files), add a knowledge_packs map to llms-map.json and reference pack IDs from each chunk’s knowledge_packs array. See references/schema-and-templates.md for the format.
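As a sketch under stated assumptions, a pack registration might look like the fragment below. The `url` / `llms_txt_url` field names are the ones the execution phase reads; the pack ID, example.com URLs, and chunk ID are hypothetical, and `references/schema-and-templates.md` defines the actual format.

```json
{
  "knowledge_packs": {
    "payments-lib": {
      "url": "https://example.com/payments-lib/docs",
      "llms_txt_url": "https://example.com/payments-lib/llms.txt"
    }
  },
  "chunks": {
    "P1-C2": {
      "knowledge_packs": ["payments-lib"]
    }
  }
}
```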
Step 5: Pre-flight Q&A
The preflight has two stages. Stage A is read-only (no file edits). Stage B writes the results.
Stage A: Draft questions (read-only)
- Read the spec, all chunk capsules, and any knowledge pack URLs registered in `llms-map.json`.
- Identify every question the agent cannot answer from available context. Only ask questions that would change code, schema, verification, rollout, or security decisions. For anything else, state an assumption.
- Present the questions in the conversation (not in a file yet) using this format:
## Pre-flight Questions
### Blocking (must answer before execution)
1. <question> (Assumption if unanswered: <default>)
2. <question>
### Non-blocking (will assume default unless overridden)
3. <question> (Default assumption: <assumption>)
- Stop. Do not proceed. Ask the human to reply with numbered answers. Do not narrate next steps or continue into Phase 3.
Claude Code hint: If available, use Plan mode or a Plan subagent for Stage A to enforce read-only research and prevent accidental edits.
Codex hint: Use the `update_plan` tool to track preflight status (Drafting → Awaiting answers → Recording → Done).
Stage B: Record answers
After the human answers (or marks questions N/A):
- Create `docs/PREFLIGHT_QA.md` using the template in `references/schema-and-templates.md`.
- Transcribe all questions, answers, decisions, and discovered constraints into the file.
- Set `preflight.doc` in `llms-map.json` to `"docs/PREFLIGHT_QA.md"`.
- If any answer reveals new constraints, update `docs/SPEC.md` and affected chunk capsules.
- If any answer reveals missing external docs, register them in `knowledge_packs` and add references to the relevant chunks.
Confirm before proceeding:
- All blocking questions answered or marked N/A with stated assumption.
- `docs/PREFLIGHT_QA.md` written and complete.
- `preflight.doc` set in `llms-map.json`.
- Spec and capsules updated if answers changed constraints.
Step 6: Plan execution waves
Chunks that share no dependencies can run in parallel. Derive execution waves automatically:
$SKILL_DIR/scripts/plan-waves.sh --map llms-map.json --waves
This computes waves from the `depends_on` graph: wave 1 is all chunks with no dependencies, wave 2 is chunks whose deps are all in wave 1, and so on. Review the output. If chunks in the same wave touch overlapping files, either add a `depends_on` edge or split the chunk.
You may optionally materialize waves in `llms-map.json` under `orchestrator.waves` for readability, but this is not required; the script derives waves from the dependency graph at execution time.
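To make the derivation concrete, the `jq` sketch below shows how wave 1 falls out of the graph. This is illustration only (`plan-waves.sh` is the supported tool), and it assumes `chunks` is an object keyed by chunk ID; check `assets/llms-map.schema.json` for the real shape.

```shell
# Build a tiny sample map with one dependency edge.
cat > /tmp/sample-map.json <<'EOF'
{ "chunks": {
    "P1-C1": { "depends_on": [] },
    "P1-C2": { "depends_on": ["P1-C1"] } } }
EOF

# Wave 1 = every chunk whose depends_on array is empty.
jq -r '.chunks | to_entries[] | select(.value.depends_on == []) | .key' \
  /tmp/sample-map.json
# -> P1-C1
```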
Human gates (optional)
By default, execution proceeds autonomously through all waves without human approval. Only add orchestrator.human_gates when a wave boundary involves:
- Security-sensitive changes (auth, crypto, secrets)
- Billing or entitlement logic
- Destructive migrations (data loss risk)
- Production configuration or infrastructure
Gates pause between waves. Example: "after_wave_2": ["approve before proceeding"] means pause after wave 2 completes and await human approval before starting the next wave.
Step 7: Validate
Run these from the target repo root (replace $SKILL_DIR with the absolute path to this skill’s directory):
$SKILL_DIR/scripts/check-agent-context.sh .
$SKILL_DIR/scripts/validate-llms-map-schema.sh --map llms-map.json
If either fails, fix the artifacts before proceeding.
Phase 3: Execute
Goal: Implement all chunks with maximum parallelism and minimal human intervention.
Execution proceeds wave by wave. All chunks in a wave run in parallel unless they share file ownership; in that case, add a `depends_on` edge or move one to a later wave.
Execution loop
Repeat until all chunks are done:
1. Get the next runnable chunks
$SKILL_DIR/scripts/plan-waves.sh --map llms-map.json --next
# or, if tracking completion:
$SKILL_DIR/scripts/plan-waves.sh --map llms-map.json --next --done docs/CHUNKS_DONE.txt
This outputs the chunk IDs that can run now (all dependencies satisfied).
2. Execute all runnable chunks in parallel
For each chunk in the runnable set, do the following simultaneously:
a. Resolve context:
$SKILL_DIR/scripts/resolve-context.sh --mode chunk --chunk <CHUNK_ID> --map llms-map.json
b. Fetch external docs: If the resolver emits knowledge_packs on stderr, fetch those URLs (prefer llms_full_url, fall back to llms_txt_url or url). Use these as authoritative references. Do not guess at APIs or conventions covered by a knowledge pack.
c. Implement: Read only the resolved context pack and fetched knowledge packs. Do not browse the repo. If you discover missing context, update the chunk’s docs in llms-map.json and its capsule, then re-resolve.
d. Verify: Run the chunk’s verification commands (from the capsule and verification.per_chunk in llms-map.json). Confirm all acceptance criteria are met. Fix and re-verify until all checks pass.
3. Mark completion
The coordinating agent (main thread / lead) appends each completed chunk's ID to `docs/CHUNKS_DONE.txt` (one ID per line) after it passes verification. Do not let parallel workers write to this file directly; the coordinator owns it.
4. Advance to the next wave
Re-run plan-waves.sh --next --done docs/CHUNKS_DONE.txt to get the next runnable set. Repeat until no chunks remain.
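The whole loop can be sketched as a small coordinator function. This is a hedged sketch, not the skill's implementation: `run_chunk` is a hypothetical placeholder for one chunk's resolve, implement, and verify cycle, and it assumes `plan-waves.sh --next` prints one runnable chunk ID per line.

```shell
# Coordinator-side execution loop (sequential fallback; parallel
# environments would spawn run_chunk concurrently per wave).
coordinate_waves() {
  : > docs/CHUNKS_DONE.txt                       # coordinator owns this file
  while true; do
    runnable=$("$SKILL_DIR/scripts/plan-waves.sh" --map llms-map.json \
                 --next --done docs/CHUNKS_DONE.txt)
    [ -z "$runnable" ] && break                  # no chunks left: done
    for chunk in $runnable; do
      run_chunk "$chunk"                         # hypothetical: resolve, implement, verify
      echo "$chunk" >> docs/CHUNKS_DONE.txt      # mark complete after verification
    done
  done
}
```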
Parallelism by environment
The execution loop above is environment-agnostic. Use your agent’s native parallelism primitives to run chunks concurrently:
Amp / Claude Code hint: Use the Task tool to spawn one subagent per chunk in the runnable set. Each subagent gets its own context window, resolves context, implements, and verifies independently. The main thread coordinates: computes the runnable set, spawns tasks, collects results, updates
docs/CHUNKS_DONE.txt, and advances to the next wave.
Claude Code agent team hint: For large wave sizes (4+ chunks), consider agent teams instead of subagents. The lead assigns one chunk per teammate. Teammates work in separate sessions with inter-agent messaging, which is useful when chunks in the same wave need light coordination. Pre-approve common file operations in permission settings to reduce interruptions.
Codex hint: Use background tasks to run chunks in parallel. Each background task handles one chunk's resolve → implement → verify cycle.
Sequential fallback
If your environment does not support parallel execution, execute chunks one at a time in dependency order. The loop is the same; the runnable set is simply processed sequentially.
Execution Modes
The context resolver supports three modes:
| Mode | When | Command |
|---|---|---|
| chunk | Implement one chunk | --mode chunk --chunk <CHUNK_ID> |
| task | Route a keyword to likely chunks | --mode task --task <keyword> |
| plan | Load full planning context | --mode plan |
The wave planner supports two modes:
| Mode | When | Command |
|---|---|---|
| waves | Show all derived waves | --waves |
| next | Show next runnable chunks | --next [--done <file>] |
Skill Contents
- `SKILL.md`: this file
- `scripts/resolve-context.sh`: resolve a minimal context pack from `llms-map.json`
- `scripts/plan-waves.sh`: derive execution waves and next runnable chunks from the dependency graph
- `scripts/check-agent-context.sh`: validate artifact coherence
- `scripts/validate-llms-map-schema.sh`: validate `llms-map.json` against the schema
- `assets/llms-map.schema.json`: canonical JSON schema
- `assets/llms-map.template.json`: starter template for `llms-map.json`
- `assets/llms.txt.template`: starter template for `llms.txt`
- `references/schema-and-templates.md`: quick reference for schemas and templates