plan
npx skills add https://github.com/boshu2/agentops --skill plan
Plan Skill
Quick Ref: Decompose goal into trackable issues with waves. Output: .agents/plans/*.md + bd issues.
YOU MUST EXECUTE THIS WORKFLOW. Do not just describe it.
CLI dependencies: bd (issue creation). If bd is unavailable, write the plan to .agents/plans/ as markdown with issue descriptions, and use TaskList for tracking instead. The plan document is always created regardless of bd availability.
Flags
| Flag | Default | Description |
|---|---|---|
| --auto | off | Skip human approval gate. Used by /rpi --auto for fully autonomous lifecycle. |
Execution Steps
Given /plan <goal> [--auto]:
Step 1: Setup
mkdir -p .agents/plans
Step 2: Check for Prior Research
Look for existing research on this topic:
ls -la .agents/research/ 2>/dev/null | head -10
Use Grep to search .agents/ for related content. If research exists, read it with the Read tool to understand the context before planning.
Search knowledge flywheel for prior planning patterns:
if command -v ao &>/dev/null; then
ao search "<topic> plan decomposition patterns" 2>/dev/null | head -10
fi
If ao returns relevant learnings or patterns, incorporate them into the plan. Skip silently if ao is unavailable or returns no results.
Step 3: Explore the Codebase (if needed)
USE THE TASK TOOL to dispatch an Explore agent:
Tool: Task
Parameters:
subagent_type: "Explore"
description: "Understand codebase for: <goal>"
prompt: |
Explore the codebase to understand what's needed for: <goal>
1. Find relevant files and modules
2. Understand current architecture
3. Identify what needs to change
Return: key files, current state, suggested approach
Pre-Planning Audit (Cleanup/Refactoring Epics)
If goal includes “cleanup”, “refactor”, “remove dead”, or “update stale”: Run a quantitative audit BEFORE decomposing into issues:
- Dead code: count packages, LOC, import references
- Stale docs: count files, old vs new references
- Orphaned items: count issues, follow-ups without beads
Output concrete numbers. These become the plan’s scope.
| Bad | Good |
|---|---|
| “clean up dead code” | “Delete 3,003 LOC across 3 packages” |
| “update stale docs” | “Rewrite 4 specs (cli, observability, quest-events, index)” |
| “remove old stuff” | “Remove 5 v2 agent references from 3 role prompts” |
Ground truth with numbers prevents scope creep and makes completion verifiable. In ol-571, the audit found 5,752 LOC to remove; without it, the plan would have been vague.
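Parts of the audit can be mechanized. A minimal sketch in Python, assuming a Go codebase (the package path and file suffix below are illustrative, not prescribed by this skill):

```python
# Hypothetical audit helper: total LOC under a candidate dead package.
# The ".go" suffix is an assumption for a Go repo; adjust per codebase.
from pathlib import Path

def count_loc(pkg_dir: str, suffix: str = ".go") -> int:
    """Sum line counts across all source files under pkg_dir."""
    return sum(
        len(p.read_text().splitlines())
        for p in Path(pkg_dir).rglob(f"*{suffix}")
    )
```

Run this per candidate package and record the numbers directly in the plan's scope.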
Step 4: Decompose into Issues
Analyze the goal and break it into discrete, implementable issues. For each issue define:
- Title: Clear action verb (e.g., “Add authentication middleware”)
- Description: What needs to be done
- Dependencies: Which issues must complete first (if any)
- Acceptance criteria: How to verify it’s done
Design Briefs for Rewrites
For any issue that says “rewrite”, “redesign”, or “create from scratch”: Include a design brief (3+ sentences) covering:
- Purpose: what does this component do in the new architecture?
- Key artifacts: what files/interfaces define success?
- Workflows: what sequences must work?
Without a design brief, workers invent design decisions. In ol-571, a spec rewrite issue without a design brief produced output that diverged from the intended architecture.
Issue Granularity
- 1-2 independent files → 1 issue
- 3+ independent files with no code deps → split into sub-issues (one per file)
- Example: "Rewrite 4 specs" → 4 sub-issues (4.1, 4.2, 4.3, 4.4)
- Enables N parallel workers instead of 1 serial worker
- Shared files between issues → serialize or assign to same worker
Conformance Checks
For each issue’s acceptance criteria, derive at least one mechanically verifiable conformance check using validation-contract.md types. These checks bridge the gap between spec intent and implementation verification.
| Acceptance Criteria | Conformance Check |
|---|---|
| “File X exists” | files_exist: ["X"] |
| “Function Y is implemented” | content_check: {file: "src/foo.go", pattern: "func Y"} |
| “Tests pass” | tests: "go test ./..." |
| “Endpoint returns 200” | command: "curl -s -o /dev/null -w '%{http_code}' localhost:8080/api | grep 200" |
| “Config has setting Z” | content_check: {file: "config.yaml", pattern: "setting_z:"} |
Rules:
- Every issue MUST have at least one conformance check
- Checks MUST use validation-contract.md types: files_exist, content_check, command, tests, lint
- Prefer content_check and files_exist (fast, deterministic) over command (slower, environment-dependent)
- If acceptance criteria cannot be mechanically verified, flag it as underspecified
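To make the two preferred check types concrete, here is a minimal, hypothetical evaluator. Only the check shapes (files_exist, content_check) come from the table above; the runner itself is an illustrative sketch, not part of validation-contract.md:

```python
# Hypothetical evaluator for the two fast, deterministic check types.
# Check shapes mirror the conformance-check table; the runner is a sketch.
import re
from pathlib import Path

def run_check(check: dict) -> bool:
    if "files_exist" in check:
        # Pass only if every listed file exists.
        return all(Path(f).exists() for f in check["files_exist"])
    if "content_check" in check:
        # Pass if the file exists and the regex matches its contents.
        cc = check["content_check"]
        p = Path(cc["file"])
        return p.exists() and re.search(cc["pattern"], p.read_text()) is not None
    raise ValueError("unsupported check type in this sketch")
```

Both check types are pure filesystem reads, which is why they are preferred over command: they are deterministic and independent of the environment's running services.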
Step 5: Compute Waves
Group issues by dependencies for parallel execution:
- Wave 1: Issues with no dependencies (can run in parallel)
- Wave 2: Issues depending only on Wave 1
- Wave 3: Issues depending on Wave 2
- Continue until all issues assigned
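The grouping above is a layered topological sort; a minimal sketch (issue IDs are illustrative):

```python
# Sketch of wave computation: repeatedly peel off the issues whose
# blockers have all completed. Raises if a dependency cycle remains.
def compute_waves(blocked_by: dict[str, set[str]]) -> list[list[str]]:
    """blocked_by maps each issue to the set of issues blocking it."""
    remaining = {k: set(v) for k, v in blocked_by.items()}
    done: set[str] = set()
    waves: list[list[str]] = []
    while remaining:
        # Next wave: every issue whose blockers are all done.
        wave = sorted(k for k, deps in remaining.items() if deps <= done)
        if not wave:
            raise ValueError("dependency cycle detected")
        waves.append(wave)
        done.update(wave)
        for k in wave:
            del remaining[k]
    return waves
```

For example, compute_waves({"1": set(), "2": {"1"}, "3": set()}) yields [["1", "3"], ["2"]]: Issues 1 and 3 run in parallel as Wave 1, and Issue 2 forms Wave 2.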
Validate Dependency Necessity
For EACH declared dependency, verify:
- Does the blocked issue modify a file that the blocker also modifies? → Keep
- Does the blocked issue read output produced by the blocker? → Keep
- Is the dependency only logical ordering (e.g., "specs before roles")? → Remove
False dependencies reduce parallelism. Pre-mortem judges will also flag these. In ol-571, unnecessary serialization between independent spec rewrites was caught by pre-mortem.
Step 6: Write Plan Document
Write to: .agents/plans/YYYY-MM-DD-<goal-slug>.md
# Plan: <Goal>
**Date:** YYYY-MM-DD
**Source:** <research doc if any>
## Overview
<1-2 sentence summary of what we're building>
## Boundaries
**Always:** <non-negotiable requirements – security, backward compat, testing, etc.>
**Ask First:** <decisions needing human input before proceeding – in auto mode, logged only>
**Never:** <explicit out-of-scope items preventing scope creep>
## Conformance Checks
| Issue | Check Type | Check |
|-------|-----------|-------|
| Issue 1 | content_check | `{file: "src/auth.go", pattern: "func Authenticate"}` |
| Issue 1 | tests | `go test ./src/auth/...` |
| Issue 2 | files_exist | `["docs/api-v2.md"]` |
## Issues
### Issue 1: <Title>
**Dependencies:** None
**Acceptance:** <how to verify>
**Description:** <what to do>
### Issue 2: <Title>
**Dependencies:** Issue 1
**Acceptance:** <how to verify>
**Description:** <what to do>
## Execution Order
**Wave 1** (parallel): Issue 1, Issue 3
**Wave 2** (after Wave 1): Issue 2, Issue 4
**Wave 3** (after Wave 2): Issue 5
## Next Steps
- Run `/crank` for autonomous execution
- Or `/implement <issue>` for single issue
Step 7: Create Tasks for In-Session Tracking
Use TaskCreate tool for each issue:
Tool: TaskCreate
Parameters:
subject: "<issue title>"
description: |
<Full description including:>
- What to do
- Acceptance criteria
- Dependencies: [list task IDs that must complete first]
activeForm: "<-ing verb form of the task>"
After creating all tasks, set up dependencies:
Tool: TaskUpdate
Parameters:
taskId: "<task-id>"
addBlockedBy: ["<dependency-task-id>"]
IMPORTANT: Create persistent issues for ratchet tracking:
If bd CLI available, create beads issues to enable progress tracking across sessions:
# Create epic first
bd create --title "<goal>" --type epic --label "planned"
# Create child issues (note the IDs returned)
bd create --title "<wave-1-task>" --body "<description>" --parent <epic-id> --label "planned"
# Returns: na-0001
bd create --title "<wave-2-task-depends-on-wave-1>" --body "<description>" --parent <epic-id> --label "planned"
# Returns: na-0002
# Add blocking dependencies to form waves
bd dep add na-0001 na-0002
# Now na-0002 is blocked by na-0001 → Wave 2
Include conformance checks in issue bodies:
When creating beads issues, embed the conformance checks from the plan as a fenced validation block in the issue description. This flows to worker validation metadata via /crank:
bd create --title "<task>" --body "Description...
\`\`\`validation
{\"files_exist\": [\"src/auth.go\"], \"content_check\": {\"file\": \"src/auth.go\", \"pattern\": \"func Authenticate\"}}
\`\`\`
" --parent <epic-id>
Include cross-cutting constraints in epic description:
“Always” boundaries from the plan should be added to the epic’s description as a ## Cross-Cutting Constraints section. /crank reads these from the epic (not per-issue) and injects them into every worker task’s validation metadata.
Waves are formed by blocking dependencies:
- Issues with NO blockers → Wave 1 (appear in bd ready immediately)
- Issues blocked by Wave 1 → Wave 2 (appear when Wave 1 closes)
- Issues blocked by Wave 2 → Wave 3 (appear when Wave 2 closes)
bd ready returns the current wave – all unblocked issues that can run in parallel.
Without bd issues, the ratchet validator cannot track gate progress. This is required for /crank autonomous execution and /post-mortem validation.
Step 8: Request Human Approval (Gate 2)
Skip this step if --auto flag is set. In auto mode, proceed directly to Step 9.
USE AskUserQuestion tool:
Tool: AskUserQuestion
Parameters:
questions:
- question: "Plan complete with N tasks in M waves. Approve to proceed?"
header: "Gate 2"
options:
- label: "Approve"
description: "Proceed to /pre-mortem or /crank"
- label: "Revise"
description: "Modify the plan before proceeding"
- label: "Back to Research"
description: "Need more research before planning"
multiSelect: false
Wait for approval before reporting completion.
Step 9: Record Ratchet Progress
ao ratchet record plan 2>/dev/null || true
Step 10: Report to User
Tell the user:
- Plan document location
- Number of issues identified
- Wave structure for parallel execution
- Tasks created (in-session task IDs)
- Next step: /pre-mortem for failure simulation, then /crank for execution
Key Rules
- Read research first if it exists
- Explore codebase to understand current state
- Identify dependencies between issues
- Compute waves for parallel execution
- Always write the plan to .agents/plans/