planning-tasks
npx skills add https://github.com/boojack/skills --skill planning-tasks
# Planning Tasks
Takes `design.md` and `definition.md` from `docs/issues/YYYY-MM-DD-<slug>/` as input. Produces a task plan detailed enough for a coding agent to execute without re-deriving the implementation from the design.
Does NOT introduce new design decisions, modify the design, or write complete implementations. Output is a task plan with enough detail to execute, not pre-written code to copy.
## Workflow
Execute all steps in order. Skipping a step is not allowed.
Task Plan Progress:
- [ ] Step 1: Load Inputs
- [ ] Step 2: Explore Codebase
- [ ] Step 3: Break Design into Tasks
- [ ] Step 4: Order Tasks & Dependencies
- [ ] Step 5: Identify Out-of-Scope Tasks
- [ ] Step 6: Surface Open Execution Questions
- [ ] Step 7: Declare Readiness
- [ ] Step 8: Validate Output
## Step 1: Load Inputs
Read both upstream documents from the issue folder:
- `design.md`
- `definition.md`
Extract: design goals, non-goals, proposed design, and current state.
## Step 2: Explore Codebase
Read actual files to ground the plan in reality.
- Read files referenced in issue definition’s current state
- Identify patterns, naming conventions, testing style
- Locate exact insertion points for new code
No output section. Purpose: ensure tasks reference real paths and patterns.
## Step 3: Break Design into Tasks
Each task MUST use this exact format:
### T<N>: <short imperative title> [S|M|L]
**Objective**: One outcome, traceable to a design element.
**Size**: S (single file, <30 lines) | M (2-3 files, moderate logic) | L (multiple files, complex state/logic)
**Files**:
- Create: `exact/path/to/new_file.ts`
- Modify: `exact/path/to/existing.ts`
- Test: `tests/path/to/test.test.ts`
**Implementation**:
1. In `path/to/file.ts`:
- Add import X from Y
- Modify `functionName()` to do Z
2. In `tests/path/to/test.test.ts`:
- Test: "should X" â assert Y
**Boundaries**: What this task must NOT do
**Dependencies**: T<N> | None
**Expected Outcome**: Observable result (file exists, test passes, etc.)
**Validation**: `exact command` → expected output
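For concreteness, a filled-in task following this format might look like the sketch below. All paths, names, line references, and the runner command are invented for illustration:

```markdown
### T1: Add timing instrumentation to task queue [S]
**Objective**: Record per-task execution time (design goal #2).
**Size**: S (single file, <30 lines)
**Files**:
- Modify: `src/queue/task_queue.ts`
- Test: `tests/queue/task_queue.test.ts`
**Implementation**:
1. In `src/queue/task_queue.ts`:
- Add import `performance` from `node:perf_hooks`
- Modify `executeTask()` to capture start/end timestamps and attach `durationMs` to the task result
2. In `tests/queue/task_queue.test.ts`:
- Test: "should attach durationMs to completed task" → assert `result.durationMs >= 0`
**Boundaries**: Must NOT change scheduling order or the public queue API
**Dependencies**: None
**Expected Outcome**: Completed tasks expose a numeric `durationMs` field
**Validation**: `npx vitest run tests/queue/task_queue.test.ts` → all tests pass
```

Note that the implementation steps name exact files and functions but stop short of full bodies, which is the level of detail the guidance below requires.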
Code detail guidance:
- ✅ Interfaces, type definitions, function signatures (they define contracts)
- ✅ Key logic as pseudocode or commented outline
- ❌ Complete function bodies (executing agent writes these)
- ❌ Complete test implementations (describe what to test, not full test code)
Detail consistency: All tasks must have the same depth of specificity. If a task requires the executing agent to choose between approaches, the task is underspecified: resolve the choice or flag it in Open Execution Questions.
Start `## Task List` with a Task Index, one line per task for quick scanning:
T1: Add timing instrumentation [S] → T2: Refactor task queue [M] → T3: Add action selector [L]
## Step 4: Order Tasks & Dependencies
Explain why tasks are sequenced. Identify sequential vs parallelizable tasks.
Write under: ## Task Ordering Rationale
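As a sketch, using the sample tasks from the Task Index above (the reasoning itself is invented for illustration):

```markdown
## Task Ordering Rationale
- T1 → T2: sequential. T2 refactors the queue that T1 instruments, so T1's measurements confirm the refactor does not regress behavior.
- T3: independent of T1/T2; can be executed in parallel with either.
```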
## Step 5: Identify Out-of-Scope Tasks
List tasks that might be assumed necessary but are excluded. Reference design non-goals.
Write under: ## Out-of-Scope Tasks
## Step 6: Surface Open Execution Questions
List execution uncertainties (tooling, environment, access). Do NOT resolve them; only list.
Write “No open execution questions identified.” if none exist.
Write under: ## Open Execution Questions
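For illustration, this section might read as follows (both questions are invented examples):

```markdown
## Open Execution Questions
- Is `npx vitest` available in the execution environment, or does CI use a different test runner?
- Do integration tests have network access, or must external calls be mocked?
```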
## Step 7: Declare Readiness
Use exactly one:
- Ready for execution
- Blocked pending clarification
- Requires design update
Write under: ## Readiness Declaration
## Step 8: Validate Output
- Every task has all 8 fields (Objective, Size, Files, Implementation, Boundaries, Dependencies, Expected Outcome, Validation)
- Task Index present at top of Task List
- Implementation shows specific changes per file with code examples
- Every task traces to the proposed design
- No vague outcomes (“improved”, “refactored”)
- Validation has exact commands with expected output
- File paths match actual codebase (from Step 2)
- Interfaces/signatures complete, function bodies are outlines not implementations
- No task is significantly vaguer than others
- No embedded design decisions: if “either X or Y”, resolve or move to Open Execution Questions
If any check fails, return to the failing step and revise.
## Output Format
Save to `docs/issues/YYYY-MM-DD-<slug>/plan.md` in the same folder as `definition.md` and `design.md`.
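The resulting folder then looks like this (date and slug invented for the example):

```
docs/issues/2025-01-15-example-slug/
├── definition.md
├── design.md
└── plan.md   ← output of this skill
```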
Template:
## Task List
## Task Ordering Rationale
## Out-of-Scope Tasks
## Open Execution Questions
## Readiness Declaration
Omitting any section invalidates the output.
## Anti-patterns
- ❌ Complete implementations: 50-line function body → ✅ signature + outline
- ❌ Full test code → ✅ test descriptions with expected assertions
- ❌ Vague: “Update the function” → ✅ “In `executeTask()` (~line 45), add X”
- ❌ Untraceable: no goal reference → ✅ “Objective: … (design goal #1)”
- ❌ Embedded design decisions: “Either add new RPC or extend existing” → ✅ pick one, or flag in Open Execution Questions
- ❌ Inconsistent detail: T1 has exact insertion points, T8 says “add component” → ✅ same depth across all tasks