write-atomic-tasks
Install: `npx skills add https://github.com/dawiddutoit/custom-claude --skill write-atomic-tasks`
# Write Atomic Tasks

## Quick Start
Transform vague task descriptions into precise, autonomous-execution-ready specifications:
❌ BEFORE (vague):

- [ ] Add error handling

✅ AFTER (precise):

- [ ] Task 1: Add ConnectionError and TimeoutError handling to ClaudeAgentClient.query()
  - Retry 3 times with exponential backoff (1s, 2s, 4s)
  - Raise AgentConnectionError after retries exhausted
  - Location: src/temet_run/agent/client.py:ClaudeAgentClient.query
  - Verify: pytest tests/unit/agent/test_client.py passes
## Table of Contents
- When to Use This Skill
- What This Skill Does
- The SMART+ Framework
- Task Template
- Precision Checklist
- Forbidden Vague Patterns
- Task Decomposition Rules
- Examples: Vague → Precise Transformation
- Validation Process
- Supporting Files
- Expected Outcomes
- Requirements
- Red Flags to Avoid
## When to Use This Skill
Explicit Triggers:
- “Write a task for…”
- “Create todo items for…”
- “Break down this feature into tasks”
- “Make this task more precise”
- “How should I write this as a task?”
Implicit Triggers:
- Task description lacks file paths
- No verification command specified
- Vague action verbs (“fix”, “add”, “update” without specifics)
- Missing success criteria
- Task would require clarifying questions before starting
Debugging/Quality Scenarios:
- Reviewing task list for autonomous execution readiness
- Agent keeps asking clarifying questions about tasks
- Tasks take longer than expected due to ambiguity
- Post-mortem reveals task interpretation issues
## What This Skill Does
This skill transforms vague task descriptions into precise, autonomous-execution-ready specifications by:
- Applying SMART+ Framework – Ensuring every task is Specific, Measurable, Actionable, Referenced, Testable, with Context
- Adding Critical Metadata – File paths, verification commands, success criteria
- Decomposing Large Work – Breaking complex tasks into 30-minute chunks
- Eliminating Vague Patterns – Replacing forbidden phrases with precise alternatives
- Validating Precision – Checking that tasks answer all 6 precision questions
## The SMART+ Framework
Every task MUST include these components:
| Component | Description | Example |
|---|---|---|
| Specific | What exactly to do | “Add retry logic to ClaudeAgentClient.query()” not “Add retry logic” |
| Measurable | How to verify completion | “Tests pass, mypy clean” |
| Actionable | Clear first step | “Create file `src/temet_run/agent/retry.py`” |
| Referenced | File paths, functions, classes | “in `agent/client.py:ClaudeAgentClient`” |
| Testable | Success criteria | “3 retries with exponential backoff, circuit breaker after 5 failures” |
| +Context | Why this matters (optional) | “Needed for ADR-016 conversation persistence” |
## Task Template

- [ ] Task N: [ACTION VERB] [SPECIFIC TARGET] [SUCCESS CRITERIA]
  - Location: [file paths or module names]
  - Verify: [how to confirm completion]

Complete Example:

- [ ] Task 1: Add ConversationError exception hierarchy to agent/errors.py
  - Create: ConversationError (base), MessageError, StateError
  - Location: src/temet_run/agent/errors.py
  - Verify: mypy passes, errors importable from temet_run.agent
## Precision Checklist

Before writing a task, verify it answers ALL of these questions:

- ✅ WHAT file(s)? → Include full path from project root
- ✅ WHAT function/class? → Name the specific target
- ✅ WHAT action? → Use precise verbs: Create, Add, Modify, Remove, Rename, Extract, Move
- ✅ WHAT inputs/outputs? → Specify types, fields, parameters
- ✅ HOW to verify? → Command to run: `pytest path`, `mypy src/`, `uv run temet-run ...`
- ✅ WHY doing this? → Link to ADR, issue, or parent task (optional but helpful)
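The first few questions can even be checked mechanically. The sketch below is an illustration only: the regexes and the `missing_checks` helper are assumptions, not the skill's official validator, and a real check would need project-specific patterns.

```python
import re

# Heuristic patterns for three of the checklist questions; these
# regexes are illustrative assumptions, not the skill's official rules.
CHECKS = {
    "file path": re.compile(r"\b[\w/.-]+\.(?:py|md|ts|go|rs)\b"),
    "verify command": re.compile(r"\b(?:pytest|mypy|uv run)\b"),
    "precise verb": re.compile(r"\b(?:Create|Add|Modify|Remove|Rename|Extract|Move)\b"),
}

def missing_checks(task: str) -> list[str]:
    """Return the checklist questions a task description leaves unanswered."""
    return [name for name, rx in CHECKS.items() if not rx.search(task)]
```

For example, `missing_checks("Add error handling")` flags the missing file path and verification command, while a fully specified task passes all three checks.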
## Forbidden Vague Patterns

NEVER write tasks containing these vague phrases:

| ❌ Forbidden Pattern | ✅ Precise Alternative |
|---|---|
| “Implement the feature” | “Implement X method in Y class with Z behavior” |
| “Add tests” | “Add unit tests for X covering cases A, B, C” |
| “Fix the bug” | “Fix TypeError in X:line Y caused by Z” |
| “Update the code” | “Update X function to accept Y parameter” |
| “Handle errors” | “Add try/except for ConnectionError in X, retry 3 times” |
| “Refactor” | “Extract X logic from Y into new Z class” |
| “Improve performance” | “Reduce X function runtime from 500ms to <100ms by caching Y” |
| “Add logging” | “Add structlog info-level logging to X function for events A, B, C” |
| “Document the code” | “Add Google-style docstring to X function with Args, Returns, Raises” |
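A table like this lends itself to a simple lint pass. The snippet below is a hedged sketch of such a scan — the `find_vague_phrases` name and the substring-matching approach are assumptions for illustration:

```python
# Forbidden vague phrases from the table above, matched case-insensitively.
# Substring matching is a deliberate simplification for this sketch.
VAGUE_PATTERNS = [
    "implement the feature", "add tests", "fix the bug", "update the code",
    "handle errors", "refactor", "improve performance", "add logging",
    "document the code",
]

def find_vague_phrases(task: str) -> list[str]:
    """Return any forbidden vague phrases found in a task description."""
    lowered = task.lower()
    return [p for p in VAGUE_PATTERNS if p in lowered]
```

Note that a bare substring match will also flag legitimate uses (e.g. “refactor” inside a precise rewrite), so a real linter would want word boundaries and an allowlist.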
## Task Decomposition Rules
Large tasks MUST be broken down:
- One concern per task – Don’t mix “create model AND write tests AND add CLI”
- Max 30 minutes of work – If longer, split it
- Dependencies explicit – “Task 3 depends on Task 2” or use sub-numbering (2.1, 2.2)
- Verification per task – Each task independently verifiable
Decomposition Example (Feature: Add conversation history command):

- [ ] Task 1: Create ConversationRepository protocol
  - Location: src/temet_run/domain/repositories.py
  - Verify: mypy passes
- [ ] Task 2: Implement JsonlConversationRepository
  - Location: src/temet_run/infrastructure/repositories/conversation.py
  - Verify: Unit tests pass (create tests/unit/infrastructure/test_conversation_repo.py)
- [ ] Task 3: Add `history` subcommand to CLI
  - Location: src/temet_run/main.py (add to agents group)
  - Verify: `uv run temet-run agents history --help` shows usage
- [ ] Task 4: Integration test for history command
  - Location: tests/integration/test_cli_history.py
  - Verify: pytest tests/integration/test_cli_history.py passes
## Examples: Vague → Precise Transformation
See examples/transformation-examples.md for comprehensive examples covering:
- Error handling tasks
- Data model implementation
- Test writing tasks
- Bug fixes
- Feature additions
- Refactoring work
Inline Example:

❌ VAGUE:

- [ ] Add error handling to the client

✅ PRECISE:

- [ ] Add ConnectionError and TimeoutError handling to ClaudeAgentClient.query()
  - Retry 3 times with exponential backoff (1s, 2s, 4s)
  - Raise AgentConnectionError after retries exhausted
  - Location: src/temet_run/agent/client.py:ClaudeAgentClient.query
  - Verify: pytest tests/unit/agent/test_client.py passes
## Validation Process

To validate a task for precision:

1. Read the task description
2. Apply the 6-question checklist (Precision Checklist)
3. Check for forbidden vague patterns (Forbidden Vague Patterns)
4. Verify the SMART+ components are present (The SMART+ Framework)
5. Confirm the task is <30 minutes of work

Validation Script (if available):

`python ~/.claude/skills/write-atomic-tasks/scripts/validate_task.py "task description"`
Manual Validation Output:

Task: "Add error handling to the client"

❌ FAILED Precision Check:
- Missing: Specific file path
- Missing: Function/class name
- Missing: Verification command
- Vague pattern: "Add error handling" (see Forbidden Vague Patterns)

Suggested rewrite:

- [ ] Add ConnectionError handling to ClaudeAgentClient.query()
  - Location: src/temet_run/agent/client.py:ClaudeAgentClient.query
  - Verify: pytest tests/unit/agent/test_client.py passes
## Supporting Files

- examples/transformation-examples.md – 10+ examples of vague → precise transformations
- references/smart-framework-deep-dive.md – Detailed explanation of SMART+ components
- scripts/validate_task.py – Automated task precision validator
- templates/task-template.md – Copy-paste task template with placeholders
## Expected Outcomes

### Successful Task Writing

When tasks are written with this skill:

- ✅ Agents can execute autonomously – No clarifying questions needed
- ✅ Clear verification – Unambiguous success/failure determination
- ✅ Predictable effort – Tasks complete in <30 minutes
- ✅ Reduced rework – Fewer interpretation errors
- ✅ Better planning – Accurate time estimates from precise scoping
### Before/After Metrics
| Metric | Before (Vague Tasks) | After (Precise Tasks) |
|---|---|---|
| Clarifying questions per task | 2-5 questions | 0 questions |
| Task completion time variance | ±200% | ±20% |
| Rework due to misunderstanding | 30% of tasks | <5% of tasks |
| Autonomous execution success | 40% | 95% |
| Time spent planning vs doing | 20/80 | 40/60 |
## Requirements
Knowledge:
- Understanding of SMART goal framework
- Familiarity with project file structure
- Awareness of verification commands (pytest, mypy, etc.)
Tools:
- None (skill is language/framework agnostic)
Environment:
- Works with any task management system
- Compatible with todo.md, Jira, GitHub Issues, etc.
## Red Flags to Avoid

Red Flags Checklist:

- ❌ Task description fits on one line but has no metadata
- ❌ No file path mentioned
- ❌ Action verb is generic (“fix”, “update”, “add”)
- ❌ No verification command provided
- ❌ Success criteria are subjective (“make it better”)
- ❌ Task mixes multiple concerns (AND, THEN, ALSO in the description)
- ❌ Task would take >30 minutes but isn’t decomposed
- ❌ Contains forbidden vague patterns (see Forbidden Vague Patterns)
- ❌ Missing inputs/outputs specification for data operations
- ❌ No link to parent task/ADR/issue for context
Common Mistakes:
- Too granular – “Add import statement” is usually too small, combine with actual work
- Too high-level – “Implement authentication” needs 10+ subtasks
- Technology assumption – “Add React component” when framework not decided
- Implicit dependencies – “Write tests” assumes code exists, make dependency explicit
## Notes
- This skill is framework-agnostic – Works for any programming language or project type
- Precision scales with task complexity – Simple tasks need less metadata than complex ones
- Context determines precision level – Solo developer vs distributed team changes requirements
- Tasks are living documents – Update tasks when requirements change, don’t let them drift
- Autonomous execution is the goal – If a task requires synchronous communication, it’s not precise enough