review
npx skills add https://github.com/howells/arc --skill review
Skill documentation
<required_reading> Read these reference files NOW:
- ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md
- ${CLAUDE_PLUGIN_ROOT}/disciplines/dispatching-parallel-agents.md
- ${CLAUDE_PLUGIN_ROOT}/disciplines/receiving-code-review.md </required_reading>
<rules_context> Check for project coding rules:
Use Glob tool: .ruler/*.md
Determine rules source:
- If .ruler/ exists: Read rules from .ruler/
- If .ruler/ doesn't exist: Read rules from ${CLAUDE_PLUGIN_ROOT}/rules/
Pass relevant core rules to each reviewer:
| Reviewer | Rules to Pass |
|---|---|
| daniel-product-engineer | react.md, typescript.md, code-style.md |
| lee-nextjs-engineer | nextjs.md, api.md |
| senior-engineer | code-style.md, typescript.md, react.md |
| architecture-engineer | stack.md, turborepo.md |
| simplicity-engineer | code-style.md |
| security-engineer | api.md, env.md |
| data-engineer | testing.md, api.md |
| accessibility-engineer | (interface rules only; already in agent prompt) |
| designer | design.md, colors.md, spacing.md, typography.md |
</rules_context>
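The rules-source decision above can be sketched in shell. This is a minimal sketch for illustration; the skill itself performs the check with the Glob tool, not a shell script:

```shell
# Sketch of the rules-source fallback: prefer project-local .ruler/ rules,
# otherwise fall back to the plugin's bundled rules directory.
if ls .ruler/*.md >/dev/null 2>&1; then
  RULES_DIR=".ruler"
else
  RULES_DIR="${CLAUDE_PLUGIN_ROOT}/rules"
fi
echo "Reading rules from: $RULES_DIR"
```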
<progress_context>
Use Read tool: docs/progress.md (first 50 lines)
Check for context on what led to the plan being reviewed. </progress_context>
Phase 0: Check for Specific Reviewer
If argument provided (e.g., daniel-product-engineer):
- Look for ${CLAUDE_PLUGIN_ROOT}/agents/review/{argument}.md
- If found → use only this reviewer, skip Phase 2 detection
- If not found → list available reviewers from ${CLAUDE_PLUGIN_ROOT}/agents/review/ and ask user to pick
Available reviewers:
- daniel-product-engineer → Type safety, UI completeness, React patterns
- lee-nextjs-engineer → Next.js App Router, server-first architecture
- senior-engineer → Asymmetric strictness, review discipline
- architecture-engineer → System design, component boundaries
- simplicity-engineer → YAGNI, minimalism
- performance-engineer → Bottlenecks, scalability
- security-engineer → Vulnerabilities, OWASP
- data-engineer → Migrations, transactions
- designer → Visual design quality, UX fundamentals, AI slop detection
Phase 1: Find the Plan
Check if plan file path provided as argument:
- If yes → read that file and proceed to Phase 2
- If no → search for plans
Search strategy:
1. Check conversation context first → Look for Claude Code plan mode output
   - Look back through recent conversation messages
   - Search for plan structure markers:
     - “# Plan” or “## Plan” headings
     - “Implementation Steps” sections
     - Task lists with implementation details
     - Step-by-step procedures
   - If found → extract the plan content and proceed to Phase 2
2. Search the docs/plans/ folder → Look for plan files
   - Use Glob tool: docs/plans/*.md
   - Sort results by modification time (newest first)
   - Show all plan files (design, implementation, etc.)
3. Present options if multiple found:
   - List up to 5 most recent plans
   - Show: filename, modification date, brief preview
   - Ask user: “Which plan should I review?”
4. If no plans found:
   - “I couldn’t find any plans in the conversation or in docs/plans/. Can you point me to a plan file, or paste the plan you’d like me to review?”
Once plan located:
- Store the plan content
- Note the source (conversation, file path, or user-provided)
- Proceed to Phase 2
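The docs/plans/ search above (newest first, up to 5 shown) can be sketched in shell. This is illustrative only; the skill itself uses the Glob tool, and `find_plans` is a hypothetical helper name:

```shell
# Hypothetical helper mirroring the plan search above: list plan files
# newest-first, capped at the 5 most recent for presentation.
find_plans() {
  ls -t docs/plans/*.md 2>/dev/null | head -5
}
```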
Phase 2: Detect Project Type
Skip if specific reviewer provided in Phase 0.
Detect project type for reviewer selection:
Use Grep tool on package.json:
- Pattern: “next” → nextjs
- Pattern: “react” → react
Use Glob tool:
- requirements.txt, pyproject.toml → python
- .ruler/*.md → daniel-project (has coding rules)
Use Grep tool on src/**/*.ts:
- Pattern: @materia/ → daniel-project
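The checks above can be sketched as a single shell function. This is an assumption-laden sketch: the skill uses the Grep and Glob tools rather than shell, and the first-match priority order below is not specified by the doc:

```shell
# Illustrative project-type detection; first match wins (the ordering is an
# assumption, not taken from the skill).
detect_project_type() {
  if ls .ruler/*.md >/dev/null 2>&1; then echo daniel-project
  elif grep -q '"next"' package.json 2>/dev/null; then echo nextjs
  elif grep -q '"react"' package.json 2>/dev/null; then echo react
  elif [ -f requirements.txt ] || [ -f pyproject.toml ]; then echo python
  else echo general
  fi
}
```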
Select reviewers based on project type:
Daniel’s projects:
- ${CLAUDE_PLUGIN_ROOT}/agents/review/daniel-product-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/simplicity-engineer.md
TypeScript/React:
- ${CLAUDE_PLUGIN_ROOT}/agents/review/daniel-product-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md
Next.js:
- ${CLAUDE_PLUGIN_ROOT}/agents/review/lee-nextjs-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/daniel-product-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
Python:
- ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/performance-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md
General/Unknown:
- ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md
- ${CLAUDE_PLUGIN_ROOT}/agents/review/simplicity-engineer.md
Conditional additions (all UI project types):
- If plan involves UI components, forms, or user-facing features → add ${CLAUDE_PLUGIN_ROOT}/agents/review/accessibility-engineer.md
- If plan involves UI components, pages, or visual design → add ${CLAUDE_PLUGIN_ROOT}/agents/review/designer.md
Phase 3: Run Expert Review
If specific reviewer from Phase 0: Spawn single reviewer agent.
Otherwise: Spawn 3 reviewer agents in parallel:
Task [reviewer-1] model: sonnet: "Review this plan for [specialty concerns].
Plan:
[plan content]
Focus on: [specific area based on reviewer type]"
Task [reviewer-2] model: sonnet: "Review this plan for [specialty concerns]..."
Task [reviewer-3] model: sonnet: "Review this plan for [specialty concerns]..."
Phase 4: Consolidate and Present
Transform findings into Socratic questions:
See ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md for approach.
Instead of presenting critiques:
- Turn findings into exploratory questions
- “What if we…” not “You should…”
- Collaborative spirit, not adversarial
Example transformations:
- Reviewer: “This is overengineered” → “We have three layers here. What if we started with one?”
- Reviewer: “Missing error handling” → “What happens if the API call fails? Should we handle that now or later?”
- Reviewer: “Security concern” → “This stores the token in localStorage. Is that acceptable for this use case?”
Present questions one at a time:
- Wait for user response
- If user wants to keep something, they probably have context
- Track decisions as you go
Phase 5: Apply Decisions
For each decision:
- Note what was changed
- Note what was kept and why
If plan came from a file:
- Update the file with changes
- Commit: git commit -m "docs: update <plan> based on review"
Phase 6: Summary and Next Steps
## Review Summary
**Reviewed:** [plan name/source]
**Reviewers:** [list]
### Changes Made
- [Change 1]
- [Change 2]
### Kept As-Is
- [Decision 1]: [reason]
### Open Questions
- [Any unresolved items]
Show remaining arc (if reviewing implementation plan):
/arc:ideate → Design doc (on main)
    ↓
/arc:detail → Implementation plan
    ↓
/arc:review → Review implementation plan ← YOU ARE HERE
    ↓
/arc:implement → Execute task-by-task
Offer next steps based on what was reviewed:
If reviewed a design doc:
- “Ready to create an implementation plan?” → /arc:detail
- “Done for now” → end
If reviewed an implementation plan:
- “Ready to implement?” → /arc:implement
- “Done for now” → end
Phase 7: Cleanup
Kill orphaned subagent processes:
After spawning reviewer agents, some may not exit cleanly. Run cleanup:
${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-orphaned-agents.sh
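The bundled script is not reproduced here; as a rough sketch, such a cleanup might first locate candidate processes. Everything below is an assumption for illustration, including the `claude-subagent` name pattern and the `list_orphans` helper:

```shell
# Hypothetical sketch only; the real logic lives in
# ${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-orphaned-agents.sh and may differ.
# Lists candidate orphaned-agent PIDs (the -f pattern is an assumption).
list_orphans() {
  pgrep -f "claude-subagent" 2>/dev/null || true
}
# A real cleanup would then signal each listed PID, e.g. kill -TERM "$pid".
```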
<arc_log>
After completing this skill, append to the activity log.
See: ${CLAUDE_PLUGIN_ROOT}/references/arc-log.md
Entry: /arc:review → [Plan name] reviewed
</arc_log>
<success_criteria> Review is complete when:
- Plan located (conversation, file, or user-provided)
- Project type detected and reviewers selected
- Parallel expert review completed (3 agents)
- All findings presented as Socratic questions
- User made decisions on each finding
- Plan updated (if from file)
- Summary presented
- Remaining arc shown (based on plan type)
- User chose next step (detail/implement or done)
- Progress journal updated
- Orphaned agents cleaned up </success_criteria>