review

📁 howells/arc 📅 9 days ago

Total installs: 1
Weekly installs: 1
Site-wide rank: #51741

Install command:

npx skills add https://github.com/howells/arc --skill review

Agent install distribution:

  • cline: 1
  • opencode: 1
  • codex: 1
  • claude-code: 1
  • gemini-cli: 1

Skill documentation

<required_reading> Read these reference files NOW:

  1. ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md
  2. ${CLAUDE_PLUGIN_ROOT}/disciplines/dispatching-parallel-agents.md
  3. ${CLAUDE_PLUGIN_ROOT}/disciplines/receiving-code-review.md </required_reading>

<rules_context> Check for project coding rules:

Use Glob tool: .ruler/*.md

Determine rules source:

  • If .ruler/ exists: Read rules from .ruler/
  • If .ruler/ doesn’t exist: Read rules from ${CLAUDE_PLUGIN_ROOT}/rules/
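The decision above can be sketched in shell. This is illustrative only (real runs use the Glob/Read tools), and the fallback path for CLAUDE_PLUGIN_ROOT is a hypothetical example, not part of the skill:

```shell
# Hedged sketch of the rules-source decision. CLAUDE_PLUGIN_ROOT is assumed
# to be exported by the host; the fallback path here is purely illustrative.
CLAUDE_PLUGIN_ROOT="${CLAUDE_PLUGIN_ROOT:-$HOME/.claude/plugins/arc}"
if ls .ruler/*.md >/dev/null 2>&1; then
  RULES_DIR=".ruler"                       # project-local rules win
else
  RULES_DIR="${CLAUDE_PLUGIN_ROOT}/rules"  # fall back to plugin defaults
fi
echo "Reading rules from: $RULES_DIR"
```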

Pass relevant core rules to each reviewer:

Reviewer → rules to pass:

  • daniel-product-engineer: react.md, typescript.md, code-style.md
  • lee-nextjs-engineer: nextjs.md, api.md
  • senior-engineer: code-style.md, typescript.md, react.md
  • architecture-engineer: stack.md, turborepo.md
  • simplicity-engineer: code-style.md
  • security-engineer: api.md, env.md
  • data-engineer: testing.md, api.md
  • accessibility-engineer: (interface rules only — already in agent prompt)
  • designer: design.md, colors.md, spacing.md, typography.md
</rules_context>

<progress_context> Use Read tool: docs/progress.md (first 50 lines)

Check for context on what led to the plan being reviewed. </progress_context>

Phase 0: Check for a Reviewer Argument

If an argument is provided (e.g., daniel-product-engineer):

  • Look for ${CLAUDE_PLUGIN_ROOT}/agents/review/{argument}.md
  • If found → use only this reviewer, skip Phase 2 detection
  • If not found → list available reviewers from ${CLAUDE_PLUGIN_ROOT}/agents/review/ and ask user to pick
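The lookup above can be sketched as a self-contained shell snippet. The temp directory stands in for ${CLAUDE_PLUGIN_ROOT}/agents/review/, and the files created here are stubs for two of the real reviewer names:

```shell
# Self-contained sketch of the Phase 0 lookup; the temp directory stands in
# for the plugin's agents/review/ folder, populated with stub files.
REVIEW_DIR="$(mktemp -d)"
touch "$REVIEW_DIR/daniel-product-engineer.md" "$REVIEW_DIR/senior-engineer.md"

ARG="daniel-product-engineer"            # the argument passed to /arc:review
if [ -f "$REVIEW_DIR/$ARG.md" ]; then
  RESULT="single reviewer: $ARG"         # found → skip Phase 2 detection
else
  RESULT="ask user to pick from: $(ls "$REVIEW_DIR")"
fi
echo "$RESULT"
rm -rf "$REVIEW_DIR"
```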

Available reviewers:

  • daniel-product-engineer — Type safety, UI completeness, React patterns
  • lee-nextjs-engineer — Next.js App Router, server-first architecture
  • senior-engineer — Asymmetric strictness, review discipline
  • architecture-engineer — System design, component boundaries
  • simplicity-engineer — YAGNI, minimalism
  • performance-engineer — Bottlenecks, scalability
  • security-engineer — Vulnerabilities, OWASP
  • data-engineer — Migrations, transactions
  • designer — Visual design quality, UX fundamentals, AI slop detection

Phase 1: Find the Plan

Check if plan file path provided as argument:

  • If yes → read that file and proceed to Phase 2
  • If no → search for plans

Search strategy:

  1. Check conversation context first — Look for Claude Code plan mode output

    • Look back through recent conversation messages
    • Search for plan structure markers:
      • “# Plan” or “## Plan” headings
      • “Implementation Steps” sections
      • Task lists with implementation details
      • Step-by-step procedures
    • If found → extract the plan content and proceed to Phase 2
  2. Search docs/plans/ folder — Look for plan files

    Use Glob tool: docs/plans/*.md

    • Sort results by modification time (newest first)
    • Show all plan files (design, implementation, etc.)
  3. Present options if multiple found:

    • List up to 5 most recent plans
    • Show: filename, modification date, brief preview
    • Ask user: “Which plan should I review?”
  4. If no plans found, say:

    • “I couldn’t find any plans in the conversation or in docs/plans/. Can you point me to a plan file, or paste the plan you’d like me to review?”
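Step 2 of the search strategy can be sketched as follows; the sample plan files are hypothetical stand-ins for real docs/plans/ content, and the real run uses the Glob tool rather than `ls`:

```shell
# Self-contained sketch of the docs/plans/ search (steps 2–3); the sample
# plan files are hypothetical.
WORK="$(mktemp -d)"
mkdir -p "$WORK/docs/plans"
printf '# Plan: auth\n'    > "$WORK/docs/plans/auth.md"
printf '# Plan: billing\n' > "$WORK/docs/plans/billing.md"

# Newest first, capped at the five most recent:
PLANS="$(ls -t "$WORK"/docs/plans/*.md | head -5)"
echo "$PLANS"
rm -rf "$WORK"
```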

Once plan located:

  • Store the plan content
  • Note the source (conversation, file path, or user-provided)
  • Proceed to Phase 2

Phase 2: Detect Project Type

Skip if specific reviewer provided in Phase 0.

Detect project type for reviewer selection:

Use Grep tool on package.json:

  • Pattern: "next" → nextjs
  • Pattern: "react" → react

Use Glob tool:

  • requirements.txt, pyproject.toml → python
  • .ruler/*.md → daniel-project (has coding rules)

Use Grep tool on src/**/*.ts:

  • Pattern: @materia/ → daniel-project
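The detection steps above can be sketched in shell. The sample package.json is hypothetical, and real runs use the Grep/Glob tools; note the ordering: "next" is checked before "react", since a Next.js project also depends on React:

```shell
# Hedged sketch of the Phase 2 detection order; the package.json content
# here is a hypothetical example.
WORK="$(mktemp -d)"
printf '{ "dependencies": { "next": "^14.0.0", "react": "^18.0.0" } }\n' \
  > "$WORK/package.json"

TYPE="general"
if grep -q '"next"' "$WORK/package.json" 2>/dev/null; then
  TYPE="nextjs"                  # check "next" first: it implies React too
elif grep -q '"react"' "$WORK/package.json" 2>/dev/null; then
  TYPE="react"
elif ls "$WORK"/requirements.txt "$WORK"/pyproject.toml >/dev/null 2>&1; then
  TYPE="python"
fi
echo "$TYPE"
rm -rf "$WORK"
```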

Select reviewers based on project type:

Daniel’s projects:

  • ${CLAUDE_PLUGIN_ROOT}/agents/review/daniel-product-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/simplicity-engineer.md

TypeScript/React:

  • ${CLAUDE_PLUGIN_ROOT}/agents/review/daniel-product-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md

Next.js:

  • ${CLAUDE_PLUGIN_ROOT}/agents/review/lee-nextjs-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/daniel-product-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md

Python:

  • ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/performance-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md

General/Unknown:

  • ${CLAUDE_PLUGIN_ROOT}/agents/review/senior-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/architecture-engineer.md
  • ${CLAUDE_PLUGIN_ROOT}/agents/review/simplicity-engineer.md

Conditional additions (any project type with a UI):

  • If plan involves UI components, forms, or user-facing features → add ${CLAUDE_PLUGIN_ROOT}/agents/review/accessibility-engineer.md
  • If plan involves UI components, pages, or visual design → add ${CLAUDE_PLUGIN_ROOT}/agents/review/designer.md

Phase 3: Run Expert Review

If specific reviewer from Phase 0: Spawn single reviewer agent.

Otherwise: Spawn 3 reviewer agents in parallel:

Task [reviewer-1] model: sonnet: "Review this plan for [specialty concerns].
Plan:
[plan content]

Focus on: [specific area based on reviewer type]"

Task [reviewer-2] model: sonnet: "Review this plan for [specialty concerns]..."

Task [reviewer-3] model: sonnet: "Review this plan for [specialty concerns]..."

Phase 4: Consolidate and Present

Transform findings into Socratic questions:

See ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md for approach.

Instead of presenting critiques:

  • Turn findings into exploratory questions
  • “What if we…” not “You should…”
  • Collaborative spirit, not adversarial

Example transformations:

  • Reviewer: “This is overengineered” → “We have three layers here. What if we started with one?”
  • Reviewer: “Missing error handling” → “What happens if the API call fails? Should we handle that now or later?”
  • Reviewer: “Security concern” → “This stores the token in localStorage. Is that acceptable for this use case?”

Present questions one at a time:

  • Wait for user response
  • If user wants to keep something, they probably have context
  • Track decisions as you go

Phase 5: Apply Decisions

For each decision:

  • Note what was changed
  • Note what was kept and why

If plan came from a file:

  • Update the file with changes
  • Commit: git commit -m "docs: update <plan> based on review"
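The update-and-commit step can be sketched as below; the repository, plan path, and author identity are all hypothetical stand-ins:

```shell
# Illustrative sketch of the Phase 5 commit; the repo, plan file, and
# author identity are hypothetical.
WORK="$(mktemp -d)"
git -C "$WORK" init -q
git -C "$WORK" config user.email "reviewer@example.com"
git -C "$WORK" config user.name "arc-review"
mkdir -p "$WORK/docs/plans"
printf '# Plan: auth (updated after review)\n' > "$WORK/docs/plans/auth.md"
git -C "$WORK" add docs/plans/auth.md
git -C "$WORK" commit -q -m "docs: update auth plan based on review"
MSG="$(git -C "$WORK" log -1 --pretty=%s)"   # most recent commit subject
echo "$MSG"
rm -rf "$WORK"
```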

Phase 6: Summary and Next Steps

## Review Summary

**Reviewed:** [plan name/source]
**Reviewers:** [list]

### Changes Made
- [Change 1]
- [Change 2]

### Kept As-Is
- [Decision 1]: [reason]

### Open Questions
- [Any unresolved items]

Show remaining arc (if reviewing implementation plan):

/arc:ideate     → Design doc (on main) ✓
     ↓
/arc:detail     → Implementation plan ✓
     ↓
/arc:review     → Review implementation plan ✓ YOU ARE HERE
     ↓
/arc:implement  → Execute task-by-task

Offer next steps based on what was reviewed:

If reviewed a design doc:

  • “Ready to create an implementation plan?” → /arc:detail
  • “Done for now” → end

If reviewed an implementation plan:

  • “Ready to implement?” → /arc:implement
  • “Done for now” → end

Phase 7: Cleanup

Kill orphaned subagent processes:

After spawning reviewer agents, some may not exit cleanly. Run the cleanup script:

${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-orphaned-agents.sh

<arc_log> After completing this skill, append to the activity log. See: ${CLAUDE_PLUGIN_ROOT}/references/arc-log.md

Entry: /arc:review — [Plan name] reviewed </arc_log>

<success_criteria> Review is complete when:

  • Plan located (conversation, file, or user-provided)
  • Project type detected and reviewers selected
  • Parallel expert review completed (3 agents)
  • All findings presented as Socratic questions
  • User made decisions on each finding
  • Plan updated (if from file)
  • Summary presented
  • Remaining arc shown (based on plan type)
  • User chose next step (detail/implement or done)
  • Progress journal updated
  • Orphaned agents cleaned up </success_criteria>