# charted-review

Install with: `npx skills add https://github.com/marmicode/skills --skill charted-review`
## Context
- designDocPath: $ARGUMENTS[0]
## Goal
Review the design doc at ${designDocPath} by dispatching four expert sub-agents in parallel, then synthesize their feedback into a single, actionable review.
## Step 1 – Set up agents
If the sub-agents used in Step 2 are not already configured, ask me to install them. If I confirm, copy the sub-agent definitions from the `./assets/agents` folder to the `{workspaceRoot}/.cursor/agents` folder when using Cursor; adapt the destination path if I am using a different client.
## Step 2 – Parallel Expert Reviews
Spawn the following four sub-agents simultaneously (all in one message). Each agent receives the full design doc content and must return a structured review.
| Sub-agent type | Focus area |
|---|---|
| `accessibility-expert` | Accessibility of the proposed UI – ARIA, keyboard navigation, screen-reader support, color contrast, focus management, semantic HTML. |
| `security-analyst` | Security implications – input validation, injection risks, auth/authz gaps, data exposure, secure defaults. |
| `ux-expert` | Usability – interaction patterns, error/empty/loading states, information architecture, responsiveness, cognitive load. |
| `xp-coach` | Engineering practices – testability, incremental delivery, PR plan quality, simplicity, YAGNI, refactoring opportunities. |
Each sub-agent prompt must include:
- The full content of the design doc.
- Instructions to return a review with exactly these sections:
  - Praise – what the design does well (keep brief).
  - Concerns – numbered list. Each concern has a short title, an explanation, and a concrete suggestion.
  - Verdict – one of `approve`, `request-changes`, or `needs-discussion`.
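For illustration, a conforming review from one expert might look like the following (the findings are invented for the example):

```markdown
## Review – security-analyst

### Praise
Clear threat model section; parameterized queries are used consistently.

### Concerns
1. **Unvalidated redirect target** – the redirect URL comes straight from a
   query parameter. Suggestion: validate it against an allow-list of internal
   paths before redirecting.

### Verdict
request-changes
```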
## Step 3 – Synthesize & Detect Conflicts
Collect the four reviews. Present a summary table to the user:
| Expert | Verdict | # Concerns |
|---|---|---|
Then list all concerns grouped by expert.
After listing, identify conflicts – cases where two or more experts give contradictory guidance (e.g., one says “add a confirmation dialog” and another says “reduce interaction steps”, or one says “split into more PRs” and another says “too many PRs already”).
## Step 4 – Challenge Round (if conflicts exist)
For each conflict:
- Clearly describe the disagreement to both involved experts.
- Re-launch each conflicting sub-agent with:
  - The original design doc.
  - Their own original review.
  - The opposing expert’s concern that contradicts theirs.
  - Instructions to either revise their position or defend it with stronger justification.
- Run conflicting pairs in parallel when they are independent.
After the challenge round, check if the experts now agree.
## Step 5 – Resolution
- If all conflicts are resolved: present the final consolidated review with the agreed-upon changes.
- If any conflict remains unresolved: present the remaining disagreement(s) to the user in a clear format:
  > Unresolved: {short title}
  > {Expert A} argues: {summary of position}
  > {Expert B} argues: {summary of position}
  > What is your call?
Wait for the user’s decision on each unresolved conflict before producing the final consolidated review.
## Step 6 – Final Output
Produce a consolidated review with:
- Overall Verdict – `approve`, `request-changes`, or `approve-with-nits`, based on the aggregated outcome.
- Action Items – a numbered checklist of concrete changes to make to the design doc, ordered by priority.
- Resolved Conflicts – a brief note on how each conflict was settled (expert concession or user decision).
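The skill does not prescribe an exact aggregation rule, but one plausible reading of “aggregated outcome” is sketched below: any `request-changes` vote wins, a unanimous `approve` stays `approve`, and anything else (minor or resolved disagreements) becomes `approve-with-nits`.

```python
def overall_verdict(verdicts: list[str]) -> str:
    """Aggregate per-expert verdicts into a single overall verdict.

    Assumed rule (not mandated by the skill): any request-changes forces
    request-changes; unanimous approve yields approve; any other mix,
    such as lingering needs-discussion entries settled as minor nits,
    becomes approve-with-nits.
    """
    if "request-changes" in verdicts:
        return "request-changes"
    if all(v == "approve" for v in verdicts):
        return "approve"
    return "approve-with-nits"
```

For example, `overall_verdict(["approve", "needs-discussion", "approve", "approve"])` yields `approve-with-nits` under this rule.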