running-design-reviews

Total installs: 3 · Weekly installs: 3 · Site rank: #62524

Install command
npx skills add https://github.com/liqiongyu/lenny_skills_plus --skill running-design-reviews

Agent install distribution
- claude-code: 3
- codex: 3
- opencode: 3
- gemini-cli: 3
- qoder: 3
- trae: 3

Skill documentation
Running Design Reviews
Scope
Covers
- Planning a design review with a clear decision and requested feedback type(s)
- Running a live demo-centered critique (or async review when needed)
- Capturing feedback without "design-by-committee"
- Synthesizing feedback using Value → Ease of Use → Delight prioritization
- Recording decisions, tradeoffs, and follow-ups so the review changes the work
When to use
- "Prepare and run a design critique for this Figma prototype."
- "We need a structured design review agenda and feedback log."
- "Help us review this flow and decide what to change before we ship."
- "Turn messy comments into prioritized feedback + next steps."
When NOT to use
- You don't have a defined problem, target user, or goal yet (use problem-definition first).
- You need build-ready interaction specs / acceptance criteria (use writing-specs-designs).
- You need evidence from users rather than expert critique (use usability-testing).
- You're doing launch planning, comms, rollout/rollback (use shipping-products).
Inputs
Minimum required
- Design artifact(s): link(s) or screenshots (e.g., Figma/prototype) + what parts are in scope
- The decision needed (what will change after the review)
- Target user + job-to-be-done (1–2 sentences)
- Success criteria (1–3) and constraints (time, platform, accessibility, tech)
- Review format + logistics: live vs async, time box, attendees/roles
Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md, then proceed.
- If answers aren't available, make explicit assumptions and clearly label them.
- Do not request secrets or credentials.
Outputs (deliverables)
Produce a Design Review Pack in Markdown (in-chat by default; write to files if requested), in this order:
- Design review brief / pre-read (context, decision, requested feedback, links)
- Agenda + facilitation script (timed, prompts, roles)
- Feedback log (captured + categorized + prioritized)
- Decision record (decisions, tradeoffs, owners, due dates)
- Follow-up message + next review plan (what changed, what's next)
- Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
Workflow (7 steps)
1) Classify the review and lock the decision
- Inputs: Request + artifact(s) + constraints.
- Actions: Identify the review type (concept / flow / content / visual polish / ship-readiness). Write the decision statement ("After this review we will decide ___").
- Outputs: Review type + decision statement + scope boundary (in/out).
- Checks: Everyone can answer: "What will change after this review?"
2) Set the requested feedback (and what NOT to comment on)
- Inputs: Decision statement + stage of design.
- Actions: Specify 1–3 feedback questions (e.g., "Is the value proposition clear?", "Where does the flow break?", "What edge cases are missing?"). Explicitly defer aesthetics/minutiae until Value/Ease are validated.
- Outputs: Requested feedback list + "out of scope" feedback.
- Checks: Feedback questions map directly to the decision.
3) Assign roles (incl. a sponsor) and prepare a live demo
- Inputs: Attendees list + timeline/risk.
- Actions: Assign: Presenter, Facilitator, Note-taker, and a Sponsor/DRI (senior owner who focuses on "why" + core concept). Decide whether leadership must review all user-facing screens before ship (for high-craft products).
- Outputs: Roles list + demo plan (what will be shown, in what order).
- Checks: Decision rights are clear; the review is anchored in a live demo, not a slide deck.
4) Produce the pre-read (context first, then artifacts)
- Inputs: references/TEMPLATES.md (brief template) + project context.
- Actions: Write a 1–2 page brief: problem → user → success criteria → constraints → options considered → risks/tradeoffs → open questions → links.
- Outputs: Shareable pre-read + âhow to reviewâ instructions.
- Checks: A reviewer can give useful feedback asynchronously without a live context dump.
5) Run the review (big picture → Value → Ease → Delight)
- Inputs: Agenda + demo + notes/feedback log.
- Actions: Start with goals/feelings ("What's bothering us overall?"), then evaluate:
- Value: is it solving the right problem?
- Ease: can users do it without friction?
- Delight: polish, aesthetics, extra joy (only after 1–2). Capture feedback as observations + impact + suggestion, not opinions.
- Outputs: Filled feedback log with categories and severities.
- Checks: The review does not get stuck in minutiae before Value/Ease are resolved.
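A feedback log entry from step 5 can be sketched as a small data structure. This is a minimal sketch: the field names and the 1–3 severity scale are assumptions for illustration, not the format defined in references/TEMPLATES.md.

```python
from dataclasses import dataclass

# Categories follow the review order: Value first, then Ease, then Delight.
CATEGORIES = ("value", "ease", "delight")


@dataclass
class FeedbackEntry:
    observation: str         # what the reviewer saw, not an opinion
    impact: str              # why it matters for the user or the decision
    suggestion: str          # a proposed change (may be empty)
    category: str = "value"  # one of CATEGORIES
    severity: int = 2        # assumed scale: 1 = blocker, 2 = major, 3 = minor

    def __post_init__(self):
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")


entry = FeedbackEntry(
    observation="Users skip the pricing step in the prototype",
    impact="Value proposition is never seen before sign-up",
    suggestion="Surface pricing on the landing screen",
    category="value",
    severity=1,
)
```

Keeping observation, impact, and suggestion as separate fields makes it easy to spot bare opinions: an entry with no observation or impact is a signal to ask the reviewer "what did you see, and why does it matter?"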
6) Synthesize + prioritize feedback into a change plan
- Inputs: Feedback log.
- Actions: Deduplicate comments; resolve conflicts by returning to goals and constraints; prioritize by user impact and risk. Convert top items into explicit changes with owners and due dates.
- Outputs: Prioritized change list + updated feedback log status/owners.
- Checks: Top 3 issues are clear; each has a proposed action and owner.
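The deduplicate-and-prioritize pass in step 6 can be sketched as a sort keyed on category order (Value before Ease before Delight) and severity. The tuple shape and severity scale are illustrative assumptions.

```python
# Each item: (category, severity, text). Lower sort key = higher priority.
CATEGORY_ORDER = {"value": 0, "ease": 1, "delight": 2}


def prioritize(items):
    """Drop exact-duplicate comments, then sort by category (Value first),
    then by severity (1 = blocker before 3 = minor)."""
    unique = list(dict.fromkeys(items))  # preserves first-seen order
    return sorted(unique, key=lambda it: (CATEGORY_ORDER[it[0]], it[1]))


raw = [
    ("delight", 3, "Button shadow feels heavy"),
    ("value", 1, "Flow never states the core benefit"),
    ("ease", 2, "Step 3 requires re-entering the email"),
    ("value", 1, "Flow never states the core benefit"),  # duplicate
]
top = prioritize(raw)
# top[0] is the Value blocker; the duplicate appears once.
```

Real feedback rarely duplicates verbatim, so in practice the dedupe step is a human judgment call; the sort only encodes the Value → Ease → Delight ordering the review already follows.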
7) Decide, document tradeoffs, and close the loop
- Inputs: Proposed change plan + remaining open questions.
- Actions: Record decisions and rationale; list tradeoffs and risks; define what must be re-reviewed. Send a follow-up summary and schedule the next review or ship gate.
- Outputs: Decision record + follow-up message + Risks/Open questions/Next steps.
- Checks: Decisions and action items are captured in writing; no critical decision is left implicit.
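The decision record from step 7 can also be kept as structured data so overdue action items are easy to surface. The keys, owner names, and dates below are an assumed shape for illustration, not the TEMPLATES.md format.

```python
from datetime import date

decision_record = {
    "decision": "Ship the simplified onboarding flow",
    "rationale": "Value and Ease issues resolved; Delight items deferred",
    "tradeoffs": ["Defers animated walkthrough to next release"],
    "actions": [
        {"item": "Surface pricing on landing screen",
         "owner": "design-lead", "due": date(2025, 7, 1)},
    ],
    "re_review": "Visual polish pass before ship gate",
}


def open_actions(record, today):
    """Return action items whose due date has not yet passed."""
    return [a for a in record["actions"] if a["due"] >= today]
```

A record like this doubles as the follow-up message body: decisions and rationale up top, then the action list with owners and due dates.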
Quality gate (required)
- Use references/CHECKLISTS.md and score with references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.