product-customer-discovery

📁 piperubio/ai-agents 📅 2 days ago
3
Total installs
2
Weekly installs
#55864
Site-wide rank
Install command
npx skills add https://github.com/piperubio/ai-agents --skill product-customer-discovery

Agent install distribution

amp 2
github-copilot 2
codex 2
kimi-cli 2
gemini-cli 2
cursor 2

Skill documentation

Product Customer Discovery

Purpose

  • Reduce product/market risk by learning how target users behave today, what problems they truly have, and what they already do to solve them.
  • Turn qualitative conversations into decisions: who to build for (ideal customer profile, ICP), what to solve (problem framing), and what to test next (experiments).

Quick triggers

Use this skill when the user asks for:

  • “product customer discovery”, “user research”, “problem interviews”, “exploratory interviews”
  • “write an interview script/guide”, “what questions should I ask users”
  • “define ICP/personas/segments”, “JTBD”, “pain points”, “opportunity sizing (qual)”
  • “synthesize interview notes”, “extract themes/insights”, “create a discovery report”

Inputs to ask for (minimum)

  1. Product/service and stage (idea, MVP, growth) + decision(s) discovery must unblock
  2. Target audience hypotheses (who) and problem hypotheses (what/why)
  3. Constraints: timeline, number of interviews, geography/language, incentives, recruiting channels

Outputs (suggested)

  • Discovery plan: goals, hypotheses, target segments, recruiting criteria, timeline
  • Interview guide: opening, context questions, story prompts, probing, wrap-up
  • Synthesis: themes + evidence (quotes), JTBD/pains/gains, opportunity areas, risks/unknowns
  • Next steps: prioritized experiments (e.g., landing page, concierge test, prototype test)

Core workflow (end-to-end)

  1. Align on outcomes: confirm what decision will be made from the research and what “good evidence” looks like.
  2. Define hypotheses: write 5–10 falsifiable statements (ICP, problem, willingness, constraints, alternatives).
  3. Select participants: define inclusion/exclusion criteria, quotas across segments, and screening questions.
  4. Design the interview:
    • prefer “tell me about the last time…” over “would you use…”
    • focus on current behavior, existing alternatives, constraints, and consequences
  5. Run interviews:
    • start with rapport + consent; keep it conversational
    • ask for specific incidents; probe for frequency, severity, triggers, and workarounds
    • capture verbatims and observable facts; separate facts from interpretations
  6. Synthesize:
    • affinity-map notes into themes; label with evidence + confidence
    • map to JTBD (situation → motivation → desired outcome) and pains/gains
    • identify “strong signals” (repeated patterns, costly workarounds, high stakes)
  7. Decide & recommend:
    • rank opportunities by severity, frequency, reachable audience, and differentiation
    • propose next experiments with clear success metrics and cheapest test first
  8. Share readout: present insights, what changed vs. assumptions, open questions, and the plan.
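The ranking in step 7 can be made explicit with a simple weighted score. The sketch below is illustrative only: the weights, 1–5 scales, and opportunity names are assumptions for demonstration, not part of the skill.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    """One candidate problem area surfaced during synthesis."""
    name: str
    severity: int         # 1-5: cost of the problem when it occurs
    frequency: int        # 1-5: how often participants hit it
    reach: int            # 1-5: size of the reachable audience
    differentiation: int  # 1-5: how poorly existing alternatives serve it

    def score(self, weights=(0.35, 0.30, 0.20, 0.15)) -> float:
        # Example weights favoring severity and frequency; tune per context.
        ws, wf, wr, wd = weights
        return (ws * self.severity + wf * self.frequency
                + wr * self.reach + wd * self.differentiation)

def rank(opportunities):
    """Highest-scoring opportunities first."""
    return sorted(opportunities, key=lambda o: o.score(), reverse=True)

# Hypothetical opportunities from a synthesis session:
ops = [
    Opportunity("manual reporting", severity=4, frequency=5, reach=3, differentiation=2),
    Opportunity("onboarding friction", severity=5, frequency=2, reach=4, differentiation=4),
]
ranked = rank(ops)
```

A transparent score like this keeps the "decide & recommend" step auditable: anyone reviewing the readout can challenge an individual rating or weight rather than the conclusion as a whole.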

Interview guide template (outline)

  1. Intro: who you are, purpose, confidentiality, recording consent, timebox
  2. Background: role, context, responsibilities, tools/workflow
  3. Story prompts (core): “Walk me through the last time you…”
  4. Probing:
    • triggers: “what started this?”
    • frequency: “how often?”
    • severity: “what happens if you don’t solve it?”
    • alternatives: “what did you try? why that? what did it cost?”
    • decision-making: “who’s involved? what’s the budget/approval path?”
  5. Wrap-up: biggest pain, ideal outcome, who else to talk to, follow-up permission
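The probe categories above can also be kept as a small reusable question bank, so every interview guide draws from the same tested phrasings. A minimal sketch; the function name and question wordings are illustrative, not prescribed by the skill:

```python
# Illustrative question bank mirroring the probe categories above.
PROBES = {
    "triggers": ["What started this?"],
    "frequency": ["How often does this come up?"],
    "severity": ["What happens if you don't solve it?"],
    "alternatives": ["What did you try? Why that? What did it cost?"],
    "decision-making": ["Who's involved? What's the budget/approval path?"],
}

def build_probe_section(topic: str,
                        categories=("triggers", "frequency",
                                    "severity", "alternatives")):
    """Assemble the probing section for one story prompt."""
    lines = [f'Story prompt: "Walk me through the last time you {topic}."']
    for cat in categories:
        for q in PROBES[cat]:
            lines.append(f"  - [{cat}] {q}")
    return "\n".join(lines)

guide = build_probe_section("had to fix a broken report")
```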

Quality checklist

  • Goals and hypotheses are explicit and falsifiable
  • Participant criteria and screening reduce bias (not just “friends and fans”)
  • Questions avoid leading language and future hypotheticals
  • Notes separate verbatims/facts from interpretations
  • Insights are backed by evidence, not anecdotes
  • Recommendations include next experiments and success metrics

Common mistakes (avoid)

  • Asking for feature opinions instead of behavior (“Would you use X?”)
  • Interviewing only easy-to-reach users and generalizing
  • Treating one loud quote as a “theme” without triangulation
  • Skipping the decision step (insights without a recommendation and next tests)