designing-surveys
npx skills add https://github.com/liqiongyu/lenny_skills_plus --skill designing-surveys
Skill documentation
Designing Surveys
Scope
Covers
- Designing product surveys that answer a specific decision (not "general feedback")
- Choosing the right audience, sampling, and timing (including "best customers" cohorts)
- Writing clear, unbiased questions and using good scales (CSAT vs NPS guidance)
- Building an instrument that works on mobile (logic, required fields, option visibility)
- Planning analysis and turning results into decisions and follow-ups
When to use
- "Design a customer survey for…"
- "Create an onboarding survey to profile users / separate buyer vs user."
- "We need a CSAT/NPS/PMF survey."
- "Draft a cancellation / churn survey."
- "Help me write survey questions and an analysis plan."
When NOT to use
- You need deep "why" stories and context (use conducting-user-interviews)
- You need to measure causal impact of a change (use an experiment/A/B test, not a survey)
- Your reachable sample is extremely small (n < ~30) and you need directional insight; interviews may be better
- The topic is high-risk (legal/medical/safety) or requires formal survey science review; involve an expert
Inputs
Minimum required
- Product + target user(s)/segment(s)
- The decision to make (what will change based on the survey) + deadline
- Survey type (e.g., onboarding profiling, CSAT, NPS, PMF, churn, feature discovery)
- Distribution channel(s) (in-product, email, customer success, etc.) + sampling constraints
- Privacy/compliance constraints (what data you can/can't collect)
Missing-info strategy
- Ask up to 5 questions from references/INTAKE.md.
- If still missing, proceed with explicit assumptions and list Open questions that would change the design.
Outputs (deliverables)
Produce a Survey Pack in Markdown (in-chat; or as files if the user requests):
- Context snapshot (decision, audience, channel, constraints)
- Survey brief (goal, target population, sampling, timing, success criteria)
- Questionnaire (questions with rationale + response types; question IDs)
- Survey instrument table (copy/paste-ready for building in a survey tool)
- Analysis + reporting plan (segments, cuts, coding plan, decision thresholds)
- Launch plan + QA checklist (pilot, mobile QA, bias checks, comms, follow-ups)
- Risks / Open questions / Next steps (always included)
Templates: references/TEMPLATES.md
Expanded heuristics: references/WORKFLOW.md
Workflow (7 steps)
1) Intake + decision framing
- Inputs: User context; references/INTAKE.md.
- Actions: Clarify the decision, timeline, primary audience, and distribution channel(s). Name the "unknowns" the survey must resolve.
- Outputs: Context snapshot + survey goal.
- Checks: You can state the decision in one sentence ("We are deciding whether to… by …").
2) Define the audience + sampling plan (who, when, how many)
- Inputs: Context snapshot.
- Actions: Choose primary segment(s) and a sampling frame. Prefer behavior/recency-based cohorts (e.g., "signed up 3–6 months ago and active") when you need accurate recall.
- Outputs: Sampling plan (in brief) + segment cuts.
- Checks: You can explain why each segment is included and what decision it informs.
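The "how many" part of the sampling plan can be sanity-checked with the standard sample-size formula for estimating a proportion. A minimal sketch (the 95% z-score, ±5-point margin, and 2,000-person audience are illustrative assumptions, not values from this skill's references):

```python
import math

def sample_size(margin=0.05, confidence_z=1.96, p=0.5, population=None):
    """Minimum completed responses to estimate a proportion.

    margin: desired margin of error (0.05 = +/-5 points)
    confidence_z: z-score (1.96 ~ 95% confidence)
    p: expected proportion (0.5 is the most conservative choice)
    population: reachable audience size, for the finite-population correction
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    if population:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    return math.ceil(n)

# With ~2,000 reachable users, +/-5 points at 95% confidence:
print(sample_size(margin=0.05, population=2000))  # 323
```

With small reachable populations the finite-population correction matters; and as noted under "When NOT to use," below roughly 30 completed responses interviews are usually the better tool.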
3) Choose the measurement design (metrics, scales, prioritization)
- Inputs: Survey goal + audience.
- Actions: Pick the core metric(s) (often CSAT); add 1–2 diagnostic questions that force prioritization (e.g., "pick top 3 barriers") and frequency/impact weighting.
- Outputs: Measurement plan (metric + diagnostics) + draft question list.
- Checks: Every question maps to a decision, hypothesis, or segment cut; no "nice-to-have" questions.
4) Draft the questionnaire (sections, wording, and logic)
- Inputs: Measurement plan; templates.
- Actions: Write questions using neutral wording, single concepts per question, and consistent scales. Add segmentation/profile questions only if you will use them in analysis.
- Outputs: Questionnaire with question IDs, response types, options, and skip logic notes.
- Checks: No double-barreled or leading questions; completion time target ≤ 3–6 minutes for most surveys.
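Some of these wording checks can be linted mechanically before human review. A rough heuristic sketch (the trigger phrases and thresholds are illustrative and will miss plenty; it does not replace a manual bias check):

```python
def wording_flags(question):
    """Crude lint for draft survey questions (heuristics only; review manually)."""
    flags = []
    text = question.lower()
    if " and " in text and "?" in text:
        flags.append("check for double-barreled wording (two concepts joined by 'and')")
    for phrase in ("don't you agree", "wouldn't you", "isn't it"):
        if phrase in text:
            flags.append("leading phrasing")
    if len(question.split()) > 25:
        flags.append("long question; consider splitting or simplifying")
    return flags

print(wording_flags("How satisfied are you with onboarding and billing?"))
# → ["check for double-barreled wording (two concepts joined by 'and')"]
```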
5) Build the instrument table + QA it (mobile + bias)
- Inputs: Questionnaire draft.
- Actions: Convert to an instrument table for implementation (IDs, types, options, required, logic). Check mobile rendering (all scale points visible) and option order/randomization.
- Outputs: Survey instrument table + QA checklist items.
- Checks: Scale labels are unambiguous; required questions are minimal; "Other (free text)" exists when appropriate.
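The instrument table converts naturally to rows that can be QA'd in code. A minimal sketch, assuming an illustrative row schema (id, type, text, options, required) rather than any particular survey tool's format:

```python
# Illustrative instrument rows; field names are an assumption, not a tool's schema.
instrument = [
    {"id": "Q1", "type": "scale_1_5", "required": True,
     "text": "How satisfied are you with the product?", "options": None},
    {"id": "Q2", "type": "multi_select_max3", "required": True,
     "text": "Which of these slow you down most? (pick up to 3)",
     "options": ["Setup", "Search", "Exports", "Other (free text)"]},
    {"id": "Q3", "type": "free_text", "required": False,
     "text": "Anything else we should know?", "options": None},
]

def qa_issues(rows):
    """Flag common instrument problems before launch."""
    issues = []
    if sum(r["required"] for r in rows) > len(rows) // 2 + 1:
        issues.append("too many required questions")
    for r in rows:
        opts = r["options"] or []
        if opts and not any(o.lower().startswith("other") for o in opts):
            issues.append(f"{r['id']}: choice question without an 'Other' option")
    return issues

print(qa_issues(instrument))  # [] means this draft passes
```

Mobile rendering (all scale points visible) still needs a real-device check; only the structural rules lint cleanly.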
6) Plan the launch (pilot, comms, monitoring, follow-ups)
- Inputs: Instrument + sampling plan.
- Actions: Define pilot (small n), launch dates, reminders, incentives, and a monitoring plan. If the goal is message validation, consider a behavioral "survey" via ad/landing tests instead of asking opinions.
- Outputs: Launch plan + monitoring metrics (response rate, drop-off, segment mix).
- Checks: You have a plan for low response rate and for closing the loop with respondents.
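The monitoring metrics named here (response rate, drop-off, segment mix) reduce to a small daily snapshot. A sketch with made-up numbers and an assumed two-segment target mix:

```python
def monitoring_snapshot(invited, started, completed, segment_mix, target_mix):
    """Daily launch-monitoring numbers: response rate, drop-off, segment skew."""
    response_rate = completed / invited
    drop_off = (1 - completed / started) if started else 0.0
    # Skew: how far each segment's share of completes is from the target share.
    skew = {s: round(segment_mix.get(s, 0) - target_mix[s], 3) for s in target_mix}
    return {"response_rate": round(response_rate, 3),
            "drop_off": round(drop_off, 3),
            "segment_skew": skew}

print(monitoring_snapshot(
    invited=1200, started=260, completed=190,
    segment_mix={"admin": 0.22, "end_user": 0.78},
    target_mix={"admin": 0.30, "end_user": 0.70},
))
# {'response_rate': 0.158, 'drop_off': 0.269, 'segment_skew': {'admin': -0.08, 'end_user': 0.08}}
```

A persistent negative skew on a decision-critical segment is the trigger for the low-response-rate contingency (reminders, incentives, or a targeted re-send).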
7) Analysis + report plan + quality gate
- Inputs: Final instrument + goals.
- Actions: Define how you'll analyze (segments, cuts, coding of open-ended), the decision thresholds, and how results will be communicated. Run references/CHECKLISTS.md and score references/RUBRIC.md. Add Risks/Open questions/Next steps.
- Outputs: Final Survey Pack.
- Checks: A stakeholder can review async and decide "ship / adjust / investigate" without another meeting.
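For the analysis plan, the core cuts (metric by segment, ranked diagnostics) can be expressed compactly. A sketch over hypothetical response tuples, using the common "% rating 4–5" CSAT definition (an assumption; match whatever definition your team reports):

```python
from collections import defaultdict

# Hypothetical responses: (segment, csat_score_1_to_5, barriers_selected)
responses = [
    ("admin", 4, ["Exports"]),
    ("admin", 2, ["Setup", "Search"]),
    ("end_user", 5, []),
    ("end_user", 3, ["Search"]),
    ("end_user", 4, ["Search", "Exports"]),
]

def csat_by_segment(rows):
    """Percent of respondents rating 4 or 5, per segment."""
    totals, satisfied = defaultdict(int), defaultdict(int)
    for segment, score, _ in rows:
        totals[segment] += 1
        satisfied[segment] += score >= 4
    return {s: round(100 * satisfied[s] / totals[s]) for s in totals}

def barrier_ranking(rows):
    """Barriers ranked by how many respondents selected them."""
    counts = defaultdict(int)
    for _, _, barriers in rows:
        for b in barriers:
            counts[b] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

print(csat_by_segment(responses))   # {'admin': 50, 'end_user': 67}
print(barrier_ranking(responses))   # [('Search', 3), ('Exports', 2), ('Setup', 1)]
```

The barrier ranking is what feeds a prioritized backlog (as in Example 2 below); decision thresholds should be written down before results arrive.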
Quality gate (required)
- Use references/CHECKLISTS.md and references/RUBRIC.md.
- Always include: Risks, Open questions, Next steps.
Examples
Example 1 (Onboarding): "Design an onboarding survey to identify the buyer vs user and route leads appropriately."
Expected: short profiling questions (3–4 screens), clear segmentation fields, and a follow-up plan to avoid irrelevant outreach.
Example 2 (Product friction): "Design a CSAT survey to find the top 3 productivity blockers for active users and how often they occur."
Expected: CSAT + forced-ranking diagnostics + frequency weighting, plus an analysis plan that yields a ranked backlog of issues.
Boundary example: "We want to know if feature X caused retention to improve, so send a survey."
Response: push back; recommend experiment/instrumentation for causality, and use a survey only for qualitative context (or run interviews).