deep-research

Total installs: 13
Weekly installs: 5
Site rank: #25083

Install command:
npx skills add https://github.com/srstomp/pokayokay --skill deep-research

Install distribution by agent: claude-code 3, trae 2, antigravity 2, windsurf 2, codex 2

Skill Documentation
Deep Research
Comprehensive investigation that informs major decisions. Produces structured reports for stakeholders.
Integrates with:
- ohno → Creates research task, tracks progress, logs findings
- project-harness → Works within session workflow
- spike → May spawn focused spikes for specific technical questions
- Domain skills → May invoke for specialized analysis
When to Use Deep Research vs Spike
| SPIKE (hours) | DEEP RESEARCH (days) |
|---|---|
| "Can we use X?" | "Which of X, Y, Z should we use?" |
| Single question | Multiple related questions |
| 2-4h time box | 1-5 days investigation |
| Team/self audience | Stakeholder audience |
| Decision + brief report | Comprehensive report + recommendations |
| PoC optional | Usually no code |
Use deep-research when:
- Decision impacts architecture, budget, or team direction
- Multiple viable options need systematic comparison
- Stakeholders need comprehensive rationale
- Industry context or competitive landscape matters
Use spike instead when:
- Single yes/no question
- Technical feasibility is the only concern
- Time-boxed answer is sufficient
Research Types
| Type | Purpose | Duration | Key Output |
|---|---|---|---|
| Technology Evaluation | Compare frameworks, libraries, services | 2-3 days | Comparison matrix + recommendation |
| Competitive Analysis | How others solve this problem | 1-2 days | Pattern synthesis + insights |
| Architecture Exploration | Design patterns for complex requirements | 2-4 days | Options analysis + trade-offs |
| Best Practices | Industry standards for X | 1-2 days | Consolidated guidelines |
| Vendor Evaluation | SaaS/tool selection | 2-3 days | Evaluation matrix + recommendation |
See references/research-types.md for detailed type-specific guidance.
Research Workflow
1. SCOPE: Define questions, criteria, constraints
2. GATHER: Identify sources, collect documentation, find examples
3. ANALYZE: Compare options against criteria, identify trade-offs
4. SYNTHESIZE: Form recommendations, document rationale
5. PRESENT: Structured report for stakeholders
Phase 1: Scope
Frame the investigation with clear boundaries:
## Research Definition
**Topic**: Authentication solutions for multi-tenant SaaS
**Primary Questions**:
1. Which auth providers support enterprise SSO + social login?
2. What are the cost implications at 10k, 100k, 1M users?
3. How complex is integration with our existing stack?
**Evaluation Criteria** (weighted):
- Enterprise features: 30%
- Developer experience: 25%
- Cost scalability: 25%
- Community/support: 20%
**Constraints**:
- Must support SAML and OIDC
- Self-hosted option preferred
- Budget: <$10k/month at scale
**Out of Scope**:
- Custom auth implementation
- Compliance certification research
Scoping Questions:
- What decision will this research inform?
- Who are the stakeholders?
- What criteria matter most?
- What constraints are non-negotiable?
- What’s explicitly out of scope?
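A scope definition like the one above can also be captured as data so the criterion weights are validated before analysis begins. A minimal Python sketch; the topic, criteria, and weights mirror the hypothetical example above and are not prescribed by this skill:

```python
# Hypothetical scope definition mirroring the example above.
scope = {
    "topic": "Authentication solutions for multi-tenant SaaS",
    "criteria": {  # weighted evaluation criteria (fractions of 1.0)
        "enterprise_features": 0.30,
        "developer_experience": 0.25,
        "cost_scalability": 0.25,
        "community_support": 0.20,
    },
    "constraints": ["SAML", "OIDC", "budget < $10k/month at scale"],
}

def validate_weights(criteria: dict[str, float]) -> None:
    """Fail fast if the criterion weights do not sum to 100%."""
    total = sum(criteria.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError(f"weights sum to {total:.2f}, expected 1.00")

validate_weights(scope["criteria"])
```

Catching a bad weight split here is cheaper than discovering it after the comparison matrix is scored.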
Phase 2: Gather
Build a comprehensive source base:
Source Priority (high to low):
- Official documentation, API references
- Engineering blogs from adopters at scale
- GitHub repos, sample implementations
- Conference talks, case studies
- Community discussions, Stack Overflow
- Marketing materials (verify claims)
See references/source-quality.md for evaluation criteria.
Gathering Patterns:
- Start with official docs for feature matrix
- Search for “[tool] at scale” case studies
- Check GitHub issues for pain points
- Look for migration stories (to AND from)
- Find pricing calculators, estimate at your scale
Phase 3: Analyze
Compare systematically against criteria:
## Comparison Matrix: Auth Providers
| Criteria | Auth0 | Clerk | Supabase Auth | WorkOS |
|----------|-------|-------|---------------|--------|
| Enterprise SSO | ✓ All major | ✓ All major | Limited | ✓ All major |
| Social Login | ✓ Extensive | ✓ Good | ✓ Good | ✓ Good |
| Self-hosted | ✗ No | ✗ No | ✓ Yes | ✗ No |
| Pricing (100k MAU) | $2,400/mo | $1,200/mo | $0 (self) | $4,500/mo |
| Next.js Integration | Good | Excellent | Good | Good |
| **Score** | 78/100 | 85/100 | 72/100 | 80/100 |
Analysis Patterns:
- Create comparison matrix for each criterion
- Document evidence for each claim
- Note gaps in information
- Identify hidden costs, complexity
- Consider migration/lock-in implications
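The Score row in the matrix can be reproduced mechanically: rate each option per criterion, multiply by the criterion weight, and sum. A Python sketch; the weights come from the scope example earlier, while the 0-100 ratings here are made-up values for demonstration:

```python
# Weights from the scope definition; ratings are hypothetical 0-100 scores
# assigned during analysis, one per criterion per option.
WEIGHTS = {
    "enterprise_features": 0.30,
    "developer_experience": 0.25,
    "cost_scalability": 0.25,
    "community_support": 0.20,
}

ratings = {
    "Clerk": {"enterprise_features": 75, "developer_experience": 95,
              "cost_scalability": 85, "community_support": 85},
    "Auth0": {"enterprise_features": 95, "developer_experience": 75,
              "cost_scalability": 60, "community_support": 85},
}

def weighted_score(option: dict[str, float]) -> float:
    """Overall 0-100 score: sum of rating x weight per criterion."""
    return sum(option[c] * w for c, w in WEIGHTS.items())

# Print options best-first.
for name, r in sorted(ratings.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(r):.0f}/100")
```

Keeping the per-criterion ratings explicit makes the Score row auditable: a stakeholder can challenge a single rating without re-litigating the whole recommendation.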
Phase 4: Synthesize
Form actionable recommendations:
## Synthesis
### Key Findings
1. All evaluated providers meet baseline requirements
2. Clerk offers best DX for our stack (Next.js + React)
3. Supabase Auth is cost-effective but enterprise features lag
4. Auth0 has most mature enterprise features but highest cost
### Trade-off Analysis
- **Clerk**: Best DX, good features, moderate cost
- Risk: Smaller company, less enterprise track record
- **Auth0**: Enterprise-proven, comprehensive features
- Risk: Cost scales aggressively, migration complexity
### Recommendation
**Primary**: Clerk for new development
**Rationale**: Best DX alignment, adequate enterprise features, competitive cost
**Contingency**: Auth0 if enterprise requirements expand significantly
Phase 5: Present
Deliver structured report:
# Research Report: Authentication Solutions
## Executive Summary
[1-2 paragraphs: context, key findings, recommendation]
## Background
[Why this research was needed, decision context]
## Methodology
[How research was conducted, sources used]
## Options Evaluated
[Brief description of each option]
## Comparison
[Matrix and detailed analysis]
## Recommendation
[Primary recommendation with rationale]
## Trade-offs & Risks
[What we're accepting with this choice]
## Follow-up Actions
[Spikes, implementation tasks, or additional research]
## Appendix
[Detailed data, sources, pricing breakdowns]
See assets/templates/research-report.md for full template.
ohno Integration
Creating Research Tasks
# Create research task
ohno add "Research: Auth solutions for multi-tenant SaaS" \
--type research \
--estimate 2d \
--tags research,auth,architecture
# Track progress with subtasks
ohno add "Gather Auth0 documentation" --parent <research-id>
ohno add "Gather Clerk documentation" --parent <research-id>
ohno add "Create comparison matrix" --parent <research-id>
ohno add "Write research report" --parent <research-id>
Progress Checkpoints
Day 1: Scope defined, sources identified
Day 2: Gathering complete, analysis started
Day 3: Synthesis forming, draft report
Day 4: Report complete, review
Day 5: Finalize and present
Creating Follow-up Tasks
Research often spawns additional work:
# Spike for technical validation
ohno add "Spike: Clerk integration with our Next.js setup" --type spike
# Implementation tasks
ohno add "Implement Clerk auth integration" --type task
ohno add "Create SSO onboarding flow" --type task
Output Structure
.claude/
├── research/
│   └── auth-solutions-2026-01-18/
│       ├── report.md            → Main research report
│       ├── comparison-matrix.md → Detailed comparisons
│       ├── sources.md           → Source log with notes
│       └── raw-notes/           → Gathering artifacts
├── PROJECT.md
└── tasks.db
Anti-Patterns
During Scoping
| Anti-Pattern | Example | Fix |
|---|---|---|
| No clear question | “Research auth” | Define decision to be informed |
| Unbounded scope | “Evaluate all options” | Set explicit constraints |
| Missing criteria | “Find the best one” | Define weighted evaluation criteria |
| No stakeholders | Research for its own sake | Identify who needs this |
During Gathering
| Anti-Pattern | Example | Fix |
|---|---|---|
| Marketing as evidence | “Vendor says it’s fast” | Find independent benchmarks |
| Single source | Only official docs | Diversify source types |
| Confirmation bias | Only positive reviews | Actively seek criticisms |
| Dated information | 3-year-old blog posts | Verify currency of sources |
During Synthesis
| Anti-Pattern | Example | Fix |
|---|---|---|
| Analysis paralysis | “Need more data” | Decide with available info |
| No recommendation | “It depends” | Make a call, document trade-offs |
| Hidden agenda | Cherry-picked evidence | Present all options fairly |
| Vague conclusion | “Consider X” | Specific, actionable recommendation |
Quality Checklist
Research Definition
- Primary questions clearly stated
- Evaluation criteria weighted
- Constraints documented
- Out of scope defined
- Timeline realistic for scope
Gathering
- Multiple source types used
- Official docs reviewed
- Real-world case studies found
- Pain points researched
- Sources documented with dates
Analysis
- All options evaluated fairly
- Comparison matrix complete
- Evidence cited for claims
- Gaps in knowledge noted
- Trade-offs explicit
Synthesis
- Clear recommendation made
- Rationale documented
- Risks acknowledged
- Alternatives noted
- Follow-up actions defined
Presentation
- Executive summary present
- Appropriate depth for audience
- Sources cited
- Appendix with details
- Report filed in .claude/research/
References
- references/research-types.md → Detailed guidance per research type
- references/source-quality.md → Evaluating source quality
- references/synthesis-patterns.md → Forming recommendations
- assets/templates/research-report.md → Full report template
- assets/templates/comparison-matrix.md → Matrix template