# Deep Research

Install: `npx skills add https://github.com/baphomet480/claude-skills --skill deep-research`
Produce Gemini Deep Research-quality output: rich artifacts with embedded screenshots, Mermaid diagrams, comparison tables, and narrative synthesis. Tuned for developer decisions: framework selection, architecture patterns, dependency evaluation, competitive analysis.
## When to Use This Skill
- “Research the current state of X”
- “Compare Framework A vs Framework B”
- “What are the best approaches for…”
- “Deep dive into…”
- Any request where the answer requires synthesizing information from many sources
Do NOT use for: quick factual lookups, single-source answers, or “find me a CSS button” (use design-lookup instead).
## Input Protocol: Before Any Search
- Decompose the topic into 3-5 research axes.
  - Example: “Compare Next.js vs Remix” → Performance, DX, Ecosystem, Deployment, Community
- Identify the decision context: what is the user actually deciding?
  - Framework choice? Architecture pattern? Build vs. buy? Migration risk?
- Draft a research plan: present 3-5 axes with planned queries to the user.
  - Save it as an artifact (e.g., `research_plan.md`; see the sketch below).
- Proceed on approval, or refine if the user redirects scope.
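A minimal `research_plan.md` sketch for the Next.js vs Remix example; the axes and queries are illustrative placeholders, not a fixed template:

```markdown
# Research Plan: Next.js vs Remix

| Axis | Planned queries |
|---|---|
| Performance | "next.js vs remix benchmark", "remix streaming SSR performance" |
| DX | "remix developer experience review", "next.js app router complaints" |
| Ecosystem | "remix community packages", "next.js ecosystem maturity" |
| Deployment | "self-hosting next.js in production", "remix deploy targets" |
| Community | npm download trends, GitHub issue/PR velocity |

Awaiting approval before starting Phase 1.
```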
## Phase 1: Breadth Scan
Goal: Map the landscape. Find what exists before reading anything.
- Run 5-8 parallel searches across different axes. Use at least two tools:
  - `tavily_search`: broad topic queries
  - `search_web`: an alternate search perspective
  - `tavily_research`: delegate an entire sub-question (powerful for “state of X” queries)
- Dev-specific breadth:
  - `search_code` or `search_repositories`: find relevant GitHub repos
  - Search npm trends, bundle sizes, and download counts when evaluating packages
  - Search for migration stories: “migrating from X to Y” experience reports
- Collect 15-25 candidate URLs, not 5. Score each by authority tier (see references/research-heuristics.md).
- Do not stop at snippets. Snippets are for candidate selection only.
Output: Candidate source list with tier ratings. Present it to the user if interactive; proceed if autonomous.
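For example, the Phase 1 output might look like this (tier labels follow references/research-heuristics.md; the entries are placeholders):

```markdown
## Candidate Sources
| Source | Axis | Tier |
|---|---|---|
| Official Next.js docs: rendering | Performance | S |
| Independent benchmark write-up | Performance | A |
| "Migrating from Next.js to Remix" experience report | DX | B |
```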
## Phase 2: Deep Read
Goal: Extract actual content (implementation details, code examples, benchmarks, data).
- Select the top 8-12 sources from Phase 1 (prioritize S- and A-tier).
- Full extraction: get the complete page content via
  - `tavily_extract` or `read_url_content` for text-heavy pages
  - `tavily_crawl` to follow multi-page documentation structures
  - `browser_subagent` to screenshot key pages (UIs, dashboards, architecture diagrams)
  - `get_file_contents` (GitHub MCP) to read actual source code from repos
- Analyze each source:
  - Extract specific claims, numbers, patterns, and code examples
  - Note the authority tier and any bias (is this the framework’s own marketing?)
  - Tag findings by research axis
- Self-correction: if a source is fluff (marketing-only, thin tutorial, SEO filler):
  - Discard it
  - Run a refined follow-up search with more specific terms
  - Try adding terms like “benchmark”, “technical deep dive”, “lessons learned”, “postmortem”
Output: Annotated source notes organized by axis.
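One possible note format, with angle-bracket placeholders standing in for extracted content:

```markdown
## Axis: Performance
### [Source Name](https://example.com/post) (Tier A)
- Claim: <specific number, benchmark, or pattern, quoted or paraphrased>
- Code: <snippet or link to the exact file>
- Bias: <e.g., vendor's own blog, so verify marketing claims independently>
```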
## Phase 3: Synthesis
Goal: Build the research briefing artifact. This is the main deliverable.
- Choose the report template from references/report-templates.md:
  - Comprehensive Brief: for landscape/state-of-the-art research
  - Comparison Brief: for head-to-head evaluations
- Write the report as a rich markdown artifact:
  - Narrative prose in the executive summary: not bullets, not lists. Write as if briefing a tech lead.
  - Comparison tables with real data extracted from sources
  - Mermaid diagrams for architecture, decision trees, and ecosystem maps (see the example below)
  - Embedded screenshots captured via `browser_subagent` during Phase 2
  - Code examples pulled from actual repos or docs
- Use `generate_image` for custom visualizations when no screenshot captures the concept.
- Cite every claim: link to the source URL inline, using the format `[Source Name](URL)`.
- Gap analysis: explicitly call out
  - What couldn’t be determined and why
  - Conflicting information between sources
  - Areas where only low-tier sources were found
Output: The research artifact (e.g., `research_report.md`).
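As an illustration, a minimal Mermaid decision tree for the framework-choice example (labels are placeholders, not a recommendation):

```mermaid
graph TD
    Q1{Need streaming SSR?} -->|No| E1[Either framework fits]
    Q1 -->|Yes| Q2{Must self-host?}
    Q2 -->|Yes| E2[Weight the Deployment axis heavily]
    Q2 -->|No| E3[Weight the Ecosystem and DX axes]
```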
## Phase 4: Iteration
Goal: Fill gaps identified in Phase 3.
- Review the gap analysis section of your report.
- For each fillable gap:
  - Run 1-2 targeted searches with refined queries
  - Extract and read the results
  - Update the report artifact in place
- Max 3 total iterations (Phases 1-3 count as round 1, then up to 2 more targeted rounds).
- After the final iteration, mark remaining gaps as “Unresolved” with an explanation, as in the sketch below.
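An unresolved gap might be recorded like this (the wording is illustrative):

```markdown
## Gaps
- **Unresolved:** long-term maintenance cost comparison. Two targeted
  rounds surfaced only B-tier anecdotes; no independent data was found.
```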
## Tool Strategy
| Purpose | Primary | Fallback |
|---|---|---|
| Topic discovery | `tavily_search` | `search_web` |
| Delegated deep research | `tavily_research` | Manual multi-search |
| Full page extraction | `tavily_extract` | `read_url_content` |
| Multi-page docs | `tavily_crawl` | `tavily_map` + manual |
| Visual evidence | `browser_subagent` (screenshot) | `generate_image` |
| GitHub analysis | `search_code`, `get_file_contents` | `read_url_content` on raw GitHub |
| Architecture diagrams | Mermaid in markdown | `generate_image` |
| Data visualization | Markdown tables | `generate_image` for charts |
## Quality Gates
Before delivering the report, verify:
- Source diversity: at least 1 S-tier and 2 A-tier sources cited (or explicitly flagged as unavailable)
- Visual richness: at least 1 screenshot or image AND 1 diagram or table embedded
- Narrative quality: the executive summary reads as prose, not bullet points
- Citation completeness: every factual claim links to a source
- Gap transparency: gaps and conflicts are explicitly documented
- Actionable output: a recommendations section exists with ranked, specific advice
## Anti-Patterns
- Snippet-only research: stopping at search-result descriptions without full extraction
- Text-wall reports: no visuals, no tables, no diagrams. The whole point is richness.
- Source-by-source organization: findings must be grouped thematically by research axis, not by URL
- Single-tool reliance: use at least 2 different search/extraction tools for source diversity
- Uncited claims: every substantive finding must link to its source
- Marketing echo: repeating a framework’s own marketing claims without independent verification
- Premature stopping: delivering after 3-5 sources when the topic warrants 15+
## References
- Source authority scoring and query patterns: references/research-heuristics.md
- Report structure templates: references/report-templates.md