research

📁 benredmond/apex 📅 Jan 26, 2026

Total installs: 4
Weekly installs: 2
Site-wide rank: #50696

Install command:

npx skills add https://github.com/benredmond/apex --skill research

Agent install distribution:

opencode 2
kilo 2
antigravity 2
claude-code 2
github-copilot 2

Skill documentation

This is the first phase of the APEX workflow. It gathers all intelligence needed for planning and implementation.

Please provide one of the following:

  • Task description (e.g., “implement dark mode toggle”)
  • Linear/JIRA ticket ID (e.g., “APE-59”)
  • Path to task file (e.g., “./tickets/feature.md”)
  • Existing APEX task ID

I’ll analyze patterns, explore the codebase, find similar tasks, and create a detailed research document. If the task input was already supplied, begin research immediately and skip this message.

  • Text description: create a task entry with intent, inferred type, generated identifier, and tags
  • Ticket ID (e.g., “APE-59”): fetch ticket details (if available), then create a task entry with the identifier set to the ticket ID
  • File path: read the file fully, parse its content, then create a task entry
  • Database ID: look up the existing task by ID to retrieve it

Store taskId and identifier for all subsequent operations.
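The input dispatch above can be sketched as a small classifier; the heuristics and function name below are illustrative assumptions, not APEX's actual logic:

```python
import re

def classify_input(user_input: str) -> str:
    """Classify which of the four entry modes the input matches.
    Heuristics here are illustrative, not APEX's real dispatch."""
    if re.fullmatch(r"[A-Z]+-\d+", user_input):
        return "ticket_id"     # e.g. "APE-59": fetch details, reuse as identifier
    if user_input.endswith(".md") or "/" in user_input:
        return "task_file"     # read fully, parse content
    if user_input.isdigit():
        return "database_id"   # look up the existing task
    return "description"       # infer type, generate identifier and tags

print(classify_input("APE-59"))                 # ticket_id
print(classify_input("./tickets/feature.md"))   # task_file
```

Whatever the mode, the result is a task entry whose `taskId` and `identifier` are carried through the rest of the workflow.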

  1. Add Specificity: Transform vague terms into concrete requirements

    • “improve performance” → “reduce API response time below 200ms”
    • “fix the bug” → “prevent null pointer when user has no profile”
  2. Structure Requirements: Break into testable acceptance criteria

    • Given [context], When [action], Then [expected result]
  3. Include Testing: How will we verify success?

    • Unit test expectations
    • Integration test scenarios
    • Manual verification steps
  4. Pattern Enhancement: Check existing patterns or similar past tasks

    • What worked before?
    • What failed and why?
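For instance, the null-profile criterion above could translate into a Given/When/Then test skeleton like this (the `Settings` class and its default theme are hypothetical):

```python
class Settings:
    """Hypothetical model: falls back to defaults when no profile exists."""
    def __init__(self, profile=None):
        self.profile = profile

    def theme(self):
        # Guard against the null-profile case instead of raising.
        return self.profile["theme"] if self.profile else "light"

def test_no_profile_uses_defaults():
    # Given a user with no profile
    settings = Settings(profile=None)
    # When the theme is read
    theme = settings.theme()
    # Then a default is returned instead of a null-pointer error
    assert theme == "light"

test_no_profile_uses_defaults()
```

Each acceptance criterion phrased this way maps directly onto one such test.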

Vague Goals:

  • “improve”, “enhance”, “optimize” without metrics
  • “fix the bug” without reproduction steps
  • “make it better” without criteria

Unclear Scope:

  • No defined boundaries (what’s in/out)
  • Multiple interpretations possible
  • Triage scan surfaces multiple plausible entrypoints/tests

Technical Choices:

  • Triage scan shows multiple candidate libraries/approaches
  • Architecture decisions user should make
  • Technology/library selection needed

Missing Constraints:

  • No performance requirements
  • No security requirements specified
  • No compatibility requirements
```python
def assess_ambiguities(enhanced_prompt, triage_scan):
    ambiguities = []

    # Check each category
    if has_vague_goals(enhanced_prompt):
        ambiguities.append({"type": "vague_goal", "question": "..."})

    if has_unclear_scope(enhanced_prompt, triage_scan):
        ambiguities.append({"type": "unclear_scope", "question": "..."})

    if needs_technical_choice(triage_scan):
        ambiguities.append({"type": "technical_choice", "question": "..."})

    if missing_constraints(enhanced_prompt):
        ambiguities.append({"type": "missing_constraint", "question": "..."})

    return ambiguities
```
</assessment-logic>
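The predicate helpers called in the assessment logic (e.g. `has_vague_goals`) are left abstract; one minimal heuristic, with an assumed word list, might look like:

```python
# Assumed vocabulary, not part of the skill itself.
VAGUE_TERMS = {"improve", "enhance", "optimize", "make it better", "fix the bug"}
METRIC_HINTS = ("ms", "%", "seconds", "below", "under", "at least")

def has_vague_goals(enhanced_prompt: str) -> bool:
    """True if the prompt uses a vague verb without any measurable target."""
    text = enhanced_prompt.lower()
    uses_vague_term = any(term in text for term in VAGUE_TERMS)
    has_metric = any(hint in text for hint in METRIC_HINTS)
    return uses_vague_term and not has_metric

print(has_vague_goals("improve performance"))                   # True
print(has_vague_goals("reduce API response time below 200ms"))  # False
```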

<decision>
- **0 ambiguities**: PROCEED to spawn parallel research agents
- **1+ ambiguities**: ASK USER before spawning deep research agents

**Question Format**:

Before I spawn deep research agents, I need to clarify:

[For each ambiguity, ONE focused question]

  1. [Category]: [Specific question]?
    • Option A: [Choice with implication]
    • Option B: [Choice with implication]
    • Option C: [Let me know your preference]
</decision>

<max-rounds>
Maximum 1 clarification round. After user responds:
- Incorporate answers into enhanced_prompt
- Proceed to spawn parallel research agents (do NOT ask more questions)
</max-rounds>
</step>

<step id="5" title="Create task file">
<instructions>
Create `./apex/tasks/[identifier].md` with frontmatter:

```markdown
---
id: [database_id]
identifier: [identifier]
title: [Task title]
created: [ISO timestamp]
updated: [ISO timestamp]
phase: research
status: active
---

# [Title]

<research>
<!-- Will be populated by this skill -->
</research>

<plan>
<!-- Populated by /apex:plan -->
</plan>

<implementation>
<!-- Populated by /apex:implement -->
</implementation>

<ship>
<!-- Populated by /apex:ship -->
</ship>
```

Spawn parallel research agents:

  • Pattern intelligence: discover relevant patterns, find similar tasks, identify predicted failures, and generate an execution strategy. Return: context pack with pattern intelligence.

  • Codebase patterns: extract concrete implementation patterns from THIS codebase with file:line references. Return: YAML with primary patterns, conventions, reusable snippets, and testing patterns.

  • Web research: find official documentation, best practices, security concerns, and recent changes. Return: YAML with official_docs, best_practices, security_concerns, recent_changes.

  • Git history: analyze git history for similar changes, regressions, and ownership. Return: structured git intelligence.

Documentation research: search project docs for:

  • Architecture context and design decisions
  • Past decisions and rationale (ADRs)
  • Historical learnings and gotchas
  • Related documentation that may need updating

Return: YAML with architecture_context, past_decisions, historical_learnings, docs_to_update.

Past-learnings research: search past task files (apex/tasks/*.md) for:

  • Problems solved and how they were fixed
  • Decisions made with rationale
  • Gotchas and surprising discoveries
  • Related tasks via related_tasks links

Return: YAML with top 5 relevant learnings ranked by relevance score.
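A toy version of this scan might rank task files by keyword overlap with the current task (the scoring below is an illustrative stand-in for APEX's relevance score):

```python
import glob
import re

def top_learnings(query: str, task_dir: str = "apex/tasks", k: int = 5):
    """Rank past task files by naive keyword overlap with the query."""
    query_words = set(re.findall(r"\w+", query.lower()))
    scored = []
    for path in glob.glob(f"{task_dir}/*.md"):
        words = set(re.findall(r"\w+", open(path).read().lower()))
        overlap = len(query_words & words)
        score = overlap / max(len(query_words), 1)  # relevance in [0, 1]
        if score > 0:
            scored.append({"task_file": path, "relevance": round(score, 2)})
    return sorted(scored, key=lambda e: e["relevance"], reverse=True)[:k]
```

The real agent additionally extracts problems, decisions, and gotchas from each matching file before ranking.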

Flow tracing: trace execution flow, dependencies, state transitions, and integration points.

Risk analysis: surface forward-looking risks, edge cases, monitoring gaps, and mitigations.

CRITICAL: Wait for ALL agents to complete before proceeding.

Task: [Title]
Agents Deployed: [N]
Files Analyzed: [X]

Baseline Metrics

  • Complexity estimate: [1-10]
  • Risk level: [Low/Medium/High]
  • Pattern coverage: [X patterns found, Y% high-trust]

Pattern Intelligence

  • High-trust patterns (★★★★☆+): [N] patterns applicable
  • Similar past tasks: [N] found, [X]% success rate
  • Predicted failure points: [N] identified

Historical Intelligence

  • Related commits: [N] in last 9 months
  • Previous attempts: [List any failed/reverted changes]
  • Key maintainers: [Names/areas]

Execution Strategy

  • Recommended approach: [Brief]
  • Parallelization opportunities: [Yes/No]
  • Estimated scope: [Small/Medium/Large]

Key Insights

  1. [Most important finding]
  2. [Second most important]
  3. [Third most important]
</display-format>
</step>

<step id="9" title="Technical Adequacy Gate (Phase 2)">
<purpose>
Verify we have sufficient intelligence to architect a solution.
</purpose>

<scoring-dimensions>
**Technical Context (30% weight)**:
- [ ] Primary files identified with line numbers
- [ ] Dependencies mapped
- [ ] Integration points documented
- [ ] Current behavior understood

**Risk Assessment (20% weight)**:
- [ ] Security concerns identified
- [ ] Performance implications assessed
- [ ] Breaking change potential evaluated
- [ ] Rollback strategy considered

**Dependency Mapping (15% weight)**:
- [ ] Upstream dependencies known
- [ ] Downstream consumers identified
- [ ] External API constraints documented
- [ ] Version compatibility checked

**Pattern Availability (35% weight)**:
- [ ] Relevant patterns found (confidence ≥ 0.5)
- [ ] Similar past tasks reviewed
- [ ] Implementation patterns from codebase extracted
- [ ] Anti-patterns identified to avoid
</scoring-dimensions>

<confidence-calculation>
```python
def calculate_adequacy(checklist_results):
    weights = {
        "technical_context": 0.30,
        "risk_assessment": 0.20,
        "dependency_mapping": 0.15,
        "pattern_availability": 0.35
    }

    score = sum(
        weights[dim] * (checked / total)
        for dim, (checked, total) in checklist_results.items()
    )

    return score  # 0.0 to 1.0
```
</confidence-calculation>

If INSUFFICIENT:

## Insufficient Context

Adequacy Score: [X]% (threshold: 60%)

**Gaps Identified**:
- [Dimension]: [What's missing]

**Recovery Options**:
1. Spawn additional agents for [specific gap]
2. Ask user for [specific information]
3. Proceed with documented limitations

Which approach should I take?
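Plugging example checklist results into `calculate_adequacy` shows how the 60% gate plays out; the `(checked, total)` numbers below are hypothetical:

```python
def calculate_adequacy(checklist_results):
    # Same weights as the scoring dimensions above.
    weights = {"technical_context": 0.30, "risk_assessment": 0.20,
               "dependency_mapping": 0.15, "pattern_availability": 0.35}
    return sum(weights[dim] * (checked / total)
               for dim, (checked, total) in checklist_results.items())

# Example: strong context and patterns, weak dependency mapping.
results = {
    "technical_context":    (4, 4),
    "risk_assessment":      (2, 4),
    "dependency_mapping":   (1, 4),
    "pattern_availability": (3, 4),
}
score = calculate_adequacy(results)
print(f"{score:.0%}")                                # 70%
print("PROCEED" if score >= 0.60 else "INSUFFICIENT")
```

Here one weak dimension is outweighed by the heavily weighted pattern availability, so the gate passes.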

Solution A: [Approach name]

  • Philosophy, implementation path, pros, cons, risk level

Solution B: [Different paradigm]

  • Philosophy, implementation path, pros, cons, risk level

Solution C: [Alternative architecture]

  • Philosophy, implementation path, pros, cons, risk level

Comparative Analysis: winner with reasoning; runner-up with why it was not chosen

<research>
<metadata>
  <timestamp>[ISO]</timestamp>
  <agents-deployed>[N]</agents-deployed>
  <files-analyzed>[X]</files-analyzed>
  <confidence>[0-10]</confidence>
  <adequacy-score>[0.0-1.0]</adequacy-score>
  <ambiguities-resolved>[N]</ambiguities-resolved>
</metadata>

<context-pack-refs>
  <!-- Shorthand for downstream phases -->
  ctx.patterns = pattern-library section
  ctx.impl = codebase-patterns section
  ctx.web = web-research section
  ctx.history = git-history section
  ctx.docs = documentation section (from documentation-researcher)
  ctx.learnings = past-learnings section (from learnings-researcher)
  ctx.risks = risks section
  ctx.exec = recommendations.winner section
</context-pack-refs>

<executive-summary>
[2-3 paragraphs synthesizing ALL findings]
</executive-summary>

<web-research>
  <official-docs>[Key findings with URLs]</official-docs>
  <best-practices>[Practices with sources]</best-practices>
  <security-concerns>[Issues with severity and mitigation]</security-concerns>
  <gap-analysis>[Codebase vs recommendations]</gap-analysis>
</web-research>

<codebase-patterns>
  <primary-pattern location="file:line">[Description with code snippet]</primary-pattern>
  <conventions>[Naming, structure, types, error handling]</conventions>
  <reusable-snippets>[Copy-pasteable code with sources]</reusable-snippets>
  <testing-patterns>[How similar features are tested]</testing-patterns>
  <inconsistencies>[Multiple approaches found]</inconsistencies>
</codebase-patterns>

<pattern-library>
  <pattern id="PAT:X:Y" confidence="★★★★☆" uses="N" success="X%">[Relevance]</pattern>
  <anti-patterns>[Patterns to avoid with reasons]</anti-patterns>
</pattern-library>

<documentation>
  <architecture-context>[Relevant architecture docs found]</architecture-context>
  <past-decisions>[ADRs and design decisions]</past-decisions>
  <historical-learnings>[Gotchas and lessons from docs]</historical-learnings>
  <docs-to-update>[Files that may need updating after this task]</docs-to-update>
</documentation>

<past-learnings>
  <count>[Number of relevant learnings found]</count>
  <coverage>[EXCELLENT|GOOD|SPARSE|NONE]</coverage>
  <learnings>
    <learning task-id="[ID]" relevance="[0.0-1.0]">
      <title>[Task title]</title>
      <summary>[Why this is relevant and what's useful]</summary>
      <problems>[Problems solved, if any]</problems>
      <decisions>[Decisions made, if any]</decisions>
      <gotchas>[Gotchas discovered, if any]</gotchas>
    </learning>
  </learnings>
  <patterns-across>[Common themes from multiple past tasks]</patterns-across>
</past-learnings>

<git-history>
  <similar-changes>[Commits with lessons]</similar-changes>
  <evolution>[How code got here]</evolution>
</git-history>

<risks>
  <risk probability="H|M|L" impact="H|M|L">[Description with mitigation]</risk>
</risks>

<recommendations>
  <solution id="A" name="[Name]">
    <philosophy>[Core principle]</philosophy>
    <path>[Implementation steps]</path>
    <pros>[Advantages]</pros>
    <cons>[Disadvantages]</cons>
    <risk-level>[Low|Medium|High]</risk-level>
  </solution>
  <solution id="B" name="[Name]">...</solution>
  <solution id="C" name="[Name]">...</solution>
  <winner id="[A|B|C]" reasoning="[Why]"/>
</recommendations>

<task-contract version="1">
  <intent>[Single-sentence intent]</intent>
  <in-scope>[Explicit inclusions]</in-scope>
  <out-of-scope>[Explicit exclusions]</out-of-scope>
  <acceptance-criteria>
    <criterion id="AC-1">Given..., When..., Then...</criterion>
  </acceptance-criteria>
  <non-functional>
    <performance>[Performance constraints]</performance>
    <security>[Security constraints]</security>
    <compatibility>[Compatibility constraints]</compatibility>
  </non-functional>
  <amendments>
    <!-- Append amendments in plan/implement/ship with explicit rationale and version bump -->
  </amendments>
</task-contract>

<next-steps>
Run `/apex:plan [identifier]` to create architecture from these findings.
</next-steps>
</research>