troubleshoot

📁 laststance/skills 📅 14 days ago
4 total installs · 4 weekly installs · site rank #49432
Install command
npx skills add https://github.com/laststance/skills --skill troubleshoot

Agent install distribution

opencode 4
antigravity 4
claude-code 4
github-copilot 4
windsurf 4
codex 4

Skill documentation

Troubleshoot — Hypothesis-Driven Debugging & Fix

Systematic issue diagnosis with root cause tracing, validated fixes, and prevention guidance.

<essential_principles>

Serena Think Checkpoints (Mandatory)

These three tools MUST be called at the specified points. Never skip them.

| Checkpoint | Tool | When | Purpose |
| --- | --- | --- | --- |
| Information Gate | mcp__serena__think_about_collected_information | After Phase 1 (Reproduce) and Phase 2 (Hypothesize) | Verify sufficient evidence before proceeding |
| Adherence Gate | mcp__serena__think_about_task_adherence | Before each code edit in Phase 4 (Fix) | Confirm fix aligns with identified root cause |
| Completion Gate | mcp__serena__think_about_whether_you_are_done | Before exiting Phase 5 (Verify) | Confirm fix is verified with evidence |

Always Active

  • Hypothesis before fix: Never guess-fix. Always form a hypothesis, gather evidence, then apply the fix. “🤔 I think X because Y” → verify → fix
  • Introspection markers: Make debugging reasoning visible throughout:
    • 🤔 Reasoning — “🤔 The stack trace points to a null ref in…”
    • 🎯 Decision — “🎯 Root cause identified: missing null check at…”
    • ⚡ Performance — “⚡ This N+1 query causes the slowdown”
    • 📊 Quality — “📊 This fix also addresses the underlying design issue”
    • 💡 Insight — “💡 This pattern is error-prone; consider refactoring”
  • Validate every fix: Run lint/typecheck/test after each change. No unverified fixes
  • Destructive changes require confirmation: Deleting files, resetting state, dropping data
  • No project-specific rules: This skill works across all projects and AI agents

</essential_principles>

Phase 1: Reproduce

Understand and reproduce the issue before diagnosing.

  1. Parse the error description from user input
  2. 🤔 Identify the error type: bug / build / test / performance / deployment
  3. Collect evidence:
    • Read error messages, stack traces, logs
    • Run the failing command to see the actual output
    • Check git diff or git log for recent changes that may have caused it
  4. Confirm reproduction: “I can reproduce this by running X → error Y”

If you cannot reproduce the issue: ask the user for more context before proceeding.

Tools: Bash, Read, Grep, Glob
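The evidence-collection step can be sketched in shell. The failing command below is a stand-in (`false`); substitute the project's actual build or test invocation, and pair it with the git history checks from step 3.

```shell
#!/bin/sh
# Reproduce once and keep the artifact: capture the failing command's full
# output and exit code, so "I can reproduce this by running X → error Y"
# is backed by evidence. "false" is a hypothetical stand-in for the real
# failing command.
LOG=$(mktemp)
false >"$LOG" 2>&1
STATUS=$?
echo "reproduced: exit=$STATUS, output saved to $LOG"
# Recent changes that may have introduced the issue (run inside the repo):
#   git log --oneline -10
#   git diff HEAD~1 --stat
```

The saved log is what Phase 5 later re-checks: re-running the same command against the same log location makes "the error no longer occurs" a concrete comparison rather than an impression.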

🔶 think_about_collected_information — Is the reproduction evidence sufficient?

Phase 2: Hypothesize

Form hypotheses about the root cause — do not jump to fixing.

  1. 🤔 List 2-3 candidate hypotheses based on evidence:
    🤔 Hypothesis A: Missing dependency after package update
    🤔 Hypothesis B: Type mismatch from recent refactor
    🤔 Hypothesis C: Environment variable not set
    
  2. 🎯 Rank by likelihood based on evidence strength
  3. Start investigating the most likely hypothesis first
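One way to keep this phase honest is to attach a cheap, decisive check to each hypothesis and run them in likelihood order, stopping at the first confirmation. A minimal sketch; all three checks are stand-ins (the real ones might be a node_modules test, a typecheck run, or an env-var probe):

```shell
#!/bin/sh
# Hypothesis triage: one decisive check per hypothesis, run in likelihood
# order, stopping at the first confirmation. The true/false commands are
# hypothetical stand-ins, e.g.:
#   A: test ! -d node_modules   B: ! pnpm typecheck   C: test -z "${API_URL-}"
check() {
  LABEL=$1; shift
  if "$@"; then
    echo "🎯 confirmed: $LABEL"; return 0
  else
    echo "🤔 eliminated: $LABEL"; return 1
  fi
}
check "A: missing dependency" false ||
  check "B: type mismatch" true ||
  check "C: env var unset" false ||
  echo "no hypothesis confirmed; gather more evidence"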

🔶 think_about_collected_information — Are hypotheses grounded in evidence?

Phase 3: Investigate

Systematically verify or eliminate each hypothesis.

  1. Read the relevant source code (trace from error location outward)
  2. Follow the call chain: caller → function → dependencies
  3. Check external library behavior with Context7 if the issue involves a framework/library
  4. Narrow down to the root cause with evidence:
    🎯 Root cause: X confirmed. Evidence: [specific line/behavior]
    

Tools: Read, Grep, Glob, mcp__serena__find_symbol, mcp__context7__query-docs
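Tracing the call chain (step 2) is mostly a definition-then-callers search, whether done via Grep or mcp__serena__find_symbol. A runnable sketch against a tiny fixture; the renderUser symbol and file names are hypothetical, and in practice the greps run over the real source tree:

```shell
#!/bin/sh
# Call-chain tracing: locate the symbol's definition, then every caller,
# and walk outward from the error location. The fixture stands in for src/.
DIR=$(mktemp -d)
printf 'function renderUser(u) { return u.name; }\n' > "$DIR/user.js"
printf 'renderUser(currentUser);\n' > "$DIR/page.js"
echo "definition:"
grep -rn "function renderUser" "$DIR"
echo "callers:"
grep -rn "renderUser(" "$DIR" | grep -v "function renderUser"
```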

Phase 4: Fix

Apply the fix with adherence checks before each edit.

For each code change:

  1. 🔶 think_about_task_adherence — Is this edit aligned with the identified root cause?
  2. 📊 Describe the fix approach before editing:
    📊 Fix: Change X to Y in file:line because [reason]
    
  3. Apply the minimal fix (don’t refactor unrelated code)
  4. If fix requires destructive changes → confirm with user first

Tools: Edit, Write, Bash
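The per-edit discipline can be sketched as a reproduce → fix → re-check loop. The fixture (an .env file with an empty PORT value) and the fix are hypothetical; the point is that the edit is minimal and that the original reproduction is what gets re-run:

```shell
#!/bin/sh
# Minimal-fix sketch: state the fix, apply only that change, re-run the
# original reproduction. The .env fixture and PORT value are hypothetical.
DIR=$(mktemp -d)
printf 'PORT=\n' > "$DIR/.env"                    # broken state: empty value
repro() { grep -q '^PORT=[0-9]' "$DIR/.env"; }    # stand-in reproduction check
repro || echo "reproduced failure"
# 📊 Fix: set PORT in .env because the app requires a numeric port
sed -i.bak 's/^PORT=$/PORT=3000/' "$DIR/.env"
repro && echo "fix verified"
```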

Phase 5: Verify

Prove the fix works with concrete evidence. No fix is complete without verification.

Standard Verification (always)

  1. Re-run the reproduction: Execute the same command/action from Phase 1
    • Confirm the error no longer occurs
    • Record the output as evidence
  2. Quality checks — run in parallel where possible, waiting on each job so failures are not swallowed (wait with no arguments always exits 0):
    pnpm lint & L=$!; pnpm typecheck & T=$!; pnpm test & E=$!
    wait "$L" && wait "$T" && wait "$E"
    
  3. Evidence collection — at least one of:
    • Console/log output showing the fix works
    • Test results (new or existing tests passing)
    • Screenshot of corrected behavior
    • User confirmation request (“Can you verify X now works?”)

If any check fails → return to Phase 3 with new evidence.
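The parallel quality-check step can be sketched end to end. The run_* functions below are stand-ins for the real pnpm scripts; waiting on each PID individually is what makes a single failing check fail the whole run:

```shell
#!/bin/sh
# Parallel quality checks that actually report failure. Each function is a
# hypothetical stand-in: run_lint ~ pnpm lint, run_typecheck ~ pnpm typecheck,
# run_test ~ a failing pnpm test.
run_lint() { :; }
run_typecheck() { :; }
run_test() { false; }
run_lint & P1=$!
run_typecheck & P2=$!
run_test & P3=$!
FAIL=0
for P in "$P1" "$P2" "$P3"; do
  wait "$P" || FAIL=1   # wait on each PID; collect any non-zero status
done
echo "quality checks: FAIL=$FAIL"
```

Here FAIL ends up 1 because the stand-in test job fails, which is exactly the signal that sends the flow back to Phase 3.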

--frontend-verify (when flag is provided)

Visual verification across platforms. Auto-detect platform from package.json:

| Dependency | Platform | Preflight | Verification Tool |
| --- | --- | --- | --- |
| (default) | Web | kill-port <port> && pnpm dev | Claude Chrome MCP (tabs_context_mcp, read_page, take_screenshot) |
| electron | Electron | pnpm electron:dev | Electron MCP (mcp__electron__take_screenshot) |
| expo / react-native | Mobile | mcp__ios-simulator__open_simulator | iOS Simulator MCP (screenshot, ui_tap, ui_swipe) |
| commander / inquirer / oclif | CLI | shell session | Shellwright MCP (TUI/CLI operation and output verification) |

Frontend Verify Workflow:

  1. Preflight: Start dev server / app, confirm MCP connection
  2. Before screenshot: Take screenshot of the broken state (if reproducible in UI)
  3. After fix screenshot: Take screenshot of the corrected state
  4. Compare: Present before/after to user for confirmation
  5. Judge: All pass → continue. Any fail → return to Phase 4

Completion Gate

🔶 think_about_whether_you_are_done — Is the fix verified with evidence?

Phase 6: Report

Summarize findings for the user.

  1. Root Cause: What was wrong and why
  2. Fix Applied: What was changed (files, lines)
  3. Evidence: Verification results (logs, screenshots, test output)
  4. Prevention: 💡 How to avoid this in the future (optional, only if insightful)

Examples

/troubleshoot "TypeError: Cannot read properties of undefined"
/troubleshoot build is failing after upgrading React
/troubleshoot tests pass locally but fail in CI --frontend-verify
/troubleshoot API response time doubled since last deploy

Phase Flow Diagram

[Reproduce] → 🔶 info gate
     ↓
[Hypothesize] → 🔶 info gate
     ↓
[Investigate]
     ↓
   [Fix] → 🔶 adherence gate (per edit)
     ↓
 [Verify] → 🔶 completion gate
     ↓
 [Report]

Boundaries

Will:

  • Systematically trace root causes with evidence-based reasoning
  • Apply minimal, validated fixes with hypothesis-driven debugging
  • Verify fixes with concrete evidence (logs, screenshots, test results)
  • Explain the debugging process transparently with introspection markers

Will Not:

  • Apply fixes without understanding the root cause
  • Make speculative changes hoping something works
  • Mark a fix as complete without verification evidence
  • Modify production systems without explicit user confirmation