# test-analyze

```
npx skills add https://github.com/laurigates/claude-plugins --skill test-analyze
```
## Test Analysis and Fix Planning

Analyzes test results from any testing framework, uses the PAL planner to create a systematic fix strategy, and delegates fixes to the appropriate subagents.
## Usage

```
/test:analyze <results-path> [--type <test-type>] [--focus <area>]
```
## Parameters

- `<results-path>`: Path to the test results directory or file (required)
  - Examples: `./test-results/`, `./coverage/`, `pytest-report.xml`
- `--type <test-type>`: Type of tests (optional; auto-detected if omitted)
  - `accessibility` – Playwright a11y, axe-core
  - `unit` – Jest, pytest, cargo test
  - `integration` – API tests, database tests
  - `e2e` – Playwright, Cypress, Selenium
  - `security` – OWASP ZAP, Snyk, TruffleHog
  - `performance` – Lighthouse, k6, JMeter
- `--focus <area>`: Specific area to focus on (optional)
  - Examples: `authentication`, `api`, `ui-components`, `database`
## Examples

```
# Analyze Playwright accessibility test results
/test:analyze ./test-results/ --type accessibility

# Analyze unit test failures with focus on auth
/test:analyze ./coverage/junit.xml --type unit --focus authentication

# Auto-detect test type and analyze all issues
/test:analyze ./test-output/

# Analyze security scan results
/test:analyze ./security-report.json --type security
```
## Command Flow

1. **Analyze Test Results**
   - Parse test result files (XML, JSON, HTML, text)
   - Extract failures, errors, and warnings
   - Categorize issues by type and severity
   - Identify patterns and root causes
2. **Plan Fixes with PAL Planner**
   - Use `mcp__pal__planner` for systematic planning
   - Break complex fixes down into actionable steps
   - Identify dependencies between fixes
   - Estimate effort and priority
3. **Delegate to Subagents**
   - Accessibility issues → `code-review` agent (WCAG compliance)
   - Security vulnerabilities → `security-audit` agent
   - Performance problems → `system-debugging` agent
   - Code quality issues → `code-refactoring` agent
   - Test infrastructure → `test-architecture` agent
   - Integration failures → `system-debugging` agent
   - Documentation gaps → `documentation` agent
4. **Execute Plan**
   - Sequential execution based on dependencies
   - Verification after each fix
   - Re-run tests to confirm resolution
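The parsing step of the flow above can be sketched in Python. This is a minimal example for JUnit-style XML, assuming the common `testsuite`/`testcase`/`failure` element layout; the inline report is illustrative, not real command output:

```python
import xml.etree.ElementTree as ET

def extract_failures(junit_xml: str) -> list[dict]:
    """Collect failed/errored test cases from a JUnit-style XML report."""
    root = ET.fromstring(junit_xml)
    issues = []
    # <testcase> elements may sit under <testsuite> or a <testsuites> wrapper,
    # so iterate over the whole tree rather than direct children.
    for case in root.iter("testcase"):
        for kind in ("failure", "error"):
            node = case.find(kind)
            if node is not None:
                issues.append({
                    "test": f"{case.get('classname', '')}.{case.get('name', '')}",
                    "kind": kind,
                    "message": node.get("message", ""),
                })
    return issues

report = """<testsuite tests="2" failures="1">
  <testcase classname="auth.test_login" name="test_bad_password">
    <failure message="expected 401, got 200"/>
  </testcase>
  <testcase classname="auth.test_login" name="test_ok"/>
</testsuite>"""

for issue in extract_failures(report):
    print(issue["test"], "-", issue["message"])
```

A real implementation would also read JSON and TAP reports and attach a severity to each issue; this sketch only shows the extraction half of the step.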
## Subagent Selection Logic

The command uses this decision tree to delegate:

- **Accessibility violations** (WCAG, ARIA, contrast) → `code-review` agent with accessibility focus
- **Security issues** (XSS, SQLi, auth bypass) → `security-audit` agent with OWASP analysis
- **Performance bottlenecks** (slow queries, memory leaks) → `system-debugging` agent with profiling
- **Code smells** (duplication, complexity, coupling) → `code-refactoring` agent with SOLID principles
- **Flaky tests** (race conditions, timing issues) → `test-architecture` agent with stability analysis
- **Build/CI failures** (pipeline errors, dependency issues) → `cicd-pipelines` agent with workflow optimization
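The decision tree above is essentially a category-to-agent lookup. A minimal sketch, where the category keys and the fallback choice are illustrative assumptions rather than the command's actual internals:

```python
# Maps an issue category to the subagent that should handle it.
SUBAGENT_FOR = {
    "accessibility": "code-review",
    "security": "security-audit",
    "performance": "system-debugging",
    "code-quality": "code-refactoring",
    "flaky-test": "test-architecture",
    "ci-cd": "cicd-pipelines",
    "documentation": "documentation",
}

def pick_subagent(category: str) -> str:
    # Unrecognized categories fall back to general debugging (an assumption).
    return SUBAGENT_FOR.get(category, "system-debugging")

print(pick_subagent("security"))  # security-audit
print(pick_subagent("mystery"))   # system-debugging
```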
## Output

The command produces:

1. **Summary Report**
   - Total issues found
   - Breakdown by category and severity
   - Top priorities
2. **Fix Plan** (from the PAL planner)
   - Step-by-step remediation strategy
   - Dependency graph
   - Effort estimates
3. **Subagent Assignments**
   - Which agent handles which issues
   - Rationale for delegation
   - Execution order
4. **Actionable Next Steps**
   - Commands to run
   - Files to modify
   - Verification steps
## Notes

- Works with any test framework that produces structured output
- Auto-detects common test result formats (JUnit XML, JSON, TAP)
- Preserves test evidence for debugging
- Can be chained with `/git:smartcommit` for automated fixes
- Respects the TDD workflow (RED → GREEN → REFACTOR)
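Format auto-detection can be approximated by sniffing the start of the report file. A rough sketch; the heuristics here are illustrative assumptions, not the command's actual detection logic:

```python
def detect_format(text: str) -> str:
    """Guess a test-report format: JUnit XML, generic XML, JSON, TAP, or plain text."""
    head = text.lstrip()
    if head.startswith("<"):
        # JUnit reports contain a <testsuite> (or <testsuites>) root element.
        return "junit-xml" if "<testsuite" in head else "xml"
    if head.startswith(("{", "[")):
        return "json"
    # TAP streams open with a version line or a test plan like "1..N".
    if head.startswith(("TAP version", "1..", "ok ", "not ok ")):
        return "tap"
    return "text"

print(detect_format('{"passed": 10, "failed": 0}'))  # json
print(detect_format("1..2\nok 1\nnot ok 2"))         # tap
print(detect_format("<testsuite tests='1'/>"))       # junit-xml
```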
## Related Commands

- `/test:run` – Run tests with framework detection
- `/code:review` – Manual code review for test files
- `/docs:update` – Update test documentation
- `/git:smartcommit` – Commit fixes with conventional messages
## Prompt

Analyze test results from {{ARG1}} and create a systematic fix plan.

{{#if ARG2}} Test type: {{ARG2}} {{else}} Auto-detect the test type from file formats and content. {{/if}}

{{#if ARG3}} Focus area: {{ARG3}} {{/if}}
### Step 1: Analyze Test Results

Read the test result files from {{ARG1}} and extract:

- Failed tests with error messages
- Warnings and deprecations
- Performance metrics (if available)
- Coverage gaps (if available)

Categorize each issue by severity (critical/high/medium/low) and type (functional/security/performance/accessibility).
### Step 2: Use the PAL Planner

Call `mcp__pal__planner` with model "gemini-2.5-pro" to create a systematic fix plan:

- Step 1: Summarize findings and identify root causes
- Step 2: Prioritize issues (impact × effort matrix)
- Step 3: Break fixes down into actionable tasks
- Step 4: Identify dependencies between fixes
- Step 5: Assign each fix category to the appropriate subagent
- Continue planning steps as needed for complex scenarios
### Step 3: Subagent Delegation Strategy

Based on the issue categories, delegate to:

- **Accessibility violations** (WCAG, ARIA, color contrast, keyboard nav) → use the `Task` tool with `subagent_type: code-review`; focus: WCAG 2.1 compliance, semantic HTML, ARIA best practices
- **Security vulnerabilities** (XSS, SQLi, CSRF, auth issues) → use the `Task` tool with `subagent_type: security-audit`; focus: OWASP Top 10, input validation, authentication
- **Performance issues** (slow tests, memory leaks, timeouts) → use the `Task` tool with `subagent_type: system-debugging`; focus: profiling, bottleneck identification, optimization
- **Code quality** (duplication, complexity, maintainability) → use the `Task` tool with `subagent_type: code-refactoring`; focus: SOLID principles, DRY, code smells
- **Flaky/unreliable tests** (race conditions, timing, dependencies) → use the `Task` tool with `subagent_type: test-architecture`; focus: test stability, isolation, determinism
- **CI/CD failures** (build errors, pipeline issues) → use the `Task` tool with `subagent_type: cicd-pipelines`; focus: GitHub Actions, dependency management, caching
- **Documentation gaps** (missing docs, outdated examples) → use the `Task` tool with `subagent_type: documentation`; focus: API docs, test documentation, migration guides
### Step 4: Create Execution Plan

For each subagent assignment, specify:

- **Context**: What files/areas need attention
- **Objective**: The specific fix goal
- **Success Criteria**: How to verify the fix
- **Dependencies**: What must be done first
- **Verification**: Commands to re-run the tests
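One way to picture an assignment record with the fields listed above; this structure and the sample values are hypothetical, not the command's internal representation:

```python
from dataclasses import dataclass, field

@dataclass
class SubagentAssignment:
    subagent: str            # e.g. "security-audit"
    context: str             # files/areas needing attention
    objective: str           # specific fix goal
    success_criteria: str    # how to verify the fix
    dependencies: list[str] = field(default_factory=list)  # assignments that must finish first
    verification: str = ""   # command to re-run the tests

# Hypothetical example assignment.
fix_auth = SubagentAssignment(
    subagent="security-audit",
    context="src/auth/login.py",
    objective="Reject requests with invalid credentials",
    success_criteria="test_bad_password expects a 401 response",
    verification="pytest tests/auth -q",
)
print(fix_auth.subagent, "->", fix_auth.objective)
```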
### Step 5: Present Summary

Provide:

- **Issue Breakdown**: Count by category and severity
- **Priorities**: Top 3-5 issues to fix first
- **Subagent Plan**: Which agents will handle what
- **Next Steps**: Concrete actions to take
- **Verification**: How to confirm the fixes worked
{{#if ARG3}} Additional focus on {{ARG3}}: Prioritize issues related to this area and provide extra context for relevant subagents. {{/if}}
**Documentation-First Reminder**: Before implementing fixes, research the relevant documentation using context7 to verify:
- Test framework best practices
- Accessibility standards (WCAG 2.1)
- Security patterns (OWASP)
- Performance optimization techniques
**TDD Workflow**: Follow RED → GREEN → REFACTOR:

1. Verify that tests fail (RED); already done
2. Implement the minimal fix (GREEN)
3. Refactor for quality
4. Re-run tests to confirm
Do you want me to proceed with the analysis and planning, or would you like to review the plan first?