validator-workflow
npx skills add https://github.com/darraghh1/my-claude-setup --skill validator-workflow
Validator Phase Workflow
You have been assigned a phase to validate after a builder reports completion. Your spawn prompt contains the phase file path and plan folder. This skill teaches you how to handle validation end-to-end.
Why This Workflow Exists
The user experienced validators that ran shallow checks, missed pattern deviations, and didn’t catch issues introduced by auto-fixes. Each step below prevents a specific failure:
| Step | Prevents |
|---|---|
| Read phase completely | Reviewing against wrong acceptance criteria |
| Run /code-review | Self-review blind spots: the builder never reviews its own code |
| Conditional verification | Frontend bugs missed without E2E, DB bugs missed without PgTAP |
| Actionable FAIL reports | Fix builders guessing at what’s broken, producing more failures |
Step 1: Read the Phase
Read the phase file from your spawn prompt. Extract:
- `skill:` field from frontmatter: determines which extra tests to run (Step 3)
- Acceptance criteria: your success metrics for the verdict
- Implementation steps â what was supposed to be built
- Files created/modified â scope of your review
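The extraction above can be sketched as a small parser. This is a minimal sketch, assuming the phase file uses `---`-delimited YAML frontmatter; the function name and single-field scope are illustrative, not part of the skill's API.

```typescript
// Sketch: pull the `skill:` field out of a phase file's frontmatter.
// Assumes `---`-delimited frontmatter at the top of the file.
function extractSkill(phaseContent: string): string | null {
  const match = phaseContent.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null; // no frontmatter block found
  const skillLine = match[1]
    .split("\n")
    .find((line) => line.startsWith("skill:"));
  return skillLine ? skillLine.slice("skill:".length).trim() : null;
}
```

The same pattern extends to acceptance criteria and file lists if those also live in structured sections.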
Step 2: Run Code Review
Invoke the code review skill against the phase:
Skill({ skill: "code-review", args: "[phase-file-path]" })
This forks a sub-agent that:
- Reads the phase document and extracts all implementation steps
- Finds reference implementations from the codebase (ground truth)
- Reviews each file against phase spec AND codebase patterns
- Auto-fixes Critical/High/Medium issues directly in source files
- Writes a review file to `{plan-folder}/reviews/code/phase-{NN}.md`
- Returns a verdict with issue counts and what was fixed
Step 3: Run Verification
Skip verification entirely if the code review verdict is “Ready” with zero auto-fixes: the builder already passed tests + typecheck before reporting, and no source files have changed since.
Run verification if the code review auto-fixed any issues (files were modified):
Always Run
pnpm run typecheck
pnpm test
Both must pass. If auto-fixes introduced issues, fix them.
Conditional Testing by Phase Type
The phase’s skill: frontmatter field determines which additional tests to run:
| Phase Skill | Extra Test | Command |
|---|---|---|
| react-form-builder | E2E tests | `pnpm test:e2e` (scoped) |
| vercel-react-best-practices | E2E tests | `pnpm test:e2e` (scoped) |
| web-design-guidelines | E2E tests | `pnpm test:e2e` (scoped) |
| playwright-e2e | E2E tests | `pnpm test:e2e` (scoped) |
| postgres-expert | DB tests | `pnpm test:db` |
| server-action-builder | Unit tests sufficient | none |
| service-builder | Unit tests sufficient | none |
E2E Test Scoping
When running E2E tests, scope them to the feature being validated:
- Extract keywords from the phase title/slug (e.g., `notes`, `billing`, `auth`)
- Glob for `e2e/**/*{keyword}*.spec.ts`
- If matches found: `pnpm test:e2e -- [matched spec files]`
- If no matches: `pnpm test:e2e` (full suite)
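The scoping rule above can be expressed as a pure function. A sketch, assuming the caller has already globbed the spec paths and extracted keywords; the function name and argument shapes are illustrative.

```typescript
// Sketch: choose the E2E command per the scoping rule.
// specFiles: results of globbing e2e/**/*.spec.ts
// keywords: terms pulled from the phase title/slug
function e2eCommand(specFiles: string[], keywords: string[]): string {
  const matched = specFiles.filter((f) =>
    keywords.some((k) => f.includes(k)),
  );
  return matched.length > 0
    ? `pnpm test:e2e -- ${matched.join(" ")}` // scoped run
    : "pnpm test:e2e"; // no matches: fall back to the full suite
}
```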
Graceful Skip
If a test command doesn’t exist (pnpm exits with a missing-script error), skip it and note the skip in the report:
E2E tests: skipped (pnpm test:e2e not configured)
This allows projects without E2E or DB tests to pass validation without false failures.
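One way to implement the graceful skip is to check `package.json` before invoking pnpm. A sketch, assuming `package.json` is the source of truth for available scripts; reading the file is left to the caller, and the return strings are illustrative.

```typescript
// Sketch: decide whether a pnpm script can run or must be skipped.
// pkgJson: the raw contents of package.json
function scriptStatus(pkgJson: string, script: string): string {
  const scripts = JSON.parse(pkgJson).scripts ?? {};
  return script in scripts
    ? `run: pnpm ${script}`
    : `skipped (pnpm ${script} not configured)`;
}
```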
Step 4: Determine Verdict
Based on the code review results and verification (if it ran):
- PASS: Code review verdict is “Ready”, no unfixed Critical/High issues, and all verification passed (or was skipped because no files changed)
- FAIL: Any unfixed Critical/High issues, or (if verification ran) typecheck errors, test failures, E2E failures, or DB test failures
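The two rules above reduce to a small predicate. A sketch under assumed field names; the actual shapes come from whatever your review and verification tooling emits.

```typescript
// Sketch: the PASS/FAIL rules as a single function.
interface ValidationResult {
  reviewVerdict: "Ready" | "Needs Work";
  unfixedCriticalOrHigh: number;
  verificationRan: boolean;     // false when skipped (no files changed)
  verificationPassed: boolean;  // ignored when verification was skipped
}

function verdict(r: ValidationResult): "PASS" | "FAIL" {
  if (r.reviewVerdict !== "Ready" || r.unfixedCriticalOrHigh > 0) return "FAIL";
  if (r.verificationRan && !r.verificationPassed) return "FAIL";
  return "PASS";
}
```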
Step 5: Report to Orchestrator
SendMessage({
type: "message",
recipient: "team-lead",
content: "Phase [NN] validation: [PASS|FAIL]\n\nCode review: [verdict]\nReview file: [path]\nVerification: [pass|skipped (no changes)]\nE2E tests: [pass|fail|skipped|N/A]\nDB tests: [pass|fail|skipped|N/A]\n\n[If FAIL: specific issues with file:line references and exact fixes needed]",
summary: "Phase NN: PASS|FAIL"
})
Step 6: Go Idle
Wait for the next validation assignment or a shutdown request.
FAIL Reports Must Be Actionable
When reporting FAIL, include enough detail for a fresh builder to fix the issues without guessing:
- File:line references for each issue
- Which pattern was violated (cite the reference file)
- Exact fix needed (not “consider improving”: state what must change)
- Which test failed (test name, assertion, expected vs actual)
Vague FAIL reports cause fix builders to guess, producing more failures. Specific reports enable one-shot fixes.
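The four requirements above can be enforced by building each issue line from a structured record. A sketch; the `Issue` shape and field names are illustrative, not part of any tool's API.

```typescript
// Sketch: one actionable issue line for a FAIL report.
// Every field below is required, so nothing is left to guesswork.
interface Issue {
  file: string;            // e.g. "src/actions/notes.ts"
  line: number;
  violatedPattern: string; // reference file showing the correct pattern
  fix: string;             // the exact change required, not a suggestion
}

function formatIssue(i: Issue): string {
  return `${i.file}:${i.line} violates pattern in ${i.violatedPattern}. Fix: ${i.fix}`;
}
```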
IMPORTANT: Before using the Write tool on any existing file, you MUST Read it first or the write will silently fail. Prefer Edit for modifying existing files.
Resuming After Context Compact
If your context was compacted mid-validation:
- `TaskList`: find the `in_progress` or first `pending` task
- `TaskGet` on that task: read the self-contained description
- Continue from that task; don’t restart the validation
- The task list is your source of truth, not your memory
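The resume rule above can be sketched as a selector over the task list. The `Task` shape mirrors the assumed output of the TaskList tool; status names are taken from the text, everything else is illustrative.

```typescript
// Sketch: pick the task to resume from, per the rule above:
// prefer the in_progress task, else the first pending one.
interface Task {
  id: string;
  status: "pending" | "in_progress" | "completed";
}

function resumeTask(tasks: Task[]): Task | undefined {
  return (
    tasks.find((t) => t.status === "in_progress") ??
    tasks.find((t) => t.status === "pending")
  );
}
```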