qa-manual-istqb
npx skills add https://github.com/fugazi/test-automation-skills-agents --skill qa-manual-istqb
ISTQB Manual & Automation QA Toolkit
Complete ISTQB Foundation Level (CTFL) aligned workflow for QA test engineers covering: Test Planning → Test Analysis → Test Design → Test Implementation → Test Execution → Test Completion
When to Use This Skill
- Creating or reviewing test plans and test strategies
- Generating test conditions and test cases from requirements
- Applying test design techniques (EP, BVA, decision tables, state transitions, use cases)
- Writing bug reports and managing defect lifecycle
- Building regression suites with risk-based selection
- Creating traceability matrices (requirements → tests → defects)
- Conducting exploratory testing sessions with charters
- Estimating test effort using ISTQB techniques
- Reviewing testware through static testing practices
- Implementing automation with Playwright (TypeScript)
Prerequisites
| Requirement | Notes |
|---|---|
| Node.js 18+ | Required for CLI script and Playwright |
| Playwright | `npm init playwright@latest` for automation |
| Text editor | For creating/editing markdown and CSV artifacts |
| Git | Recommended for testware version control |
Quick Start (choose one)
- Create a test plan: use `assets/templates/test-plan.md` as a starting point.
- Create a test summary report: use `assets/templates/test-summary-report.md`.
- Generate test cases: use `assets/templates/test-cases.csv` and fill it from the test basis (requirements, user stories, acceptance criteria).
- Create a bug report: use `assets/templates/bug-report.md`.
- Create a bug log: use `assets/templates/bug-log.csv`.
- Create traceability: use `assets/templates/traceability-matrix.csv`.
- Build a regression suite definition: use `assets/templates/regression-suite.md`.
- Scaffold Playwright tests: use `assets/templates/playwright-spec.ts` and adapt to the system under test.
- Run exploratory testing: use `assets/templates/exploratory-charter.md` to timebox and capture outcomes.
If running locally, generate artifacts with the bundled CLI:
```shell
node scripts/qa_artifacts.mjs list
node scripts/qa_artifacts.mjs create test-plan --out specs --project "My App" --release "R1"
node scripts/qa_artifacts.mjs create test-cases --out specs --feature "Checkout"
node scripts/qa_artifacts.mjs create bug-report --out specs/bugs --title "Search returns 500"
```
Inputs to collect (ask if missing)
- Test basis: requirements, user stories, acceptance criteria, designs, risk register, defect history.
- Scope: in-scope/out-of-scope features, target platforms/browsers/devices, locales, accessibility, integrations.
- Quality risks: what can fail, impact, likelihood, regulatory/compliance, critical user journeys.
- Constraints: deadlines, environments, data availability, tooling, access/roles, CI/CD expectations.
- Definitions: severity vs priority scale, test levels and test types to cover, entry/exit criteria.
Workflows
1) Create a test plan (and/or test strategy)
- Identify test objectives, scope, assumptions, and constraints from the test basis.
- Define test levels and types (functional + change-related + key non-functional, as applicable).
- Choose test design techniques per area (see `references/test-design-techniques.md`).
- Specify environments, test data, tooling, and configuration management needs.
- Define entry/exit criteria, deliverables, and reporting cadence/metrics.
- Add a risk matrix and mitigation actions; prioritize testing accordingly (risk-based testing).
Use: assets/templates/test-plan.md (detailed sections + checklists).
2) Generate test conditions and test cases
- Convert the test basis into test conditions (what to test) before writing step-by-step cases.
- For each condition, pick a technique:
  - Equivalence partitions and boundary values for inputs/validation.
  - Decision tables for rule combinations.
  - State transitions for lifecycle/flows.
  - Use-case/scenario tests for end-to-end journeys.
  - Exploratory testing sessions to learn quickly (see `assets/templates/exploratory-charter.md`).
- Write test cases that are atomic, unambiguous, and traceable to requirement/user story IDs.
- Add expected results that are observable and measurable (define the test oracle).
- Add priority and risk tags to support risk-based regression selection.
- Mark automation candidates using stability + value criteria (see `references/automation-playwright-best-practices.md`).
Use: assets/templates/test-cases.csv.
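As a minimal sketch of how a technique turns into test case rows, the snippet below applies three-value boundary value analysis to a hypothetical numeric "quantity" field with a valid range of 1–100 (the field and range are illustrative, not part of the toolkit):

```typescript
// Sketch: 3-value boundary value analysis (BVA) for a numeric input range.
// Each boundary is tested with the value itself plus its two neighbours.
function boundaryValues(min: number, max: number): number[] {
  return [min - 1, min, min + 1, max - 1, max, max + 1];
}

// Each value becomes one row in test-cases.csv; the partition (valid or
// invalid) drives the expected result, i.e. the test oracle.
const cases = boundaryValues(1, 100).map((value) => ({
  value,
  valid: value >= 1 && value <= 100,
}));
```

Equivalence partitioning follows the same shape: one representative value per partition instead of the six boundary neighbours.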
3) Implement automation test scripts (Playwright-first)
- Keep tests readable and aligned with test cases (include test case IDs in titles).
- Prefer stable locators (e.g., `getByTestId`) over brittle selectors.
- Avoid arbitrary sleeps; rely on Playwright auto-waits and explicit assertions.
- Make tests independent (setup preconditions explicitly; avoid order dependencies).
- Use tagging in test titles (e.g., `@smoke`, `@regression`) so suites are runnable via `--grep`.
- Capture artifacts for triage (screenshots/video/trace) when debugging failures.
Use: assets/templates/playwright-spec.ts and references/automation-playwright-best-practices.md.
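To illustrate how title tags make suites runnable, the sketch below mimics what `--grep` does: Playwright applies a regular expression to the full test title, so any title containing the tag matches. The test titles are hypothetical examples, not from the bundled scaffold:

```typescript
// Sketch: tag-based suite selection, mirroring `npx playwright test --grep @smoke`.
// Titles include the test case ID (for traceability) plus one or more @-tags.
const titles = [
  "TC-101 login succeeds @smoke @regression",
  "TC-205 checkout totals are correct @regression",
  "TC-310 profile photo upload",
];

// --grep treats its argument as a regular expression matched against the title.
function selectSuite(allTitles: string[], tag: string): string[] {
  const grep = new RegExp(tag);
  return allTitles.filter((title) => grep.test(title));
}
```

Because matching is regex-based, keep tags unambiguous (e.g. avoid one tag being a prefix of another, such as `@smoke` vs `@smoke-mobile`).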
4) Build and maintain regression suites
- Define suite tiers (example): smoke (critical paths), sanity (build verification), regression (broad), full (release).
- Select tests using risk + frequency + criticality + defect history (not only coverage).
- Tag tests consistently and document selection rules and ownership.
- Review the suite regularly: remove obsolete coverage, add coverage for escaped defects and high-risk changes.
Use: assets/templates/regression-suite.md and references/regression-suite-strategy.md.
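One way to make the selection rules above explicit is a simple weighted score over the listed factors. The weights and threshold below are illustrative assumptions, not values prescribed by the toolkit:

```typescript
// Sketch: risk-based regression selection combining risk, usage frequency,
// business criticality, and defect history into one score.
interface TestMeta {
  id: string;
  risk: number;           // 1 (low) .. 3 (high)
  frequency: number;      // how often the feature is exercised, 1..3
  criticality: number;    // business impact of failure, 1..3
  escapedDefects: number; // past defects this test would have caught
}

// Illustrative weights: risk counts double, escaped defects triple.
function regressionScore(t: TestMeta): number {
  return t.risk * 2 + t.frequency + t.criticality + t.escapedDefects * 3;
}

function selectForRegression(tests: TestMeta[], threshold: number): string[] {
  return tests
    .filter((t) => regressionScore(t) >= threshold)
    .sort((a, b) => regressionScore(b) - regressionScore(a))
    .map((t) => t.id);
}

const suite: TestMeta[] = [
  { id: "TC-101", risk: 3, frequency: 3, criticality: 3, escapedDefects: 1 },
  { id: "TC-310", risk: 1, frequency: 1, criticality: 1, escapedDefects: 0 },
];
```

Documenting the formula and threshold in `regression-suite.md` keeps additions and removals auditable rather than ad hoc.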
5) Create bug reports and document bugs effectively
- Reproduce reliably; reduce to minimal steps; note variability (frequency) and scope.
- Capture environment details (build/app version, OS, browser/device, account/role, data conditions).
- Describe expected vs actual behavior; include impact; set severity and priority consistently.
- Attach evidence (screenshots, console logs, network traces, Playwright trace) and link related tests/requirements.
- Track lifecycle: triage notes, owner, fix version, verification steps and results, closure criteria.
Use: assets/templates/bug-report.md and references/bug-report-quality.md.
6) Conduct static testing (reviews)
- Schedule reviews early (shift-left): requirements, designs, test plans, test cases.
- Use checklists for consistency (see `references/static-testing.md`).
- Document findings with severity and actionability.
- Track defects found in static testing separately (prevention vs detection).
Use: references/static-testing.md for review checklists and techniques.
7) Estimate test effort
- Identify factors: scope, complexity, risk, team experience, tool maturity.
- Apply estimation techniques (see `references/test-estimation.md`):
  - Expert judgment / historical data
  - Test point analysis
  - Work breakdown structure
- Add contingency for risks and unknowns.
- Review and refine estimates as the project progresses.
Use: references/test-estimation.md for techniques and formulas.
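As a worked example of a work-breakdown estimate with contingency, the sketch below sums task-level hours and applies a percentage buffer. The task names, hours, and the 20% contingency are illustrative assumptions:

```typescript
// Sketch: work breakdown structure estimate plus a contingency buffer.
const tasks: Record<string, number> = {
  "test planning": 16,            // hours, illustrative
  "test case design": 40,
  "test execution (2 cycles)": 60,
  "defect retest": 20,
};

// contingency is a fraction, e.g. 0.2 for a 20% buffer on the base estimate.
function estimateWithContingency(
  hours: Record<string, number>,
  contingency: number
): number {
  const base = Object.values(hours).reduce((a, b) => a + b, 0);
  return Math.round(base * (1 + contingency));
}
```

Here the base is 136 hours, so a 20% buffer yields roughly 163 hours; re-run the calculation whenever scope changes rather than adjusting the total by feel.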
8) Monitor test progress and metrics
- Track execution metrics: planned vs executed vs passed vs failed vs blocked.
- Monitor defect metrics: found vs fixed vs open, by severity/priority.
- Calculate test coverage: requirements covered, risk areas tested.
- Report status regularly and escalate blockers early.
Use: references/test-monitoring-metrics.md for metrics definitions and dashboards.
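The execution metrics above reduce to a few ratios. A minimal sketch, with illustrative counts and one-decimal rounding as an assumption:

```typescript
// Sketch: basic execution-progress metrics from planned/executed/passed/
// failed/blocked counts.
interface ExecutionCounts {
  planned: number;
  executed: number;
  passed: number;
  failed: number;
  blocked: number;
}

function progressMetrics(c: ExecutionCounts) {
  // Percentage rounded to one decimal place; guard against division by zero.
  const pct = (n: number, d: number) =>
    d === 0 ? 0 : Math.round((n / d) * 1000) / 10;
  return {
    executionProgress: pct(c.executed, c.planned), // how far through the plan
    passRate: pct(c.passed, c.executed),           // quality of what ran
    blockedRate: pct(c.blocked, c.planned),        // escalation signal
  };
}
```

Note that pass rate is computed against executed tests, not planned ones; mixing the two denominators is a common source of misleading status reports.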
Quality Gates (self-check)
- Test plan includes scope, approach, risks, environments, entry/exit criteria, deliverables, and metrics.
- Test cases are traceable, atomic, deterministic, and include clear oracles and data.
- Automation is maintainable (stable locators, minimal flake, independent tests, clear assertions).
- Regression is risk-based, tagged, and curated with clear add/remove rules.
- Bug reports are reproducible, actionable, and contain evidence + environment + impact.
- Static testing reviews are documented with findings tracked to resolution.
- Estimates include contingency and are refined as scope clarifies.
Troubleshooting
| Problem | Cause | Solution |
|---|---|---|
| Test cases lack traceability | Missing requirement IDs | Add requirement_id column; link to user stories/ACs |
| Bug reports get rejected | Insufficient reproduction steps | Use minimal steps; include exact data and environment |
| Regression suite too slow | Too many tests, no prioritization | Apply risk-based selection; tier into smoke/sanity/full |
| Flaky automated tests | Unstable locators or timing | Use data-testid; avoid sleeps; use Playwright auto-waits |
| Test estimates are wrong | Scope creep, missing risks | Add contingency; re-estimate when scope changes |
| Reviews find no defects | Superficial review | Use checklists; allocate sufficient time; rotate reviewers |
| Unclear test oracles | Missing expected results | Define oracles from requirements, rules, or reference systems |
Bundled Resources
Templates (assets/templates/)
| Template | Purpose |
|---|---|
| `test-plan.md` | ISTQB-aligned test plan structure |
| `test-summary-report.md` | End-of-cycle summary and sign-off |
| `test-cases.csv` | Test case repository with traceability |
| `test-conditions.md` | Test conditions derived from test basis |
| `traceability-matrix.csv` | Requirements → tests → defects mapping |
| `bug-report.md` | Detailed defect report |
| `bug-log.csv` | Defect tracking log |
| `regression-suite.md` | Suite definition and selection rules |
| `exploratory-charter.md` | Session-based exploratory testing |
| `playwright-spec.ts` | Playwright test scaffold |
| `test-environment-checklist.md` | Environment readiness verification |
| `risk-assessment-matrix.md` | Quality risk identification and prioritization |
References (references/)
| Reference | Content |
|---|---|
| `test-design-techniques.md` | EP, BVA, decision tables, state transitions, use cases |
| `experience-based-techniques.md` | Error guessing, checklist-based, exploratory |
| `static-testing.md` | Reviews, walkthroughs, inspections |
| `test-levels-types.md` | Unit, integration, system, acceptance; functional, non-functional |
| `test-estimation.md` | Estimation techniques and factors |
| `test-monitoring-metrics.md` | Progress tracking and quality metrics |
| `risk-based-testing.md` | Risk identification, analysis, mitigation |
| `istqb-glossary.md` | Key ISTQB terminology |
| `test-process-and-deliverables.md` | Test process phases and outputs |
| `automation-playwright-best-practices.md` | Playwright implementation guidance |
| `regression-suite-strategy.md` | Suite management and optimization |
| `bug-report-quality.md` | Effective defect reporting |
| `defect-lifecycle.md` | Defect states and workflow |
Scripts (scripts/)
| Script | Purpose |
|---|---|
| `qa_artifacts.mjs` | CLI tool to generate QA artifacts from templates |