typescript-e2e-testing
```bash
npx skills add https://github.com/bmad-labs/skills --skill typescript-e2e-testing
```
E2E Testing Skill
E2E testing validates complete workflows from the user's perspective, using real infrastructure via Docker.
Workflows
For comprehensive step-by-step guidance, use the appropriate workflow:
| Workflow | When to Use |
|---|---|
| Setup E2E Test | Setting up E2E infrastructure for a new or existing project |
| Writing E2E Test | Creating new E2E test cases with proper GWT pattern |
| Review E2E Test | Reviewing existing tests for quality and correctness |
| Running E2E Test | Executing tests with proper verification |
| Debugging E2E Test | Systematically fixing failing tests |
| Optimize E2E Test | Improving test suite performance |
Workflow Selection Guide
IMPORTANT: Before starting any E2E testing task, identify the user’s intent and load the appropriate workflow.
Detect User Intent → Select Workflow
| User Says / Wants | Workflow to Load | File |
|---|---|---|
| “Set up E2E tests”, “configure docker-compose”, “add E2E to project”, “create test helpers” | Setup | workflows/setup/workflow.md |
| “Write E2E tests”, “add integration tests”, “test this endpoint”, “create e2e-spec” | Writing | workflows/writing/workflow.md |
| “Review E2E tests”, “check test quality”, “audit tests”, “is this test correct?” | Reviewing | workflows/review/workflow.md |
| “Run E2E tests”, “execute tests”, “start docker and test”, “check if tests pass” | Running | workflows/running/workflow.md |
| “Fix E2E tests”, “debug tests”, “tests are failing”, “flaky test”, “connection error” | Debugging | workflows/debugging/workflow.md |
| “Speed up E2E tests”, “optimize tests”, “tests are slow”, “reduce test time” | Optimizing | workflows/optimize/workflow.md |
Workflow Execution Protocol
- ALWAYS load the workflow file first – Read the full workflow before taking action
- Follow each step in order – Complete checkpoints before proceeding
- Load knowledge files as directed – Each workflow specifies which references/ files to read
- Verify compliance after completion – Re-read relevant reference files to ensure quality
Important: Each workflow includes instructions to load relevant knowledge from the references/ folder before and after completing tasks.
Knowledge Base Structure
```
references/
├── common/                         # Shared testing fundamentals
│   ├── knowledge.md                # Core E2E concepts and test pyramid
│   ├── rules.md                    # Mandatory testing rules (GWT, timeouts, logging)
│   ├── best-practices.md           # Test design and cleanup patterns
│   ├── test-case-creation-guide.md # GWT templates for all scenarios
│   ├── nestjs-setup.md             # NestJS app bootstrap and Jest config
│   ├── debugging.md                # VS Code config and log analysis
│   └── examples.md                 # Comprehensive examples by category
│
├── kafka/                          # Kafka-specific testing
│   ├── knowledge.md                # Why common approaches fail, architecture
│   ├── rules.md                    # Kafka-specific testing rules
│   ├── test-helper.md              # KafkaTestHelper implementation
│   ├── docker-setup.md             # Redpanda/Kafka Docker configs
│   ├── performance.md              # Optimization techniques
│   ├── isolation.md                # Pre-subscription pattern details
│   └── examples.md                 # Kafka test examples
│
├── postgres/                       # PostgreSQL-specific testing
│   ├── knowledge.md                # PostgreSQL testing concepts
│   ├── rules.md                    # Cleanup, transaction, assertion rules
│   ├── test-helper.md              # PostgresTestHelper implementation
│   └── examples.md                 # CRUD, transaction, constraint examples
│
├── mongodb/                        # MongoDB-specific testing
│   ├── knowledge.md                # MongoDB testing concepts
│   ├── rules.md                    # Document cleanup and assertion rules
│   ├── test-helper.md              # MongoDbTestHelper implementation
│   ├── docker-setup.md             # Docker and Memory Server setup
│   └── examples.md                 # Document and aggregation examples
│
├── redis/                          # Redis-specific testing
│   ├── knowledge.md                # Redis testing concepts
│   ├── rules.md                    # TTL and pub/sub rules
│   ├── test-helper.md              # RedisTestHelper implementation
│   ├── docker-setup.md             # Docker configuration
│   └── examples.md                 # Cache, session, rate limit examples
│
└── api/                            # API testing (REST, GraphQL, gRPC)
    ├── knowledge.md                # API testing concepts
    ├── rules.md                    # Request/response assertion rules
    ├── test-helper.md              # Auth and Supertest helpers
    ├── examples.md                 # REST, GraphQL, validation examples
    └── mocking.md                  # MSW and Nock external API mocking
```
Quick Reference by Task
Tip: For detailed step-by-step guidance, use the Workflows section above.
Setup New E2E Structure
Workflow: Setup E2E Test
- Read `references/common/knowledge.md` – Understand E2E fundamentals
- Read `references/common/nestjs-setup.md` – Project setup
- Read technology-specific `docker-setup.md` files as needed
Write Test Cases
Workflow: Writing E2E Test
- MANDATORY: Read `references/common/rules.md` – GWT pattern, timeouts
- Read `references/common/test-case-creation-guide.md` – Templates
- Read technology-specific files:
  - Kafka: `references/kafka/knowledge.md` → `test-helper.md` → `isolation.md`
  - PostgreSQL: `references/postgres/rules.md` → `test-helper.md`
  - MongoDB: `references/mongodb/rules.md` → `test-helper.md`
  - Redis: `references/redis/rules.md` → `test-helper.md`
  - API: `references/api/rules.md` → `test-helper.md`
Review Test Quality
Workflow: Review E2E Test
- Read `references/common/rules.md` – Check against mandatory patterns
- Read `references/common/best-practices.md` – Quality standards
- Read technology-specific `rules.md` files
Run E2E Tests
Workflow: Running E2E Test
- Verify Docker infrastructure is running
- Run tests sequentially with `npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1`
- Follow the failure protocol if tests fail
Debug Failing Tests
Workflow: Debugging E2E Test
- Read `references/common/debugging.md`
- Create a `/tmp/e2e-${E2E_SESSION}-failures.md` tracking file
- Fix ONE test at a time
Optimize Test Performance
Workflow: Optimize E2E Test
- Read `references/common/best-practices.md` – Performance patterns
- Read `references/kafka/performance.md` for Kafka tests
- Measure a baseline before making changes
Examples
- Read `references/common/examples.md` for general patterns
- Read technology-specific `examples.md` files for detailed scenarios
Core Principles
0. Context Efficiency (Temp File Output)
ALWAYS redirect E2E test output to temp files, NOT console. E2E output is verbose and bloats agent context.
IMPORTANT: Redirect output to temp files only (NO console output). Use unique session ID to prevent conflicts.
```bash
# Generate a unique session ID at the start of a debugging session
export E2E_SESSION=$(date +%s)-$$

# Standard pattern – redirect to file only (no console output)
npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1

# Read the summary only (last 50 lines)
tail -50 /tmp/e2e-${E2E_SESSION}-output.log

# Get failure details
grep -B 2 -A 15 "FAIL\|✕" /tmp/e2e-${E2E_SESSION}-output.log

# Clean up when done
rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md
```
Temp Files (with ${E2E_SESSION} unique per agent):
- `/tmp/e2e-${E2E_SESSION}-output.log` – Full test output
- `/tmp/e2e-${E2E_SESSION}-failures.log` – Filtered failure output
- `/tmp/e2e-${E2E_SESSION}-failures.md` – Tracking file for one-by-one fixing
- `/tmp/e2e-${E2E_SESSION}-debug.log` – Debug runs
- `/tmp/e2e-${E2E_SESSION}-verify.log` – Verification runs
1. Real Infrastructure
Test against actual services via Docker. Never mock databases or message brokers for E2E tests.
2. GWT Pattern (Mandatory)
ALL E2E tests MUST follow Given-When-Then:
```typescript
it('should create user and return 201', async () => {
  // GIVEN: Valid user data
  const userData = { email: 'test@example.com', name: 'Test' };

  // WHEN: Creating the user
  const response = await request(httpServer)
    .post('/users')
    .send(userData)
    .expect(201);

  // THEN: User created with correct data
  expect(response.body.data.email).toBe('test@example.com');
});
```
3. Test Isolation
Each test MUST be independent:
- Clean database state in `beforeEach`
- Use unique identifiers (consumer groups, topics)
- Wait for async operations to complete
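The unique-identifier rule above can be sketched as a small helper. This is illustrative only; the `uniqueId` name and prefixes are hypothetical, not part of the skill's helpers.

```typescript
// Hypothetical helper for collision-free test identifiers: a timestamp
// plus a random suffix keeps consumer groups and topics unique across
// repeated runs and parallel agents sharing one broker.
function uniqueId(prefix: string): string {
  const rand = Math.random().toString(36).slice(2, 8);
  return `${prefix}-${Date.now()}-${rand}`;
}

// Example: per-test Kafka names so tests never observe each other's state.
const consumerGroup = uniqueId('e2e-group'); // e.g. "e2e-group-1718000000000-a1b2c3"
const outputTopic = uniqueId('e2e-topic');
console.log(consumerGroup.startsWith('e2e-group-')); // true
```

Generating fresh names per test (or per suite) means leftover messages from a previous run can never satisfy an assertion in the current one.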
4. Specific Assertions
Assert exact values, not just existence:
```typescript
// WRONG: only checks existence
expect(response.body.data).toBeDefined();

// CORRECT: asserts exact values
expect(response.body).toMatchObject({
  code: 'SUCCESS',
  data: { email: 'test@example.com', name: 'Test' },
});
```
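To make the semantics of this partial matching concrete, here is a plain-TypeScript sketch of what a `toMatchObject`-style check does. It is an illustration, not Jest's actual implementation:

```typescript
// Illustrative partial object matcher: every expected key must exist
// and match recursively; extra keys on the actual value are ignored.
function matchesObject(actual: any, expected: any): boolean {
  if (typeof expected !== 'object' || expected === null) {
    return actual === expected; // leaf: exact value comparison
  }
  return Object.keys(expected).every((key) =>
    matchesObject(actual?.[key], expected[key]),
  );
}

const body = { code: 'SUCCESS', data: { email: 'test@example.com', id: 42 } };
console.log(matchesObject(body, { code: 'SUCCESS', data: { email: 'test@example.com' } })); // true
console.log(matchesObject(body, { code: 'ERROR' })); // false
```

The key point: you still assert exact values for the fields you care about, while tolerating server-generated extras such as `id` or timestamps.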
Project Structure
```
project-root/
├── src/
├── test/
│   ├── e2e/
│   │   ├── feature.e2e-spec.ts
│   │   ├── setup.ts
│   │   └── helpers/
│   │       ├── test-app.helper.ts
│   │       ├── postgres.helper.ts
│   │       ├── mongodb.helper.ts
│   │       ├── redis.helper.ts
│   │       └── kafka.helper.ts
│   └── jest-e2e.config.ts
├── docker-compose.e2e.yml
├── .env.e2e
└── package.json
```
Essential Jest Configuration
```typescript
// test/jest-e2e.config.ts
import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  testMatch: ['**/*.e2e-spec.ts'],
  testTimeout: 25000,
  maxWorkers: 1, // CRITICAL: sequential execution
  clearMocks: true,
  forceExit: true,
  detectOpenHandles: true,
};

export default config;
```
Technology-Specific Timeouts
| Technology | Wait Time | Strategy |
|---|---|---|
| Kafka | 10-20s max (polling) | Smart polling with 50ms intervals |
| PostgreSQL | <1s | Direct queries |
| MongoDB | <1s | Direct queries |
| Redis | <100ms | In-memory operations |
| External API | 1-5s | Network latency |
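The "smart polling" strategy in the table can be sketched as a generic helper: re-check a condition on a short interval and return as soon as it holds, instead of sleeping for a fixed worst-case duration. The `pollUntil` name is illustrative; the skill's `KafkaTestHelper.waitForMessages` presumably wraps something like this.

```typescript
// Generic smart-polling helper: evaluate `check` every `intervalMs`
// until it returns a value or `timeoutMs` elapses. The happy path
// resolves in milliseconds; the timeout is only a worst-case bound.
async function pollUntil<T>(
  check: () => T | undefined,
  timeoutMs: number,
  intervalMs = 50,
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const result = check();
    if (result !== undefined) return result;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error(`Condition not met within ${timeoutMs}ms`);
}

// Example: wait until a message buffer holds at least one entry.
const buffer: string[] = [];
setTimeout(() => buffer.push('event-1'), 120);
pollUntil(() => (buffer.length >= 1 ? buffer : undefined), 2000)
  .then((msgs) => console.log(msgs[0])); // "event-1"
```

This is why Kafka waits are "10-20s max" rather than a flat 10-20s: the timeout only matters when something is actually wrong.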
Failure Resolution Protocol
CRITICAL: Fix ONE test at a time. NEVER run full suite repeatedly while fixing.
When E2E tests fail:
1. Initialize session (once at start): `export E2E_SESSION=$(date +%s)-$$`
2. Create tracking file `/tmp/e2e-${E2E_SESSION}-failures.md` with all failing tests
3. Select ONE failing test – work on only this test
4. Run ONLY that test (never the full suite):
   `npm run test:e2e -- -t "test name" > /tmp/e2e-${E2E_SESSION}-debug.log 2>&1`
   `tail -50 /tmp/e2e-${E2E_SESSION}-debug.log`
5. Fix the issue – analyze the error, make a targeted fix
6. Verify the fix – run the same test 3-5 times:
   `for i in {1..5}; do npm run test:e2e -- -t "test name" > /tmp/e2e-${E2E_SESSION}-run$i.log 2>&1 && echo "Run $i: PASS" || echo "Run $i: FAIL"; done`
7. Mark as FIXED in the tracking file
8. Move to the next failing test – repeat steps 3-7
9. Run the full suite ONLY ONCE after ALL individual tests pass
10. Cleanup: `rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md`
WHY: Running full suite wastes time and context. Each failing test pollutes output, making debugging harder.
Common Patterns
Database Cleanup (PostgreSQL/MongoDB)
```typescript
beforeEach(async () => {
  await new Promise((r) => setTimeout(r, 500)); // Wait for in-flight operations
  await repository.clear(); // PostgreSQL
  // OR
  await model.deleteMany({}); // MongoDB
});
```
Kafka Test Helper Pattern
```typescript
// Use pre-subscription + buffer clearing (NOT fromBeginning: true)
const kafkaHelper = new KafkaTestHelper();
await kafkaHelper.subscribeToTopic(outputTopic, false);
// In beforeEach: kafkaHelper.clearMessages(outputTopic);
```
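A minimal sketch of the buffering side of this pattern, for illustration only (the authoritative version is the KafkaTestHelper in `references/kafka/test-helper.md`): subscribe once before any test produces, accumulate messages per topic, and clear the buffer between tests instead of re-subscribing.

```typescript
// Illustrative per-topic message buffer, as a pre-subscribed consumer
// might maintain it. Clearing in beforeEach gives each test a fresh
// view without the cost of re-subscription or a fromBeginning replay.
class MessageBuffer {
  private buffers = new Map<string, unknown[]>();

  // Called from the consumer's message handler for every delivery.
  record(topic: string, message: unknown): void {
    const list = this.buffers.get(topic) ?? [];
    list.push(message);
    this.buffers.set(topic, list);
  }

  // Called in beforeEach: drop anything left over from prior tests.
  clearMessages(topic: string): void {
    this.buffers.set(topic, []);
  }

  messages(topic: string): unknown[] {
    return this.buffers.get(topic) ?? [];
  }
}

const buf = new MessageBuffer();
buf.record('orders.out', { id: 1 });
buf.clearMessages('orders.out'); // fresh state for the next test
buf.record('orders.out', { id: 2 });
console.log(buf.messages('orders.out').length); // 1
```

Because the subscription is live before any event is produced, no message can be missed; because the buffer is cleared per test, no stale message can leak into an assertion.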
Redis Cleanup
```typescript
beforeEach(async () => {
  await redis.flushdb();
});
```
External API Mock (MSW)
```typescript
mockServer.use(
  http.post('https://api.external.com/endpoint', () => {
    return HttpResponse.json({ status: 'success' });
  }),
);
```
Async Event Verification (Kafka)
```typescript
// Use smart polling instead of fixed waits
await kafkaHelper.publishEvent(inputTopic, event, event.id);
const messages = await kafkaHelper.waitForMessages(outputTopic, 1, 20000);
expect(messages[0].value).toMatchObject({ id: event.id });
```
Debugging Commands
All commands redirect output to temp files only (no console output).
```bash
# Initialize session (once at start)
export E2E_SESSION=$(date +%s)-$$

# Run a specific test (no console output)
npm run test:e2e -- -t "should create user" > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 && tail -50 /tmp/e2e-${E2E_SESSION}-output.log

# Run a specific file
npm run test:e2e -- test/e2e/user.e2e-spec.ts > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 && tail -50 /tmp/e2e-${E2E_SESSION}-output.log

# Run the full suite
npm run test:e2e > /tmp/e2e-${E2E_SESSION}-output.log 2>&1 && tail -50 /tmp/e2e-${E2E_SESSION}-output.log

# Get failure details from the last run
grep -B 2 -A 15 "FAIL\|✕" /tmp/e2e-${E2E_SESSION}-output.log

# Debug with breakpoints (requires console for interactive debugging)
node --inspect-brk node_modules/.bin/jest --config test/jest-e2e.config.ts --runInBand

# View application logs (limited)
tail -100 logs/e2e-test.log
grep -i error logs/e2e-test.log | tail -50

# Clean up session files
rm -f /tmp/e2e-${E2E_SESSION}-*.log /tmp/e2e-${E2E_SESSION}-*.md
```
Anti-Patterns to Avoid
- Multiple WHEN actions – Split into separate tests
- Conditional assertions – Create deterministic test cases
- Shared state between tests – Clean in beforeEach
- Mocking databases – Use real connections
- Skipping cleanup – Always clean before AND after
- Fixing multiple tests at once – Fix one at a time
- Generic assertions – Assert specific values
- `fromBeginning: true` for Kafka – Use pre-subscription + buffer clearing