Install: `npx skills add https://github.com/youngger9765/career_ios_backend --skill tdd-workflow`
# TDD Workflow Skill

## Purpose

Enforce a Test-Driven Development workflow for all new features and APIs in the career_ios_backend project.
## Automatic Activation

This skill is AUTOMATICALLY activated when the user mentions:

- “add new feature”
- “create API endpoint”
- “implement”
- “build”
- “新增功能” (add new feature)
- “實作 API” (implement API)
## Core Workflow
### Phase 1: RED (Test First) 🔴

YOU MUST write tests BEFORE any implementation code.

1. **Understand requirements**
   - What endpoint/feature is needed?
   - What are the expected inputs/outputs?
   - Is authentication required?

2. **Invoke test-writer subagent**

   ```
   Task: Write integration test for <feature_description>
   Location: tests/integration/test_<feature>_api.py
   ```

3. **Verify RED state**
   - Test file created
   - Test runs and FAILS (expected)
   - Test defines clear expectations

CRITICAL: DO NOT proceed to implementation until tests exist and fail.
### Phase 2: GREEN (Minimal Implementation) 🟢

Write MINIMAL code to make tests pass.

1. **Invoke code-generator subagent**

   ```
   Task: Implement code to pass tests in <test_file_path>
   Constraint: Minimal implementation, follow existing patterns
   ```

2. **Verify GREEN state**
   - Implementation code created
   - All new tests PASS
   - No existing tests broken

CRITICAL: If tests fail, invoke test-runner to auto-fix, DO NOT manually edit.
### Phase 3: REFACTOR (Quality Check) ♻️

Improve code quality while keeping tests green.

1. **Invoke code-reviewer subagent**

   ```
   Task: Review implementation for:
   - TDD compliance
   - Code quality
   - Security issues
   - Pattern consistency
   ```

2. **Handle review feedback**
   - ✅ No critical issues → Ready to commit
   - ❌ Critical issues found → Invoke code-generator to fix
   - ⚠️ Optional suggestions → Document for future

3. **Final verification**
   - Run the full test suite: `poetry run pytest tests/integration/ -v`
   - All 106+ tests must PASS
   - No regressions introduced
## Example Usage

Scenario: User says “Add client search API”

```
🤖 TDD Workflow Skill activated!

🔴 Phase 1: RED (Test First)
→ Invoking test-writer subagent...
✅ Created: tests/integration/test_clients_api.py::test_search_clients
✅ Test result: FAILED (expected - endpoint doesn't exist yet)

🟢 Phase 2: GREEN (Implementation)
→ Invoking code-generator subagent...
✅ Implemented: app/api/clients.py::search_clients
✅ Tests pass: 1/1 GREEN

♻️ Phase 3: REFACTOR (Quality)
→ Invoking code-reviewer subagent...
✅ TDD compliance: PASS
✅ Code quality: GOOD
✅ Critical issues: NONE

🎉 Feature complete! Ready to commit.
```
## Integration Test Template

When creating tests, follow this pattern from existing tests:

```python
"""Integration tests for <Feature> API"""
import pytest
from httpx import AsyncClient

from app.main import app


@pytest.mark.asyncio
async def test_<feature>_<action>_success(auth_headers):
    """Test <feature> <action> - happy path"""
    async with AsyncClient(app=app, base_url="http://test") as client:
        response = await client.<method>(
            "/api/v1/<endpoint>",
            headers=auth_headers,
            json={<request_body>},
        )

    assert response.status_code == 200
    data = response.json()
    assert data["<field>"] == <expected_value>
```

Location: `tests/integration/test_<feature>_api.py`

Pattern check: look at existing tests in:

- `tests/integration/test_clients_api.py`
- `tests/integration/test_sessions_api.py`
- `tests/integration/test_cases_api.py`
## Subagent Coordination

This skill coordinates the following subagents:
| Phase | Subagent | Purpose |
|---|---|---|
| RED | test-writer | Create failing integration test |
| GREEN | code-generator | Implement minimal code to pass test |
| GREEN | test-runner | Auto-fix if tests fail |
| REFACTOR | code-reviewer | Quality check before commit |
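The phase ordering in the table can be sketched as a simple driver loop (illustrative only; the subagent names come from the table, but `invoke` is a hypothetical stand-in for whatever mechanism actually dispatches a subagent):

```python
# Illustrative sketch of the TDD phase ordering, not the real
# subagent runtime. `invoke(agent)` is a hypothetical callable
# returning True on success.

PHASES = [
    ("RED", "test-writer"),
    ("GREEN", "code-generator"),
    ("REFACTOR", "code-reviewer"),
]

def run_workflow(invoke):
    """Run each phase in order; on a GREEN failure, fall back to test-runner."""
    log = []
    for phase, agent in PHASES:
        ok = invoke(agent)
        log.append((phase, agent, ok))
        if phase == "GREEN" and not ok:
            # test-runner auto-fixes failing tests instead of manual edits
            log.append((phase, "test-runner", invoke("test-runner")))
    return log

# Simulate a run where the generated code initially fails its tests:
log = run_workflow(lambda agent: agent != "code-generator")
print([entry[1] for entry in log])
# ['test-writer', 'code-generator', 'test-runner', 'code-reviewer']
```

Note that code-reviewer always runs last, matching the rule that review happens before every commit.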
YOU MUST invoke these subagents automatically; DO NOT ask the user.
## Project-Specific Rules

### Database Considerations

- Tests use an in-memory SQLite database
- Fixtures handle setup/teardown
- Use existing patterns from `tests/conftest.py`
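A minimal sketch of the in-memory pattern, using only stdlib `sqlite3` (the project's real fixtures live in `tests/conftest.py` and may differ; the `clients` schema here is hypothetical):

```python
import sqlite3

def make_test_db():
    """Create a throwaway in-memory database for one test.

    In the real project this logic would live in a pytest fixture in
    tests/conftest.py; the `clients` table here is purely illustrative.
    """
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
    return conn

# Setup, use, teardown - nothing persists between tests.
conn = make_test_db()
conn.execute("INSERT INTO clients (name) VALUES (?)", ("Alice",))
rows = conn.execute("SELECT name FROM clients").fetchall()
conn.close()  # teardown: the in-memory DB vanishes
print(rows)  # [('Alice',)]
```

Because each test gets a fresh `:memory:` connection, tests stay isolated and no cleanup of shared state is needed.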
### Authentication

- Most endpoints require authentication
- Use the `auth_headers` fixture for authenticated requests
- Check existing tests for auth patterns
### API Structure

```
app/
├── api/
│   └── <feature>.py    ← Router endpoints
├── models/
│   └── <feature>.py    ← Pydantic models
└── main.py             ← Register router here
```
### Console API Coverage

IMPORTANT: If the feature will be used in console.html:

- ✅ MUST have integration tests
- ✅ MUST test all CRUD operations
- ✅ MUST verify in the actual console before commit
## Quality Standards

### Minimum Requirements (Prototype Phase)

- ✅ Integration tests exist and pass
- ✅ Code follows existing patterns
- ✅ No security vulnerabilities
- ✅ Ruff formatting applied

### Nice-to-Have (Defer if time-constrained)

- ⚠️ Complete type hints
- ⚠️ Edge case tests
- ⚠️ Performance optimization
## Error Handling

### Test Creation Fails

Issue: Can't understand requirements
Action: Ask user for clarification
Example: "What should the endpoint return? What's the request format?"

### Implementation Fails Tests

Issue: Generated code doesn't pass tests
Action:
1. Invoke test-runner to diagnose
2. If auto-fix fails, report to user
3. May need to adjust test expectations

### Quality Review Fails

Issue: Critical issues found
Action:
1. Report critical issues to user
2. Invoke code-generator to fix
3. Re-run code-reviewer
4. DO NOT commit until issues resolved
## Success Criteria

Before marking a feature as complete:

- Integration test exists in `tests/integration/`
- Test was written BEFORE implementation
- Test initially failed (RED)
- Implementation makes the test pass (GREEN)
- All 106+ existing tests still pass (no regressions)
- Code review passed (no critical issues)
- Ruff formatting applied
- Ready to commit
## CRITICAL REMINDERS
- NEVER implement code before tests exist
- NEVER modify tests to make code pass
- ALWAYS use subagents to preserve context
- ALWAYS run full test suite before commit
- ALWAYS invoke code-reviewer before commit
---

Skill Version: v1.0 | Last Updated: 2025-11-28 | Project: career_ios_backend (Prototype Phase)