implement-feature-complete
npx skills add https://github.com/dawiddutoit/custom-claude --skill implement-feature-complete
Works with Clean Architecture projects, Python codebases, TDD workflows, and quality gates.
Complete Feature Implementation
Quick Start
Orchestrate the complete lifecycle of implementing a feature following project standards. This meta-skill ensures no step is forgotten, all quality gates pass, and the feature is production-ready.
Most common use case:
User: "Add ability to search code by file type"
→ Follow 10-stage workflow (TDD, naming, DRY, implementation, refactor, testing)
→ Use checklist template to track progress
→ Validate at each stage before proceeding
→ Feature is production-ready with unit/integration/E2E tests + monitoring
Result: Fully tested feature in 2-3 hours
When to Use This Skill
Use when:
- Implementing new features – “Add ability to X”
- Significant changes – Multi-layer modifications affecting domain/application/infrastructure
- Onboarding to project – Learning the complete feature workflow
- Quality-critical work – Features requiring full test coverage and monitoring
- Production deployments – Changes that must be production-ready
Trigger phrases: “Implement feature X”, “Add functionality for Y”, “Complete workflow for Z”, “End-to-end implementation”
What This Skill Does
Orchestrates complete feature implementation through 10 stages:
- Planning & Location – Determine Clean Architecture layer placement
- TDD – Red – Write comprehensive failing tests first
- Consistent Naming – Follow project conventions
- Extract Common (DRY) – Centralize shared logic
- Implementation – Green – Minimum code to pass tests
- Refactor – Clean – Quality gates enforcement
- Integration Testing – Test with real dependencies (Neo4j, file system)
- E2E Testing – Test through MCP interface
- Real Usage Validation – Manual testing in Claude Code
- Production Monitoring – Verify OTEL traces and production readiness
10-Stage Workflow
Stage 1: Planning & Location (5-10 min)
Determine architectural placement:
- Identify Clean Architecture layer (domain/application/infrastructure/interface)
- Locate existing similar features for consistency
- Plan dependencies and interfaces
- Identify required tests (unit, integration, E2E)
Key decisions:
- Domain layer (value objects, entities) vs Application layer (use cases)
- Repository pattern needed?
- ServiceResult pattern for error handling
- Dependencies to inject
Exit criteria: Clear plan documented, layers identified, test strategy defined
Stage 2: TDD – Red (Write Failing Test) (10-15 min)
Write comprehensive failing tests BEFORE implementation:
- Unit tests for business logic (domain/application)
- Constructor validation tests
- Edge case tests (empty inputs, invalid states)
- Error handling tests (ServiceResult failures)
Template usage:
```python
# Use test-implement-constructor-validation skill
# Use setup-pytest-fixtures for factory patterns
```
Exit criteria: All tests written, all tests failing (red), coverage plan complete
Stage 3: Consistent Naming (5 min)
Follow project conventions:
- Review existing code for naming patterns
- Match verb conventions (find, search, get, create, update, delete)
- Align with project vocabulary (e.g., “code” not “file”, “repository” not “repo”)
- Check abbreviations match project style
Exit criteria: Naming consistent with existing codebase, no new patterns introduced
Stage 4: Extract Common (DRY Principle) (10 min)
Identify and centralize shared logic:
- Search for similar implementations: grep for existing patterns
- Extract to shared utilities if used 3+ times
- Refactor existing code to use new shared logic
- Update tests to cover shared utilities
Exit criteria: No duplicate logic, shared code extracted, all tests still failing (red)
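As a concrete sketch of the extraction step (the helper names are illustrative, not from the project): once the same extension-parsing logic shows up in three call sites, it moves into one shared utility that all of them call.

```python
# Hypothetical Stage 4 extraction: extension normalization appeared in
# several call sites, so it becomes one shared utility the existing
# code is refactored to use.

def normalize_extension(ext: str) -> str:
    """Shared helper: '.PY', ' py ' and '.py' all normalize to '.py'."""
    ext = ext.strip().lower()
    if not ext:
        raise ValueError("extension must be non-empty")
    return ext if ext.startswith(".") else f".{ext}"


def normalize_extensions(exts: list[str]) -> set[str]:
    # Former duplicate call sites now delegate to the shared helper.
    return {normalize_extension(e) for e in exts}
```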
Stage 5: Implementation – Green (Make Test Pass) (20-30 min)
Minimal implementation to pass tests:
- Implement domain layer (value objects, entities)
- Implement application layer (use cases, handlers)
- Implement infrastructure layer (repositories, external adapters)
- Use ServiceResult pattern for error handling
- Inject dependencies properly
Key patterns:
- Use Protocol for interfaces (domain layer)
- Concrete implementation in infrastructure
- Fail-fast validation (no try/except for imports, validation)
- Type hints everywhere
Exit criteria: All tests passing (green), no shortcuts, minimal code
Stage 6: Refactor – Clean (Quality Gates) (10-15 min)
Enforce quality standards:
```bash
# Run quality gates
pytest tests/
mypy src/
ruff check src/
```
Fix issues:
- Type errors (mypy)
- Linting issues (ruff)
- Test failures
- Code duplication
- Missing docstrings
Exit criteria: All quality gates passing, code clean, tests green
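These gates are typically pinned in `pyproject.toml` so every contributor runs the same checks. A hedged example configuration (the strictness level, line length, and coverage threshold here are illustrative, not the project's actual values):

```toml
# Illustrative quality-gate configuration; adjust to the project's standards.
[tool.mypy]
strict = true

[tool.ruff]
line-length = 100

[tool.pytest.ini_options]
addopts = "--cov=src --cov-fail-under=80"  # requires pytest-cov
```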
Stage 7: Integration Testing (Real Dependencies) (15-20 min)
Test with real Neo4j and file system:
- Create integration tests in tests/integration/
- Use real Neo4j database (test fixtures)
- Test real file system operations
- Verify end-to-end data flow
Template:
```python
# tests/integration/test_feature_integration.py
import pytest


@pytest.mark.integration
async def test_feature_with_real_neo4j(neo4j_container):
    # Test with real database
    pass
```
Exit criteria: Integration tests passing, real dependencies work correctly
Stage 8: E2E Testing (MCP Interface) (15-20 min)
Test through actual MCP tool interface:
- Create E2E test in tests/e2e/
- Test through MCP tool call (as Claude Code would use it)
- Verify JSON schema validation
- Test error scenarios
Template:
```python
# tests/e2e/test_feature_mcp.py
async def test_feature_via_mcp_tool(mcp_server):
    result = await mcp_server.call_tool("tool_name", {"param": "value"})
    assert result.success
```
Exit criteria: E2E tests passing, MCP interface validated
Stage 9: Real Usage Validation (Manual Testing) (10-15 min)
Test in Claude Code environment:
- Start MCP server: python -m src.project_watch_mcp
- Use feature in Claude Code conversation
- Monitor logs: tail -f logs/project-watch-mcp.log
- Verify OTEL traces show expected spans
- Test error scenarios manually
Validation checklist:
- Feature works as expected
- Logs show proper trace spans
- Errors handled gracefully
- Performance acceptable
Exit criteria: Feature works in real Claude Code session, logs clean, traces visible
Stage 10: Production Monitoring (OTEL Traces) (5-10 min)
Verify production readiness:
- Check OTEL traces include all spans
- Verify error handling traces (ServiceResult.failure paths)
- Confirm performance metrics logged
- Validate trace context propagation
Use observe-analyze-logs skill for trace analysis
Exit criteria: All traces present, error paths traced, production-ready
Usage Examples
Example 1: Search by File Type Feature
Request: “Add ability to search code by file type (.py, .ts, etc.)”
Workflow:
- Planning: Application layer (SearchByFileTypeHandler), infrastructure (Neo4j query)
- TDD: Write failing tests for handler, repository, validation
- Naming: “search_by_file_type” (matches existing “search_code”)
- DRY: Reuse existing file extension parsing logic
- Implementation: Handler + repository method + Neo4j query
- Refactor: Pass quality gates (mypy, ruff, pytest)
- Integration: Test with real Neo4j database
- E2E: Test via MCP tool call
- Real Usage: Test in Claude Code, monitor logs
- Monitoring: Verify OTEL traces complete
Result: Production-ready feature in 2 hours
Example 2: Add Repository Method
Request: “Add method to get file metadata by path”
Workflow: Same 10 stages, focused on repository layer
- Stage 1: Infrastructure layer (repository method)
- Stage 2: Write failing repository tests
- Stage 5: Implement Cypher query
- Stage 7: Integration test with real Neo4j
Result: Tested repository method in 1 hour
Expected Outcomes
Successful Feature Implementation
Indicators:
- All 10 stages completed
- All tests passing (unit, integration, E2E)
- Quality gates green (mypy, ruff, pytest)
- Feature works in Claude Code
- OTEL traces visible in logs
- No regressions introduced
Deliverables:
- Production-ready feature
- Comprehensive test coverage (80%+)
- Clean code (no type errors, no linting issues)
- Integration with existing system
- Monitoring and observability
Stage Failure Example
If quality gates fail at Stage 6:
❌ Stage 6 Failed: Quality Gates
Issues:
- mypy: 3 type errors in src/application/handlers/search_by_file_type.py
- ruff: 2 linting issues (unused imports, line length)
- pytest: 1 test failure (edge case not handled)
Next steps:
1. Fix type errors (add type hints)
2. Fix linting (remove imports, break lines)
3. Fix test failure (handle empty file type list)
4. Re-run quality gates
5. Proceed to Stage 7 when green
Requirements
Tools needed:
- pytest (unit tests)
- mypy (type checking)
- ruff (linting)
- Neo4j (integration tests)
- MCP server (E2E tests)
Knowledge needed:
- Clean Architecture (layer boundaries)
- TDD workflow (red-green-refactor)
- ServiceResult pattern (error handling)
- Dependency injection (Container pattern)
- OTEL tracing (observability)
Project files:
- ARCHITECTURE.md – Clean Architecture reference
- tests/conftest.py – Pytest fixtures
- src/container.py – DI container
Troubleshooting
Issue: Quality gates failing at Stage 6
Symptom: mypy/ruff/pytest errors prevent progression
Diagnosis:
```bash
# Check specific errors
mypy src/
ruff check src/
pytest tests/ -v
```
Solutions:
- Type errors: Add missing type hints, fix return types
- Linting: Fix imports, line length, unused variables
- Test failures: Fix edge cases, update test expectations
Issue: Integration tests failing at Stage 7
Symptom: Tests pass with mocks, fail with real Neo4j
Diagnosis: Data model mismatch or query issues
Solutions:
- Verify Neo4j schema matches expectations
- Check Cypher query syntax
- Validate test fixtures create correct data
- Use neo4j_container fixture correctly
Issue: E2E tests failing at Stage 8
Symptom: MCP tool call fails or returns unexpected results
Diagnosis: Interface contract mismatch
Solutions:
- Verify tool schema matches implementation
- Check input validation (Pydantic models)
- Validate tool registration in MCP server
- Test tool directly with mcp-cli
Issue: Feature doesn’t work in Claude Code (Stage 9)
Symptom: Manual testing fails, logs show errors
Diagnosis: Use analyze-logs skill to investigate traces
Solutions:
- Check logs for error traces
- Verify MCP server started correctly
- Validate Claude Code can reach server
- Test with simpler inputs first
Supporting Files
- references/stage-guide.md – Detailed guidance for each stage:
- Stage-specific anti-patterns and red flags
- Detailed troubleshooting for each stage
- Code templates and examples
- Quality gate details (mypy, ruff, pytest configurations)
- Integration and E2E test patterns
- OTEL trace validation techniques
- references/workflow-checklist.md – Copy-paste checklist:
- 10-stage checklist with exit criteria
- Quality gate commands
- Manual testing checklist
- Production readiness validation
Red Flags to Avoid
- Skipping TDD – Writing implementation before tests leads to poor test coverage
- Skipping quality gates – Type errors and linting issues compound over time
- Mock-only testing – Integration and E2E tests catch real-world issues
- No manual testing – Automated tests don’t catch UX issues
- Missing OTEL traces – Production issues are hard to debug without traces
- Inconsistent naming – Makes codebase harder to navigate
- Copy-paste code – Violates DRY, creates maintenance burden
- Partial implementation – “Working on my machine” doesn’t mean production-ready
- Ignoring stage failures – Each stage builds on previous, failures cascade
- No monitoring – Can’t validate production behavior without traces
Key principle: Each stage builds on the previous. Don't skip stages; shortcuts compound into production issues.
Remember: 2-3 hours for complete feature is faster than debugging production issues for days.