write-tests
Total installs: 12
Weekly installs: 12
Site-wide rank: #26595

Install command
npx skills add https://github.com/meta-pytorch/openenv --skill write-tests
Install distribution by agent
cursor
12
claude-code
12
gemini-cli
11
replit
11
antigravity
11
github-copilot
11
Skill documentation
/write-tests
Write failing tests that encode acceptance criteria.
Usage
/write-tests
/write-tests Add logout button to header
When to Use
- After creating a todo that requires implementation
- Before running `/implement`
- When you have clear acceptance criteria
When NOT to Use
- Implementation already exists (tests would pass immediately)
- You’re exploring or prototyping (not TDD mode)
- Just adding to existing test coverage
What It Does
- Analyzes the current todo/requirement
- Reads existing tests to understand patterns
- Writes test files that verify acceptance criteria
- Verifies tests FAIL (proves they test something real)
- Returns test file paths for `/implement`
Output
The tester agent will produce:
## Tests Written
### Files Created/Modified
- `tests/test_client.py`
### Tests Added
| Test | Verifies |
|------|----------|
| `test_client_reset_returns_observation` | Reset returns valid observation |
| `test_client_step_advances_state` | Step mutates state correctly |
| `test_client_handles_invalid_action` | Error handling for bad input |
### Verification
All tests FAIL as expected (no implementation yet).
### Next Step
Run `/implement` to make these tests pass.
Rules
- Read existing tests first to understand patterns and conventions
- Test behavior, not implementation – write from user’s perspective
- Integration tests first, then unit tests if needed
- Each test verifies ONE thing clearly
- Run tests to verify they fail before returning
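The "test behavior, not implementation" rule can be illustrated with a sketch. The `Counter` class and both tests are hypothetical examples, not part of this skill:

```python
# Hypothetical class used only to illustrate the rule.
class Counter:
    def __init__(self):
        self._value = 0        # private storage detail

    def increment(self):
        self._value += 1

    def read(self):
        return self._value

# Anti-pattern: couples the test to a private field, so it breaks
# on any internal refactor even when observable behavior is unchanged.
def test_counter_private_attribute():
    c = Counter()
    c.increment()
    assert c._value == 1

# Preferred: exercises only the public API, so it survives refactors
# of internal state and documents what users can rely on.
def test_counter_increment_then_read():
    c = Counter()
    c.increment()
    assert c.read() == 1
```

Both tests pass today, but only the behavioral one keeps passing if `Counter` later stores its count differently.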
Anti-patterns (NEVER do these)
- Writing tests that pass without implementation
- Testing implementation details instead of behavior
- Writing overly complex test setups
- Adding implementation code (that's `/implement`'s job)
- Writing tests that duplicate existing coverage
Completion Criteria
Before returning, verify:
- Tests compile/run successfully (pytest can collect them)
- Tests FAIL (no implementation yet)
- Test names clearly describe what they verify
- Tests follow existing project patterns (see `tests/` for examples)
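The fail-first check can be scripted around pytest's documented exit codes. This is a sketch of one possible approach, not necessarily how the skill runs pytest internally:

```python
# pytest exit codes: 0 = all tests passed, 1 = tests ran and failed,
# 2-4 = interruption/internal/usage errors, 5 = no tests collected.
def fail_first_status(returncode):
    """Interpret a pytest exit code for the pre-implementation check."""
    if returncode == 1:
        return "ok-failing"     # expected state: tests collect, run, and fail
    if returncode == 0:
        return "error-passing"  # tests pass with no implementation -- rewrite them
    return "error-broken"       # tests could not be collected or run at all

# Sketch of the check itself (the test path is hypothetical):
# import subprocess, sys
# result = subprocess.run([sys.executable, "-m", "pytest", "tests/test_client.py", "-q"])
# print(fail_first_status(result.returncode))
```

Only the `"ok-failing"` outcome satisfies the completion criteria above: the tests are collectable and they fail for lack of an implementation.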