# write-pytest

Install: `npx skills add https://github.com/tytodd/skills --skill write-pytest`
## Write Tests
This skill guides writing a thorough, well-structured test suite for existing code using pytest.
## Workflow
### Step 1: Confirm Test Cases with the User
Before writing any code, identify the concrete examples you plan to use as test cases and confirm them with the user. List:
- The functions/classes/modules being tested
- The specific inputs and expected outputs for each test
- Edge cases and error conditions you intend to cover
Do not proceed to writing tests until the user approves the test cases.
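For example, a proposal for a hypothetical `parse_date` helper might be summarized like this before any test code is written (the function, file path, and cases are illustrative):

```
Under test: parse_date(s) in src/dates.py
Proposed cases:
  1. parse_date("2024-01-31")  -> date(2024, 1, 31)   # happy path
  2. parse_date("")            -> raises ValueError   # empty input
  3. parse_date("31/01/2024")  -> raises ValueError   # unsupported format
Proceed with these?
```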
### Step 2: Plan the Test Approach
Make a written plan before implementing. For each test, identify whether it requires a non-standard environment:
| Pattern | When to use |
|---|---|
| `unittest.mock.patch` / `MagicMock` | Isolating a unit from external calls (filesystem, network, other modules) |
| `monkeypatch` (pytest fixture) | Temporarily replacing env vars, attributes, or functions in-process |
| Docker / testcontainers | Tests that require a real running service (database, Redis, etc.) |
| Custom mock module | When a dependency is unavailable or would cause side effects in tests |
ALWAYS check with the user before using any non-standard pattern. Explain:
- Which pattern you plan to use
- Why it is necessary for that specific test
- Whether a simpler alternative exists
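The two in-process patterns from the table can be sketched as follows. The functions under test (`get_api_key`, `load_config`) and the `settings.ini` path are hypothetical stand-ins defined inline for illustration:

```python
import os
from unittest.mock import patch, mock_open


def get_api_key():
    # Hypothetical code under test: reads a secret from the environment.
    return os.environ["API_KEY"]


def load_config(path):
    # Hypothetical code under test: reads a file from disk.
    with open(path) as f:
        return f.read()


def test_reads_key_from_env(monkeypatch):
    # monkeypatch: set the env var in-process; pytest restores it afterwards.
    monkeypatch.setenv("API_KEY", "test-key")
    assert get_api_key() == "test-key"


def test_load_config_without_filesystem():
    # unittest.mock.patch: stub out open() so no real file is touched.
    with patch("builtins.open", mock_open(read_data="debug = true")):
        assert load_config("settings.ini") == "debug = true"
```

`monkeypatch` undoes its changes automatically at test teardown, while `patch` used as a context manager restores the original object when the `with` block exits.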
### Step 3: Write the Tests
Follow these pytest conventions:
- Place tests in a `tests/` directory mirroring the source layout
- One test file per source module (e.g., `src/foo.py` → `tests/test_foo.py`)
- Use fixtures for shared setup/teardown; prefer `conftest.py` for fixtures shared across files
- Name tests `test_<what_it_does>`; names should read like a sentence
- Group related tests in a class only when shared state or setup justifies it
- Use `pytest.mark` to annotate tests:
  - `@pytest.mark.slow`: tests that take more than a few seconds
  - `@pytest.mark.integration`: tests that touch external systems
  - `@pytest.mark.unit`: pure unit tests (optional but useful)
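Custom markers should also be registered so pytest does not emit `PytestUnknownMarkWarning`. One way to do this, assuming the project configures pytest through `pyproject.toml`:

```toml
# pyproject.toml: register the custom markers used by this suite
[tool.pytest.ini_options]
markers = [
    "slow: tests that take more than a few seconds",
    "integration: tests that touch external systems",
    "unit: pure unit tests",
]
```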
```python
# Example structure
import pytest


@pytest.fixture
def sample_data():
    return {"key": "value"}


@pytest.mark.slow
def test_heavy_operation(sample_data):
    ...


def test_expected_behavior(sample_data):
    result = my_function(sample_data)
    assert result == expected
```
### Step 4: Run the Tests and Interpret Failures
Always run tests with:

```shell
uv run pytest
```

To run a subset (e.g., skip slow tests):

```shell
uv run pytest -m "not slow"
```
Evaluate every failure carefully. For each failing test, reason through:
- Is the source code wrong? → The implementation has a bug.
- Did I misunderstand the intended behavior? → The test expectation is wrong.
- Is the test environment misconfigured? → A fixture, mock, or dependency is broken.
If it is unclear whether the source code or the test expectation is wrong, ask the user to clarify how the code is supposed to behave before making any changes.
Do not blindly fix tests to pass; a green suite built on wrong assumptions is worse than a red one.
## Key Principles
- Keep tests fast by default; isolate slow/integration tests with markers
- Prefer explicit over implicit: assert specific values, not just truthiness
- One logical assertion per test where practical
- Never silence or skip failures without understanding them first
- Tests are documentation: readable test names and clear assertions matter
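The "explicit over implicit" principle can be illustrated with a small sketch; `normalize` is a hypothetical function defined inline:

```python
def normalize(name):
    # Hypothetical function under test, defined inline for illustration.
    return name.strip().lower()


def test_normalize_is_truthy():
    # Weak: passes for many wrong outputs and documents nothing.
    assert normalize("  Alice ")


def test_normalize_strips_and_lowercases():
    # Strong: pins the exact expected value and reads like a sentence.
    assert normalize("  Alice ") == "alice"
```

The second test fails the moment `normalize` stops stripping or lowercasing; the first keeps passing as long as anything non-empty comes back.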