write-pytest

📁 tytodd/skills 📅 8 days ago
Install command
npx skills add https://github.com/tytodd/skills --skill write-pytest


Skill documentation

Write Tests

This skill guides you through writing a thorough, well-structured test suite for existing code using pytest.

Workflow

Step 1: Confirm Test Cases with the User

Before writing any code, identify the concrete examples you plan to use as test cases and confirm them with the user. List:

  • The functions/classes/modules being tested
  • The specific inputs and expected outputs for each test
  • Edge cases and error conditions you intend to cover

Do not proceed to writing tests until the user approves the test cases.

Step 2: Plan the Test Approach

Make a written plan before implementing. For each test, identify whether it requires a non-standard environment:

| Pattern                         | When to use                                                               |
|---------------------------------|---------------------------------------------------------------------------|
| unittest.mock.patch / MagicMock | Isolating a unit from external calls (filesystem, network, other modules) |
| monkeypatch (pytest fixture)    | Temporarily replacing env vars, attributes, or functions in-process       |
| Docker / testcontainers         | Tests that require a real running service (database, Redis, etc.)         |
| Custom mock module              | When a dependency is unavailable or would cause side effects in tests     |

ALWAYS check with the user before using any non-standard pattern. Explain:

  1. Which pattern you plan to use
  2. Why it is necessary for that specific test
  3. Whether a simpler alternative exists
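As a sketch of the first pattern, the names below (get_user_name, fetch_quote, APP_USER) are hypothetical stand-ins for your own code; the monkeypatch fixture achieves the same env-var replacement, but as a pytest fixture rather than a context manager:

```python
import os
from unittest.mock import MagicMock, patch

def get_user_name():
    # Stand-in for code that reads external state (here, an env var).
    return os.environ.get("APP_USER", "anonymous")

def fetch_quote(client):
    # client is any object with a .get(url) method, e.g. an HTTP client.
    return client.get("https://example.com/quote").upper()

def test_get_user_name_isolated_from_real_env():
    # patch.dict swaps the environment for this block only, then restores it.
    with patch.dict(os.environ, {"APP_USER": "alice"}):
        assert get_user_name() == "alice"

def test_fetch_quote_without_network():
    # MagicMock stands in for the real client, so no network is touched.
    fake_client = MagicMock()
    fake_client.get.return_value = "stay calm"
    assert fetch_quote(fake_client) == "STAY CALM"
    fake_client.get.assert_called_once_with("https://example.com/quote")

test_get_user_name_isolated_from_real_env()
test_fetch_quote_without_network()
```

The mock also records how it was called, so the second test can verify the collaboration (the URL passed to .get) without any real service.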

Step 3: Write the Tests

Follow these pytest conventions:

  • Place tests in a tests/ directory mirroring the source layout
  • One test file per source module (e.g., src/foo.py → tests/test_foo.py)
  • Use fixtures for shared setup/teardown — prefer conftest.py for fixtures shared across files
  • Name tests test_<what_it_does> — names should read like a sentence
  • Group related tests in a class only when shared state or setup justifies it
  • Use pytest.mark to annotate tests:
    • @pytest.mark.slow — tests that take more than a few seconds
    • @pytest.mark.integration — tests that touch external systems
    • @pytest.mark.unit — pure unit tests (optional but useful)
# Example structure
import pytest

@pytest.fixture
def sample_data():
    return {"key": "value"}

@pytest.mark.slow
def test_heavy_operation(sample_data):
    ...

def test_expected_behavior(sample_data):
    # my_function and expected are placeholders for your code under test
    # and its concrete expected value
    result = my_function(sample_data)
    assert result == expected
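If you use custom marks such as @pytest.mark.slow, registering them avoids PytestUnknownMarkWarning and lets pytest's --strict-markers option catch typos. A minimal registration sketch (the descriptions are illustrative):

```ini
# pytest.ini — or the equivalent [tool.pytest.ini_options] table in pyproject.toml
[pytest]
markers =
    slow: tests that take more than a few seconds
    integration: tests that touch external systems
    unit: pure unit tests
addopts = --strict-markers
```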

Step 4: Run the Tests and Interpret Failures

Always run tests with:

uv run pytest

To run a subset (e.g., skip slow tests):

uv run pytest -m "not slow"
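Other standard pytest selection flags also work through uv run; the file and test names below are illustrative:

```shell
uv run pytest tests/test_foo.py              # one file
uv run pytest tests/test_foo.py::test_name   # one test
uv run pytest -k "parse and not slow"        # select by name expression
uv run pytest -m integration                 # only marked tests
uv run pytest -x                             # stop at the first failure
```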

Evaluate every failure carefully. For each failing test, reason through:

  1. Is the source code wrong? — The implementation has a bug.
  2. Did I misunderstand the intended behavior? — The test expectation is wrong.
  3. Is the test environment misconfigured? — A fixture, mock, or dependency is broken.

If it is unclear whether the source code or the test expectation is wrong, ask the user to clarify how the code is supposed to behave before making any changes.

Do not blindly fix tests to pass — a green suite built on wrong assumptions is worse than a red one.

Key Principles

  • Keep tests fast by default; isolate slow/integration tests with markers
  • Prefer explicit over implicit — assert specific values, not just truthiness
  • One logical assertion per test where practical
  • Never silence or skip failures without understanding them first
  • Tests are documentation — readable test names and clear assertions matter
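The "explicit over implicit" principle in code form — normalize_tags is a hypothetical function under test:

```python
def normalize_tags(raw):
    # Stand-in for code under test: strips, lowercases, dedupes, sorts.
    return sorted({t.strip().lower() for t in raw if t.strip()})

def test_normalize_tags_truthy_only():
    # Weak: passes for any non-empty result, even a wrong one.
    assert normalize_tags([" Py ", "py", "TEST"])

def test_normalize_tags_exact_value():
    # Strong: pins down the exact expected result.
    assert normalize_tags([" Py ", "py", "TEST"]) == ["py", "test"]

test_normalize_tags_truthy_only()
test_normalize_tags_exact_value()
```

The truthy-only version would still pass if normalization silently dropped the wrong tag; the exact comparison would not.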