pytest – Testing Framework

Write and run Python tests with pytest. Covers test discovery, assertions, fixtures, parametrize, conftest, and common patterns.

What it is#

pytest is the de-facto standard Python testing framework. It discovers and runs tests automatically, provides rich assertion introspection, and extends via a large plugin ecosystem (pytest-asyncio, pytest-cov, pytest-mock, etc.).

Install#

pip install pytest
pip install pytest-cov       # coverage reports
pip install pytest-asyncio   # async test support

Quick example#

# test_math.py
def add(a: int, b: int) -> int:
    return a + b

def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0

pytest test_math.py -v

Output:

============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-8.3.5
collected 1 item

test_math.py::test_add PASSED                                          [100%]

============================== 1 passed in 0.05s ===============================

When / why to use it over unittest#

  • Simpler syntax: plain assert statements give clear failure messages.
  • Fixture dependency injection is more composable than setUp/tearDown.
  • Built-in parametrize replaces manual test table loops.
  • Enormous plugin ecosystem.
  • Can run unittest-style tests too.
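The first two points are easiest to see side by side. A minimal sketch (the `total` function and test names are made up for this comparison):

```python
import unittest

def total(items):
    """Toy function under test (made up for this comparison)."""
    return sum(items)

# unittest style: assertion methods on a TestCase subclass
class TestTotal(unittest.TestCase):
    def test_total(self):
        self.assertEqual(total([1, 2, 3]), 6)

# pytest style: a plain function with a plain assert; when it fails,
# pytest's assertion introspection shows both sides of the comparison
def test_total():
    assert total([1, 2, 3]) == 6
```

pytest collects and runs both: the TestCase via its unittest compatibility, the plain function via normal discovery.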

Common pitfalls#

[!WARNING] Test file naming – pytest discovers files named test_*.py or *_test.py and functions/methods named test_*. Files that don't match these patterns are silently ignored.

[!WARNING] Sharing mutable fixtures – fixtures with scope="module" or scope="session" are shared across tests. Mutating them in one test affects the others. Use scope="function" (the default) unless you know what you're doing.

[!TIP] pytest -k "add" runs only tests whose name contains "add". pytest -x stops on the first failure. pytest -s shows print() output in real time.

Parametrize#

import pytest

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (100, -100, 0),
])
def test_add_param(a, b, expected):
    assert a + b == expected

pytest test_math.py::test_add_param -v

Output:

test_math.py::test_add_param[1-2-3] PASSED
test_math.py::test_add_param[0-0-0] PASSED
test_math.py::test_add_param[-1-1-0] PASSED
test_math.py::test_add_param[100--100-0] PASSED

4 passed in 0.04s
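Each tuple becomes an independent test, with an id built by joining the parameter values with dashes (visible in the -v output above). A plain-Python sketch of that expansion, assuming simple scalar parameters:

```python
cases = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (100, -100, 0),
]

# Roughly how pytest derives the ids shown in the -v output
ids = ["-".join(str(v) for v in case) for case in cases]
assert ids[0] == "1-2-3"
assert ids[3] == "100--100-0"    # negative values keep their sign

# ...and each case runs as its own test
for a, b, expected in cases:
    assert a + b == expected
```

Because each case is a separate test, one failing tuple doesn't stop the others from running.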

Richer example – fixtures and tmp files#

# test_fixtures.py
import pytest
from pathlib import Path

@pytest.fixture
def sample_data() -> list[dict]:
    return [{"id": i, "value": i * 10} for i in range(1, 4)]

def test_data_length(sample_data):
    assert len(sample_data) == 3

def test_data_values(sample_data):
    assert sample_data[0]["value"] == 10
    assert sample_data[-1]["value"] == 30

def test_file_roundtrip(tmp_path: Path):
    """tmp_path is a built-in pytest fixture: a temporary directory unique per test."""
    f = tmp_path / "data.txt"
    f.write_text("hello\nworld")
    lines = f.read_text().splitlines()
    assert lines == ["hello", "world"]

pytest test_fixtures.py -v

Output:

test_fixtures.py::test_data_length PASSED
test_fixtures.py::test_data_values PASSED
test_fixtures.py::test_file_roundtrip PASSED

3 passed in 0.06s

conftest.py – shared fixtures#

Place fixtures used across multiple test files in conftest.py at the root (or any directory):

# conftest.py
import pytest

@pytest.fixture(scope="module")
def db_connection():
    """Module-scoped fixture: one connection per test module."""
    conn = create_test_db()   # placeholder: your own test-DB setup helper
    yield conn
    conn.close()

pytest auto-discovers conftest.py; no import is needed.

Coverage#

pytest --cov=src --cov-report=term-missing

Output:

---------- coverage: platform linux, python 3.12.3 ----------
Name              Stmts   Miss  Cover   Missing
-----------------------------------------------
src/math_utils.py    10      1    90%   42
-----------------------------------------------
TOTAL                10      1    90%

Useful markers#

@pytest.mark.skip(reason="not implemented yet")
def test_future():
    ...

@pytest.mark.xfail(reason="known bug #123")
def test_known_failure():
    assert 1 == 2

@pytest.mark.slow
def test_large_dataset():
    ...

Run only fast tests: pytest -m "not slow"
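Custom markers like slow are unknown to pytest by default: using them triggers a PytestUnknownMarkWarning, and --strict-markers turns that into an error. Register them in pytest.ini (or the [tool.pytest.ini_options] table of pyproject.toml), assuming the file sits at the project root:

```ini
# pytest.ini
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")
```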