
Pytest Fundamentals (Intermediate)

Pytest = Python's most powerful testing framework = less test code, more value

Learning Outcomes

After completing this page, you will:

  • ✅ Understand pytest's test discovery mechanism
  • ✅ Write effective assertions with pytest.raises
  • ✅ Use markers (skip, xfail, parametrize)
  • ✅ Compare pytest vs unittest and know when to use each
  • ✅ Organize a test project following best practices

Why Pytest?

Pytest is the most popular testing framework in the Python ecosystem. Compared to the built-in unittest, pytest has several advantages:

| Feature     | pytest             | unittest            |
|-------------|--------------------|---------------------|
| Syntax      | Simple functions   | Class-based         |
| Assertions  | Plain assert       | self.assertEqual()  |
| Fixtures    | Powerful, flexible | setUp/tearDown      |
| Plugins     | 800+ plugins       | Limited             |
| Output      | Rich, colorful     | Basic               |
| Parametrize | Built-in           | Requires subTest    |
python
# unittest style - verbose
import unittest

class TestMath(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)
        self.assertNotEqual(1 + 1, 3)
        self.assertTrue(1 + 1 == 2)

# pytest style - clean
def test_addition():
    assert 1 + 1 == 2
    assert 1 + 1 != 3
    assert 1 + 1 == 2  # assertTrue equivalent, same plain assert

Test Discovery Rules

Pytest discovers tests automatically, by convention:

File Discovery

project/
├── src/
│   └── calculator.py
├── tests/                    # ✅ The tests/ directory
│   ├── __init__.py
│   ├── test_calculator.py    # ✅ File starts with test_
│   ├── calculator_test.py    # ✅ File ends with _test
│   └── conftest.py           # Shared fixtures
└── pyproject.toml

Function/Class Discovery

python
# test_calculator.py

# ✅ Function name starts with test_
def test_add():
    assert add(1, 2) == 3

# ✅ Class name starts with Test (and has no __init__)
class TestCalculator:
    # ✅ Method name starts with test_
    def test_subtract(self):
        assert subtract(5, 3) == 2
    
    # ❌ Not a test (no test_ prefix)
    def helper_method(self):
        pass

# ❌ Not discovered (no test_ prefix)
def add_numbers():
    pass

Custom Discovery with pyproject.toml

toml
# pyproject.toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py", "check_*.py"]
python_classes = ["Test*", "Check*"]
python_functions = ["test_*", "check_*"]

Running Tests

bash
# Run all tests
pytest

# Run a specific file
pytest tests/test_calculator.py

# Run a specific test function
pytest tests/test_calculator.py::test_add

# Run a specific test class
pytest tests/test_calculator.py::TestCalculator

# Run a specific test method
pytest tests/test_calculator.py::TestCalculator::test_subtract

# Run tests matching a pattern
pytest -k "add or subtract"  # Tests with "add" or "subtract" in the name
pytest -k "not slow"         # Exclude tests with "slow" in the name

Assertions and pytest.raises

Plain Assertions

Pytest uses Python's built-in assert with assertion introspection: when a test fails, pytest shows exactly what went wrong:

python
def test_string_operations():
    result = "hello world"
    
    # ✅ Simple assertions
    assert result == "hello world"
    assert "hello" in result
    assert len(result) == 11
    assert result.startswith("hello")
    
    # ✅ Assertion with a message
    assert result == "hello world", f"Expected 'hello world', got '{result}'"

def test_list_operations():
    items = [1, 2, 3, 4, 5]
    
    assert len(items) == 5
    assert 3 in items
    assert items[0] == 1
    assert items == [1, 2, 3, 4, 5]
    
    # Pytest shows a detailed diff when lists do not match
    # assert items == [1, 2, 3, 4, 6]  # Shows detailed diff
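
Plain `==` is unreliable for floating-point values (`0.1 + 0.2` is not exactly `0.3` under IEEE 754). Pytest ships `pytest.approx` for this; a short sketch:

```python
import pytest

def test_float_sum():
    # 0.1 + 0.2 is 0.30000000000000004, so a plain == comparison would fail
    assert 0.1 + 0.2 == pytest.approx(0.3)

def test_float_sequence():
    # approx also compares sequences element-wise
    assert [0.1 + 0.2, 1 / 3] == pytest.approx([0.3, 0.3333333], rel=1e-6)
```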

Assertion Introspection Output

When a test fails, pytest shows the details:

python
def test_failing_assertion():
    result = {"name": "Alice", "age": 30}
    expected = {"name": "Alice", "age": 25}
    assert result == expected

# Output:
# AssertionError: assert {'name': 'Alice', 'age': 30} == {'name': 'Alice', 'age': 25}
#   Differing items:
#   {'age': 30} != {'age': 25}

Testing Exceptions with pytest.raises

python
import pytest

def divide(a: int, b: int) -> float:
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

def test_divide_by_zero():
    # ✅ Basic exception check
    with pytest.raises(ValueError):
        divide(10, 0)

def test_divide_by_zero_message():
    # ✅ Check exception message
    with pytest.raises(ValueError) as exc_info:
        divide(10, 0)
    
    assert str(exc_info.value) == "Cannot divide by zero"
    assert exc_info.type is ValueError

def test_divide_by_zero_match():
    # ✅ Check the message with a regex
    with pytest.raises(ValueError, match=r"Cannot divide.*zero"):
        divide(10, 0)

def test_no_exception_expected():
    # ✅ No exception expected
    result = divide(10, 2)
    assert result == 5.0

Testing Multiple Exception Types

python
import pytest

def parse_config(value: str) -> dict:
    if not value:
        raise ValueError("Empty config")
    if not value.startswith("{"):
        raise TypeError("Config must be JSON object")
    import json
    return json.loads(value)

def test_parse_config_errors():
    # Test ValueError
    with pytest.raises(ValueError, match="Empty"):
        parse_config("")
    
    # Test TypeError
    with pytest.raises(TypeError, match="JSON object"):
        parse_config("not json")
    
    # Test JSONDecodeError (from json.loads)
    import json
    with pytest.raises(json.JSONDecodeError):
        parse_config("{invalid}")

pytest.raises with Expected Failures

python
import pytest

def test_expected_failure():
    # If NO exception is raised → the test FAILS
    with pytest.raises(ValueError):
        # This code MUST raise ValueError
        raise ValueError("Expected error")
    
    # ❌ This test would FAIL because nothing raises
    # with pytest.raises(ValueError):
    #     result = 1 + 1  # Raises nothing
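
A common community pattern (not a pytest built-in) parametrizes raising and non-raising cases together, using `contextlib.nullcontext` as a "does not raise" stand-in:

```python
from contextlib import nullcontext

import pytest

def divide(a: int, b: int) -> float:
    # Same divide as in the examples above
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b

@pytest.mark.parametrize("a,b,expectation", [
    (10, 2, nullcontext()),              # no exception expected
    (10, 0, pytest.raises(ValueError)),  # exception expected
])
def test_divide_cases(a, b, expectation):
    with expectation:
        divide(a, b)
```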

Markers: skip, xfail, parametrize

@pytest.mark.skip - Skipping Tests

python
import pytest
import sys

# Skip unconditionally
@pytest.mark.skip(reason="Feature not implemented yet")
def test_future_feature():
    pass

# Skip conditionally
@pytest.mark.skipif(
    sys.version_info < (3, 11),
    reason="Requires Python 3.11+ for TaskGroup"
)
def test_taskgroup():
    from asyncio import TaskGroup
    # Test TaskGroup functionality

# Skip based on platform
@pytest.mark.skipif(
    sys.platform == "win32",
    reason="Unix-only feature"
)
def test_unix_signals():
    import signal
    # Test signal handling
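
Related to `skipif`: `pytest.importorskip` skips a test when an optional dependency cannot be imported. A sketch, illustrated with the always-available stdlib `json` module:

```python
import pytest

def test_optional_dependency():
    # Returns the imported module, or skips the test if the import fails
    json = pytest.importorskip("json")  # stdlib module, purely for illustration
    assert json.loads("[1, 2]") == [1, 2]
```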

@pytest.mark.xfail - Expected Failure

python
import pytest
import sys

# Test expected to fail (known bug)
@pytest.mark.xfail(reason="Bug #123 - floating point rounding issue")
def test_known_bug():
    assert 0.1 + 0.2 == 0.3  # Floating point precision

# xfail with condition
@pytest.mark.xfail(
    condition=sys.platform == "win32",
    reason="Windows has different behavior"
)
def test_platform_specific():
    pass

# Strict xfail - FAIL if the test passes unexpectedly
@pytest.mark.xfail(strict=True, reason="Should fail until bug fixed")
def test_strict_xfail():
    assert False  # If this passes, the test suite fails

@pytest.mark.parametrize - Data-Driven Tests 🔥

python
import pytest

# Basic parametrize
@pytest.mark.parametrize("input,expected", [
    (1, 2),
    (2, 4),
    (3, 6),
    (0, 0),
    (-1, -2),
])
def test_double(input, expected):
    assert input * 2 == expected

# Multiple parameters
@pytest.mark.parametrize("a,b,expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (100, 200, 300),
])
def test_add(a, b, expected):
    assert a + b == expected

# Parametrize with IDs (readable test names)
@pytest.mark.parametrize("email,is_valid", [
    ("user@example.com", True),
    ("invalid-email", False),
    ("", False),
    ("user@.com", False),
], ids=["valid_email", "no_at_sign", "empty", "invalid_domain"])
def test_email_validation(email, is_valid):
    from myapp.validators import validate_email
    assert validate_email(email) == is_valid
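
`myapp.validators` above is a hypothetical module; a minimal `validate_email` that satisfies these four cases could look like this (real email validation is far more involved):

```python
import re

def validate_email(email: str) -> bool:
    # Minimal illustration only: a local part, "@", then a dotted domain
    # with no empty labels. Not RFC 5322 compliant.
    pattern = r"[^@\s]+@[^@\s.]+(\.[^@\s.]+)+"
    return re.fullmatch(pattern, email) is not None
```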

Parametrize with pytest.param

python
import pytest

@pytest.mark.parametrize("input,expected", [
    pytest.param(1, 2, id="positive"),
    pytest.param(-1, -2, id="negative"),
    pytest.param(0, 0, id="zero"),
    pytest.param(
        1000000, 2000000,
        id="large_number",
        marks=pytest.mark.slow  # Mark specific case
    ),
    pytest.param(
        None, None,
        id="none_input",
        marks=pytest.mark.xfail(reason="None not supported yet")
    ),
])
def test_double_advanced(input, expected):
    assert input * 2 == expected

Stacking Parametrize (Cartesian Product)

python
import pytest

@pytest.mark.parametrize("x", [1, 2, 3])
@pytest.mark.parametrize("y", [10, 20])
def test_multiply(x, y):
    # Runs 6 tests: (1,10), (1,20), (2,10), (2,20), (3,10), (3,20)
    assert x * y == y * x  # Trivial assertion; the point is the 6 generated cases
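
Stacked `parametrize` decorators generate the same matrix as `itertools.product`, which is a quick way to sanity-check how many tests you are creating:

```python
from itertools import product

# One test per (y, x) combination from the two stacked decorators above
cases = list(product([10, 20], [1, 2, 3]))
assert len(cases) == 6
```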

Custom Markers

python
# conftest.py
import pytest

def pytest_configure(config):
    config.addinivalue_line("markers", "slow: marks tests as slow")
    config.addinivalue_line("markers", "integration: integration tests")
    config.addinivalue_line("markers", "unit: unit tests")

# test_example.py
import pytest

@pytest.mark.slow
def test_slow_operation():
    import time
    time.sleep(2)
    assert True

@pytest.mark.integration
def test_database_connection():
    # Test actual database
    pass

@pytest.mark.unit
def test_pure_function():
    assert 1 + 1 == 2

bash
# Run only slow tests
pytest -m slow

# Run everything except slow tests
pytest -m "not slow"

# Run only unit tests
pytest -m unit

# Combine markers
pytest -m "unit and not slow"

Pytest vs Unittest: When to Use Which?

Use Pytest when:

  • ✅ Starting a new project
  • ✅ You need complex fixtures
  • ✅ You want parametrized tests
  • ✅ You need the plugin ecosystem (pytest-cov, pytest-mock, pytest-asyncio)
  • ✅ The team is familiar with pytest

Use Unittest when:

  • ✅ You don't want an extra dependency
  • ✅ The codebase already uses unittest
  • ✅ You need compatibility with legacy CI/CD
  • ✅ The team is familiar with the xUnit style

Migrating from Unittest to Pytest

python
# unittest style
import unittest

class TestCalculator(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()
    
    def tearDown(self):
        self.calc.reset()
    
    def test_add(self):
        self.assertEqual(self.calc.add(1, 2), 3)
    
    def test_divide_by_zero(self):
        with self.assertRaises(ValueError):
            self.calc.divide(1, 0)

# pytest style (equivalent)
import pytest

@pytest.fixture
def calc():
    calculator = Calculator()
    yield calculator
    calculator.reset()

def test_add(calc):
    assert calc.add(1, 2) == 3

def test_divide_by_zero(calc):
    with pytest.raises(ValueError):
        calc.divide(1, 0)
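
`Calculator` is assumed in both versions above; a minimal, hypothetical stand-in that makes these tests runnable might be:

```python
class Calculator:
    """Minimal stand-in for the Calculator assumed in the examples above."""

    def __init__(self) -> None:
        self.history: list[float] = []

    def add(self, a: float, b: float) -> float:
        result = a + b
        self.history.append(result)
        return result

    def divide(self, a: float, b: float) -> float:
        if b == 0:
            raise ValueError("Cannot divide by zero")
        result = a / b
        self.history.append(result)
        return result

    def reset(self) -> None:
        # Clear recorded results between tests
        self.history.clear()
```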

💡 BEST PRACTICE

Pytest can run unittest tests! You can migrate gradually:

bash
# Pytest runs unittest tests too
pytest tests/  # Runs both pytest and unittest tests

Test Project Structure

Standard Layout

my_project/
├── src/
│   └── my_package/
│       ├── __init__.py
│       ├── calculator.py
│       ├── validators.py
│       └── models.py
├── tests/
│   ├── __init__.py
│   ├── conftest.py              # Shared fixtures
│   ├── unit/                    # Unit tests
│   │   ├── __init__.py
│   │   ├── test_calculator.py
│   │   └── test_validators.py
│   ├── integration/             # Integration tests
│   │   ├── __init__.py
│   │   └── test_api.py
│   └── e2e/                     # End-to-end tests
│       ├── __init__.py
│       └── test_workflows.py
├── pyproject.toml
└── pytest.ini                   # (optional, prefer pyproject.toml)

conftest.py - Shared Fixtures

python
# tests/conftest.py
import pytest
from my_package.models import User, Database

@pytest.fixture
def sample_user():
    """Create a sample user for testing."""
    return User(name="Test User", email="test@example.com")

@pytest.fixture(scope="session")
def database():
    """Database connection shared across all tests."""
    db = Database.connect("test_db")
    yield db
    db.disconnect()

@pytest.fixture(autouse=True)
def reset_state():
    """Reset global state before each test."""
    # Setup
    yield
    # Teardown - runs after each test
    GlobalState.reset()  # Placeholder for your app's global state holder

pyproject.toml Configuration

toml
[tool.pytest.ini_options]
minversion = "7.0"
testpaths = ["tests"]
python_files = ["test_*.py"]
python_functions = ["test_*"]
python_classes = ["Test*"]

# Markers
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: integration tests",
    "unit: unit tests",
]

# Plugins
addopts = [
    "-ra",                    # Show extra test summary
    "-q",                     # Quiet mode
    "--strict-markers",       # Error on unknown markers
    "--strict-config",        # Error on config issues
    "-x",                     # Stop on first failure (optional)
    "--tb=short",             # Shorter traceback
]

# Coverage (if using pytest-cov)
# addopts = ["--cov=src", "--cov-report=term-missing"]

# Async support (if using pytest-asyncio)
asyncio_mode = "auto"

# Filter warnings
filterwarnings = [
    "error",                  # Treat warnings as errors
    "ignore::DeprecationWarning",
]

Production Pitfalls ⚠️

1. Test Isolation - Tests Affecting Each Other

python
# ❌ BAD: Global state is never reset
counter = 0

def test_increment():
    global counter
    counter += 1
    assert counter == 1  # Fails if test_increment_again runs first!

def test_increment_again():
    global counter
    counter += 1
    assert counter == 1  # Fails if test_increment runs first!

# ✅ GOOD: Use a fixture to isolate state
@pytest.fixture
def counter():
    return {"value": 0}

def test_increment(counter):
    counter["value"] += 1
    assert counter["value"] == 1

def test_increment_again(counter):
    counter["value"] += 1
    assert counter["value"] == 1  # Fresh counter for every test

2. Flaky Tests - Unstable Tests

python
# ❌ BAD: Test depends on wall-clock timing
import time

def test_timeout():
    start = time.time()
    do_something()
    elapsed = time.time() - start
    assert elapsed < 1.0  # Flaky! Can fail on CI

# ✅ GOOD: Mock time or use a tolerance
from unittest.mock import patch

def test_timeout_mocked():
    with patch('time.time') as mock_time:
        mock_time.side_effect = [0, 0.5]  # Controlled timing
        start = time.time()
        do_something()
        elapsed = time.time() - start
        assert elapsed < 1.0

# ✅ GOOD: Use a generous tolerance
def test_timeout_with_tolerance():
    start = time.time()
    do_something()
    elapsed = time.time() - start
    assert elapsed < 2.0  # Generous tolerance for CI
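
pytest's built-in `monkeypatch` fixture gives the same deterministic control without `unittest.mock`; a sketch (`do_something` is a stand-in for the code under test):

```python
import time

import pytest

def do_something():
    # Stand-in for the function under test
    pass

def test_timeout_monkeypatched(monkeypatch: pytest.MonkeyPatch):
    # Feed deterministic timestamps instead of reading the real clock
    ticks = iter([0.0, 0.5])
    monkeypatch.setattr(time, "time", lambda: next(ticks))
    start = time.time()
    do_something()
    elapsed = time.time() - start
    assert elapsed == 0.5
```

`monkeypatch` automatically undoes the patch after the test, so the real clock is restored for later tests.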

3. Test Order Dependency

python
# ❌ BAD: test_b depends on test_a
def test_a_create_user():
    global user_id
    user_id = create_user("Alice")
    assert user_id is not None

def test_b_get_user():
    # Fails if test_a doesn't run first!
    user = get_user(user_id)
    assert user.name == "Alice"

# ✅ GOOD: Each test does its own setup
@pytest.fixture
def created_user():
    user_id = create_user("Alice")
    yield user_id
    delete_user(user_id)  # Cleanup

def test_get_user(created_user):
    user = get_user(created_user)
    assert user.name == "Alice"

4. Not Asserting Enough

python
# ❌ BAD: Only checks that nothing raises
def test_process_data():
    result = process_data([1, 2, 3])
    # Test passes but never verifies the result!

# ✅ GOOD: Assert specific properties
def test_process_data():
    result = process_data([1, 2, 3])
    assert result is not None
    assert len(result) == 3
    assert result[0] == 2  # Verify transformation
    assert all(isinstance(x, int) for x in result)
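
`process_data` is assumed above; a hypothetical implementation consistent with those assertions (doubling each element) would be:

```python
def process_data(items: list[int]) -> list[int]:
    # Hypothetical stand-in: doubles each element, matching result[0] == 2 above
    return [x * 2 for x in items]
```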

5. Hardcoded Test Data

python
# ❌ BAD: Hardcoded paths
def test_read_config():
    config = read_config("/home/dev/project/config.json")
    assert config["debug"] is True

# ✅ GOOD: Use fixtures and tmp_path
@pytest.fixture
def config_file(tmp_path):
    config_path = tmp_path / "config.json"
    config_path.write_text('{"debug": true}')
    return config_path

def test_read_config(config_file):
    config = read_config(config_file)
    assert config["debug"] is True

Useful Pytest Options

bash
# Verbose output
pytest -v

# Very verbose (show each assertion)
pytest -vv

# Show print statements
pytest -s

# Stop on first failure
pytest -x

# Stop after N failures
pytest --maxfail=3

# Run last failed tests
pytest --lf

# Run failed tests first
pytest --ff

# Show slowest N tests
pytest --durations=10

# Parallel execution (requires pytest-xdist)
pytest -n auto

# Coverage report (requires pytest-cov)
pytest --cov=src --cov-report=html

# Generate JUnit XML (for CI)
pytest --junitxml=report.xml

Summary

python
# === TEST DISCOVERY ===
# Files: test_*.py, *_test.py
# Classes: Test* (no __init__)
# Functions: test_*

# === ASSERTIONS ===
assert value == expected
assert value != unexpected
assert value in collection
assert condition, "Error message"

# === EXCEPTIONS ===
with pytest.raises(ValueError):
    risky_function()

with pytest.raises(ValueError, match=r"pattern"):
    risky_function()

# === MARKERS ===
@pytest.mark.skip(reason="...")
@pytest.mark.skipif(condition, reason="...")
@pytest.mark.xfail(reason="...")
@pytest.mark.parametrize("input,expected", [...])

# === RUN COMMANDS ===
pytest                          # All tests
pytest -k "pattern"             # Filter by name
pytest -m "marker"              # Filter by marker
pytest -x                       # Stop on first fail
pytest --lf                     # Last failed only
pytest -v                       # Verbose