# Testing Quick Reference
## Quick Start

```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=src --cov-report=html

# Run specific test type
pytest -m unit          # Unit tests only
pytest -m integration   # Integration tests only
pytest -m e2e           # End-to-end tests only
```
## Test Commands

### Basic Execution

```bash
pytest        # Run all tests
pytest -v     # Verbose output
pytest -x     # Stop on first failure
pytest -s     # Show print statements
pytest --lf   # Run last failed tests
pytest --ff   # Run failed tests first, then the rest
```

### Coverage

Coverage flags require the `pytest-cov` plugin.

```bash
pytest --cov=src                    # Basic coverage
pytest --cov=src --cov-report=html  # HTML report
pytest --cov=src --cov-report=term  # Terminal report
pytest --cov=src --cov-report=xml   # XML report (for CI)
```

### Test Selection

```bash
pytest tests/unit/                             # Run specific directory
pytest tests/unit/test_file.py                 # Run specific file
pytest tests/unit/test_file.py::test_function  # Run specific test
pytest -k "vector"                             # Run tests matching pattern
pytest -m "unit and not slow"                  # Run by markers
```

### Debugging

```bash
pytest --pdb          # Drop into debugger on failure
pytest --pdb -x       # Debug first failure
pytest --showlocals   # Show local variables in tracebacks
pytest --tb=short     # Short traceback
pytest --tb=long      # Long traceback
```

### Performance

```bash
pytest --durations=10  # Show 10 slowest tests
pytest -n auto         # Parallel execution (requires pytest-xdist)
pytest --profile       # Profile test execution (requires pytest-profiling)
```
## Test Markers

```python
@pytest.mark.unit                       # Unit test
@pytest.mark.integration                # Integration test
@pytest.mark.e2e                        # End-to-end test
@pytest.mark.slow                       # Slow test (> 1s)
@pytest.mark.requires_db                # Requires database
@pytest.mark.requires_vector_db         # Requires vector database
@pytest.mark.requires_external_service  # Requires external service
@pytest.mark.property                   # Property-based test
@pytest.mark.asyncio                    # Async test (requires pytest-asyncio)
```
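Custom markers must be registered, or pytest emits warnings (and `--strict-markers` turns them into errors). One way to register them, as a sketch, is a `pytest_configure` hook in `conftest.py` (descriptions here are placeholders):

```python
# conftest.py -- register the project's custom markers so pytest
# accepts them under --strict-markers. Descriptions are placeholders.
def pytest_configure(config):
    for marker, description in [
        ("unit", "Unit test"),
        ("integration", "Integration test"),
        ("e2e", "End-to-end test"),
        ("slow", "Slow test (> 1s)"),
        ("property", "Property-based test"),
    ]:
        config.addinivalue_line("markers", f"{marker}: {description}")
```

The same markers could equally be declared under a `markers` key in `pytest.ini`.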
## Writing Tests

### Unit Test Template

```python
import pytest

from src.domain.vector_search.entities import Document


@pytest.mark.unit
class TestDocument:
    """Document entity unit tests"""

    def test_update_content_changes_content(self):
        """Test that updating content changes the document content"""
        # Arrange
        doc = Document(id="1", content="original", ...)

        # Act
        doc.update_content("new content")

        # Assert
        assert doc.content == "new content"
```
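The template above assumes a `Document` entity along these lines; this is only a sketch with placeholder fields, not the project's actual class:

```python
from dataclasses import dataclass


@dataclass
class Document:
    """Minimal stand-in for the domain entity (fields are assumptions)."""
    id: str
    content: str

    def update_content(self, new_content: str) -> None:
        # The domain behavior exercised by the unit test above.
        self.content = new_content
```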
### Integration Test Template

```python
import pytest

from src.domain.vector_search.entities import Document


@pytest.mark.integration
@pytest.mark.requires_db
class TestDocumentRepository:
    """Document repository integration tests"""

    @pytest.mark.asyncio
    async def test_save_and_retrieve_document(self, test_db_session):
        """Test saving and retrieving a document"""
        # Arrange
        repository = SQLDocumentRepository(test_db_session)
        doc = Document(...)

        # Act
        await repository.save(doc)
        retrieved = await repository.find_by_id(doc.id)

        # Assert
        assert retrieved.id == doc.id
```
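When a real database is overkill, the same repository interface can be exercised against an in-memory test double. A minimal sketch, assuming the `save`/`find_by_id` interface used in the template above:

```python
class InMemoryDocumentRepository:
    """Async in-memory repository test double (the interface is assumed
    from the integration template above)."""

    def __init__(self):
        self._docs = {}

    async def save(self, doc):
        # Index by the entity's id, mirroring a keyed table.
        self._docs[doc.id] = doc

    async def find_by_id(self, doc_id):
        # Returns None when the id is unknown, like a missing row.
        return self._docs.get(doc_id)
```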
### E2E Test Template

```python
import pytest
from httpx import AsyncClient


@pytest.mark.e2e
class TestDocumentWorkflow:
    """Document workflow end-to-end tests"""

    @pytest.mark.asyncio
    async def test_complete_document_lifecycle(self, client: AsyncClient):
        """Test the complete document lifecycle"""
        # Create
        response = await client.post("/api/v1/documents/", json={...})
        assert response.status_code == 201
        doc_id = response.json()["id"]

        # Search
        response = await client.post("/api/v1/documents/search", json={...})
        assert response.status_code == 200

        # Delete
        response = await client.delete(f"/api/v1/documents/{doc_id}")
        assert response.status_code == 204
```
### Property-Based Test Template

```python
import pytest
from hypothesis import given, strategies as st


@pytest.mark.property
class TestVectorProperties:
    """Vector property-based tests"""

    @given(st.lists(st.floats(allow_nan=False), min_size=1))
    def test_vector_dimension_count_equals_list_length(self, dimensions):
        """Property: Vector dimension count equals list length"""
        vector = Vector(dimensions)
        assert vector.dimension_count == len(dimensions)
```
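The property test assumes a `Vector` value object roughly like this sketch (the real class lives elsewhere in the project; names here are assumptions):

```python
class Vector:
    """Minimal value object matching the property above (a sketch)."""

    def __init__(self, dimensions):
        # Store an immutable copy of the components.
        self.dimensions = tuple(dimensions)

    @property
    def dimension_count(self) -> int:
        return len(self.dimensions)
```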
## Fixtures

### Common Fixtures

```python
@pytest.fixture
def mock_settings():
    """Mock application settings"""
    return {...}


@pytest.fixture
async def test_db_session():
    """Test database session"""
    # Setup
    session = create_test_session()

    yield session

    # Teardown
    await session.rollback()
    await session.close()


@pytest.fixture
def mock_vector_db():
    """Mock vector database"""
    return MockVectorDB()
```
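`MockVectorDB` is not spelled out above; a minimal in-memory version might look like the following sketch (the method names and the distance metric are assumptions, not the project's actual API):

```python
import math


class MockVectorDB:
    """In-memory vector store test double (a sketch)."""

    def __init__(self):
        self._store = {}

    def upsert(self, doc_id, embedding):
        # Store a copy so callers can't mutate it afterwards.
        self._store[doc_id] = list(embedding)

    def search(self, embedding, top_k=5):
        # Naive nearest-neighbour ranking by Euclidean distance.
        ranked = sorted(
            self._store.items(),
            key=lambda item: math.dist(item[1], embedding),
        )
        return [doc_id for doc_id, _ in ranked[:top_k]]
```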
## Coverage Goals

| Layer                | Target | Current |
|----------------------|--------|---------|
| Domain Layer         | 90%    | -       |
| Application Layer    | 85%    | -       |
| Infrastructure Layer | 70%    | -       |
| Presentation Layer   | 75%    | -       |
| Overall              | 80%    | -       |
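The overall target can be enforced rather than just tracked: `pytest-cov` accepts `--cov-fail-under=80` on the command line, and the equivalent threshold can be pinned in coverage configuration (a sketch, assuming `pyproject.toml` is used):

```toml
# pyproject.toml (assumed location) -- fail the run if overall
# coverage drops below the 80% target from the table above.
[tool.coverage.report]
fail_under = 80
```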
## CI/CD Pipeline

### Triggers

- Push to `main` or `develop` branches
- Pull requests to `main` or `develop` branches
- Manual workflow dispatch

### Jobs

- **Test**: Run all tests with coverage
- **Lint**: Code quality checks (flake8, black, isort, mypy)
- **Security**: Security scans (safety, bandit)
- **Build Status**: Aggregate results

### Artifacts

- Coverage reports (30-day retention)
- Test logs (7-day retention)
- Security reports (30-day retention)
## Troubleshooting

### Tests fail in CI but pass locally

```bash
# Check Python version
python --version

# Install exact dependencies
pip install -r requirements.txt

# Run with CI settings
pytest -v --cov=src --cov-report=term
```

### Coverage not uploading

```bash
# Verify coverage.xml exists
ls -la coverage.xml

# Check the Codecov token:
# GitHub Settings → Secrets → CODECOV_TOKEN

# Manual upload
bash <(curl -s https://codecov.io/bash)
```

### Slow tests

```bash
# Find slow tests
pytest --durations=10

# Run without slow tests
pytest -m "not slow"

# Parallel execution (requires pytest-xdist)
pytest -n auto
```
## Best Practices
- ✅ Write tests before or with code (TDD)
- ✅ Keep unit tests fast (< 100ms)
- ✅ Use appropriate markers
- ✅ Mock external dependencies in unit tests
- ✅ Use descriptive test names
- ✅ Follow AAA pattern (Arrange, Act, Assert)
- ✅ Keep tests independent
- ✅ Maintain high coverage (≥ 80%)
- ✅ Review coverage reports regularly
- ✅ Fix failing tests immediately
## Resources