The RAG System uses GitHub Actions for continuous integration and continuous deployment. This guide explains how the CI/CD pipeline works and how to use it effectively.
The CI/CD pipeline consists of four main jobs: Test, Lint, and Security run in parallel, and a final Build Status job aggregates their results:
┌─────────────────────────────────────────────────────────┐
│                     GitHub Actions                      │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐          │
│   │   Test   │    │   Lint   │    │ Security │          │
│   │   Job    │    │   Job    │    │   Job    │          │
│   └────┬─────┘    └────┬─────┘    └────┬─────┘          │
│        │               │               │                │
│        └───────────────┴───────────────┘                │
│                        │                                │
│                ┌───────▼───────┐                        │
│                │ Build Status  │                        │
│                │      Job      │                        │
│                └───────────────┘                        │
│                                                         │
└─────────────────────────────────────────────────────────┘
Test Job: Runs the complete test suite with coverage measurement.
Matrix Strategy: Tests run on Python 3.11 and 3.12
Test Execution Order: Unit Tests → Integration Tests → E2E Tests
Coverage Requirements: see the per-layer targets under the coverage configuration section below.
Lint Job: Performs code quality checks.
Security Job: Scans for security vulnerabilities.
Tools: Bandit (static security analysis; it produces the report below)
Reports:
bandit-report.json: Detailed security scan results

Build Status Job: Aggregates results from all jobs and determines overall build status.
Behavior: Succeeds only when the Test, Lint, and Security jobs all pass; any failure marks the overall build as failed.
Unit Tests
Location: tests/unit/
Marker: @pytest.mark.unit
Dependencies: Mocked
Command: pytest tests/unit -v --cov=src --cov-report=term -m unit

Integration Tests
Location: tests/integration/
Marker: @pytest.mark.integration
Dependencies: Test databases/services
Command: pytest tests/integration -v --cov=src --cov-report=term -m integration

E2E Tests
Location: tests/e2e/
Marker: @pytest.mark.e2e
Dependencies: Full system
Command: pytest tests/e2e -v --cov=src --cov-report=term -m e2e
Use markers to organize and selectively run tests:
import pytest

@pytest.mark.unit
def test_vector_creation():
    """Unit test for vector creation"""
    pass

@pytest.mark.integration
@pytest.mark.requires_db
def test_document_repository():
    """Integration test requiring database"""
    pass

@pytest.mark.e2e
@pytest.mark.slow
def test_complete_workflow():
    """End-to-end test of complete workflow"""
    pass

@pytest.mark.property
def test_vector_properties():
    """Property-based test"""
    pass
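If pytest runs with --strict-markers (common in CI), custom markers must be registered, usually in pytest.ini or pyproject.toml. The sketch below shows the equivalent registration in a conftest.py; the marker names match this guide, while the descriptions are illustrative assumptions:

# conftest.py — a minimal sketch: register the custom markers used in this
# guide so `pytest --strict-markers` accepts them. Descriptions are illustrative.
def pytest_configure(config):
    for name, description in [
        ("unit", "fast, isolated unit tests"),
        ("integration", "tests needing databases/services"),
        ("e2e", "end-to-end tests of the full system"),
        ("slow", "long-running tests"),
        ("fast", "very fast tests"),
        ("requires_db", "tests that require a database"),
        ("property", "property-based tests"),
    ]:
        config.addinivalue_line("markers", f"{name}: {description}")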
# Run only unit tests
pytest -m unit
# Run only integration tests
pytest -m integration
# Run only e2e tests
pytest -m e2e
# Exclude slow tests
pytest -m "not slow"
# Run tests requiring database
pytest -m requires_db
# Run property-based tests
pytest -m property
# Combine markers
pytest -m "unit and not slow"
Coverage is configured in .coveragerc and codecov.yml.
Key Settings: coverage is measured over the src/ package, with branch coverage included in the reports.

Terminal Report:
pytest --cov=src --cov-report=term
Output:
Name                                    Stmts   Miss Branch BrPart  Cover
-------------------------------------------------------------------------
src/domain/vector_search/entities.py      45      2      8      1    94%
src/application/handlers.py               67      5     12      2    89%
-------------------------------------------------------------------------
TOTAL                                     512     23     89      7    93%
pytest --cov=src --cov-report=html
Open htmlcov/index.html in browser for interactive report.
pytest --cov=src --cov-report=xml
Generates coverage.xml for Codecov upload.
Codecov tracks coverage for each architectural layer:
Domain (src/domain/): Target 90%
Application (src/application/): Target 85%
Infrastructure (src/infrastructure/): Target 70%
Presentation (src/presentation/): Target 75%
Config (src/config/): Target 80%
Shared (src/shared/): Target 80%

A sketch for checking these targets locally appears after the badge setup below. To enable uploads, set the CODECOV_TOKEN secret in your repository settings, then add the coverage badge to your README.md:
[![codecov](https://codecov.io/gh/YOUR_USERNAME/YOUR_REPO/branch/main/graph/badge.svg?token=YOUR_TOKEN)](https://codecov.io/gh/YOUR_USERNAME/YOUR_REPO)
Replace YOUR_USERNAME, YOUR_REPO, and YOUR_TOKEN with your values.
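To check the per-layer targets listed above locally, a small script can parse coverage.xml. A hedged sketch, not part of the pipeline: it assumes the Cobertura-style XML that pytest --cov-report=xml emits, and normalizes filenames because, depending on coverage.py configuration, they may omit the leading src/:

# check_layer_coverage.py — illustrative local check against the layer targets.
import xml.etree.ElementTree as ET
from collections import defaultdict

TARGETS = {
    "src/domain/": 0.90,
    "src/application/": 0.85,
    "src/infrastructure/": 0.70,
    "src/presentation/": 0.75,
    "src/config/": 0.80,
    "src/shared/": 0.80,
}

covered = defaultdict(int)
total = defaultdict(int)

for cls in ET.parse("coverage.xml").getroot().iter("class"):
    filename = cls.get("filename", "")
    if not filename.startswith("src/"):
        filename = "src/" + filename  # assumes the common src/ layout
    layer = next((p for p in TARGETS if filename.startswith(p)), None)
    if layer is None:
        continue
    for line in cls.iter("line"):
        total[layer] += 1
        covered[layer] += int(line.get("hits", "0")) > 0

for layer, target in TARGETS.items():
    rate = covered[layer] / total[layer] if total[layer] else 0.0
    flag = "OK " if rate >= target else "LOW"
    print(f"{flag} {layer:<22} {rate:6.1%} (target {target:.0%})")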
# Install test dependencies
pip install pytest pytest-asyncio pytest-cov hypothesis httpx
# Run all tests
pytest
# Run with verbose output
pytest -v
# Run with coverage
pytest --cov=src
# Run specific test file
pytest tests/unit/domain/test_entities.py
# Run specific test function
pytest tests/unit/domain/test_entities.py::test_vector_creation
# Run tests in parallel (requires pytest-xdist)
pytest -n auto
# Stop on first failure
pytest -x
# Show local variables in tracebacks
pytest --showlocals
# Run only failed tests from last run
pytest --lf
# Run failed tests first, then others
pytest --ff
# Generate all report types
pytest --cov=src --cov-report=html --cov-report=term --cov-report=xml
# Drop into the Python debugger on test failures
pytest --pdb
# Drop into the debugger on the first failure, then stop
pytest --pdb --maxfail=1
# Print output (disable capture)
pytest -s
# Show extra test summary info
pytest -ra
Tests pass locally but fail in CI

Possible Causes: a different Python version in CI, missing or mismatched dependencies, or different pytest settings.
Solutions:
# Test with specific Python version
pyenv install 3.11
pyenv local 3.11
pytest
# Check for missing dependencies
pip freeze > current-deps.txt
diff requirements.txt current-deps.txt
# Run tests with same settings as CI
pytest -v --cov=src --cov-report=term
Codecov upload fails

Possible Causes: a missing or invalid CODECOV_TOKEN secret, or no coverage report generated.
Solutions: verify that the CODECOV_TOKEN secret is set and that coverage.xml exists after the test run. Try a manual upload:
bash <(curl -s https://codecov.io/bash) -t YOUR_TOKEN
Tests are slow

Possible Causes: slow tests not marked as slow, no parallel execution, or unnecessary waits in test code.
Solutions:
# Identify slow tests
pytest --durations=10
# Run only fast tests
pytest -m "not slow"
# Use parallel execution
pytest -n auto
# Profile test execution (requires the pytest-profiling plugin)
pytest --profile
Import errors when running tests

Possible Causes: missing __init__.py files, an incorrect PYTHONPATH, or circular imports.
Solutions:
# Check Python path
python -c "import sys; print('\n'.join(sys.path))"
# Run tests with explicit path
PYTHONPATH=. pytest
# Check for circular imports
pytest --collect-only
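If you prefer not to export PYTHONPATH by hand, a root-level conftest.py can put the repository root on sys.path. A hedged sketch of one common approach (modern pytest alternatively offers a pythonpath ini option):

# conftest.py at the repository root — prepends the repo root to sys.path
# so `src` imports resolve without setting PYTHONPATH manually.
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parent))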
# Good: Fast unit test
@pytest.mark.unit
def test_vector_dimension_count():
    vector = Vector([1.0, 2.0, 3.0])
    assert vector.dimension_count == 3

# Avoid: Slow test with unnecessary delays
def test_slow():
    time.sleep(5)  # Don't do this!
    assert True
# Mark tests appropriately
@pytest.mark.unit
@pytest.mark.fast
def test_value_object():
    pass

@pytest.mark.integration
@pytest.mark.requires_db
def test_repository():
    pass

@pytest.mark.e2e
@pytest.mark.slow
def test_workflow():
    pass
# Good: Mock external service
@pytest.mark.unit
def test_document_handler(mock_repository):
    handler = CreateDocumentHandler(mock_repository)
    result = handler.handle(command)
    assert result is not None

# Avoid: Real external calls in unit tests
def test_with_real_api():
    response = requests.get("https://api.example.com")  # Don't do this!
    assert response.status_code == 200
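The mock_repository fixture above is assumed to exist in the test suite; one possible definition with unittest.mock (the method name is illustrative, the real repository interface may differ):

import pytest
from unittest.mock import MagicMock

@pytest.fixture
def mock_repository():
    # A stand-in repository: every attribute is auto-mocked, so the
    # handler can be exercised without touching real storage.
    repo = MagicMock()
    repo.save.return_value = None  # illustrative method name
    return repo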
# Cover edge cases
def test_vector_empty_dimensions():
    with pytest.raises(ValueError):
        Vector([])

def test_vector_invalid_dimensions():
    with pytest.raises(ValueError):
        Vector([1, "invalid", 3])

def test_vector_normal_case():
    vector = Vector([1.0, 2.0])
    assert vector.dimension_count == 2
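The two failure cases above can also be collapsed with pytest.mark.parametrize; a sketch using the same assumed Vector class:

@pytest.mark.parametrize(
    "dims", [[], [1, "invalid", 3]], ids=["empty", "non-numeric"]
)
def test_vector_rejects_invalid_dimensions(dims):
    with pytest.raises(ValueError):
        Vector(dims)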
# Good: Independent test
@pytest.fixture
def clean_database():
    db = create_test_db()
    yield db
    db.cleanup()

def test_create_document(clean_database):
    # Test uses a fresh database
    pass

# Avoid: Tests depending on each other
def test_step_1():
    global shared_state
    shared_state = "value"

def test_step_2():
    # Depends on test_step_1 running first
    assert shared_state == "value"
# Good: Descriptive names
def test_vector_creation_with_valid_dimensions_succeeds():
    pass

def test_vector_creation_with_empty_dimensions_raises_value_error():
    pass

# Avoid: Vague names
def test_vector_1():
    pass

def test_vector_2():
    pass
def test_hybrid_search_score_combination():
    """
    Test that hybrid search correctly combines vector and text search scores.

    Given:
        - Vector search results with scores [0.9, 0.7, 0.5]
        - Text search results with scores [0.8, 0.6, 0.4]
        - Weight configuration: vector=0.7, text=0.3

    When:
        - Scores are combined using weighted average

    Then:
        - Combined scores should be [0.87, 0.67, 0.47]
        - Results should be sorted by combined score
    """
    # Minimal implementation derived from the docstring; the real test
    # would obtain these scores from the hybrid search handler.
    vector_scores = [0.9, 0.7, 0.5]
    text_scores = [0.8, 0.6, 0.4]
    combined = [0.7 * v + 0.3 * t for v, t in zip(vector_scores, text_scores)]
    assert combined == pytest.approx([0.87, 0.67, 0.47])
    assert combined == sorted(combined, reverse=True)