55 changes: 55 additions & 0 deletions .github/workflows/test.yml
@@ -0,0 +1,55 @@
name: Test and Quality Gates

on:
push:
branches: [ main ]
pull_request:
branches: [ main ]

jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
        python-version: ["3.9", "3.10", "3.11"]

steps:
- uses: actions/checkout@v4

- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}

- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install -r requirements-test.txt

- name: Run type checking
run: mypy --strict src/

- name: Run linting
run: |
black --check src/ test/
isort --check-only src/ test/
flake8 src/ test/

- name: Run tests with coverage
run: |
pytest --cov=src --cov-report=xml --cov-report=term-missing

- name: Upload coverage to Codecov
uses: codecov/codecov-action@v4
with:
          files: ./coverage.xml
fail_ci_if_error: true

- name: Run benchmarks
        # Override addopts from pytest.ini so its --benchmark-skip does not conflict with --benchmark-only
        run: pytest -o addopts="" --benchmark-only --benchmark-group-by=func --benchmark-warmup=on --benchmark-json=benchmark.json

- name: Upload benchmark results
uses: actions/upload-artifact@v4
with:
name: benchmark-results
path: benchmark.json
115 changes: 114 additions & 1 deletion README.md
@@ -1 +1,114 @@
# PorQua
# PorQua - Portfolio Optimization and Quantitative Analysis

## Test Framework

This project uses a pytest-based test framework with coverage, type-checking, linting, and benchmark gates to keep code quality and reliability high.

### Test Structure

```
test/
├── conftest.py # Shared fixtures and configuration
├── test_quadratic_program.py # Core optimization tests
└── ...
```
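
For orientation, test modules in `test/` consume the shared fixtures from `conftest.py` as plain function arguments. The sketch below is illustrative only; it assumes nothing beyond the `test_data` fixture defined in `conftest.py`:

```python
import pandas as pd
import pytest


@pytest.mark.unit
def test_msci_data_is_loaded(test_data: dict) -> None:
    """The session-scoped test_data fixture should expose the feature and target frames."""
    assert isinstance(test_data["X"], pd.DataFrame)
    assert not test_data["X"].empty
    assert "y" in test_data
```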

### Running Tests

1. Install test dependencies:
```bash
pip install -r requirements-test.txt
```

2. Run tests with coverage:
```bash
pytest --cov=src --cov-report=term-missing
```

3. Run specific test categories:
```bash
# Unit tests only
pytest -m unit

# Integration tests
pytest -m integration

# Performance tests
pytest -m performance

# Property-based tests
pytest -m property
```

### Quality Gates

The following quality gates are enforced:

- Minimum 90% code coverage
- Type safety (mypy --strict)
- Code formatting (black, isort)
- Linting (flake8)
- Performance benchmarks
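
To reproduce these gates locally before opening a PR, run the same commands the CI workflow uses:

```bash
mypy --strict src/
black --check src/ test/
isort --check-only src/ test/
flake8 src/ test/
pytest --cov=src --cov-fail-under=90
```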

### Test Categories

1. **Unit Tests** (`@pytest.mark.unit`)
- Test individual components in isolation
- Fast execution
- No external dependencies

2. **Integration Tests** (`@pytest.mark.integration`)
- Test component interactions
- May use external dependencies
- Slower execution

3. **Performance Tests** (`@pytest.mark.performance`)
- Benchmark critical operations
- Track performance over time
- Generate performance reports

4. **Property-Based Tests** (`@pytest.mark.property`)
- Test invariants and properties
- Use hypothesis for test data generation
- Comprehensive edge case coverage
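
As a sketch of what a property-based test can look like, the example below generates random positive weight vectors with Hypothesis and checks a normalization invariant. The `normalize_weights` helper is purely illustrative and not part of the PorQua API:

```python
import numpy as np
import pytest
from hypothesis import given, strategies as st


def normalize_weights(w: np.ndarray) -> np.ndarray:
    """Illustrative helper: rescale non-negative weights so they sum to one."""
    return w / w.sum()


@pytest.mark.property
@given(st.lists(st.floats(min_value=0.01, max_value=10.0), min_size=2, max_size=50))
def test_normalized_weights_sum_to_one(raw: list) -> None:
    weights = normalize_weights(np.array(raw))
    # Invariants that must hold for any input Hypothesis generates
    assert np.isclose(weights.sum(), 1.0)
    assert (weights >= 0).all()
```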

### CI/CD Pipeline

The GitHub Actions workflow (`/.github/workflows/test.yml`) runs:

1. Type checking
2. Linting
3. Test suite with coverage
4. Performance benchmarks
5. Codecov integration

### Contributing

1. Create a feature branch:
```bash
git checkout -b feat/tests/[component]-[feature]
```

2. Write tests first (TDD approach)

3. Run pre-commit checks:
```bash
pytest --cov=src --cov-fail-under=90
mypy --strict src/
```

4. Create a PR with:
- Issue reference (`Fixes #XYZ`)
- Codecov differential report
- Type annotation documentation

### Maintenance

- Monthly dependency updates
- Regular benchmark baseline updates
- Legacy test deprecation cycle
- Performance regression monitoring

## License

[Your License Information]
25 changes: 25 additions & 0 deletions pytest.ini
@@ -0,0 +1,25 @@
[pytest]
testpaths = test
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
--cov=src
--cov-report=term-missing
--cov-report=html
--cov-fail-under=90
--benchmark-group-by=func
--benchmark-warmup=on
--benchmark-warmup-iterations=1
--benchmark-skip
--randomly-seed=42
-n auto
--strict-markers
markers =
unit: Unit tests
integration: Integration tests
performance: Performance tests
property: Property-based tests
slow: Tests that take longer to run
legacy: Legacy tests that need to be updated
12 changes: 12 additions & 0 deletions requirements-test.txt
@@ -0,0 +1,12 @@
pytest==8.0.2
pytest-cov==4.1.0
pytest-benchmark==4.0.0
hypothesis==6.98.1
mypy==1.8.0
black==24.2.0
isort==5.13.2
flake8==7.0.0
pytest-xdist==3.5.0
pytest-randomly==3.15.0
pytest-mock==3.12.0
pytest-asyncio==0.23.5
71 changes: 71 additions & 0 deletions test/conftest.py
@@ -0,0 +1,71 @@
import os
from typing import Dict

import numpy as np
import pandas as pd
import pytest

# Project modules exercised by the fixtures below
from src.constraints import Constraints
from src.data_loader import load_data_msci
from src.helper_functions import to_numpy
from src.optimization import LeastSquares
from src.optimization_data import OptimizationData

@pytest.fixture(scope="session")
def test_data() -> Dict[str, pd.DataFrame]:
"""Load test data once for the entire test session."""
data = load_data_msci(os.path.join(os.getcwd(), f'data{os.sep}'))
return data

@pytest.fixture(scope="function")
def sample_universe(test_data) -> pd.Index:
"""Provide a sample universe for testing."""
return test_data['X'].columns

@pytest.fixture(scope="function")
def constraints(sample_universe) -> Constraints:
"""Create a fresh constraints object for each test."""
return Constraints(selection=sample_universe)

@pytest.fixture(scope="function")
def optimization_data(test_data) -> OptimizationData:
"""Create optimization data for testing."""
return OptimizationData(
X=test_data['X'],
y=test_data['y'],
align=True
)

@pytest.fixture(scope="function")
def least_squares_optimizer(sample_universe, constraints, optimization_data) -> LeastSquares:
"""Create a configured LeastSquares optimizer for testing."""
optim = LeastSquares(solver_name='cvxopt', sparse=True)
optim.constraints = constraints
optim.set_objective(optimization_data)

# Ensure P and q are numpy arrays
if 'P' in optim.objective:
optim.objective['P'] = to_numpy(optim.objective['P'])
optim.objective['q'] = to_numpy(optim.objective.get('q', np.zeros(len(sample_universe))))

optim.model_qpsolvers()
return optim

@pytest.fixture(scope="session", autouse=True)
def setup_test_environment():
"""Setup and teardown for the entire test session."""
# Setup
print("\nStarting test session...")
yield
# Teardown
print("\nTest session completed.")

@pytest.hookimpl
def pytest_sessionfinish(session, exitstatus):
"""Track coverage baseline after test session."""
if exitstatus == 0:
print("\nAll tests passed successfully!")
else:
print("\nSome tests failed. Please check the report for details.")