# Test Metrics Guide

## Overview

The test suite now outputs comprehensive metrics in multiple formats for analysis, monitoring, and CI/CD integration.

## Available Metrics
### 1. HTML Report (`reports/test_report.html`)

- Visual test results with color coding
- Test execution timeline
- Pass/fail status for each test
- Self-contained (includes all assets)

### 2. JSON Report (`reports/test_report.json`)

- Machine-readable test results
- Full test details including:
  - Test outcomes (passed/failed/skipped)
  - Execution duration
  - Error messages and tracebacks
  - Captured output and logs
### 3. Coverage Reports

- HTML Coverage (`reports/coverage/index.html`)
  - Interactive coverage report
  - Line-by-line coverage visualization
  - File-level coverage statistics
- JSON Coverage (`reports/coverage.json`)
  - Machine-readable coverage data
  - Suitable for CI/CD integration
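As an illustration of consuming the JSON coverage data, here is a minimal sketch assuming the coverage.py JSON schema (`totals.percent_covered` and a `summary.percent_covered` per file); the sample paths are made up, and in practice you would `json.load()` the real `reports/coverage.json`:

```python
def coverage_summary(cov: dict) -> dict:
    """Summarize a coverage JSON report: overall percentage plus per-file percentages."""
    per_file = {
        path: info["summary"]["percent_covered"]
        for path, info in cov["files"].items()
    }
    return {"overall": cov["totals"]["percent_covered"], "per_file": per_file}

# Inline sample shaped like reports/coverage.json (fields trimmed for brevity).
sample = {
    "totals": {"percent_covered": 87.5},
    "files": {
        "src/etl/load.py": {"summary": {"percent_covered": 92.0}},
        "src/etl/extract.py": {"summary": {"percent_covered": 83.0}},
    },
}

summary = coverage_summary(sample)
print(f"Overall coverage: {summary['overall']:.1f}%")
for path, pct in sorted(summary["per_file"].items(), key=lambda kv: kv[1]):
    print(f"  {pct:5.1f}%  {path}")
```

Sorting by percentage surfaces the least-covered files first, which is usually where attention should go.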
### 4. Aggregated Metrics (`reports/TEST_METRICS.md` & `reports/test_metrics.json`)

- Executive summary
- Success rate
- Performance metrics (slowest tests)
- Breakdown by test file
- Timestamp and duration
## Usage

### Run Tests with Metrics

```bash
# Run all tests with metrics output
make test-with-metrics

# Or manually
docker-compose -f docker-compose.test.yml run --rm etl-tests pytest tests/ -v
docker-compose -f docker-compose.test.yml run --rm etl-tests python scripts/generate_test_metrics.py
```
### View Reports

```bash
# HTML report (open in browser)
open reports/test_report.html

# Coverage report (open in browser)
open reports/coverage/index.html

# Metrics summary
cat reports/TEST_METRICS.md
```
## Metrics Output Locations

All metrics are generated in the `reports/` directory:

```
reports/
├── test_report.html     # HTML test report
├── test_report.json     # JSON test report (pytest-json-report)
├── TEST_METRICS.md      # Aggregated metrics (Markdown)
├── test_metrics.json    # Aggregated metrics (JSON)
├── coverage/
│   ├── index.html       # HTML coverage report
│   └── ...
└── coverage.json        # JSON coverage data
```
## Metrics Included

### Summary Metrics

- Total tests executed
- Passed/failed/skipped counts
- Success rate percentage
- Total execution duration
- Average test duration

### Performance Metrics

- Slowest tests (top 10)
- Individual test durations
- File-level execution times
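The slowest-tests ranking can be derived directly from the JSON report. A minimal sketch, assuming the pytest-json-report payload shape (a `tests` list of entries with a `nodeid` and per-stage durations such as `call.duration`); the sample entries are invented stand-ins:

```python
def slowest_tests(report: dict, top_n: int = 10) -> list[tuple[str, float]]:
    """Rank tests by call-phase duration, longest first."""
    durations = [
        (t["nodeid"], t.get("call", {}).get("duration", 0.0))
        for t in report["tests"]
    ]
    return sorted(durations, key=lambda pair: pair[1], reverse=True)[:top_n]

# Minimal stand-in for reports/test_report.json (real entries carry more fields).
sample = {"tests": [
    {"nodeid": "tests/test_load.py::test_load_1m_rows", "call": {"duration": 180.5}},
    {"nodeid": "tests/test_load.py::test_load_100k_rows", "call": {"duration": 25.3}},
    {"nodeid": "tests/test_extract.py::test_parse", "call": {"duration": 0.4}},
]}

for nodeid, secs in slowest_tests(sample, top_n=2):
    print(f"{secs:7.1f}s  {nodeid}")
```

Ranking by the call phase alone (rather than setup + call + teardown) isolates the test body itself from fixture cost.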
### Coverage Metrics

- Line coverage percentage
- Branch coverage percentage
- File-level coverage
- Missing line numbers

### Breakdown Metrics

- Tests by file
- Success rate per file
- Outcome distribution
## CI/CD Integration

### GitHub Actions

The JSON reports can be easily integrated into CI/CD pipelines:

```yaml
- name: Run tests with metrics
  run: |
    pytest tests/ -v --json-report --json-report-file=reports/test_report.json

- name: Generate metrics
  run: python scripts/generate_test_metrics.py

- name: Upload test report
  uses: actions/upload-artifact@v3
  with:
    name: test-report
    path: reports/
```

Generating the metrics before uploading ensures the aggregated files land in the artifact.
### Metrics Dashboard

You can build dashboards using the JSON metrics:

```python
import json

with open('reports/test_metrics.json') as f:
    metrics = json.load(f)

print(f"Success Rate: {metrics['summary']['success_rate']}%")
print(f"Total Duration: {metrics['summary']['duration_formatted']}")
```
## Customization

### Adjust Coverage Thresholds

Edit `pytest.ini`:

```ini
# Fail if coverage < 80%
addopts =
    --cov-fail-under=80
```

### Change Report Formats

Edit `pytest.ini`:

```ini
addopts =
    --html=reports/custom_report.html
    --json-report-file=reports/custom_report.json
```

### Add Custom Metrics

Extend `scripts/generate_test_metrics.py` to add custom metric calculations.
## Example Metrics Output

### TEST_METRICS.md

```markdown
# Test Metrics Report

## Executive Summary

| Metric                | Value    |
| --------------------- | -------- |
| **Total Tests**       | 54       |
| **Passed**            | 54 ✅    |
| **Failed**            | 0        |
| **Success Rate**      | 100.0%   |
| **Total Duration**    | 3m 36.9s |
| **Avg Test Duration** | 4.0s     |

## Performance Metrics

### Slowest Tests (Top 10)

1. `test_load_1m_rows` - 180.5s
2. `test_load_100k_rows` - 25.3s
...
```

### test_metrics.json

```json
{
  "summary": {
    "total": 54,
    "passed": 54,
    "failed": 0,
    "success_rate": 100.0,
    "duration_seconds": 216.86
  },
  "slowest_tests": [{ "test": "test_load_1m_rows", "duration_seconds": 180.5 }]
}
```
## Benefits

- ✅ **Visibility** - Clear view of test health
- ✅ **Performance Tracking** - Identify slow tests
- ✅ **Coverage Monitoring** - Track code coverage trends
- ✅ **CI/CD Integration** - Machine-readable formats
- ✅ **Historical Analysis** - Track metrics over time
- ✅ **Debugging** - Detailed error information
## Troubleshooting

### Reports Not Generated

```bash
# Ensure the reports directory exists
mkdir -p reports

# Check that the pytest plugins are installed
pip install pytest-html pytest-json-report pytest-cov
```

### Coverage Not Working

```bash
# Ensure the source code is on PYTHONPATH
export PYTHONPATH=$PYTHONPATH:$(pwd)

# Check the coverage configuration
pytest --cov=src --cov-report=term
```
All test runs now automatically generate comprehensive metrics reports!
## Related Documentation

### Technical Documentation

- Testing Quick Start - Quick start guide for running tests
- Testing Guide - Comprehensive testing documentation
- Test Runner Guide - Efficient test execution
- Docker Testing - Dockerized test environment
- Test Results Overview - Current test execution results
- Unified Testing Convention - Testing standards

### Task-Specific Documentation

- Task 1 Test Reports (source only) - Test reporting overview
- Task 1 Test Directory (source only) - Detailed test documentation