# S3 Optimization System - Testing Suite
This directory contains a comprehensive testing suite for the S3 optimization system, designed to ensure reliability, performance, and most importantly, **cost constraint compliance**.
## 🚨 Critical: No-Cost Constraint Testing
The most important aspect of this testing suite is validating that the system **NEVER** performs cost-incurring S3 operations. The `no_cost_validation/` tests are critical for customer billing protection.
## Test Structure
```
tests/
├── conftest.py                        # Shared fixtures and configuration
├── pytest.ini                         # Pytest configuration
├── requirements-test.txt              # Testing dependencies
├── run_tests.py                       # Test runner script
├── README.md                          # This file
├── unit/                              # Unit tests with mocked dependencies
│   ├── analyzers/                     # Analyzer unit tests
│   │   ├── test_base_analyzer.py
│   │   └── test_general_spend_analyzer.py
│   └── services/                      # Service unit tests
│       └── test_s3_service.py
├── integration/                       # Integration tests
│   └── test_orchestrator_integration.py
├── performance/                       # Performance and load tests
│   └── test_parallel_execution.py
└── no_cost_validation/                # 🚨 CRITICAL: Cost constraint tests
    └── test_cost_constraints.py
```
## Test Categories
### 1. Unit Tests (`unit/`)
- **Purpose**: Test individual components in isolation
- **Scope**: Analyzers, services, utilities
- **Dependencies**: Fully mocked AWS services
- **Speed**: Fast (< 1 second per test)
- **Coverage**: High code coverage with edge cases
**Key Features:**
- Comprehensive mocking of AWS services
- Parameter validation testing
- Error handling verification
- Performance monitoring integration testing
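As an illustration of these features, a unit test for the general spend analyzer might look like the sketch below. The module path, class name, and constructor arguments are assumptions for illustration only; the real interfaces live in the project source, and async tests follow the same style as the patterns later in this README.

```python
import pytest
from unittest.mock import AsyncMock, Mock

# Hypothetical import path and constructor signature, shown for illustration only
from core.analyzers.general_spend_analyzer import GeneralSpendAnalyzer


@pytest.mark.unit
async def test_analyzer_handles_empty_bucket_list():
    # Arrange: fully mocked S3 service, so no real AWS call can occur
    s3_service = Mock()
    s3_service.list_buckets = AsyncMock(
        return_value={"status": "success", "data": {"Buckets": []}}
    )
    analyzer = GeneralSpendAnalyzer(s3_service=s3_service)

    # Act
    result = await analyzer.analyze()

    # Assert: the empty-bucket edge case is handled gracefully
    assert result["status"] == "success"
    s3_service.list_buckets.assert_awaited_once()
```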
### 2. Integration Tests (`integration/`)
- **Purpose**: Test component interactions and data flow
- **Scope**: Orchestrator + analyzers + services
- **Dependencies**: Mocked AWS APIs with realistic responses
- **Speed**: Medium (1-10 seconds per test)
- **Coverage**: End-to-end workflows
**Key Features:**
- Complete analysis workflow testing
- Service fallback chain validation
- Session management integration
- Error propagation testing
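A sketch of an integration test is shown below. The orchestrator import path, constructor, and entry point are assumptions; the real workflow is exercised in `test_orchestrator_integration.py`.

```python
import pytest
from unittest.mock import AsyncMock, Mock

# Hypothetical module path and API, for illustration only
from core.orchestrator import AnalysisOrchestrator


@pytest.mark.integration
async def test_orchestrator_runs_full_workflow():
    # Analyzers are mocked with realistic responses; the orchestrator wiring is exercised
    analyzer = Mock()
    analyzer.analyze = AsyncMock(return_value={"status": "success", "findings": []})

    orchestrator = AnalysisOrchestrator(analyzers=[analyzer])  # assumed constructor
    report = await orchestrator.run()  # assumed entry point

    assert report["status"] == "success"
    analyzer.analyze.assert_awaited_once()
```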
### 3. Performance Tests (`performance/`)
- **Purpose**: Validate performance characteristics and resource usage
- **Scope**: Parallel execution, timeout handling, memory usage
- **Dependencies**: Controlled mock delays and resource simulation
- **Speed**: Slow (10-60 seconds per test)
- **Coverage**: Performance benchmarks and limits
**Key Features:**
- Parallel vs sequential execution comparison
- Timeout handling validation
- Memory usage monitoring
- Concurrent request handling
- Cache effectiveness testing
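The parallel-versus-sequential comparison can be illustrated with controlled mock delays. The sketch below uses a stand-in coroutine rather than the project's real analyzers:

```python
import asyncio
import time

import pytest


async def fake_analyzer(delay: float = 0.2) -> dict:
    """Stand-in for an analyzer call with a controlled mock delay."""
    await asyncio.sleep(delay)
    return {"status": "success"}


@pytest.mark.performance
async def test_parallel_is_faster_than_sequential():
    # Sequential baseline
    start = time.perf_counter()
    for _ in range(5):
        await fake_analyzer()
    sequential = time.perf_counter() - start

    # Parallel execution of the same workload
    start = time.perf_counter()
    await asyncio.gather(*(fake_analyzer() for _ in range(5)))
    parallel = time.perf_counter() - start

    assert parallel < sequential
```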
### 4. No-Cost Constraint Validation (`no_cost_validation/`) 🚨
- **Purpose**: **CRITICAL** - Ensure no cost-incurring operations are performed
- **Scope**: All S3 operations across the entire system
- **Dependencies**: Cost constraint validation framework
- **Speed**: Fast (< 1 second per test)
- **Coverage**: 100% of S3 operations
**Key Features:**
- Forbidden operation detection
- Cost constraint system validation
- Data source cost verification
- End-to-end cost compliance testing
- Bypass attempt prevention
## Running Tests
### Quick Start
```bash
# Check test environment
python tests/run_tests.py --check
# Run all tests
python tests/run_tests.py --all
# Run specific test suites
python tests/run_tests.py --unit
python tests/run_tests.py --integration
python tests/run_tests.py --performance
python tests/run_tests.py --cost-validation  # 🚨 CRITICAL
```
### Using pytest directly
```bash
# Install test dependencies
pip install -r tests/requirements-test.txt
# Run unit tests with coverage
pytest tests/unit/ --cov=core --cov=services --cov-report=html
# Run integration tests
pytest tests/integration/ -v
# Run performance tests
pytest tests/performance/ -m performance
# Run cost validation tests (CRITICAL)
pytest tests/no_cost_validation/ -m no_cost_validation -v
```
### Test Markers
Tests are organized using pytest markers:
- `@pytest.mark.unit` - Unit tests
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.performance` - Performance tests
- `@pytest.mark.no_cost_validation` - 🚨 Cost constraint tests
- `@pytest.mark.slow` - Tests that take longer to run
- `@pytest.mark.aws` - Tests requiring real AWS credentials (skipped by default)
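Markers are applied as decorators and can be stacked, then selected on the command line with `-m`. A minimal illustration (test bodies are placeholders):

```python
import pytest


@pytest.mark.unit
def test_bucket_name_normalization():
    """Runs as part of the fast unit suite."""
    ...


@pytest.mark.no_cost_validation
@pytest.mark.slow
def test_full_cost_compliance_sweep():
    """Selected with `-m no_cost_validation`; excluded by `-m "not slow"`."""
    ...
```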
## Test Configuration
### Environment Variables
```bash
# AWS credentials for testing (use test account only)
export AWS_ACCESS_KEY_ID=testing
export AWS_SECRET_ACCESS_KEY=testing
export AWS_DEFAULT_REGION=us-east-1
# Test configuration
export PYTEST_TIMEOUT=300
export PYTEST_WORKERS=auto
```
### Coverage Requirements
- **Minimum Coverage**: 80%
- **Target Coverage**: 90%+
- **Critical Paths**: 100% (cost constraint validation)
### Performance Benchmarks
- **Unit Tests**: < 1 second each
- **Integration Tests**: < 10 seconds each
- **Performance Tests**: < 60 seconds each
- **Full Suite**: < 5 minutes
## Key Testing Patterns
### 1. AWS Service Mocking
```python
import pytest
from unittest.mock import AsyncMock, Mock


@pytest.fixture
def mock_s3_service():
    """S3 service mock that never touches real AWS."""
    service = Mock()
    service.list_buckets = AsyncMock(return_value={
        "status": "success",
        "data": {"Buckets": [...]}
    })
    return service
```
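A test then consumes the fixture simply by naming it as a parameter; pytest handles the injection:

```python
async def test_list_buckets_returns_success(mock_s3_service):
    # The mocked fixture is injected by name; no real AWS call is made
    result = await mock_s3_service.list_buckets()
    assert result["status"] == "success"
```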
### 2. Cost Constraint Validation
```python
def test_no_forbidden_operations(cost_constraint_validator):
    # Test code that should not call forbidden operations
    analyzer.analyze()

    summary = cost_constraint_validator.get_operation_summary()
    assert len(summary["forbidden_called"]) == 0
```
### 3. Performance Testing
```python
@pytest.mark.performance
async def test_parallel_execution_performance(performance_tracker):
    performance_tracker.start_timer("test")
    await run_parallel_analysis()
    execution_time = performance_tracker.end_timer("test")
    performance_tracker.assert_performance("test", max_time=30.0)
```
### 4. Error Handling Testing
```python
async def test_service_failure_handling():
    with patch('service.api_call', side_effect=Exception("API Error")):
        result = await analyzer.analyze()
        assert result["status"] == "error"
        assert "API Error" in result["message"]
```
## Critical Test Requirements
### 🚨 Cost Constraint Tests MUST Pass
The no-cost constraint validation tests are **mandatory** and **must pass** before any deployment:
1. **Forbidden Operation Detection**: Verify all cost-incurring S3 operations are blocked
2. **Data Source Validation**: Confirm all data sources are genuinely no-cost
3. **End-to-End Compliance**: Validate entire system respects cost constraints
4. **Bypass Prevention**: Ensure cost constraints cannot be circumvented
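Conceptually, forbidden operation detection boils down to comparing the operations a test recorded against an explicit deny list. The sketch below is illustrative only: the operation names and the helper are assumptions, and the authoritative deny list lives in the project's cost constraint validation framework.

```python
# Illustrative deny list; the project maintains the authoritative one.
FORBIDDEN_S3_OPERATIONS = {
    "get_object",
    "list_objects_v2",
    "select_object_content",
    "restore_object",
}


def assert_no_forbidden_calls(recorded_operations: list[str]) -> None:
    """Fail the test if any recorded S3 operation is on the deny list."""
    forbidden_called = [op for op in recorded_operations if op in FORBIDDEN_S3_OPERATIONS]
    assert not forbidden_called, f"Cost-incurring S3 operations detected: {forbidden_called}"
```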
### Test Data Management
- **No Real AWS Resources**: All tests use mocked AWS services
- **Deterministic Data**: Test data is predictable and repeatable
- **Edge Cases**: Include boundary conditions and error scenarios
- **Realistic Scenarios**: Mock data reflects real AWS API responses
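For example, deterministic mock data that mirrors the shape of a real `ListBuckets` response might look like this (bucket names, dates, and owner fields are invented test values):

```python
from datetime import datetime, timezone

# Shaped like a boto3 list_buckets() response, with fully deterministic values
SAMPLE_LIST_BUCKETS_RESPONSE = {
    "Buckets": [
        {"Name": "test-logs-bucket", "CreationDate": datetime(2023, 1, 1, tzinfo=timezone.utc)},
        {"Name": "test-archive-bucket", "CreationDate": datetime(2023, 6, 15, tzinfo=timezone.utc)},
    ],
    "Owner": {"DisplayName": "test-owner", "ID": "test-owner-id"},
}
```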
## Continuous Integration
### Pre-commit Hooks
```bash
# Install pre-commit hooks
pip install pre-commit
pre-commit install
# Run manually
pre-commit run --all-files
```
### CI Pipeline Requirements
1. **All test suites must pass**
2. **Coverage threshold must be met**
3. **No-cost constraint tests are mandatory**
4. **Performance benchmarks must be within limits**
5. **No security vulnerabilities in dependencies**
## Troubleshooting
### Common Issues
1. **Import Errors**
```bash
# Ensure PYTHONPATH includes project root
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
```
2. **AWS Credential Errors**
```bash
# Use test credentials
export AWS_ACCESS_KEY_ID=testing
export AWS_SECRET_ACCESS_KEY=testing
```
3. **Timeout Issues**
```bash
# Increase timeout for slow tests
pytest --timeout=600
```
4. **Memory Issues**
```bash
# Run tests with memory profiling
pytest --memprof
```
### Debug Mode
```bash
# Run with debug output
pytest -v --tb=long --log-cli-level=DEBUG
# Run single test with debugging
pytest tests/unit/test_specific.py::TestClass::test_method -v -s
```
## Contributing to Tests
### Adding New Tests
1. **Choose appropriate test category** (unit/integration/performance/cost-validation)
2. **Follow naming conventions** (`test_*.py`, `Test*` classes, `test_*` methods)
3. **Use appropriate fixtures** from `conftest.py`
4. **Add proper markers** (`@pytest.mark.unit`, etc.)
5. **Include docstrings** explaining test purpose
6. **Validate cost constraints** if testing S3 operations
### Test Quality Guidelines
- **One assertion per test** (when possible)
- **Clear test names** that describe what is being tested
- **Arrange-Act-Assert** pattern
- **Mock external dependencies** completely
- **Test both success and failure paths**
- **Include edge cases and boundary conditions**
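A short Arrange-Act-Assert example putting these guidelines together (the analyzer import and constructor are assumptions, as in the earlier sketches):

```python
import pytest
from unittest.mock import AsyncMock, Mock

from core.analyzers.general_spend_analyzer import GeneralSpendAnalyzer  # hypothetical path


@pytest.mark.unit
async def test_analyze_reports_error_when_listing_fails():
    # Arrange: external dependency mocked to fail
    s3_service = Mock()
    s3_service.list_buckets = AsyncMock(
        return_value={"status": "error", "message": "AccessDenied"}
    )
    analyzer = GeneralSpendAnalyzer(s3_service=s3_service)

    # Act
    result = await analyzer.analyze()

    # Assert: one focused expectation on the failure path
    assert result["status"] == "error"
```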
## Security Considerations
- **No real AWS credentials** in test code
- **No sensitive data** in test fixtures
- **Secure mock data** that doesn't expose patterns
- **Cost constraint validation** is mandatory
- **Regular dependency updates** for security patches
## Reporting and Metrics
### Coverage Reports
```bash
# Generate HTML coverage report
pytest --cov=core --cov=services --cov-report=html
# View report
open htmlcov/index.html
```
### Performance Reports
```bash
# Generate performance benchmark report
pytest tests/performance/ --benchmark-only --benchmark-json=benchmark.json
```
### Test Reports
```bash
# Generate comprehensive test report
python tests/run_tests.py --report
# View reports
open test_report.html
```
---
## 🚨 Remember: Cost Constraint Compliance is Critical
The primary purpose of this testing suite is to ensure that the S3 optimization system **never incurs costs** for customers. The no-cost constraint validation tests are the most important tests in this suite and must always pass.
**Before any deployment or release:**
1. Run `python tests/run_tests.py --cost-validation`
2. Verify all cost constraint tests pass
3. Review any new S3 operations for cost implications
4. Update forbidden operations list if needed
**Customer billing protection is our top priority.**