# OpenF1 MCP Server - Testing Guide
## Overview
The project includes comprehensive test coverage for both the API client and the MCP server. Tests are organized into three categories:
1. **Unit Tests for OpenF1 Client** (`tests/test_openf1_client.py`)
2. **Unit Tests for MCP Server** (`tests/test_server.py`)
3. **Integration Tests** (`tests/test_integration.py`)
## Test Coverage
### OpenF1 Client Tests (22 tests)
- Context manager functionality
- Driver fetching (with and without filters)
- Team fetching
- Race fetching (with filters)
- Session fetching
- Results fetching
- Lap data fetching
- Stint data fetching
- Pit stop data fetching
- Weather data fetching
- Incident data fetching
- Car telemetry data fetching
- Position data fetching
### MCP Server Tests (20 tests)
- Server initialization
- Tool registration (11 tools)
- Tool schemas and descriptions
- Tool execution (all 11 tools)
- Error handling for unknown tools
- Tool call handling
### Integration Tests (10 tests)
- Real API connectivity
- Driver data retrieval
- Team data retrieval
- Race data retrieval
- Session data retrieval
- Results retrieval with session filtering
- Connection reuse
- API error handling
## Running Tests
### Prerequisites
Install the test dependencies (`pytest`, `pytest-asyncio`, and `pytest-cov` for the commands below):
```bash
pip install -r requirements.txt
```
### Basic Commands
**Run all unit tests:**
```bash
pytest tests/
```
**Run specific test file:**
```bash
pytest tests/test_openf1_client.py
pytest tests/test_server.py
```
**Run specific test:**
```bash
pytest tests/test_openf1_client.py::test_get_drivers
```
**Run tests with verbose output:**
```bash
pytest tests/ -v
```
**Run tests with coverage report:**
```bash
pytest tests/ --cov=src --cov-report=html --cov-report=term-missing
```
**Run integration tests (requires network access):**
```bash
pytest tests/ --integration
```
**Run with the test runner script:**
```bash
python run_tests.py
python run_tests.py --integration
python run_tests.py --coverage
```
## Test Structure
### Fixtures
The tests use pytest fixtures for setup:
```python
@pytest.fixture
def server():
    """Create a test server"""
    return OpenF1MCPServer()


@pytest.fixture
def client():
    """Create a test client"""
    return OpenF1Client()
```
### Mocking
Unit tests use `unittest.mock` to mock API responses, ensuring:
- Fast test execution
- No dependency on API availability
- Deterministic test results
- Ability to test error conditions
```python
with patch.object(client, '_get', new_callable=AsyncMock, return_value=mock_response):
    result = await client.get_drivers()
```
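The same pattern covers the error conditions mentioned above: giving the mock a `side_effect` makes the patched call raise instead of returning. A self-contained sketch (the `OpenF1Client` class here is a hypothetical stand-in, reduced to the `_get`/`get_drivers` shape the snippets assume; the real class lives in `src/` and performs HTTP requests):

```python
import asyncio
from unittest.mock import AsyncMock, patch

# Hypothetical stand-in for the project's OpenF1Client, reduced to the
# shape used in the snippets above; the real class performs HTTP calls.
class OpenF1Client:
    async def _get(self, endpoint, params=None):
        raise NotImplementedError("real network call - mock me in unit tests")

    async def get_drivers(self, **filters):
        return await self._get("drivers", filters or None)

async def main():
    client = OpenF1Client()
    # side_effect makes the mocked _get raise, simulating an API outage
    with patch.object(client, "_get", new_callable=AsyncMock,
                      side_effect=ConnectionError("API unreachable")):
        try:
            await client.get_drivers()
        except ConnectionError as exc:
            return f"handled: {exc}"
    return "no error raised"

print(asyncio.run(main()))  # → handled: API unreachable
```

Because the failure is injected at the `_get` boundary, the test exercises the client's error path deterministically, with no network involved.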
### Async Testing
Tests use `pytest-asyncio` to handle asynchronous code:
```python
@pytest.mark.asyncio
async def test_get_drivers(client):
    """Test fetching drivers"""
    mock_response = [{"driver_number": 1}]
    with patch.object(client, '_get', new_callable=AsyncMock, return_value=mock_response):
        result = await client.get_drivers()
    assert result == mock_response
```
## Test Markers
The project uses pytest markers to categorize tests:
- `@pytest.mark.asyncio` - Marks asynchronous tests
- `@pytest.mark.integration` - Marks integration tests requiring API access
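The `--integration` flag is not built into pytest; it is typically wired up in `conftest.py`. A sketch of how such gating usually looks (the project's actual `conftest.py` may differ):

```python
# conftest.py (sketch): gate tests marked @pytest.mark.integration
# behind a custom --integration flag; they are skipped by default.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--integration",
        action="store_true",
        default=False,
        help="run tests that hit the real OpenF1 API",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--integration"):
        return  # flag given: run everything, including integration tests
    skip_marker = pytest.mark.skip(reason="pass --integration to run")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip_marker)
```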
Run integration tests:
```bash
pytest --integration
```
Skip integration tests:
```bash
pytest # (integration tests are skipped by default)
```
## Coverage Goals
The project aims for high test coverage:
- **Client methods**: 100% coverage
- **Server tools**: 100% coverage
- **Error handling**: Comprehensive coverage
Generate coverage report:
```bash
pytest tests/ --cov=src --cov-report=html
```
Open `htmlcov/index.html` to view detailed coverage.
## Writing New Tests
When adding new tools or features:
1. **Add unit test to `test_server.py`:**
```python
@pytest.mark.asyncio
async def test_run_tool_new_tool(server):
    """Test running new_tool"""
    mock_response = [{"data": "value"}]
    with patch.object(OpenF1Client, 'new_method', new_callable=AsyncMock, return_value=mock_response):
        result = await server._run_tool("new_tool", {}, OpenF1Client())
        assert result == mock_response
```
2. **Add unit test to `test_openf1_client.py`:**
```python
@pytest.mark.asyncio
async def test_new_method(client):
    """Test new method"""
    mock_response = [{"data": "value"}]
    with patch.object(client, '_get', new_callable=AsyncMock, return_value=mock_response):
        result = await client.new_method()
        assert result == mock_response
```
3. **Add integration test to `test_integration.py`:**
```python
@pytest.mark.asyncio
@pytest.mark.integration
async def test_real_api_new_method():
    """Test new method with real API"""
    async with OpenF1Client() as client:
        result = await client.new_method()
        assert isinstance(result, list)
```
## Continuous Integration
Tests are designed to work in CI/CD pipelines:
```bash
# Run all tests with coverage
pytest tests/ --cov=src --cov-report=xml
```
The `pytest.ini` configuration ensures consistent test behavior across environments.
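A minimal configuration along these lines provides that consistency (a sketch; the project's actual `pytest.ini` may differ, in particular in its `asyncio_mode` choice):

```ini
[pytest]
testpaths = tests
asyncio_mode = strict
markers =
    integration: tests that require access to the real OpenF1 API
```

Registering the `integration` marker here also silences pytest's unknown-marker warnings.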
## Troubleshooting
### Tests fail with import errors
Ensure the project root is in the Python path:
```bash
cd /path/to/openf1_mcp
python -m pytest tests/
```
### Integration tests fail
Check network connectivity:
```bash
ping api.openf1.org
```
Integration tests can be skipped:
```bash
pytest tests/  # (integration tests are skipped by default)
```
### Slow test execution
Use parallel execution with pytest-xdist:
```bash
pip install pytest-xdist
pytest tests/ -n auto
```
## Test Results
Expected test results:
- **Unit tests**: 42 tests (22 client + 20 server), execution time under 1 second
- **Integration tests**: 10 tests, execution time depends on API response times
- **Total**: ~52 tests
All tests should pass with green status ✓