# MCP KYC Server - Testing Guide
Comprehensive testing documentation for the MCP KYC Server.
## Table of Contents
- [Overview](#overview)
- [Test Structure](#test-structure)
- [Running Tests](#running-tests)
- [Test Coverage](#test-coverage)
- [Integration Tests](#integration-tests)
- [Load Testing](#load-testing)
- [Validation Scripts](#validation-scripts)
- [CI/CD Integration](#cicd-integration)
- [Troubleshooting](#troubleshooting)
## Overview
The MCP KYC Server includes a comprehensive test suite covering:
- **Unit Tests**: Individual component testing
- **Integration Tests**: End-to-end workflow testing
- **Load Tests**: Performance and scalability testing
- **Validation Scripts**: Deployment verification
### Test Statistics
- **Total Test Files**: 8
- **Unit Tests**: 200+ test cases
- **Integration Tests**: 30+ test cases
- **Target Coverage**: >80%
## Test Structure
```
tests/
├── conftest.py                   # Pytest configuration and fixtures
├── fixtures/                     # Test data
│   ├── test_data.json            # Test scenarios and data
│   └── mock_responses.json       # Mock API responses
├── test_pan_verification.py      # PAN verification tool tests
├── test_pan_aadhaar_link.py      # PAN-Aadhaar link tool tests
├── test_tool_registry.py         # Tool registry tests
├── test_api_client.py            # API client tests
├── test_cache.py                 # Cache manager tests
├── test_rate_limiter.py          # Rate limiter tests
├── integration/                  # Integration tests
│   ├── test_mcp_server.py        # MCP server integration
│   └── test_external_api.py      # External API integration
├── load/                         # Load testing
│   └── locustfile.py             # Locust load test scenarios
├── validate_deployment.sh        # Deployment validation
└── smoke_test.sh                 # Quick smoke tests
```
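For reference, a minimal sketch of what shared fixtures in `conftest.py` might look like (the fixture names here are illustrative, not the project's exact API):

```python
# tests/conftest.py -- illustrative sketch of shared fixtures
import json
from pathlib import Path

import pytest

FIXTURES_DIR = Path(__file__).parent / "fixtures"

@pytest.fixture(scope="session")
def test_data():
    """Load shared test scenarios once per test session."""
    with open(FIXTURES_DIR / "test_data.json") as f:
        return json.load(f)

@pytest.fixture
def mock_responses():
    """Load canned KYC API responses used to mock external calls."""
    with open(FIXTURES_DIR / "mock_responses.json") as f:
        return json.load(f)
```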
## Running Tests
### Prerequisites
```bash
# Install test dependencies
pip install -r requirements.txt
pip install pytest pytest-asyncio pytest-cov pytest-mock locust
# Ensure Redis is running (for integration tests)
docker-compose up -d redis
```
### Unit Tests
Run all unit tests:
```bash
pytest tests/ -v
```
Run specific test file:
```bash
pytest tests/test_pan_verification.py -v
```
Run specific test:
```bash
pytest tests/test_pan_verification.py::TestPANVerificationTool::test_successful_verification_all_match -v
```
Run with coverage:
```bash
pytest tests/ --cov=src --cov-report=html --cov-report=term
```
### Integration Tests
Run integration tests (requires running services):
```bash
# Run only tests marked with @pytest.mark.integration
pytest tests/integration/ -v -m integration
# Run everything in the directory, regardless of marker
pytest tests/integration/ -v
```
### Async Tests
All async tests use `pytest-asyncio`:
```bash
pytest tests/ -v --asyncio-mode=auto
```
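With pytest-asyncio in strict mode, mark async tests explicitly; with `--asyncio-mode=auto` the marker is optional. A minimal example (the `pan_tool` fixture and its `verify` method are placeholders, not the project's actual interface):

```python
import pytest

@pytest.mark.asyncio
async def test_verification_succeeds(pan_tool):
    # `pan_tool` is an assumed fixture; the PAN value is made up
    result = await pan_tool.verify(pan="ABCDE1234F")
    assert result["status"] == "success"
```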
## Test Coverage
### Generating Coverage Reports
```bash
# Generate HTML coverage report
pytest tests/ --cov=src --cov-report=html
# Open report
open htmlcov/index.html # macOS
xdg-open htmlcov/index.html # Linux
start htmlcov/index.html # Windows
```
### Coverage Requirements
| Component | Target Coverage | Status |
|-----------|-----------------|--------|
| Tools | >90% | ✓ |
| API Client | >85% | ✓ |
| Cache Manager | >85% | ✓ |
| Rate Limiter | >85% | ✓ |
| Tool Registry | >80% | ✓ |
| Overall | >80% | ✓ |
### Viewing Coverage
```bash
# Terminal report
pytest tests/ --cov=src --cov-report=term-missing
# Hide files that are already fully covered
pytest tests/ --cov=src --cov-report=term:skip-covered
```
## Integration Tests
### MCP Server Integration
Tests complete server workflows:
```bash
pytest tests/integration/test_mcp_server.py -v
```
**Test Scenarios:**
- Server initialization
- Tool registration
- Cache integration
- Rate limiting
- Concurrent requests (sketched below)
- Error handling
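As an illustration, the concurrent-request scenario could be exercised like this (the `mcp_server` fixture and `call_tool` signature are assumptions, not the server's documented API):

```python
import asyncio
import pytest

@pytest.mark.integration
@pytest.mark.asyncio
async def test_concurrent_requests(mcp_server):
    # Fire ten tool calls at once; all should complete without error
    calls = [
        mcp_server.call_tool("verify_pan", {"pan": f"ABCDE123{i}F"})
        for i in range(10)
    ]
    results = await asyncio.gather(*calls)
    assert all(r is not None for r in results)
```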
### External API Integration
Tests interaction with KYC API:
```bash
pytest tests/integration/test_external_api.py -v
```
**Test Scenarios:**
- API authentication
- Request/response handling
- Error mapping
- Retry logic (sketched below)
- Timeout handling
- Concurrent API calls
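For example, retry logic can be checked by making a mocked transport fail once before succeeding (the `api_client` fixture and its `_send_request` internal are illustrative assumptions):

```python
import pytest

@pytest.mark.asyncio
async def test_retry_on_transient_failure(mocker, api_client):
    # First call raises a transient error, second returns normally;
    # the client's retry logic should absorb the failure
    mocker.patch.object(
        api_client,
        "_send_request",  # assumed internal method
        side_effect=[TimeoutError("transient"), {"status": "success"}],
    )
    result = await api_client.verify_pan("ABCDE1234F")
    assert result["status"] == "success"
```

`mocker` comes from `pytest-mock`, which is already listed in the test dependencies.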
### Running Integration Tests in CI
```bash
# With environment variables
export KYC_API_KEY=test_key
export JWT_SECRET_KEY=test_secret
pytest tests/integration/ -v
```
## Load Testing
### Using Locust
Start Locust web interface:
```bash
locust -f tests/load/locustfile.py --host=http://localhost:8000
```
Then open http://localhost:8089 in your browser.
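The bundled `locustfile.py` defines the user classes referenced below. A stripped-down version of such a file looks roughly like this (the endpoint path and payload are placeholders):

```python
from locust import HttpUser, task, between

class KYCUser(HttpUser):
    """Simulated client calling the verification endpoint."""
    wait_time = between(1, 3)

    @task
    def verify_pan(self):
        # Placeholder endpoint and payload
        self.client.post("/tools/verify_pan", json={"pan": "ABCDE1234F"})
```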
### Headless Load Testing
Run without web interface:
```bash
# 50 users, 5 users/sec spawn rate, 5 minute duration
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 50 \
--spawn-rate 5 \
--run-time 5m \
--headless
```
### Load Test Scenarios
#### 1. Standard Load Test
```bash
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 100 \
--spawn-rate 10 \
--run-time 10m \
--headless
```
#### 2. Burst Traffic Test
```bash
# Uses BurstTrafficUser class
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 200 \
--spawn-rate 50 \
--run-time 2m \
--headless \
BurstTrafficUser
```
#### 3. Cache Performance Test
```bash
# Uses CacheTestUser class
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--users 50 \
--spawn-rate 10 \
--run-time 5m \
--headless \
CacheTestUser
```
#### 4. Step Load Pattern
```bash
# A LoadTestShape subclass (StepLoadShape) defined in the locustfile
# is applied automatically; no class argument is needed
locust -f tests/load/locustfile.py \
--host=http://localhost:8000 \
--headless
```
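A step shape like the one assumed above is implemented by subclassing `LoadTestShape` (the step sizes here are illustrative):

```python
from locust import LoadTestShape

class StepLoadShape(LoadTestShape):
    """Add 10 users every 60 seconds, up to 100; stop after 10 minutes."""
    step_time = 60
    step_users = 10
    max_users = 100

    def tick(self):
        run_time = self.get_run_time()
        if run_time > 600:
            return None  # returning None stops the test
        steps = int(run_time // self.step_time) + 1
        users = min(self.max_users, steps * self.step_users)
        return (users, self.step_users)  # (target user count, spawn rate)
```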
### Performance Benchmarks
| Metric | Target | Acceptable |
|--------|--------|------------|
| Response Time (p50) | <200ms | <500ms |
| Response Time (p95) | <500ms | <1000ms |
| Response Time (p99) | <1000ms | <2000ms |
| Requests/sec | >100 | >50 |
| Error Rate | <1% | <5% |
## Validation Scripts
### Deployment Validation
Comprehensive deployment check:
```bash
chmod +x tests/validate_deployment.sh
./tests/validate_deployment.sh
```
**Checks:**
- Docker containers
- Redis connectivity (example check below)
- MCP server process
- Network ports
- Log files
- Environment variables
- Tool metadata
- Python dependencies
- Disk space
- System resources
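Each check follows the same pass/fail pattern; the Redis connectivity check, for instance, might look like this (the container name `kyc-redis` is an assumption):

```bash
# Sketch of one validation check: Redis connectivity
if docker exec kyc-redis redis-cli ping | grep -q PONG; then
    echo "OK: Redis is reachable"
else
    echo "FAIL: Redis did not respond to PING" >&2
    exit 1
fi
```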
### Smoke Tests
Quick production validation:
```bash
chmod +x tests/smoke_test.sh
./tests/smoke_test.sh
```
**Tests:**
- Health checks
- Tool availability
- Basic functionality
- Performance checks
- Configuration
- Security
### Custom Validation
Set environment variables for custom endpoints:
```bash
export MCP_SERVER_URL=http://your-server:8000
export REDIS_HOST=your-redis-host
export REDIS_PORT=6379
./tests/smoke_test.sh
```
## CI/CD Integration
### GitHub Actions
The test suite integrates with GitHub Actions (see `.github/workflows/test.yml`). The workflow runs the following stages:
- Run unit tests
- Run integration tests
- Generate coverage report
- Build Docker image
- Run smoke tests
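A minimal workflow covering the first three stages might look like this (the project's actual `test.yml` may differ):

```yaml
name: test
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      redis:
        image: redis:7
        ports: ["6379:6379"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt pytest pytest-asyncio pytest-cov
      - run: pytest tests/ --cov=src --cov-report=xml
```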
### Running Tests in CI
```bash
# Install dependencies
pip install -r requirements.txt
pip install pytest pytest-asyncio pytest-cov
# Run tests with coverage
pytest tests/ --cov=src --cov-report=xml --cov-report=term
# Upload coverage to Codecov
codecov -f coverage.xml
```
### Pre-commit Hooks
Run tests before committing:
```bash
# Install pre-commit
pip install pre-commit
# Set up hooks
pre-commit install
# Run manually
pre-commit run --all-files
```
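A minimal `.pre-commit-config.yaml` that runs the unit tests before each commit could look like this (a sketch; adapt to the project's actual hooks):

```yaml
repos:
  - repo: local
    hooks:
      - id: pytest
        name: run unit tests
        entry: pytest tests/ -m "not integration" -q
        language: system
        pass_filenames: false
```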
## Troubleshooting
### Common Issues
#### 1. Redis Connection Errors
**Problem:** Tests fail with Redis connection errors
**Solution:**
```bash
# Start Redis
docker-compose up -d redis
# Or use local Redis
redis-server
```
#### 2. Import Errors
**Problem:** `ModuleNotFoundError` when running tests
**Solution:**
```bash
# Install in development mode
pip install -e .
# Or add to PYTHONPATH
export PYTHONPATH="${PYTHONPATH}:$(pwd)"
```
#### 3. Async Test Failures
**Problem:** Async tests fail with event loop errors
**Solution:**
```bash
# Use pytest-asyncio
pip install pytest-asyncio
# Run with asyncio mode
pytest tests/ --asyncio-mode=auto
```
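Alternatively, set the mode once in configuration instead of passing the flag on every run:

```ini
# pytest.ini
[pytest]
asyncio_mode = auto
```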
#### 4. Mock API Failures
**Problem:** Integration tests fail due to API mocking issues
**Solution:**
```bash
# Check mock responses are loaded
pytest tests/integration/ -v -s
# List the fixtures pytest can see, to confirm they are defined
pytest tests/ --fixtures
```
#### 5. Coverage Not Generated
**Problem:** Coverage report not created
**Solution:**
```bash
# Install coverage tools
pip install pytest-cov coverage
# Run with explicit coverage
pytest tests/ --cov=src --cov-report=html --cov-report=term
```
### Debug Mode
Run tests with detailed output:
```bash
# Verbose output
pytest tests/ -vv
# Show print statements
pytest tests/ -s
# Show local variables on failure
pytest tests/ -l
# Stop on first failure
pytest tests/ -x
# Enter debugger on failure
pytest tests/ --pdb
```
### Test Markers
Use markers to run specific test groups:
```bash
# Run only integration tests
pytest tests/ -m integration
# Skip integration tests
pytest tests/ -m "not integration"
# Run slow tests
pytest tests/ -m slow
# Custom markers
pytest tests/ -m "unit and not slow"
```
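Custom markers such as `unit`, `integration`, and `slow` should be registered so pytest does not warn about (or, with `--strict-markers`, reject) unknown marks:

```ini
# pytest.ini
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that require running services
    slow: tests that take a long time to run
```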
## Best Practices
### Writing Tests
1. **Use descriptive names**: `test_successful_verification_all_match`
2. **One assertion per test**: Focus on single behavior
3. **Use fixtures**: Reuse common setup
4. **Mock external dependencies**: Isolate unit tests
5. **Test edge cases**: Invalid input, errors, boundaries
### Test Organization
1. **Group related tests**: Use test classes
2. **Separate unit and integration**: Different directories
3. **Use parametrize**: Test multiple inputs (see the sketch after this list)
4. **Document complex tests**: Add docstrings
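A parametrized test collapses several near-identical cases into one; for example (the PAN values and the `is_valid_pan` helper are made up for illustration):

```python
import pytest

from src.validators import is_valid_pan  # assumed module path

@pytest.mark.parametrize(
    "pan,expected_valid",
    [
        ("ABCDE1234F", True),   # well-formed PAN
        ("ABCDE1234", False),   # too short
        ("1234ABCDEF", False),  # wrong structure
    ],
)
def test_pan_format(pan, expected_valid):
    assert is_valid_pan(pan) == expected_valid
```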
### Performance
1. **Use fixtures wisely**: Scope appropriately
2. **Mock expensive operations**: Database, API calls
3. **Parallel execution**: Use pytest-xdist
4. **Profile slow tests**: Identify bottlenecks
## Additional Resources
- [Pytest Documentation](https://docs.pytest.org/)
- [pytest-asyncio](https://pytest-asyncio.readthedocs.io/)
- [Locust Documentation](https://docs.locust.io/)
- [Coverage.py](https://coverage.readthedocs.io/)
## Support
For issues or questions:
- Check [Troubleshooting](#troubleshooting) section
- Review test output carefully
- Check server logs
- Verify environment configuration
---
**Last Updated:** 2024-01-20
**Version:** 1.0.0