# AI Integration Overview
MockLoop MCP is specifically designed to enhance AI-assisted development workflows, providing seamless integration with popular AI frameworks and enabling sophisticated testing and development scenarios.
## Why AI Integration?
Modern AI development requires robust testing environments, realistic data, and the ability to simulate various scenarios. MockLoop MCP addresses these needs by:
- **Providing Realistic APIs**: Generate mock servers that behave like real services
- **Enabling Dynamic Testing**: Create and switch between test scenarios instantly
- **Offering Comprehensive Analytics**: Monitor AI system interactions with detailed logging
- **Supporting Rapid Iteration**: Update responses and configurations without restarts
## Supported AI Frameworks
MockLoop MCP provides native integration with major AI frameworks:
### LangGraph
- **Workflow Integration**: Embed mock servers in LangGraph state machines (see the sketch after this list)
- **Dynamic State Management**: Update mock responses based on workflow state
- **Error Simulation**: Test error handling and recovery scenarios
- **Performance Testing**: Simulate API latencies and failures
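As a first taste of the workflow-integration point above, here is a minimal sketch of a LangGraph node that reads state from a mock endpoint. It assumes the `langgraph` and `httpx` packages and the `/ai/model/status` mock used elsewhere on this page; see [LangGraph Integration](langgraph.md) for complete patterns.
```python
from typing import TypedDict
import httpx
from langgraph.graph import StateGraph, END

class PipelineState(TypedDict):
    model_status: str

def check_model_status(state: PipelineState) -> PipelineState:
    # Read whatever status the mock currently serves
    resp = httpx.get("http://localhost:8000/ai/model/status")
    return {"model_status": resp.json().get("status", "unknown")}

builder = StateGraph(PipelineState)
builder.add_node("check_status", check_model_status)
builder.set_entry_point("check_status")
builder.add_edge("check_status", END)
graph = builder.compile()
print(graph.invoke({"model_status": "unknown"}))
```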
### CrewAI
- **Multi-Agent Testing**: Provide different mock services for different agents
- **Role-Based Scenarios**: Create scenarios tailored to specific agent roles
- **Collaborative Workflows**: Test agent interactions through mock APIs
- **Task Validation**: Verify agent outputs against expected API responses
### LangChain
- **Tool Integration**: Use MockLoop tools within LangChain applications
- **Chain Testing**: Test complex chains with realistic API responses
- **Memory Validation**: Verify chain memory against mock data
- **Retrieval Testing**: Mock external data sources for RAG applications
### Custom Frameworks
- **Generic Integration**: REST API compatibility with any framework (see the sketch after this list)
- **Custom Tools**: Extend MockLoop with framework-specific tools
- **Webhook Support**: Real-time notifications for AI system events
- **Flexible Configuration**: Adapt to any AI workflow requirements
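Because every generated mock is a plain HTTP service, any framework that can issue requests can consume it. A minimal sketch, assuming a mock from this page's examples running on `localhost:8000` with an `/ai/model/status` endpoint:
```python
import httpx

def get_model_status(base_url: str = "http://localhost:8000") -> dict:
    """Fetch the mocked model status exactly as a real client would."""
    response = httpx.get(f"{base_url}/ai/model/status", timeout=5.0)
    response.raise_for_status()
    return response.json()

print(get_model_status())
```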
## Core AI Integration Features
### 1. Dynamic Response Management
Update API responses in real-time without restarting servers:
```python
# Update responses based on AI system state
await mockloop.manage_mock_data(
    server_url="http://localhost:8000",
    operation="update_response",
    endpoint_path="/ai/model/status",
    response_data={
        "status": "training",
        "progress": 0.75,
        "eta_minutes": 15
    }
)
```
### 2. Scenario-Based Testing
Create comprehensive test scenarios for different AI system states:
```python
# Create scenarios for different AI model states
scenarios = {
"model_training": {
"/ai/model/status": {"status": "training", "progress": 0.5},
"/ai/predictions": {"error": "Model not ready", "status": 503}
},
"model_ready": {
"/ai/model/status": {"status": "ready", "accuracy": 0.95},
"/ai/predictions": {"predictions": [...], "confidence": 0.87}
},
"model_error": {
"/ai/model/status": {"status": "error", "error": "Training failed"},
"/ai/predictions": {"error": "Service unavailable", "status": 500}
}
}
```
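Definitions like these can then be registered with the mock server through the `create_scenario` operation (the same operation Pattern 2 uses later on this page):
```python
# Register each scenario with the mock server
for name, config in scenarios.items():
    await mockloop.manage_mock_data(
        server_url="http://localhost:8000",
        operation="create_scenario",
        scenario_name=name,
        scenario_config=config
    )
```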
### 3. Performance Monitoring
Track AI system interactions with detailed analytics:
```python
# Analyze AI system API usage
logs = await mockloop.query_mock_logs(
    server_url="http://localhost:8000",
    analyze=True,
    path_pattern="/ai/.*"
)
# Get insights on AI system behavior
print(f"AI API calls: {logs['analysis']['total_requests']}")
print(f"Average response time: {logs['analysis']['avg_response_time_ms']}ms")
print(f"Error rate: {logs['analysis']['error_rate_percent']}%")
```
### 4. Real-Time Adaptation
Adapt mock responses based on AI system behavior:
```python
# Monitor AI system requests and adapt responses
async def adapt_responses(analysis, server_url="http://localhost:8000"):
    # Keys match the analysis dict returned by query_mock_logs (section 3)
    if analysis['error_rate_percent'] > 10:
        # Switch to error recovery scenario
        await mockloop.manage_mock_data(
            server_url=server_url,
            operation="switch_scenario",
            scenario_name="error_recovery"
        )
    elif analysis['avg_response_time_ms'] > 1000:
        # Switch to high-performance scenario
        await mockloop.manage_mock_data(
            server_url=server_url,
            operation="switch_scenario",
            scenario_name="fast_responses"
        )
```
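For example, the adapter can be fed the analysis produced by `query_mock_logs` from the previous section:
```python
# Pull fresh analytics and let the adapter react to them
logs = await mockloop.query_mock_logs(
    server_url="http://localhost:8000",
    analyze=True
)
await adapt_responses(logs["analysis"])
```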
## AI Development Workflows
### Workflow 1: AI Model Development
1. **Setup Mock Data Sources**
```python
# Generate mocks for training data APIs
await mockloop.generate_mock_api(
    spec_url_or_path="./training-data-api.yaml",
    output_dir_name="training_data_mock"
)
```
2. **Create Training Scenarios**
```python
# Different data quality scenarios
scenarios = {
"high_quality_data": {...},
"noisy_data": {...},
"missing_data": {...},
"corrupted_data": {...}
}
```
3. **Train and Test Models**
```python
for scenario in scenarios:
    # Switch to scenario
    await mockloop.manage_mock_data(
        server_url="http://localhost:8000",
        operation="switch_scenario",
        scenario_name=scenario
    )
    # Train model with scenario data
    model = train_model(data_source="http://localhost:8000")
    # Evaluate performance
    metrics = evaluate_model(model)
    # Log results
    print(f"Scenario {scenario}: Accuracy = {metrics.accuracy}")
```
### Workflow 2: AI Agent Testing
1. **Setup Multi-Service Environment**
```python
# Generate mocks for different services
services = [
{"spec": "./user-service.yaml", "port": 8001},
{"spec": "./payment-service.yaml", "port": 8002},
{"spec": "./inventory-service.yaml", "port": 8003}
]
for service in services:
await mockloop.generate_mock_api(
spec_url_or_path=service["spec"],
output_dir_name=f"service_{service['port']}"
)
```
2. **Create Agent Test Scenarios**
```python
# Test different business scenarios
scenarios = {
"happy_path": {
# All services working normally
"user_service": {"status": "healthy"},
"payment_service": {"status": "healthy"},
"inventory_service": {"status": "healthy"}
},
"payment_failure": {
# Payment service down
"user_service": {"status": "healthy"},
"payment_service": {"status": "error", "error": "Service unavailable"},
"inventory_service": {"status": "healthy"}
}
}
```
3. **Run Agent Tests**
```python
# Map service names to the ports used when the mocks were generated
service_ports = {"user_service": 8001, "payment_service": 8002, "inventory_service": 8003}
for scenario_name, scenario_config in scenarios.items():
    # Configure all services for scenario
    for service, config in scenario_config.items():
        await mockloop.manage_mock_data(
            server_url=f"http://localhost:{service_ports[service]}",
            operation="update_response",
            endpoint_path="/health",
            response_data=config
        )
    # Run agent with scenario
    result = run_agent_scenario(scenario_name)
    # Analyze agent behavior
    logs = await mockloop.query_mock_logs(
        server_url="http://localhost:8001",
        analyze=True
    )
```
### Workflow 3: RAG System Testing
1. **Mock Knowledge Base APIs**
```python
# Generate mock for document retrieval API
await mockloop.generate_mock_api(
    spec_url_or_path="./knowledge-base-api.yaml",
    output_dir_name="knowledge_base_mock"
)
```
2. **Create Document Scenarios**
```python
# Different document availability scenarios
document_scenarios = {
"comprehensive_docs": {
"/search": {"documents": [...], "total": 100},
"/document/{id}": {"content": "...", "metadata": {...}}
},
"limited_docs": {
"/search": {"documents": [...], "total": 5},
"/document/{id}": {"error": "Document not found", "status": 404}
},
"slow_retrieval": {
"/search": {"documents": [...], "delay_ms": 2000},
"/document/{id}": {"content": "...", "delay_ms": 1000}
}
}
```
3. **Test RAG Performance**
```python
for scenario in document_scenarios:
    # Configure knowledge base mock
    await mockloop.manage_mock_data(
        server_url="http://localhost:8000",
        operation="switch_scenario",
        scenario_name=scenario
    )
    # Test RAG system
    questions = ["What is...?", "How do I...?", "Why does...?"]
    for question in questions:
        answer = rag_system.ask(question)
        quality = evaluate_answer_quality(question, answer)
        print(f"Scenario {scenario}, Question: {question}")
        print(f"Answer quality: {quality}")
## Integration Patterns
### Pattern 1: State-Driven Mocks
Synchronize mock responses with AI system state:
```python
class StateDrivenMock:
    def __init__(self, mockloop_client, server_url):
        self.mockloop = mockloop_client
        self.server_url = server_url
        self.current_state = "initial"

    async def update_state(self, new_state, context=None):
        """Update mock responses based on AI system state."""
        self.current_state = new_state
        context = context or {}  # guard against the default None
        # Define state-specific responses
        state_responses = {
            "training": {
                "/model/status": {"status": "training", "progress": context.get("progress", 0)},
                "/predictions": {"error": "Model not ready", "status": 503}
            },
            "ready": {
                "/model/status": {"status": "ready", "accuracy": context.get("accuracy", 0.9)},
                "/predictions": {"predictions": context.get("predictions", [])}
            }
        }
        # Update all endpoints for the new state
        for endpoint, response in state_responses[new_state].items():
            await self.mockloop.manage_mock_data(
                server_url=self.server_url,
                operation="update_response",
                endpoint_path=endpoint,
                response_data=response
            )
```
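A brief usage sketch, with the same hypothetical `mockloop` client used throughout this page:
```python
mock = StateDrivenMock(mockloop, "http://localhost:8000")
# Walk the mock through a typical training lifecycle
await mock.update_state("training", {"progress": 0.25})
await mock.update_state("ready", {"accuracy": 0.93, "predictions": []})
```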
### Pattern 2: Behavior-Driven Testing
Test AI system behavior under different conditions:
```python
class BehaviorDrivenTesting:
    def __init__(self, mockloop_client):
        self.mockloop = mockloop_client
        self.test_scenarios = {}

    async def define_behavior(self, behavior_name, conditions):
        """Define a behavior scenario."""
        self.test_scenarios[behavior_name] = conditions
        await self.mockloop.manage_mock_data(
            operation="create_scenario",
            scenario_name=behavior_name,
            scenario_config=conditions
        )

    async def test_behavior(self, behavior_name, ai_system):
        """Test the AI system under specific behavior conditions."""
        # Switch to behavior scenario
        await self.mockloop.manage_mock_data(
            operation="switch_scenario",
            scenario_name=behavior_name
        )
        # Run AI system
        result = await ai_system.execute()
        # Analyze behavior
        logs = await self.mockloop.query_mock_logs(
            server_url=ai_system.api_endpoint,
            analyze=True
        )
        return {
            "behavior": behavior_name,
            "result": result,
            "api_interactions": logs["analysis"],
            # evaluate_behavior is user-defined: compare the result
            # against the expectations set for behavior_name
            "success": self.evaluate_behavior(result, behavior_name)
        }
```
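Usage follows the same shape; `agent_system` is a hypothetical object exposing the `execute()` method and `api_endpoint` attribute the class above expects:
```python
tester = BehaviorDrivenTesting(mockloop)
await tester.define_behavior("payment_failure", scenarios["payment_failure"])
report = await tester.test_behavior("payment_failure", agent_system)
print(report["success"], report["api_interactions"])
```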
### Pattern 3: Adaptive Response Generation
Generate responses based on AI system requests:
```python
import asyncio

class AdaptiveResponseGenerator:
    def __init__(self, mockloop_client):
        self.mockloop = mockloop_client
        self.request_history = []

    async def monitor_and_adapt(self, server_url):
        """Monitor requests and adapt responses."""
        while True:
            # Get recent logs
            logs = await self.mockloop.query_mock_logs(
                server_url=server_url,
                limit=10,
                analyze=False
            )
            # Analyze request patterns (analyze_patterns is user-defined)
            patterns = self.analyze_patterns(logs["logs"])
            # Adapt responses based on patterns
            if patterns["high_error_rate"]:
                await self.switch_to_recovery_mode(server_url)
            elif patterns["high_load"]:
                await self.optimize_for_performance(server_url)
            await asyncio.sleep(30)  # Check every 30 seconds

    async def switch_to_recovery_mode(self, server_url):
        """Switch to error recovery responses."""
        await self.mockloop.manage_mock_data(
            server_url=server_url,
            operation="switch_scenario",
            scenario_name="error_recovery"
        )

    async def optimize_for_performance(self, server_url):
        """Switch to high-performance responses."""
        await self.mockloop.manage_mock_data(
            server_url=server_url,
            operation="switch_scenario",
            scenario_name="fast_responses"
        )
```
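One way to run the monitor alongside a test run; `run_test_suite` is a hypothetical driver that exercises the AI system under test:
```python
import asyncio

async def main():
    generator = AdaptiveResponseGenerator(mockloop)
    monitor = asyncio.create_task(generator.monitor_and_adapt("http://localhost:8000"))
    try:
        await run_test_suite()  # hypothetical: drives the AI system under test
    finally:
        monitor.cancel()  # stop monitoring once the tests finish

asyncio.run(main())
```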
## Best Practices for AI Integration
### 1. Design for Testability
- **Modular APIs**: Design AI systems with clear API boundaries
- **Configurable Endpoints**: Make API endpoints configurable (see the sketch after this list)
- **Graceful Degradation**: Handle API failures gracefully
- **Comprehensive Logging**: Log all AI system decisions and API interactions
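For the configurable-endpoints point, a hypothetical client whose base URL comes from configuration can be pointed at a MockLoop server in tests without code changes:
```python
import os

class PredictionClient:
    """Hypothetical AI client with a configurable endpoint."""
    def __init__(self, base_url: str | None = None):
        # Default to production; override via argument or env var in tests
        self.base_url = base_url or os.environ.get(
            "PREDICTION_API_URL", "https://api.example.com"
        )

# Production: PredictionClient()
# Testing:    PredictionClient(base_url="http://localhost:8000")
```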
### 2. Create Realistic Scenarios
- **Real-World Data**: Use realistic data in mock responses
- **Error Conditions**: Include various error scenarios
- **Performance Variations**: Simulate different response times
- **Edge Cases**: Test boundary conditions and unusual inputs
### 3. Monitor and Analyze
- **Continuous Monitoring**: Track AI system behavior continuously
- **Performance Metrics**: Monitor response times and error rates
- **Behavior Analysis**: Analyze AI decision patterns
- **Feedback Loops**: Use insights to improve AI systems
### 4. Automate Testing
- **CI/CD Integration**: Include AI testing in deployment pipelines
- **Regression Testing**: Test AI systems against known scenarios (see the sketch after this list)
- **Performance Benchmarks**: Establish and monitor performance baselines
- **Automated Scenarios**: Create self-updating test scenarios
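A minimal sketch of a regression test for a CI pipeline, assuming `pytest` with `pytest-asyncio` plus the hypothetical `mockloop` client and `run_agent_scenario` helper from Workflow 2:
```python
import pytest

@pytest.mark.asyncio
@pytest.mark.parametrize("scenario", ["happy_path", "payment_failure"])
async def test_agent_against_known_scenarios(scenario):
    # Pin the mock environment to a known scenario before each run
    await mockloop.manage_mock_data(
        server_url="http://localhost:8001",
        operation="switch_scenario",
        scenario_name=scenario
    )
    result = run_agent_scenario(scenario)
    assert result is not None  # replace with scenario-specific assertions
```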
## Getting Started
Ready to integrate MockLoop MCP with your AI stack? Choose your framework:
- **[LangGraph Integration](langgraph.md)**: Build state-driven AI workflows
- **[CrewAI Integration](crewai.md)**: Test multi-agent systems
- **[LangChain Integration](langchain.md)**: Enhance chain testing
- **[Custom AI Workflows](custom-workflows.md)**: Create custom integrations
## Examples Repository
Find complete examples and templates in our examples repository:
- **AI Model Training**: Mock data sources for model training
- **Multi-Agent Systems**: Complex agent interaction scenarios
- **RAG Applications**: Knowledge base and retrieval testing
- **Chatbot Testing**: Conversation flow and response testing
- **ML Pipeline Testing**: End-to-end pipeline validation
## Community and Support
- **GitHub Discussions**: Share AI integration patterns
- **Example Contributions**: Contribute AI workflow examples
- **Framework Requests**: Request support for new AI frameworks
- **Best Practices**: Share and learn AI testing best practices
---
MockLoop MCP transforms AI development by providing realistic, controllable, and observable API environments. Start with your preferred framework and discover how mock servers can accelerate your AI development workflow!