MockLoop MCP

mockloop-mcp is a comprehensive Model Context Protocol (MCP) server designed to generate and run sophisticated mock API servers from API documentation (e.g., OpenAPI/Swagger specifications). This allows developers and AI assistants to quickly spin up mock backends for development, testing, and integration purposes with advanced logging, dynamic response management, scenario testing, and comprehensive performance analytics.

At a glance, MockLoop MCP:
- Takes OpenAPI/Swagger specifications (JSON or YAML, URL or local file) as input and generates functional mock API servers that simulate the defined endpoints
- Builds the mock servers on FastAPI, complete with middleware, admin UI, and logging capabilities
- Uses Jinja templating to generate server code and admin UI components
- Generates Docker configurations (Dockerfile and docker-compose.yml) to containerize and run the mock API servers
- Implements SQLite-based storage for comprehensive request/response logging with advanced querying capabilities
- Uses aiohttp for asynchronous HTTP operations in mock server management and log analysis
- Plans future support for generating mock API servers from Postman Collections and GraphQL SDL specifications
📚 Documentation: https://docs.mockloop.com
📦 PyPI Package: https://pypi.org/project/mockloop-mcp/
🐙 GitHub Repository: https://github.com/mockloop/mockloop-mcp
Features
Core Features
- API Mock Generation: Takes an API specification (URL or local file) and generates a runnable FastAPI mock server.
- Request/Response Logging: Generated mock servers include middleware for comprehensive logging of requests and responses with SQLite storage.
- Dockerized Mocks: Generates a Dockerfile and docker-compose.yml for each mock API, allowing them to be easily run as Docker containers.
- Initial Support: OpenAPI v2 (Swagger) and v3 (JSON, YAML).
Enhanced Features (v2.0)
- 🔍 Advanced Log Analysis: Query and analyze request logs with filtering, performance metrics, and intelligent insights.
- 🖥️ Server Discovery: Automatically discover running mock servers and match them with generated configurations.
- 📊 Performance Monitoring: Real-time performance metrics, error rate analysis, and traffic pattern detection.
- 🤖 AI Assistant Integration: Optimized for AI-assisted development workflows with structured data output and comprehensive analysis.
- 🎯 Smart Filtering: Advanced log filtering by method, path patterns, time ranges, and custom criteria.
- 📈 Insights Generation: Automated analysis with actionable recommendations for debugging and optimization.
Quick Start
Get started with MockLoop MCP in just a few steps:
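A minimal sketch of the typical flow (the package name comes from the PyPI link above; client configuration details follow later in this README):

```bash
# Install MockLoop MCP from PyPI
pip install mockloop-mcp

# Register the server with your MCP client (see "Configuring MCP Clients" below),
# e.g. by pointing the client at the mockloop-mcp command or python -m mockloop_mcp
```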
That's it! MockLoop MCP is ready to generate mock servers from any OpenAPI specification.
Getting Started
Prerequisites
- Python 3.10+
- Pip
- Docker and Docker Compose (for running generated mocks in containers)
- An MCP client capable of interacting with this server.
Installation
Option 1: Install from PyPI (Recommended)
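Using pip with the published package name:

```bash
pip install mockloop-mcp
```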
Option 2: Development Installation
- Clone the repository:
- Create and activate a Python virtual environment:
- Install in development mode:
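A combined sketch of the three steps above, assuming the repository URL listed earlier and a standard editable install:

```bash
# 1. Clone the repository
git clone https://github.com/mockloop/mockloop-mcp.git
cd mockloop-mcp

# 2. Create and activate a Python virtual environment
python3 -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# 3. Install in development mode (editable install)
pip install -e .
```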
Setup & Configuration
Dependencies include:
- Core: fastapi, uvicorn, Jinja2, PyYAML, requests, aiohttp
- MCP: mcp[cli] (Model Context Protocol SDK)
Running the MCP Server
Development Mode:
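A sketch using the MCP Inspector from the mcp[cli] dependency; the module path shown here is an assumption:

```bash
mcp dev src/mockloop_mcp/main.py
```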
Production Mode:
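Using the installed console script or module entry point referenced in the client configuration section below:

```bash
mockloop-mcp
# or, equivalently
python -m mockloop_mcp
```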
Configuring MCP Clients
To use MockLoop MCP with your MCP client, you'll need to add it to your client's configuration.
Cline (VS Code Extension)
Add the following to your Cline MCP settings file (typically located at ~/.config/Code/User/globalStorage/saoudrizwan.claude-dev/settings/cline_mcp_settings.json):
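A minimal sketch using the standard mcpServers schema and the installed console script:

```json
{
  "mcpServers": {
    "mockloop-mcp": {
      "command": "mockloop-mcp",
      "args": []
    }
  }
}
```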
For virtual environment installations:
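Point the command at your virtual environment's Python instead (the path below is illustrative):

```json
{
  "mcpServers": {
    "mockloop-mcp": {
      "command": "/path/to/your/venv/bin/python",
      "args": ["-m", "mockloop_mcp"]
    }
  }
}
```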
Claude Desktop
Add the following to your Claude Desktop configuration file:
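Claude Desktop uses the same mcpServers structure (typically in claude_desktop_config.json); a minimal sketch:

```json
{
  "mcpServers": {
    "mockloop-mcp": {
      "command": "mockloop-mcp",
      "args": []
    }
  }
}
```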
For virtual environment installations:
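As with Cline, point the command at your virtual environment's Python (path is illustrative):

```json
{
  "mcpServers": {
    "mockloop-mcp": {
      "command": "/path/to/your/venv/bin/python",
      "args": ["-m", "mockloop_mcp"]
    }
  }
}
```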
Other MCP Clients
For other MCP clients, use the command mockloop-mcp or python -m mockloop_mcp, depending on your installation method.
Available MCP Tools
Once mockloop-mcp is configured and running in your MCP client, you can use the following tools:
1. generate_mock_api
Generate a FastAPI mock server from an API specification.
Parameters:
- spec_url_or_path: (string, required) URL or local file path to the API specification (e.g., https://petstore3.swagger.io/api/v3/openapi.json or ./my_api.yaml).
- output_dir_name: (string, optional) Name for the directory where the mock server code will be generated (e.g., my_petstore_mock). Defaults to a name derived from the API spec.
- auth_enabled: (boolean, optional) Enable authentication middleware (default: true).
- webhooks_enabled: (boolean, optional) Enable webhook support (default: true).
- admin_ui_enabled: (boolean, optional) Enable admin UI (default: true).
- storage_enabled: (boolean, optional) Enable storage functionality (default: true).
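For example, a call to generate_mock_api might pass arguments like these (values are illustrative and drawn from the parameter descriptions above):

```json
{
  "spec_url_or_path": "https://petstore3.swagger.io/api/v3/openapi.json",
  "output_dir_name": "my_petstore_mock",
  "auth_enabled": true,
  "webhooks_enabled": true,
  "admin_ui_enabled": true,
  "storage_enabled": true
}
```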
Output:
The tool will generate a new directory (e.g., generated_mocks/my_petstore_mock/) containing:
- main.py: The FastAPI application with admin endpoints.
- requirements_mock.txt: Dependencies for the mock server.
- Dockerfile: For building the mock server Docker image.
- docker-compose.yml: For running the mock server with Docker Compose.
- logging_middleware.py: Request/response logging with SQLite storage.
- templates/admin.html: Admin UI for monitoring and management.
2. query_mock_logs ✨ NEW
Query and analyze request logs from running mock servers with advanced filtering and analysis.
Parameters:
- server_url: (string, required) URL of the mock server (e.g., "http://localhost:8000").
- limit: (integer, optional) Maximum number of logs to return (default: 100).
- offset: (integer, optional) Number of logs to skip for pagination (default: 0).
- method: (string, optional) Filter by HTTP method (e.g., "GET", "POST").
- path_pattern: (string, optional) Regex pattern to filter paths.
- time_from: (string, optional) Start time filter (ISO format).
- time_to: (string, optional) End time filter (ISO format).
- include_admin: (boolean, optional) Include admin requests in results (default: false).
- analyze: (boolean, optional) Perform analysis on the logs (default: true).
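An illustrative set of arguments combining several of the filters above:

```json
{
  "server_url": "http://localhost:8000",
  "limit": 50,
  "method": "POST",
  "path_pattern": "^/pet",
  "time_from": "2025-05-01T00:00:00Z",
  "include_admin": false,
  "analyze": true
}
```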
Output:
- Filtered log entries with metadata
- Performance metrics (response times, percentiles)
- Error rate analysis and categorization
- Traffic patterns and insights
- Automated recommendations for debugging
3. discover_mock_servers ✨ NEW
Discover running MockLoop servers and generated mock configurations.
Parameters:
- ports: (array, optional) List of ports to scan (default: common ports 8000-8005, 3000-3001, 5000-5001).
- check_health: (boolean, optional) Perform health checks on discovered servers (default: true).
- include_generated: (boolean, optional) Include information about generated but not running mocks (default: true).
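An illustrative call that scans a few specific ports:

```json
{
  "ports": [8000, 8001, 3000],
  "check_health": true,
  "include_generated": true
}
```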
Output:
- List of running mock servers with health status
- Generated mock configurations and metadata
- Server matching and correlation
- Port usage and availability information
Running a Generated Mock Server

- Navigate to the generated mock directory.
- Using Docker Compose (Recommended; commands are sketched below): The mock API will typically be available at http://localhost:8000 (or the port specified during generation / in docker-compose.yml). Logs will be streamed to your console.
- Using Uvicorn directly (requires Python and a pip install in that environment).
- Access Enhanced Features:
  - Admin UI: http://localhost:8000/admin - Enhanced interface with Log Analytics tab
  - API Documentation: http://localhost:8000/docs - Interactive Swagger UI
  - Health Check: http://localhost:8000/health - Server status and metrics
  - Log Search API: http://localhost:8000/admin/api/logs/search - Advanced log querying
  - Performance Analytics: http://localhost:8000/admin/api/logs/analyze - Performance insights
  - Scenario Management: http://localhost:8000/admin/api/mock-data/scenarios - Dynamic response management
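A sketch of those commands, assuming the output directory name used earlier and that the generated main.py exposes a FastAPI instance named app:

```bash
# Navigate to the generated mock directory
cd generated_mocks/my_petstore_mock/

# Option A: run with Docker Compose (recommended)
docker compose up --build

# Option B: run with Uvicorn directly
pip install -r requirements_mock.txt
uvicorn main:app --host 0.0.0.0 --port 8000
```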
Dockerfile Snippet (Example for a generated mock)
This is an example of what the generated Dockerfile might look like:
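```dockerfile
# Illustrative sketch; the actual generated Dockerfile may differ.
FROM python:3.10-slim

WORKDIR /app

# Install the mock server's dependencies
COPY requirements_mock.txt .
RUN pip install --no-cache-dir -r requirements_mock.txt

# Copy the generated application code
COPY . .

EXPOSE 8000

# Run the FastAPI mock server with Uvicorn (the app variable name is assumed)
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```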
AI Assistant Integration
MockLoop MCP is specifically designed to enhance AI-assisted development workflows with comprehensive testing and analysis capabilities:
Enhanced AI Workflow
- Generate Mock Server: AI creates an OpenAPI spec and generates a mock server using generate_mock_api
- Start Testing: AI runs the mock server and begins making test requests
- Monitor & Analyze: AI uses query_mock_logs to analyze request patterns, performance, and errors
- Create Scenarios: AI uses manage_mock_data to create dynamic test scenarios for edge cases
- Performance Optimization: Based on insights, AI modifies configurations and repeats the cycle
- Discover & Manage: AI uses discover_mock_servers to manage multiple mock environments
Key Benefits for AI Development
- Dynamic Response Management: Modify API responses in real-time without server restart
- Scenario-Based Testing: Create and switch between different test scenarios instantly
- Advanced Performance Analytics: P95/P99 response times, error rate analysis, session tracking
- Intelligent Debugging: AI-powered insights with actionable recommendations
- Framework Integration: Native support for LangGraph, CrewAI, and LangChain workflows
- Comprehensive Monitoring: Track everything from response times to traffic patterns with session correlation
Enhanced AI Assistant Usage
Advanced Scenario Management
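As an illustration, the manage_mock_data tool (documented below) can create a scenario and then activate it. The operations and top-level parameter names come from the tool's documentation; the internal shape of scenario_config here is a guess, not a documented schema:

```json
{
  "server_url": "http://localhost:8000",
  "operation": "create_scenario",
  "scenario_name": "error_testing",
  "scenario_config": {
    "/pet/1": { "status": 500, "body": { "error": "internal error" } }
  }
}
```

followed by:

```json
{
  "server_url": "http://localhost:8000",
  "operation": "switch_scenario",
  "scenario_name": "error_testing"
}
```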
Future Ideas & Roadmap
Phase 2 ✅ COMPLETED
- ✅ Dynamic Mock Data Management: Real-time response updates without server restart
- ✅ Server Lifecycle Management: Comprehensive server discovery and health monitoring
- ✅ Scenario Management: Save and switch between different mock configurations with database persistence
- ✅ Enhanced Admin API: Advanced log search, mock data updates, performance analytics
- ✅ Database Migration System: Robust schema versioning and migration framework
- ✅ Performance Monitoring: Session tracking, correlation IDs, P95/P99 metrics
Phase 3 (In Development)
- Enhanced Response Mocking:
  - Use examples or example fields from the OpenAPI spec for more realistic mock responses
  - Support for dynamic data generation (e.g., using Faker)
  - Custom response mappings and scripts
- Server Lifecycle Management:
  - Start/stop mock servers programmatically via MCP tools
  - Container orchestration and scaling
  - Multi-environment management
- Advanced Testing Features:
  - Load testing and performance simulation
  - Chaos engineering capabilities
  - Contract testing integration
Phase 4 (Planned)
- More API Specification Formats:
  - Postman Collections
  - GraphQL SDL
  - RAML
  - API Blueprint
  - gRPC Protobufs (may require conversion for FastAPI)
- Advanced Features:
  - Stateful mocks with persistent data
  - Advanced validation and schema enforcement
  - Integration with testing frameworks
  - CLI tool for standalone usage
  - Real-time collaboration features
Prioritized Support Roadmap for API Formats
- OpenAPI (Swagger) - Current Focus
- Postman Collections
- GraphQL SDL
- RAML
- API Blueprint
- gRPC Protobufs
4. manage_mock_data ✨ NEW
Manage dynamic response data and scenarios for MockLoop servers without server restart.
Parameters:
- server_url: (string, required) URL of the mock server (e.g., "http://localhost:8000").
- operation: (string, required) Operation to perform: "update_response", "create_scenario", "switch_scenario", or "list_scenarios".
- endpoint_path: (string, optional) API endpoint path for response updates.
- response_data: (object, optional) New response data for endpoint updates.
- scenario_name: (string, optional) Name for scenario operations.
- scenario_config: (object, optional) Scenario configuration for creation.
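For example, an update_response operation might look like this (the response body is illustrative):

```json
{
  "server_url": "http://localhost:8000",
  "operation": "update_response",
  "endpoint_path": "/pet/1",
  "response_data": { "id": 1, "name": "doggie", "status": "sold" }
}
```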
Output:
- Success confirmation for operations
- Scenario lists and current active scenario
- Dynamic response updates without server restart
- Runtime configuration management
Framework Integration Examples
MockLoop MCP integrates seamlessly with popular AI frameworks for enhanced development workflows:
LangGraph Integration
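A minimal sketch of a LangGraph graph whose single node exercises a running mock server; the /pet/{id} endpoint and the port are assumptions based on the Petstore example used earlier in this README:

```python
from typing import TypedDict

import requests
from langgraph.graph import END, StateGraph


class MockTestState(TypedDict):
    pet_id: int
    response: dict


def call_mock_server(state: MockTestState) -> dict:
    """Call an endpoint on the locally running MockLoop mock server."""
    resp = requests.get(f"http://localhost:8000/pet/{state['pet_id']}", timeout=10)
    return {"response": resp.json()}


# Build a one-node graph that queries the mock server and stops.
graph = StateGraph(MockTestState)
graph.add_node("call_mock", call_mock_server)
graph.set_entry_point("call_mock")
graph.add_edge("call_mock", END)
app = graph.compile()

if __name__ == "__main__":
    result = app.invoke({"pet_id": 1, "response": {}})
    print(result["response"])
```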
CrewAI Integration
LangChain Integration
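A minimal sketch wrapping a mock-server call as a LangChain tool; the endpoint is again the illustrative Petstore path:

```python
import requests
from langchain_core.tools import tool


@tool
def get_pet_from_mock(pet_id: int) -> dict:
    """Fetch a pet record from the locally running MockLoop Petstore mock."""
    resp = requests.get(f"http://localhost:8000/pet/{pet_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()


# The tool can be bound to any tool-calling LLM, e.g.:
#   llm_with_tools = llm.bind_tools([get_pet_from_mock])
# It can also be invoked directly, which is handy for smoke-testing the mock:
print(get_pet_from_mock.invoke({"pet_id": 1}))
```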
Changelog
Version 2.1.0 - Complete Enhancement Integration
Released: May 2025
🆕 Major Features
- Dynamic Response Management: Real-time response updates without server restart via the manage_mock_data tool
- Advanced Scenario Management: Create, switch, and manage test scenarios with persistent storage
- Enhanced Performance Monitoring: Comprehensive performance metrics with session tracking and analytics
- Database Migration System: Robust schema versioning and migration framework
- Framework Integration: Native support for LangGraph, CrewAI, and LangChain workflows
🔧 New MCP Tools
- manage_mock_data: Dynamic response management and scenario handling
- Enhanced query_mock_logs: Advanced filtering with session and performance analytics
- Enhanced discover_mock_servers: Comprehensive server discovery with health monitoring
📦 New Components
- Database Migration System: Full versioning and migration framework (database_migration.py)
- HTTP Client Extensions: Enhanced MockServerClient with admin API integration (utils/http_client.py)
- Enhanced Log Analyzer: AI-powered insights generation with performance metrics (log_analyzer.py)
- Scenario Management: Complete scenario lifecycle management with database persistence
🚀 Enhanced Features
- Advanced Admin UI: Log Analytics tab with search, filtering, and scenario management
- Session Tracking: Comprehensive session analytics with correlation IDs
- Performance Metrics: P95/P99 response times, error rate analysis, traffic pattern detection
- Runtime Configuration: Dynamic endpoint behavior modification without restart
- Enhanced Database Schema: 20+ columns including session tracking, performance metrics, and scenario data
🔧 Technical Improvements
- Enhanced database schema with automatic migration (versions 0-6)
- Improved error handling and logging throughout the system
- Advanced SQL query optimization with proper indexing
- Concurrent access protection and transaction safety
- Backup creation before migrations for data safety
- Enhanced Docker integration with better port management
Version 2.0.0 - Enhanced AI Assistant Integration
Released: May 2025
🆕 New Features
- Advanced Log Analysis: Query and analyze request logs with filtering, performance metrics, and intelligent insights
- Server Discovery: Automatically discover running mock servers and match them with generated configurations
- Performance Monitoring: Real-time performance metrics, error rate analysis, and traffic pattern detection
- AI Assistant Integration: Optimized for AI-assisted development workflows with structured data output
🔧 New MCP Tools
- query_mock_logs: Advanced log querying with filtering and analysis capabilities
- discover_mock_servers: Comprehensive server discovery and health monitoring
📦 New Components
- HTTP Client: Async HTTP client for mock server communication (utils/http_client.py)
- Server Manager: Mock server discovery and management (mock_server_manager.py)
- Log Analyzer: Advanced log analysis with insights generation (log_analyzer.py)
🚀 Enhancements
- Enhanced admin UI with auto-refresh and advanced filtering
- SQLite-based request logging with comprehensive metadata
- Performance metrics calculation (response times, percentiles, error rates)
- Traffic pattern detection (bot detection, high-volume clients)
- Automated insights and recommendations for debugging
🔧 Technical Improvements
- Added aiohttp dependency for async HTTP operations
- Improved error handling and logging throughout the system
- Enhanced database schema with admin request filtering
- Better Docker integration and port management
Version 1.0.0 - Initial Release
Released: 2025
🆕 Initial Features
- API mock generation from OpenAPI specifications
- FastAPI-based mock servers with Docker support
- Basic request/response logging
- Admin UI for monitoring
- Authentication and webhook support
Contributing
We welcome contributions! Please see our Enhancement Plan for current development priorities and planned features.
Development Setup
- Fork the repository
- Create a feature branch
- Install dependencies: pip install -r requirements.txt
- Make your changes
- Test with existing mock servers
- Submit a pull request
License
This project is licensed under the MIT License.