Model Context Protocol (MCP)
Overview
MCP is a modular framework for managing, executing, and monitoring AI model contexts, including LLM prompts, Jupyter notebooks, and Python scripts. It provides a FastAPI backend and a modern React (MUI/ReactFlow) dashboard frontend (see `frontend/`).
Features
- Register and manage different types of MCPs (LLM prompts, notebooks, scripts)
- Execute MCPs and view results in a web UI
- Monitor server health and statistics
- Extensible for new MCP types
- High-performance database operations with connection pooling
- Query caching with Redis
- PostgreSQL index optimization
- System monitoring and metrics collection
- AI Co-Pilot for workflow optimization
- Dependency visualization and analysis
Setup
Prerequisites
- Python 3.9+
- PostgreSQL 12+
- Redis 6+
- (Recommended) Create and activate a virtual environment
Install dependencies
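Assuming dependencies are pinned in a `requirements.txt` at the repository root:

```bash
python -m venv .venv          # recommended virtual environment
source .venv/bin/activate
pip install -r requirements.txt
```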
Environment Variables
- Set `MCP_API_KEY` for API authentication (optional, defaults provided)
- For LLMs, set `ANTHROPIC_API_KEY` if using Claude
- Set `DATABASE_URL` for the PostgreSQL connection
- Set `REDIS_URL` for the Redis connection
Start the backend
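The entry-point module isn't specified in this README; a typical FastAPI invocation looks like the following (the `mcp.api.main:app` path is a guess, adjust to the actual module):

```bash
uvicorn mcp.api.main:app --reload --port 8000
```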
Start the frontend
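The default port 5173 indicates a Vite dev server, so the standard commands apply:

```bash
cd frontend
npm install
npm run dev   # serves the dashboard at http://localhost:5173
```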
Usage
- Access the dashboard at http://localhost:5173 (default React dev server)
- Create, manage, and test MCPs from the UI
- Monitor health and stats from the sidebar
- Use the AI Co-Pilot for workflow optimization
- Visualize component dependencies
- Monitor system performance
Components
Database Management
- Connection pooling for efficient database access
- Query caching with Redis for improved performance
- PostgreSQL index optimization for faster queries
- Database monitoring and statistics
System Monitoring
- Real-time system health monitoring
- Performance metrics collection
- Alerting system with severity levels
- Prometheus metrics integration
AI Co-Pilot
- Workflow optimization suggestions
- Error resolution assistance
- Best practice recommendations
- Performance improvements
Dependency Visualizer
- Component relationship visualization
- Dependency conflict detection
- Version compatibility checking
- Visual dependency mapping
Adding New MCPs
- Implement a new MCP class in `mcp/core/` (a minimal sketch follows this list)
- Register it in the backend
- Add UI support in the React frontend (`frontend/`); the former Streamlit entry point `mcp/ui/app.py` has been removed
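The base-class contract isn't documented in this README, so the sketch below assumes a hypothetical interface with a single `execute` method; treat it as a shape to follow, not the project's actual API.

```python
# Hypothetical sketch -- the real base class in mcp/core/ may differ.
from dataclasses import dataclass
from typing import Any


@dataclass
class EchoMCP:
    """A trivial MCP type that returns its input unchanged, for illustration."""

    name: str

    def execute(self, payload: dict[str, Any]) -> dict[str, Any]:
        # Real MCP types would run an LLM prompt, a notebook, or a script here.
        return {"name": self.name, "echo": payload}
```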
Running Tests
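Assuming the suite under `tests/` uses pytest (the usual choice for a FastAPI project):

```bash
pytest tests/
```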
Project Structure
- `mcp/api/` - FastAPI backend
- `frontend/` - React (MUI/ReactFlow) frontend (primary UI)
- `mcp/core/` - Core MCP types and logic
- `mcp/db/` - Database management and optimization
- `mcp/monitoring/` - System monitoring and metrics
- `mcp/components/` - AI Co-Pilot and Dependency Visualizer
- `tests/` - Test suite
License
MIT
API Documentation
Once the server is running, you can access:
- API documentation: http://localhost:8000/docs
- Prometheus metrics: http://localhost:8000/metrics
- Health check: http://localhost:8000/health
- Statistics: http://localhost:8000/stats
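A quick smoke test once the backend is running (add the `X-API-KEY` header from the Security section below if your deployment enforces it on these endpoints):

```bash
curl http://localhost:8000/health
```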
Security
- API key authentication is required for all endpoints
- Rate limiting is enabled by default
- CORS is configured to allow only specific origins
- All sensitive configuration is managed through environment variables
- Database connection pooling with health checks
- Redis connection security
Monitoring
The server includes:
- Prometheus metrics for request counts, latencies, and server executions
- Structured JSON logging
- Health check endpoint
- Server statistics endpoint
- System resource monitoring
- Database performance metrics
- Cache statistics
- Alert management
Contributing
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
Additional Dependencies
This project requires the following additional Python packages:
Core Dependencies
- pandas
- numpy
- matplotlib
- papermill
- nbformat
- jupyter
- anthropic
Database Dependencies
- sqlalchemy
- psycopg2-binary
- redis
- alembic
Monitoring Dependencies
- prometheus-client
- psutil
- networkx
- graphviz
Install all dependencies with:
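The package lists above suggest a pinned `requirements.txt`; assuming one exists at the repository root:

```bash
pip install -r requirements.txt
```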
Using the Notebook MCP to Call an LLM (Claude)
The example notebook (`mcp/notebooks/example.ipynb`) demonstrates:
- Data analysis and plotting
- Calling the Claude LLM via the `anthropic` Python package

To use the LLM cell, ensure you have set your `ANTHROPIC_API_KEY` in your environment or `.env` file. The notebook cell for LLM looks like this:
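The cell itself isn't reproduced here; a representative version using the `anthropic` Python SDK looks like the following (the model name is an assumption, substitute any available Claude model):

```python
import os

import anthropic

# The client reads ANTHROPIC_API_KEY from the environment if not passed explicitly.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # assumed model name
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize the plotted data in two sentences."}
    ],
)
print(response.content[0].text)
```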
API Key Management and Authentication
Creating and Managing API Keys
- Use the `/api/apikeys/` endpoint to create a new API key (admin or self-service).
- List your API keys with `GET /api/apikeys/`.
- Revoke an API key with `POST /api/apikeys/revoke`.
Authenticating with API Keys
- Pass your API key in the `X-API-KEY` header for any authenticated endpoint.
- Alternatively, you can use a Bearer JWT in the `Authorization` header.
- All endpoints that require authentication now support both methods.
Example:
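Both header styles, shown against the statistics endpoint (an illustrative target; any authenticated endpoint behaves the same):

```bash
# API key header
curl -H "X-API-KEY: $MCP_API_KEY" http://localhost:8000/stats

# Or a Bearer JWT in the Authorization header
curl -H "Authorization: Bearer $JWT" http://localhost:8000/stats
```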
Execution Monitor Enhancements (UI)
The Execution Monitor now includes the following panels (with mock data, ready for backend integration):
- Resource Usage Panel: View CPU and memory usage per workflow step.
- Time-Travel Debugging Panel: Select a step to view its state at execution time.
- Performance Suggestions Panel: See optimization and bottleneck suggestions.
- Real-Time Metrics Dashboard: Monitor live metrics (CPU, memory, throughput, etc.).
These panels are integrated into the workflow Execution Monitor and will display real data once backend support is available.
UI Notice
The React frontend in `frontend/` is now the only supported UI. The previous Streamlit UI (`mcp/ui/`) was deprecated and has been removed. Please use the React app for all development and usage.