# Model Control Plane (MCP) Server
This is a local-only server: it can only run on the client's local machine because it depends on local resources (such as the local filesystem).
This repository contains a Model Control Plane (MCP) server implementation that supports OpenAI services, Git repository analysis, local filesystem operations, and Prometheus integration.
## Project Structure
## Requirements
- Python 3.8+
- FastAPI
- Uvicorn
- OpenAI SDK
- GitPython
- Requests
- Docker and Docker Compose (for Prometheus features)
## Installation
- Clone this repository
- Install the dependencies:
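For example, assuming the project pins its dependencies in a `requirements.txt` at the repository root (the file name is an assumption):

```bash
pip install -r requirements.txt
```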
## Environment Variables
Set the following environment variables, depending on which integrations you use. The sketch below covers Azure OpenAI, standard OpenAI, and Prometheus together.
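The variable names here are assumptions based on common conventions; check the server code for the authoritative names.

```bash
# Azure OpenAI (variable names are assumptions)
export AZURE_OPENAI_API_KEY="your-azure-openai-key"
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="your-deployment-name"

# Standard OpenAI
export OPENAI_API_KEY="your-openai-key"

# Prometheus
export PROMETHEUS_URL="http://localhost:9090"
```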
## Running the Server
Start the MCP server:
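Assuming the entry point is `mcp_server.py` (the file name is an assumption):

```bash
python mcp_server.py
```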
Or for more options:
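For example, run it through Uvicorn directly to control host, port, and reload behavior (the `mcp_server:app` module path is an assumption):

```bash
uvicorn mcp_server:app --host 0.0.0.0 --port 8000 --reload
```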
The server will be available at http://localhost:8000.
## Unified Testing Tool
We provide a unified testing script that gives you a user-friendly interface to all testing functionality:
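The script name below is an assumption; substitute the actual file in the repository:

```bash
python run_tests.py
```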
This interactive script provides:
- Filesystem tests
- Git integration tests
- Memory analysis tools
- Prometheus tests & memory stress
- MCP server management
- Environment setup
## Individual Tests
You can also run individual tests directly (a command sketch follows this list):

- Test the OpenAI integration
- Test the Git integration (provide a Git repository URL)
- Test the Git diff functionality (analyze requirements compatibility)
- Test the filesystem functionality
- Test the LangFlow integration with MCP
- Test the Prometheus integration
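The corresponding commands might look like this; every script name here is an assumption, so adjust to the files actually present in the repository:

```bash
python test_openai.py
python test_git_integration.py https://github.com/username/repository
python test_git_diff.py https://github.com/username/repository
python test_filesystem.py
python test_langflow.py
python test_prometheus.py
```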
## Advanced Git Analysis
For more advanced Git repository analysis with AI recommendations:
You can also search for specific patterns in the repository:
Or analyze the last commit diff with AI insights:
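The three operations above might be invoked as follows; the script name and flags are assumptions:

```bash
# Full repository analysis with AI recommendations
python git_analyzer.py https://github.com/username/repository

# Search for specific patterns in the repository (flag is an assumption)
python git_analyzer.py https://github.com/username/repository --search "pattern"

# Analyze the last commit diff with AI insights (flag is an assumption)
python git_analyzer.py https://github.com/username/repository --diff
```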
## Memory Analysis Tools
MCP includes several tools for memory monitoring and analysis:
You can also simulate memory pressure for testing:
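Hypothetical invocations (script names and flags are assumptions):

```bash
# Monitor current memory usage
python memory_monitor.py

# Simulate memory pressure for testing
python memory_stress.py --size-mb 512
```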
## Prometheus Integration

### Setup
- Start the Prometheus stack using Docker Compose:
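Assuming a `docker-compose.yml` is provided at the repository root:

```bash
docker-compose up -d
```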
This will start:
- Prometheus server (accessible at http://localhost:9090)
- Node Exporter (for host metrics)
- cAdvisor (for container metrics)
- For stress testing, you can start the memory stress container:
Or use the container test script:
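Hypothetical commands (the service name and script name are assumptions):

```bash
# Start the memory stress container
docker-compose up -d memory-stress

# Or run the container test script
./test_containers.sh
```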
### Using Prometheus Client

The `MCPAIComponent` class includes Prometheus capabilities:
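A minimal sketch; the import path, constructor argument, and method name are assumptions, so check the class for the real API:

```python
from mcp_component import MCPAIComponent  # import path is an assumption

# Point the component at the running MCP server
mcp = MCPAIComponent(mcp_server_url="http://localhost:8000")

# Run an instant PromQL query through the MCP server (method name is an assumption)
result = mcp.prometheus_query('rate(node_cpu_seconds_total{mode!="idle"}[1m])')
print(result)
```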
### Useful PromQL Queries
- CPU Usage: `rate(node_cpu_seconds_total{mode!="idle"}[1m])`
- Memory Usage: `node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes`
- Disk Usage: `node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}`
- Container CPU Usage: `rate(container_cpu_usage_seconds_total[1m])`
- Container Memory Usage: `container_memory_usage_bytes`
## API Endpoints

### OpenAI Endpoints
- `GET /v1/models` - List all available models
- `GET /v1/models/{model_id}` - Get information about a specific model
- `POST /v1/models/azure-gpt-4/completion` - Generate text completion using Azure OpenAI
- `POST /v1/models/azure-gpt-4/chat` - Generate chat response using Azure OpenAI
- `POST /v1/models/openai-gpt-chat/chat` - Generate chat response using OpenAI chat model
- `POST /v1/models/openai-gpt-completion/completion` - Generate text completion using OpenAI completion model
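For example, a chat request might look like this; the request schema is an assumption (OpenAI-style messages), so verify it against the server code:

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/models/azure-gpt-4/chat",
    json={"messages": [{"role": "user", "content": "Hello!"}]},  # schema is an assumption
)
print(resp.json())
```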
### Git Integration Endpoints
- `POST /v1/models/git-analyzer/analyze` - Analyze a Git repository
- `POST /v1/models/git-analyzer/search` - Search a Git repository for files matching a pattern
- `POST /v1/models/git-analyzer/diff` - Get the diff of the last commit in a repository
### Filesystem Endpoints
- `POST /v1/models/filesystem/list` - List contents of a directory
- `POST /v1/models/filesystem/read` - Read a file's contents
- `POST /v1/models/filesystem/read-multiple` - Read multiple files at once
- `POST /v1/models/filesystem/write` - Write content to a file
- `POST /v1/models/filesystem/edit` - Edit a file with multiple replacements
- `POST /v1/models/filesystem/mkdir` - Create a directory
- `POST /v1/models/filesystem/move` - Move a file or directory
- `POST /v1/models/filesystem/search` - Search for files matching a pattern
- `POST /v1/models/filesystem/info` - Get information about a file or directory
### Prometheus Endpoints
- `POST /v1/models/prometheus/query` - Execute an instant query
- `POST /v1/models/prometheus/query_range` - Execute a range query
- `POST /v1/models/prometheus/series` - Get series data
- `GET /v1/models/prometheus/labels` - Get all available labels
- `POST /v1/models/prometheus/label_values` - Get values for a specific label
- `GET /v1/models/prometheus/targets` - Get all targets
- `GET /v1/models/prometheus/rules` - Get all rules
- `GET /v1/models/prometheus/alerts` - Get all alerts
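For example, an instant query; the request field name is an assumption:

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/models/prometheus/query",
    json={"query": "up"},  # field name is an assumption
)
print(resp.json())
```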
## Client Usage

You can use the `MCPAIComponent` in your LangFlow pipelines by providing the MCP server URL:
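A minimal sketch; the import path and method name are assumptions:

```python
from mcp_component import MCPAIComponent  # import path is an assumption

mcp = MCPAIComponent(mcp_server_url="http://localhost:8000")

# List the models exposed by the MCP server (method name is an assumption)
for model in mcp.list_models():
    print(model)
```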
## Using the GitCodeAnalyzer Class

For more structured Git analysis, you can use the `GitCodeAnalyzer` class:
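A sketch under assumed names (import path, constructor argument, and methods are assumptions):

```python
from git_analyzer import GitCodeAnalyzer  # import path is an assumption

analyzer = GitCodeAnalyzer(mcp_server_url="http://localhost:8000")

# Analyze a repository, then retrieve an AI-generated summary (method names are assumptions)
analyzer.analyze_repository("https://github.com/username/repository")
print(analyzer.get_repository_summary())
```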
## Troubleshooting

### Prometheus Issues
- Verify Prometheus is running: `docker ps | grep prometheus`
- Check that you can access the Prometheus UI at http://localhost:9090
- Verify the MCP server is running and accessible
- Check the MCP server logs for errors
- Try simple queries first to verify connectivity (e.g., the `up` query)
### OpenAI Issues
- Verify your API keys are set correctly
- Check for rate limiting or quota issues
- Verify you're using supported models for your API key
### Git Issues
- Ensure the Git repository URL is accessible
- Check for authentication issues if using private repositories
- Ensure GitPython is installed correctly