Integrates with Docker for running Prometheus, Node Exporter, cAdvisor, and memory stress testing
Uses FastAPI as the web framework for implementing the MCP server API endpoints
Provides Git repository analysis, searching, and diff functionality through dedicated endpoints
Provides integration with Langflow through the MCPAIComponent for use in Langflow pipelines
Integrates with OpenAI API to provide text completion and chat functionality via dedicated endpoints
Enables querying and monitoring metrics through Prometheus, including instant queries, range queries, alerts, and targets
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Model Control Plane (MCP) Server analyze the last commit in my GitHub repo for any breaking changes"
That's it! The server will respond to your query, and you can continue using it as needed.
MCP Server with OpenAI, Git, Filesystem, and Prometheus Integration
This repository contains a Model Control Plane (MCP) server implementation that supports OpenAI services, Git repository analysis, local filesystem operations, and Prometheus integration.
Project Structure
MCP/
├── mcp/ # Core MCP library modules
├── scripts/ # Utility scripts and test tools
├── prometheus/ # Prometheus configuration
├── docker-compose.yml # Docker configuration
├── mcp_server.py # Main server implementation
├── mcp_run # Main runner script (shortcut)
└── README.md          # This file
Requirements
Python 3.8+
FastAPI
Uvicorn
OpenAI SDK
GitPython
Requests
Docker and Docker Compose (for Prometheus features)
Installation
Clone this repository
Install the dependencies:
pip install -r requirements.txt
Environment Variables
Set the following environment variables:
For Azure OpenAI:
export AZURE_OPENAI_ENDPOINT="your-azure-endpoint"
export AZURE_OPENAI_API_KEY="your-azure-api-key"
export AZURE_OPENAI_API_VERSION="2023-05-15"
export AZURE_DEPLOYMENT_NAME="your-deployment-name"
For Standard OpenAI:
export OPENAI_API_KEY="your-openai-api-key"
# Optional: Specify which models to use
export OPENAI_CHAT_MODEL="gpt-4o-mini" # Default if not specified
export OPENAI_COMPLETION_MODEL="gpt-3.5-turbo-instruct" # Default if not specified
For Prometheus:
export PROMETHEUS_URL="http://localhost:9090" # Default if not specified
Running the Server
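As a rough sketch of how the server can consume these variables (the default values mirror the ones documented above; the Azure-vs-standard selection rule is an assumption, not necessarily the server's exact logic):

```python
import os

# Defaults mirror the documented fallbacks above.
PROMETHEUS_URL = os.environ.get("PROMETHEUS_URL", "http://localhost:9090")
OPENAI_CHAT_MODEL = os.environ.get("OPENAI_CHAT_MODEL", "gpt-4o-mini")
OPENAI_COMPLETION_MODEL = os.environ.get(
    "OPENAI_COMPLETION_MODEL", "gpt-3.5-turbo-instruct"
)

def use_azure() -> bool:
    # Assumption: prefer Azure OpenAI whenever its endpoint is configured.
    return bool(os.environ.get("AZURE_OPENAI_ENDPOINT"))
```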
Start the MCP server:
python scripts/start_mcp_server.py
Or for more options:
python scripts/start_mcp_server.py --host 0.0.0.0 --port 8000 --debug
The server will be available at http://localhost:8000.
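Once the server is up, a quick way to confirm it is responding is to hit the GET /v1/models endpoint listed under API Endpoints. A standard-library sketch that avoids extra dependencies:

```python
import json
import urllib.request

def list_models(base_url: str = "http://localhost:8000") -> dict:
    """Fetch the documented GET /v1/models endpoint and return the parsed JSON."""
    with urllib.request.urlopen(f"{base_url}/v1/models", timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Example (requires a running server):
#     print(list_models())
```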
Unified Testing Tool
We provide a unified testing script that gives you a user-friendly interface to all testing functionality:
./mcp_run
This interactive script provides:
Filesystem tests
Git integration tests
Memory analysis tools
Prometheus tests & memory stress
MCP server management
Environment setup
Individual Tests
You can also run individual tests directly:
Test the OpenAI integration:
python scripts/test_mcp_client.py
Test the Git integration (provide a Git repository URL):
python scripts/test_git_integration.py https://github.com/username/repository
Test the Git diff functionality (analyze requirements compatibility):
python scripts/test_git_diff.py https://github.com/username/repository [commit-sha]
Test the filesystem functionality:
python scripts/test_filesystem.py
Test the Langflow integration with MCP:
python scripts/test_langflow_integration.py [OPTIONAL_REPO_URL]
Test the Prometheus integration:
python scripts/test_prometheus.py [prometheus_url]
Advanced Git Analysis
For more advanced Git repository analysis with AI recommendations:
python scripts/langflow_git_analyzer.py https://github.com/username/repository
You can also search for specific patterns in the repository:
python scripts/langflow_git_analyzer.py https://github.com/username/repository --search "def main"
Or analyze the last commit diff with AI insights:
python scripts/langflow_git_analyzer.py https://github.com/username/repository --diff
Memory Analysis Tools
MCP includes several tools for memory monitoring and analysis:
# Basic memory diagnostics with AI analysis
python scripts/ai_memory_diagnostics.py
# Interactive memory dashboard
python scripts/mcp_memory_dashboard.py
# Memory alerting system
python scripts/mcp_memory_alerting.py
You can also simulate memory pressure for testing:
python scripts/simulate_memory_pressure.py --target 85 --duration 300
Prometheus Integration
Setup
Start the Prometheus stack using Docker Compose:
docker compose up -d
This will start:
Prometheus server (accessible at http://localhost:9090)
Node Exporter (for host metrics)
cAdvisor (for container metrics)
For stress testing, you can start the memory stress container:
docker compose up -d --build memory-stress
Or use the container test script:
./scripts/container-memory-test.sh start
Docker Configuration and Reset Scripts
This project includes multiple Docker configurations and reset scripts for reliable operation across different environments:
Docker Configurations
Standard Configuration (docker-compose.yml): Uses custom Dockerfiles for Prometheus and Langflow to ensure consistent permissions across systems.
Bridge Network Configuration (docker-compose.bridge.yml): Alternative configuration that uses bridge networking for environments where host networking is problematic.
Custom Dockerfiles for Solving Permission Issues
The project uses custom Dockerfiles for both Prometheus and Langflow to solve common permission issues:
Dockerfile.prometheus: Sets up the Prometheus configuration with proper permissions for the nobody user.
Dockerfile.langflow: Copies the components directory into the container without changing file ownership, allowing Langflow to access the components without permission errors.
This approach eliminates the need for volume mounts that can lead to permission conflicts across different machines and user configurations.
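As an illustrative sketch (not necessarily the exact contents of this repo's Dockerfile.prometheus), the fix amounts to baking the config into the image with the right ownership:

```dockerfile
FROM prom/prometheus:latest
# Copy the config into the image instead of volume-mounting it, and give
# ownership to the nobody user that Prometheus runs as, so no host-side
# permission fixes are needed.
COPY --chown=nobody:nobody prometheus/prometheus.yml /etc/prometheus/prometheus.yml
```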
Reset Scripts
All Services Reset (reset-all.sh): Reset all containers with a single command.
# Basic reset (rebuilds containers with existing volumes)
./reset-all.sh
# Full reset (removes volumes and rebuilds containers)
./reset-all.sh --clean
Individual Service Reset:
# Reset only Prometheus
./reset-prometheus.sh
# Reset only Langflow
./reset-langflow.sh
These scripts ensure that the containers are properly configured with correct permissions and the latest code changes.
Troubleshooting
If you encounter permission issues:
Use the reset scripts to rebuild the containers
Check the logs with docker compose logs <service_name>
Make sure any components added to Langflow are included in the Dockerfile.langflow
Cross-Machine Deployment
When deploying to a new machine:
Clone the repository
Make reset scripts executable:
chmod +x *.sh
Run the reset script:
./reset-all.sh
The custom Dockerfiles automatically handle all permission issues that might occur across different systems.
Using Prometheus Client
The MCPAIComponent class includes Prometheus capabilities:
from langflow import MCPAIComponent
# Initialize the client
mcp = MCPAIComponent(mcp_server_url="http://localhost:8000")
# Instant query (current metric values)
result = mcp.prometheus_query("up")
# Range query (metrics over time)
result = mcp.prometheus_query_range(
query="rate(node_cpu_seconds_total{mode='system'}[1m])",
start="2023-03-01T00:00:00Z",
end="2023-03-01T01:00:00Z",
step="15s"
)
# Get all labels
labels = mcp.prometheus_get_labels()
# Get label values
values = mcp.prometheus_get_label_values("job")
# Get targets
targets = mcp.prometheus_get_targets()
# Get alerts
alerts = mcp.prometheus_get_alerts()
Useful PromQL Queries
CPU Usage:
rate(node_cpu_seconds_total{mode!="idle"}[1m])
Memory Usage:
node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes
Disk Usage:
node_filesystem_avail_bytes{mountpoint="/"} / node_filesystem_size_bytes{mountpoint="/"}
Container CPU Usage:
rate(container_cpu_usage_seconds_total[1m])
Container Memory Usage:
container_memory_usage_bytes
API Endpoints
OpenAI Endpoints
GET /v1/models - List all available models
GET /v1/models/{model_id} - Get information about a specific model
POST /v1/models/azure-gpt-4/completion - Generate text completion using Azure OpenAI
POST /v1/models/azure-gpt-4/chat - Generate chat response using Azure OpenAI
POST /v1/models/openai-gpt-chat/chat - Generate chat response using the OpenAI chat model
POST /v1/models/openai-gpt-completion/completion - Generate text completion using the OpenAI completion model
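These endpoints can also be called directly over HTTP. A minimal sketch for the chat endpoint follows; the JSON field names (messages, max_tokens, temperature) are inferred from the chat() parameters shown under Client Usage and may differ from the server's actual request schema:

```python
import json
import urllib.request

def chat_via_http(messages, base_url="http://localhost:8000"):
    # Field names below are an assumption based on the client-side chat() call.
    body = json.dumps(
        {"messages": messages, "max_tokens": 100, "temperature": 0.7}
    ).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/models/openai-gpt-chat/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```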
Git Integration Endpoints
POST /v1/models/git-analyzer/analyze - Analyze a Git repository
POST /v1/models/git-analyzer/search - Search a Git repository for files matching a pattern
POST /v1/models/git-analyzer/diff - Get the diff of the last commit in a repository
Filesystem Endpoints
POST /v1/models/filesystem/list - List contents of a directory
POST /v1/models/filesystem/read - Read a file's contents
POST /v1/models/filesystem/read-multiple - Read multiple files at once
POST /v1/models/filesystem/write - Write content to a file
POST /v1/models/filesystem/edit - Edit a file with multiple replacements
POST /v1/models/filesystem/mkdir - Create a directory
POST /v1/models/filesystem/move - Move a file or directory
POST /v1/models/filesystem/search - Search for files matching a pattern
POST /v1/models/filesystem/info - Get information about a file or directory
Prometheus Endpoints
POST /v1/models/prometheus/query - Execute an instant query
POST /v1/models/prometheus/query_range - Execute a range query
POST /v1/models/prometheus/series - Get series data
GET /v1/models/prometheus/labels - Get all available labels
POST /v1/models/prometheus/label_values - Get values for a specific label
GET /v1/models/prometheus/targets - Get all targets
GET /v1/models/prometheus/rules - Get all rules
GET /v1/models/prometheus/alerts - Get all alerts
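The same raw-HTTP pattern works for the Prometheus endpoints. In this sketch the request body's "query" field is an assumption mirroring the prometheus_query() client call shown earlier:

```python
import json
import urllib.request

def prometheus_instant_query(query="up", base_url="http://localhost:8000"):
    # The "query" field name is an assumption mirroring mcp.prometheus_query().
    body = json.dumps({"query": query}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/models/prometheus/query",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))
```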
Client Usage
You can use the MCPAIComponent in your LangFlow pipelines by providing the MCP server URL:
from langflow import MCPAIComponent
mcp = MCPAIComponent(mcp_server_url="http://localhost:8000")
# List available models
models = mcp.list_models()
print(models)
# Generate chat completion with OpenAI model
chat_response = mcp.chat(
model_id="openai-gpt-chat",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Tell me a joke about programming."}
],
max_tokens=100,
temperature=0.7
)
print(chat_response)
# Generate text completion with OpenAI model
completion_response = mcp.completion(
model_id="openai-gpt-completion",
prompt="Write a function in Python to calculate the factorial of a number:",
max_tokens=150,
temperature=0.7
)
print(completion_response)
# Analyze a Git repository
repo_analysis = mcp.analyze_git_repo("https://github.com/username/repository")
print(repo_analysis)
# Search a Git repository
search_results = mcp.search_git_repo("https://github.com/username/repository", "def main")
print(search_results)
# Get the diff of the last commit
diff_info = mcp.get_git_diff("https://github.com/username/repository")
print(diff_info)
# List files in the current directory
dir_contents = mcp.list_directory()
print(dir_contents)
# Read a file
file_content = mcp.read_file("path/to/file.txt")
print(file_content)
# Write to a file
write_result = mcp.write_file("path/to/new_file.txt", "Hello, world!")
print(write_result)
# Search for files
search_result = mcp.search_files("*.py")
print(search_result)
Using the GitCodeAnalyzer Class
For more structured Git analysis, you can use the GitCodeAnalyzer class:
from langflow_git_analyzer import GitCodeAnalyzer
# Initialize the analyzer
analyzer = GitCodeAnalyzer(mcp_server_url="http://localhost:8000")
# Analyze a repository
analyzer.analyze_repository("https://github.com/username/repository")
# Get a summary
summary = analyzer.get_repository_summary()
print(summary)
# Get AI recommendations
recommendations = analyzer.get_repository_recommendations()
print(recommendations)
# Analyze code patterns
pattern_analysis = analyzer.analyze_code_pattern("def process")
print(pattern_analysis)
# Get the last commit diff
diff_info = analyzer.get_last_commit_diff()
print(diff_info)
# Get a formatted summary of the diff
diff_summary = analyzer.get_formatted_diff_summary()
print(diff_summary)
# Get AI analysis of the commit changes
diff_analysis = analyzer.analyze_commit_diff()
print(diff_analysis)
Troubleshooting
Prometheus Issues
Verify Prometheus is running:
docker ps | grep prometheus
Check you can access the Prometheus UI: http://localhost:9090
Verify the MCP server is running and accessible
Check the MCP server logs for errors
Try simple queries first to verify connectivity (e.g., the up query)
OpenAI Issues
Verify your API keys are set correctly
Check for rate limiting or quota issues
Verify you're using supported models for your API key
Git Issues
Ensure the Git repository URL is accessible
Check for authentication issues if using private repositories
Ensure GitPython is installed correctly