drunk-mcp-proxy
A powerful, production-ready dynamic proxy server for the Model Context Protocol (MCP) and LLM APIs, built with Python and FastMCP. This service enables MCP clients and LLM-compatible applications to seamlessly connect to multiple backend MCP servers and LLM providers through a unified, scalable interface with advanced features including authentication, CORS support, and environment-based configuration.
Overview
drunk-mcp-proxy acts as a central gateway for both Model Context Protocol (MCP) services and LLM providers, providing:
Unified Interface: Single endpoint for multiple backend MCP servers and LLM providers
Dynamic Routing: Automatic routing to configured backend services
Namespace Isolation: Prevent tool name conflicts with per-server namespaces
OpenAPI Integration: Automatic conversion of OpenAPI specs to MCP tools
LLM Proxy: Multi-provider LLM API gateway with OpenAI-compatible endpoints
Anthropic Compatibility: Proxy Anthropic Messages API requests through OpenAI-compatible backends
WebSocket Responses API: Native WebSocket support for OpenAI Responses API streaming
Enterprise Authentication: 14+ pluggable auth providers (JWT, OAuth, GitHub, Azure, etc.)
Production Ready: Health checks, CORS, structured logging, Docker support
Key Features
Dynamic Proxy Management: Configure multiple MCP and OpenAPI services via YAML
LLM Gateway: Route requests to multiple LLM providers (OpenAI, Ollama, LM Studio, etc.)
Anthropic API Compatibility: Use Anthropic/Claude clients with any OpenAI-compatible backend
WebSocket Responses API: Full WebSocket support for the OpenAI Responses API
Docker Support: Multi-stage production Docker image with health checks
Enterprise Auth: JWT, GitHub, Google, Discord, Azure OAuth, and custom auth providers
CORS Ready: Full CORS middleware for web client integration
OpenAPI Support: Convert OpenAPI specs to MCP tools automatically
Health Monitoring: Built-in health check endpoint
Structured Logging: Configurable log levels
JSON Schema Validation: Automatic config validation
Quick Start
Get up and running with drunk-mcp-proxy using the pre-built Docker image from Docker Hub.
Step 1: Prepare Configuration Files
Create a data/ directory with the required configuration files:
```bash
mkdir -p data/mcp data/openapi data/skills
```

data/config.yaml - Unified Configuration
Define authentication, LLM providers, and MCP/OpenAPI services in a single file:
```yaml
# Authentication configuration (optional)
auth:
  defaultProvider: basic
  basic:
    base_url: null
    token: $API_KEY
  jwt:
    base_url: null
    jwks_uri: "https://login.microsoftonline.com/common/discovery/keys"
    issuer: "https://sts.windows.net/$AZURE_TENANT_ID/"
    audience: "api://your-client-id"

# LLM provider configuration (optional)
llm:
  - enabled: true
    websocket: true
    provider: openai
    base_url: "https://api.openai.com/v1"
    api_key: $OPENAI_API_KEY

# MCP and OpenAPI service configuration
mcp:
  - path: /
    spec_type: mcp
    skill_dir: skills
    mcp_servers:
      my-server:
        enabled: true
        command: npx
        args: ["@playwright/mcp@0.0.64"]
        transport: stdio
  - path: /api
    spec_file: openapi/petstore.yaml
    spec_type: openapi
    base_url: "https://api.example.com"
```
Note: Bearer auth (defaultProvider: "bearer") is the simplest option for API key authentication, commonly used by API proxies and gateways. Environment variables like $API_KEY or $AZURE_CLIENT_ID are automatically resolved when the config is loaded.
See the samples in the repository for more configuration examples.
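The $VAR resolution described in the note can be pictured as a simple substitution pass over config values. The sketch below only illustrates that behavior; resolve_env_refs is a hypothetical helper, not the proxy's actual code:

```python
import os
import re

def resolve_env_refs(value: str) -> str:
    """Replace $VAR references in a config string with environment values."""
    return re.sub(
        r"\$([A-Za-z_][A-Za-z0-9_]*)",
        lambda m: os.environ.get(m.group(1), m.group(0)),  # leave $VAR intact if unset
        value,
    )

# With API_KEY=secret in the environment:
# resolve_env_refs("$API_KEY") -> "secret"
```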
Step 2: Prepare Docker Compose
Create a docker-compose.yml file:
```yaml
services:
  mcp-proxy:
    image: baoduy2412/mcp-proxy:latest
    container_name: mcp-proxy-server
    ports:
      - "${FASTMCP_PORT:-9123}:${FASTMCP_PORT:-9123}"
    volumes:
      - ./data:/drunk-proxy/data
    env_file:
      - .env
    environment:
      - FASTMCP_HOST=0.0.0.0
      - FASTMCP_PORT=${FASTMCP_PORT:-9123}
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9123/health"]
      interval: 30s
      timeout: 10s
      retries: 3
```

Note: The ./data directory is mounted to /drunk-proxy/data in the container. All configuration files should be placed in this directory.
Step 3: Configure Environment & Run
Create a .env file from the sample:
```bash
cp .env.sample .env
```

Edit .env with your settings. Key environment variables:
```
# Server Configuration
FASTMCP_PORT=9123
FASTMCP_LOG_LEVEL=INFO
FASTMCP_AUTH_ENABLED=false

# Bearer Authentication (API Key)
API_KEY=your-api-key-here

# OAuth Storage (required if using OAuth)
FASTMCP_OAUTH_STORAGE_ENCRYPTION_KEY=your-44-character-encryption-key

# Azure Authentication (if using Azure OAuth)
AZURE_CLIENT_ID=your-client-id
AZURE_CLIENT_SECRET=your-client-secret
AZURE_TENANT_ID=your-tenant-id
```

Tip: See .env.sample for the complete list of available environment variables.
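FASTMCP_OAUTH_STORAGE_ENCRYPTION_KEY is a Fernet key (44 URL-safe base64 characters). If you need to generate one, the third-party cryptography package can do it; a minimal sketch:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Prints a 44-character URL-safe base64 key suitable for
# FASTMCP_OAUTH_STORAGE_ENCRYPTION_KEY.
print(Fernet.generate_key().decode())
```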
Now start the server:
```bash
docker-compose up -d
```

Verify it's running:

```bash
curl http://localhost:9123/health
```

Additional Services (Optional)
The full docker-compose.yml in the repository includes optional services:
MCP Inspector - Debug and inspect MCP servers
OpenWebUI - Web interface for LLM interactions
Local Development
Using Docker (Build from Source)
```bash
git clone https://github.com/baoduy/drunk-mcp-proxy.git
cd drunk-mcp-proxy
docker build -t drunk-mcp-proxy .
docker run -d -p 9123:9123 -v $(pwd)/data:/drunk-proxy/data drunk-mcp-proxy
```

Running Locally
```bash
# Setup environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate
pip install -e ".[dev]"

# Run the server
python src/main.py
```

The server will start on http://0.0.0.0:9123 by default.
Configuration
Configuration Directory Structure
```
data/
├── config.yaml          # Unified configuration (auth, LLM, and MCP/OpenAPI services)
├── mcp/                 # MCP server specifications (optional, for external spec files)
│   ├── stock.mcp.json
│   └── wiki.mcp.json
├── openapi/             # OpenAPI specifications
│   └── petstore.yaml
└── skills/              # Skill directories (optional)
```

Configuration (config.yaml)
The proxy uses a unified YAML configuration file to define authentication, LLM providers, and MCP/OpenAPI services:
```yaml
# Authentication configuration
auth:
  defaultProvider: basic
  basic:
    token: $API_KEY

# MCP service configuration
mcp:
  - path: /stock
    spec_file: mcp/stock.mcp.json
    spec_type: mcp
  - path: /api
    spec_file: openapi/petstore.yaml
    spec_type: openapi
    base_url: "https://api.example.com"
    filters:
      methods: ["GET", "POST"]
      tags: ["public"]
```

Environment Variables
Key environment variables (see .env.sample for complete list):
| Variable | Description | Default |
| --- | --- | --- |
| FASTMCP_PORT | Server port | 9123 |
| FASTMCP_HOST | Server host | 0.0.0.0 |
| FASTMCP_LOG_LEVEL | Log level (DEBUG, INFO, WARNING, ERROR) | INFO |
| FASTMCP_AUTH_ENABLED | Enable authentication | false |
| (see .env.sample) | Configuration directory | - |
| (see .env.sample) | CORS allowed origins | - |
| API_KEY | API key for bearer authentication | - |
| FASTMCP_OAUTH_STORAGE_ENCRYPTION_KEY | Fernet key for OAuth token encryption | - |
See Environment Variables for the complete list.
Documentation
Getting Started
Configuration
Features
Architecture
API Reference
Deployment
Development
For comprehensive documentation, see the Documentation Index.
Architecture Overview
```
MCP Client / LLM Client / Anthropic Client
        │ (HTTP/SSE/WebSocket + Authorization)
        ▼
┌────────────────────────────────────────────┐
│           drunk-mcp-proxy Server           │
│  ┌──────────────────────────────────────┐  │
│  │     Starlette ASGI Application       │  │
│  │  • CORS Middleware                   │  │
│  │  • Auth Validation                   │  │
│  │  • Rate Limiting                     │  │
│  │  • Health Check: /health             │  │
│  │  • Root FastMCP Server (/)           │  │
│  │  • MCP Sub-services:                 │  │
│  │      - /stock (MCP)                  │  │
│  │      - /wiki  (MCP)                  │  │
│  │      - /api   (OpenAPI)              │  │
│  │  • LLM Proxy (/api/v1):              │  │
│  │      - POST /chat/completions        │  │
│  │      - POST /messages (Anthropic)    │  │
│  │      - WS   /responses (WebSocket)   │  │
│  │      - POST /embeddings              │  │
│  │      - POST /images/generations      │  │
│  │      - POST /audio/transcriptions    │  │
│  │      - POST /audio/translations      │  │
│  │      - GET  /models                  │  │
│  │      - GET  /providers               │  │
│  └──────────────────────────────────────┘  │
└────────────────────────────────────────────┘
        │            │            │
   [Backend MCP/OpenAPI/LLM Services]
```

See System Architecture for detailed diagrams.
Authentication
drunk-mcp-proxy supports 14+ authentication providers:
Token-based: Bearer (API Keys), JWT
OAuth 2.0: Azure AD, GitHub, Google, Discord, Auth0
Enterprise: WorkOS, Scalekit, Descope
Custom: Pass-through, Introspection
Bearer Authentication (API Key)
The simplest option for API key authentication, commonly used by API proxies and gateways:
```yaml
auth:
  defaultProvider: basic
  basic:
    token: $API_KEY
```

Set the API_KEY environment variable in your .env file.
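With bearer auth enabled, clients send the key in the Authorization header. A minimal sketch using the third-party requests package against the proxy's model-listing endpoint (host and key are placeholders for your setup):

```python
import os

import requests  # third-party: pip install requests

# Call an authenticated proxy endpoint with the bearer API key.
resp = requests.get(
    "http://localhost:9123/api/v1/models",
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```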
OAuth 2.0 Authentication (Azure AD Example)
```yaml
auth:
  defaultProvider: azure
  azure:
    client_id: $AZURE_CLIENT_ID
    client_secret: $AZURE_CLIENT_SECRET
    tenant_id: $AZURE_TENANT_ID
```

See Authentication Guide for details.
Testing
```bash
# Run all tests
python -m pytest

# Run specific test file
python -m pytest tests/test_server.py

# Run with coverage
python -m pytest --cov=src --cov-report=html
```

LLM Proxy
When LLM providers are configured, drunk-mcp-proxy exposes a full OpenAI-compatible LLM gateway at /api/v1. All endpoints use the model ID format provider_modelname (e.g., openai_gpt-4o, lms_llama3.2) to route requests to the appropriate backend.
LLM Provider Configuration
Add providers to the llm section of config.yaml:
```yaml
llm:
  - enabled: true
    websocket: true    # Enable for providers that support the native WebSocket Responses API
                       # When false, the HTTP Responses API is used as a fallback
    provider: openai   # Short provider name used as the prefix in model IDs
    base_url: "https://api.openai.com/v1"
    api_key: $OPENAI_API_KEY
  - enabled: true
    websocket: false
    provider: lms      # LM Studio
    base_url: "http://host.docker.internal:1234/v1"
  - enabled: false
    provider: oll      # Ollama
    base_url: "http://host.docker.internal:11434/v1"
```

Model ID Format
All LLM endpoints expect the model ID to include a provider prefix separated by an underscore:
{provider}_{model_name}

Examples:

openai_gpt-4o → routes to the openai provider, model gpt-4o
lms_llama3.2 → routes to the lms (LM Studio) provider, model llama3.2
ort_claude-3-5-sonnet → routes to the ort (OpenRouter) provider, model claude-3-5-sonnet
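To make the routing rule concrete, the sketch below splits a prefixed model ID on the first underscore. It only illustrates the format; split_model_id is a hypothetical helper, not the proxy's actual code:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider_modelname' ID on the first underscore."""
    provider, sep, model = model_id.partition("_")
    if not sep or not provider or not model:
        raise ValueError(f"expected provider_modelname, got {model_id!r}")
    return provider, model

assert split_model_id("openai_gpt-4o") == ("openai", "gpt-4o")
assert split_model_id("lms_llama3.2") == ("lms", "llama3.2")
```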
Available Endpoints
All endpoints are mounted at /api/v1 (configurable via FASTMCP_LLM_ROUTE_PREFIX):
| Method | Endpoint | Description |
| --- | --- | --- |
| POST | /chat/completions | OpenAI-compatible chat completions |
| POST | /messages | Anthropic Messages API (see below) |
| WS | /responses | OpenAI WebSocket Responses API |
| POST | /embeddings | Text embeddings |
| POST | /images/generations | Image generation |
| POST | /audio/transcriptions | Audio transcription (Whisper) |
| POST | /audio/translations | Audio translation |
| GET | /models | List all available models across providers |
| GET | /providers | List all configured providers |
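Because the gateway is OpenAI-compatible, standard OpenAI SDKs can be pointed at it directly. A minimal sketch using the official openai Python package (base URL, key, and model ID are placeholders for your setup):

```python
from openai import OpenAI  # third-party: pip install openai

# Point the standard OpenAI client at the proxy's /api/v1 prefix.
client = OpenAI(base_url="http://localhost:9123/api/v1", api_key="YOUR_API_KEY")

resp = client.chat.completions.create(
    model="openai_gpt-4o",  # provider-prefixed model ID
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```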
Chat Completions
Standard OpenAI-compatible chat completions:
```bash
curl -X POST http://localhost:9123/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "openai_gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": false
  }'
```

Anthropic Messages API Compatibility
The /messages endpoint accepts Anthropic Messages API format and transparently converts to/from the OpenAI format, letting Anthropic/Claude clients use any OpenAI-compatible backend:
```bash
curl -X POST http://localhost:9123/api/v1/messages \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "lms_llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 1024
  }'
```

Supported conversions:
System prompts (string and block array)
Multimodal content (text, base64 images, URL images)
Tool use and tool results
Streaming SSE events in Anthropic format
stop_sequences → stop, metadata.user_id → user, finish reason mapping
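The endpoint can also be used from the official anthropic Python SDK. A minimal sketch, assuming an SDK version that appends /v1/messages to its base URL (hence the prefix below omits the trailing /v1; verify against your SDK version):

```python
from anthropic import Anthropic  # third-party: pip install anthropic

client = Anthropic(
    base_url="http://localhost:9123/api",  # assumption: the SDK appends /v1/messages
    auth_token="YOUR_API_KEY",             # sent as Authorization: Bearer ...
)

msg = client.messages.create(
    model="lms_llama3.2",  # provider-prefixed model ID
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}],
)
print(msg.content[0].text)
```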
Use with Claude Code CLI
Point the Claude Code CLI at the proxy to use any backend model with the Anthropic-compatible endpoint:
```bash
export ANTHROPIC_BASE_URL=http://localhost:9123/api/v1
export ANTHROPIC_AUTH_TOKEN=YOUR_API_KEY_HERE
claude --model lms_llama3.2
```

WebSocket Responses API
The /responses WebSocket endpoint provides OpenAI Responses API streaming. Clients connect via WebSocket and exchange JSON messages using the OpenAI Responses API protocol.
Connection URL: ws://localhost:9123/api/v1/responses
Message flow:
Client connects with an Authorization: Bearer <token> header
Client sends a response.create event with model: "provider_modelname"
Proxy routes to the configured backend and streams response events back
For providers with websocket: true, native WebSocket is used for lowest latency
For other providers, the HTTP Responses API is used as a fallback
```javascript
const ws = new WebSocket("ws://localhost:9123/api/v1/responses", {
  headers: { "Authorization": "Bearer YOUR_API_KEY" }  // Node.js "ws" package; browsers cannot set headers
});

// Wait for the connection to open before sending the request.
ws.onopen = () => {
  ws.send(JSON.stringify({
    type: "response.create",
    response: {
      model: "openai_gpt-4o",
      instructions: "You are a helpful assistant.",
      input: [{ type: "message", role: "user", content: "Hello!" }]
    }
  }));
};

ws.onmessage = (event) => {
  const data = JSON.parse(event.data);
  console.log(data.type, data); // response.created, response.output_text.delta, response.done, etc.
};
```

Provider WebSocket support: Set websocket: true in the provider config for providers that natively support the /responses WebSocket endpoint (e.g., OpenAI). For all other providers, the HTTP Responses API is used as a fallback.
Note: The previous_response_id continuation feature is only supported for providers with native WebSocket (websocket: true). Using it with HTTP fallback providers returns an error.
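For non-JavaScript clients, the same message flow works from Python. A minimal synchronous sketch with the third-party websocket-client package (model ID and key are placeholders; the terminal event name follows the response.done shown above):

```python
import json

from websocket import create_connection  # third-party: pip install websocket-client

# Connect with the bearer token, send response.create, then stream events back.
ws = create_connection(
    "ws://localhost:9123/api/v1/responses",
    header={"Authorization": "Bearer YOUR_API_KEY"},
)
ws.send(json.dumps({
    "type": "response.create",
    "response": {
        "model": "openai_gpt-4o",
        "input": [{"type": "message", "role": "user", "content": "Hello!"}],
    },
}))
while True:
    event = json.loads(ws.recv())
    print(event["type"])
    if event["type"] == "response.done":
        break
ws.close()
```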
List Models and Providers
```bash
# List all models across all configured providers
curl http://localhost:9123/api/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

# Filter by provider
curl "http://localhost:9123/api/v1/models?provider=openai" \
  -H "Authorization: Bearer YOUR_API_KEY"

# List configured providers
curl http://localhost:9123/api/v1/providers \
  -H "Authorization: Bearer YOUR_API_KEY"
```

Contributing
Contributions are welcome! Please:
Fork the repository
Create a feature branch (git checkout -b feature/amazing-feature)
Commit your changes (git commit -m 'Add amazing feature')
Push to the branch (git push origin feature/amazing-feature)
Open a Pull Request
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
Built with FastMCP framework
Powered by Starlette ASGI framework
Authentication via FastMCP's pluggable auth system
Support
Documentation
Issue Tracker
Discussions
Note: For detailed technical documentation, API references, and advanced configuration, please refer to the comprehensive documentation.