# MCP Backend OpenRouter

A high-performance chatbot platform connecting MCP servers with LLM APIs for intelligent tool execution.
## 🚀 Quick Start

```shell
# Install dependencies
uv sync

# Start the platform
uv run python src/main.py

# Reset configuration to defaults
uv run mcp-reset-config
```

Connect: `ws://localhost:8000/ws/chat`
## 📡 WebSocket API

### Send Messages

```json
{
  "type": "user_message",
  "message": "Hello, how can you help me today?"
}
```

### Receive Responses

```json
{
  "type": "assistant_message",
  "message": "I'm here to help! What would you like to know?",
  "thinking": "The user is greeting me...",
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 12,
    "total_tokens": 27
  }
}
```

### Message Types
| Type | Purpose | Payload |
|------|---------|---------|
| `user_message` | Send user input | `message` |
| … | Start new session | … |
| `assistant_message` | AI response | `message`, `thinking`, `usage` |
| … | Tool status | … |
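The message envelopes above can be exercised with a small client. The following is a minimal sketch, assuming the server from the Quick Start is listening on `ws://localhost:8000/ws/chat` and that the third-party `websockets` package is installed; the helper functions only build and parse the JSON payloads, so they work without a running server:

```python
import asyncio
import json


def build_user_message(text: str) -> str:
    """Serialize a user turn into the platform's message envelope."""
    return json.dumps({"type": "user_message", "message": text})


def parse_response(raw: str) -> dict:
    """Decode a server frame, failing fast on unexpected message types."""
    msg = json.loads(raw)
    if msg.get("type") != "assistant_message":
        raise ValueError(f"unexpected message type: {msg.get('type')}")
    return msg


async def chat_once(text: str, url: str = "ws://localhost:8000/ws/chat") -> dict:
    """Send one user message and wait for the assistant reply."""
    import websockets  # third-party; install with `uv add websockets`

    async with websockets.connect(url) as ws:
        await ws.send(build_user_message(text))
        return parse_response(await ws.recv())


if __name__ == "__main__":
    reply = asyncio.run(chat_once("Hello, how can you help me today?"))
    print(reply["message"], reply.get("usage"))
```

Streaming responses (when `streaming.enabled` is true) may arrive as multiple frames; the single `recv()` above is the simplest non-streaming case.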
## ⚙️ Configuration

### Essential Settings (`src/runtime_config.yaml`)

```yaml
chat:
  websocket:
    port: 8000            # WebSocket server port
  service:
    max_tool_hops: 8      # Maximum tool call iterations
    streaming:
      enabled: true       # Enable streaming responses

storage:
  persistence:
    db_path: "chat_history.db"
  retention:
    max_age_hours: 24
    max_messages: 1000

llm:
  active: "openrouter"    # Active LLM provider
  providers:
    openrouter:
      base_url: "https://openrouter.ai/api/v1"
      model: "openai/gpt-4o-mini"
      temperature: 0.7
      max_tokens: 4096
```

### MCP Servers (`servers_config.json`)
```json
{
  "mcpServers": {
    "demo": {
      "enabled": true,
      "command": "uv",
      "args": ["run", "python", "Servers/config_server.py"],
      "cwd": "/path/to/your/project"
    }
  }
}
```

## 🔧 Performance Tuning
### Streaming Optimization

```yaml
chat:
  service:
    streaming:
      persistence:
        persist_deltas: false  # Maximum speed (no DB writes during streaming)
        interval_ms: 200       # Flush every 200 ms
        min_chars: 1024        # ...or when the buffer reaches 1024 chars
```

### HTTP/2 Support

```shell
uv add h2  # Required for HTTP/2 optimization
```

## 🛠️ Development
### Code Standards

- Use `uv` for package management
- Pydantic for data validation
- Type hints required
- Fail-fast error handling
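To illustrate the fail-fast validation standard, here is a standard-library sketch using a dataclass with eager checks. The `ChatSettings` model below is hypothetical, not from the repository; the project itself uses Pydantic models for this purpose:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ChatSettings:
    """Hypothetical settings model illustrating fail-fast validation.

    The project uses Pydantic for this; a dataclass with eager checks
    shows the same idea using only the standard library.
    """
    port: int
    max_tool_hops: int = 8

    def __post_init__(self) -> None:
        # Reject invalid values at construction time rather than at use time.
        if not (1 <= self.port <= 65535):
            raise ValueError(f"port out of range: {self.port}")
        if self.max_tool_hops < 1:
            raise ValueError("max_tool_hops must be at least 1")


settings = ChatSettings(port=8000)
print(settings)
```

Invalid input raises immediately (`ChatSettings(port=0)` throws `ValueError`), which is the behavior the standard asks for.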
### Available Scripts

```shell
uv run python src/main.py        # Start platform
uv run python scripts/format.py  # Format code
uv run mcp-reset-config          # Reset configuration
```

### Code Formatting
```shell
# Quick format (ignores line length issues)
./format.sh

# Full check including line length
uv run ruff check src/

# Format specific files
uv run ruff format src/chat/ src/clients/
```

## 📁 Project Structure
```
MCP_BACKEND_OPENROUTER/
├── src/                     # Main source code
│   ├── main.py              # Application entry point
│   ├── config.py            # Configuration management
│   ├── websocket_server.py  # WebSocket communication
│   ├── chat/                # Chat system modules
│   ├── clients/             # LLM and MCP clients
│   └── history/             # Storage and persistence
├── Servers/                 # MCP server implementations
├── config.yaml              # Default configuration
├── runtime_config.yaml      # Runtime overrides
├── servers_config.json      # MCP server config
└── uv.lock                  # Dependency lock file
```

## 🔐 Environment Variables
```shell
# Required for LLM APIs
export OPENAI_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
export GROQ_API_KEY="your-key"
```

## 🚨 Troubleshooting
### Common Issues

| Problem | Solution |
|---------|----------|
| Configuration not updating | Check file permissions on `src/runtime_config.yaml` |
| WebSocket connection fails | Verify the server is running and the port is correct |
| MCP server errors | Check `servers_config.json` |
| LLM API issues | Verify API keys and model configuration |
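For the "LLM API issues" row, a quick sanity check is to confirm the required API keys are actually set. A minimal sketch using the variable names from the Environment Variables section:

```python
import os

# Key names taken from this README's Environment Variables section.
REQUIRED_KEYS = ["OPENAI_API_KEY", "OPENROUTER_API_KEY", "GROQ_API_KEY"]


def missing_keys(env=os.environ) -> list[str]:
    """Return the names of required API keys that are unset or empty."""
    return [k for k in REQUIRED_KEYS if not env.get(k)]


if __name__ == "__main__":
    gone = missing_keys()
    if gone:
        print("Missing API keys:", ", ".join(gone))
    else:
        print("All LLM API keys are set.")
```

Depending on which provider is `active`, some of these keys may be optional; treat the list as a starting point.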
### Debug Mode

```yaml
# In runtime_config.yaml
logging:
  level: "DEBUG"
```

### Component Testing
```python
# Test configuration
from src.config import Configuration
config = Configuration()
print(config.get_config_dict())

# Test LLM client
from src.clients.llm_client import LLMClient
llm = LLMClient(config.get_llm_config())
```

## ✅ Features
- **Full MCP Protocol** - Tools, prompts, resources
- **High Performance** - SQLite with WAL mode, optimized indexes
- **Real-time Streaming** - WebSocket with delta persistence
- **Multi-Provider** - OpenRouter (100+ models), OpenAI, Groq
- **Type Safe** - Pydantic validation throughout
- **Dynamic Configuration** - Runtime changes without restart
- **Auto-Persistence** - Automatic conversation storage
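The "High Performance" setup (WAL mode plus indexes) can be sketched with the standard `sqlite3` module. The table and index below are illustrative, not the project's actual schema:

```python
import sqlite3


def open_history(path: str = "chat_history.db") -> sqlite3.Connection:
    """Open a history DB with WAL mode and an index for session lookups.

    Illustrative schema only; the project's real tables may differ.
    """
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")    # concurrent readers, one writer
    conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
    conn.execute(
        """CREATE TABLE IF NOT EXISTS messages (
               id INTEGER PRIMARY KEY,
               session_id TEXT NOT NULL,
               role TEXT NOT NULL,
               content TEXT NOT NULL,
               created_at REAL NOT NULL
           )"""
    )
    # Covering index so "messages for a session, in order" avoids a full scan.
    conn.execute(
        "CREATE INDEX IF NOT EXISTS idx_messages_session "
        "ON messages(session_id, created_at)"
    )
    return conn
```

WAL mode lets readers proceed while a write is in flight, which matters when streaming deltas are persisted alongside live reads of conversation history.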
## 📋 Quick Reference

| Command | Purpose |
|---------|---------|
| `uv run python src/main.py` | Start the platform |
| `uv run mcp-reset-config` | Reset to default config |
| Edit `src/runtime_config.yaml` | Change settings (auto-reload) |
| Edit `servers_config.json` | Configure MCP servers |
## 🆘 Support

- Check logs for detailed error messages
- Verify configuration syntax with a YAML validator
- Test individual components in isolation
- Monitor WebSocket connections and database size
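For the last point, a small helper can report the history database's size and message count against the retention limits. This is a sketch that assumes a `messages` table (the real schema may differ); the retention numbers mirror the config defaults shown above:

```python
import os
import sqlite3


def history_stats(db_path: str = "chat_history.db") -> dict:
    """Report file size and row count for the history database.

    Assumes an illustrative `messages` table; adjust to the real schema.
    """
    size_bytes = os.path.getsize(db_path)
    conn = sqlite3.connect(db_path)
    try:
        (count,) = conn.execute("SELECT COUNT(*) FROM messages").fetchone()
    finally:
        conn.close()
    return {"size_bytes": size_bytes, "messages": count}


if __name__ == "__main__":
    stats = history_stats()
    # Compare against the configured retention ceiling (max_messages: 1000).
    print(stats, "over limit" if stats["messages"] > 1000 else "ok")
```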
Requirements: Python 3.13+, uv package manager