MCP Backend OpenRouter

A high-performance chatbot platform connecting MCP servers with LLM APIs for intelligent tool execution.

🚀 Quick Start

# Install dependencies
uv sync

# Start the platform
uv run python src/main.py

# Reset configuration to defaults
uv run mcp-reset-config

Connect: ws://localhost:8000/ws/chat

📡 WebSocket API

Send Messages

{ "type": "user_message", "message": "Hello, how can you help me today?" }

Receive Responses

{ "type": "assistant_message", "message": "I'm here to help! What would you like to know?", "thinking": "The user is greeting me...", "usage": { "prompt_tokens": 15, "completion_tokens": 12, "total_tokens": 27 } }

Message Types

Type              | Purpose           | Payload
user_message      | Send user input   | {"type": "user_message", "message": "text"}
clear_history     | Start new session | {"type": "clear_history"}
assistant_message | AI response       | {"type": "assistant_message", "message": "text", "thinking": "reasoning"}
tool_execution    | Tool status       | {"type": "tool_execution", "tool_name": "name", "status": "executing"}

โš™๏ธ Configuration

Essential Settings (src/runtime_config.yaml)

chat:
  websocket:
    port: 8000              # WebSocket server port
  service:
    max_tool_hops: 8        # Maximum tool call iterations
    streaming:
      enabled: true         # Enable streaming responses
storage:
  persistence:
    db_path: "chat_history.db"
  retention:
    max_age_hours: 24
    max_messages: 1000
llm:
  active: "openrouter"      # Active LLM provider
  providers:
    openrouter:
      base_url: "https://openrouter.ai/api/v1"
      model: "openai/gpt-4o-mini"
      temperature: 0.7
      max_tokens: 4096
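
To read these settings from a script, a minimal sketch assuming PyYAML and the nesting shown above (the platform's own loader lives in src/config.py):

import yaml  # assumed dependency: uv add pyyaml

with open("src/runtime_config.yaml") as f:
    cfg = yaml.safe_load(f)

# Nested keys mirror the YAML layout above
print(cfg["chat"]["websocket"]["port"])                # 8000
print(cfg["llm"]["providers"]["openrouter"]["model"])  # openai/gpt-4o-mini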

MCP Servers (servers_config.json)

{ "mcpServers": { "demo": { "enabled": true, "command": "uv", "args": ["run", "python", "Servers/config_server.py"], "cwd": "/path/to/your/project" } } }

🔧 Performance Tuning

Streaming Optimization

chat:
  service:
    streaming:
      persistence:
        persist_deltas: false  # Maximum speed (no DB writes during streaming)
        interval_ms: 200       # Flush every 200ms
        min_chars: 1024        # Or when buffer reaches 1024 chars
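
The two thresholds form an either/or flush rule: buffered deltas are written when interval_ms has elapsed or the buffer reaches min_chars, whichever comes first. A sketch of that policy (class and method names are illustrative, not the platform's internals):

import time


class DeltaBuffer:
    """Buffer streamed deltas; flush on a time or size threshold."""

    def __init__(self, interval_ms: int = 200, min_chars: int = 1024) -> None:
        self.interval_s = interval_ms / 1000
        self.min_chars = min_chars
        self.parts: list[str] = []
        self.size = 0
        self.last_flush = time.monotonic()

    def add(self, delta: str) -> str | None:
        """Add a delta; return a chunk to persist if a threshold was hit."""
        self.parts.append(delta)
        self.size += len(delta)
        due = time.monotonic() - self.last_flush >= self.interval_s
        if due or self.size >= self.min_chars:
            return self.flush()
        return None

    def flush(self) -> str:
        chunk = "".join(self.parts)
        self.parts.clear()
        self.size = 0
        self.last_flush = time.monotonic()
        return chunk  # caller writes this to the database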

HTTP/2 Support

# Required for HTTP/2 optimization
uv add h2

🛠️ Development

Code Standards

  • Use uv for package management

  • Pydantic for data validation (see the sketch after this list)

  • Type hints required

  • Fail-fast error handling
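
A hedged sketch of how these standards combine in practice (the model is illustrative, not taken from the codebase):

from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class UserMessage(BaseModel):
    """Inbound WebSocket frame, validated before any processing."""
    type: Literal["user_message"]
    message: str = Field(min_length=1)


try:
    frame = UserMessage.model_validate({"type": "user_message", "message": "hi"})
except ValidationError as exc:
    # Fail fast: reject malformed input instead of guessing
    raise SystemExit(f"invalid frame: {exc}")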

Available Scripts

uv run python src/main.py        # Start platform
uv run python scripts/format.py  # Format code
uv run mcp-reset-config          # Reset configuration

Code Formatting

# Quick format (ignores line length issues)
./format.sh

# Full check including line length
uv run ruff check src/

# Format specific files
uv run ruff format src/chat/ src/clients/

๐Ÿ“ Project Structure

MCP_BACKEND_OPENROUTER/
├── src/                     # Main source code
│   ├── main.py              # Application entry point
│   ├── config.py            # Configuration management
│   ├── websocket_server.py  # WebSocket communication
│   ├── chat/                # Chat system modules
│   ├── clients/             # LLM and MCP clients
│   └── history/             # Storage and persistence
├── Servers/                 # MCP server implementations
├── config.yaml              # Default configuration
├── runtime_config.yaml      # Runtime overrides
├── servers_config.json      # MCP server config
└── uv.lock                  # Dependency lock file

🔑 Environment Variables

# Required for LLM APIs
export OPENAI_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
export GROQ_API_KEY="your-key"
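
Which key is actually required depends on llm.active; with the default openrouter provider, a fail-fast startup check might look like this (illustrative, not the platform's code):

import os

api_key = os.environ.get("OPENROUTER_API_KEY")
if not api_key:
    # Fail fast rather than letting the first API call error out later
    raise SystemExit("OPENROUTER_API_KEY is not set")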

🚨 Troubleshooting

Common Issues

Problem                    | Solution
Configuration not updating | Check file permissions on runtime_config.yaml
WebSocket connection fails | Verify the server is running and the port is correct
MCP server errors          | Check servers_config.json and server availability
LLM API issues             | Verify API keys and model configuration

Debug Mode

# In runtime_config.yaml
logging:
  level: "DEBUG"

Component Testing

# Test configuration
from src.config import Configuration
config = Configuration()
print(config.get_config_dict())

# Test LLM client
from src.clients.llm_client import LLMClient
llm = LLMClient(config.get_llm_config())

✅ Features

  • Full MCP Protocol - Tools, prompts, resources

  • High Performance - SQLite with WAL mode, optimized indexes

  • Real-time Streaming - WebSocket with delta persistence

  • Multi-Provider - OpenRouter (100+ models), OpenAI, Groq

  • Type Safe - Pydantic validation throughout

  • Dynamic Configuration - Runtime changes without restart

  • Auto-Persistence - Automatic conversation storage

📚 Quick Reference

Command                   | Purpose
uv run python src/main.py | Start the platform
uv run mcp-reset-config   | Reset to default config
Edit runtime_config.yaml  | Change settings (auto-reload)
Edit servers_config.json  | Configure MCP servers

🆘 Support

  • Check logs for detailed error messages

  • Verify configuration syntax with YAML validator

  • Test individual components in isolation

  • Monitor WebSocket connections and database size


Requirements: Python 3.13+, uv package manager
