# MCP Backend OpenRouter

A high-performance chatbot platform connecting MCP servers with LLM APIs for intelligent tool execution.

## 🚀 Quick Start

```bash
# Install dependencies
uv sync

# Start the platform
uv run python src/main.py

# Reset configuration to defaults
uv run mcp-reset-config
```

**Connect:** `ws://localhost:8000/ws/chat`

## 📡 WebSocket API

### Send Messages

```json
{
  "type": "user_message",
  "message": "Hello, how can you help me today?"
}
```

### Receive Responses

```json
{
  "type": "assistant_message",
  "message": "I'm here to help! What would you like to know?",
  "thinking": "The user is greeting me...",
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 12,
    "total_tokens": 27
  }
}
```

### Message Types

| Type | Purpose | Payload |
|------|---------|---------|
| `user_message` | Send user input | `{"type": "user_message", "message": "text"}` |
| `clear_history` | Start a new session | `{"type": "clear_history"}` |
| `assistant_message` | AI response | `{"type": "assistant_message", "message": "text", "thinking": "reasoning"}` |
| `tool_execution` | Tool status | `{"type": "tool_execution", "tool_name": "name", "status": "executing"}` |

A runnable client sketch using these messages appears in the Example Client section below.

## ⚙️ Configuration

### Essential Settings (`src/runtime_config.yaml`)

```yaml
chat:
  websocket:
    port: 8000            # WebSocket server port
  service:
    max_tool_hops: 8      # Maximum tool call iterations
    streaming:
      enabled: true       # Enable streaming responses
  storage:
    persistence:
      db_path: "chat_history.db"
    retention:
      max_age_hours: 24
      max_messages: 1000

llm:
  active: "openrouter"    # Active LLM provider
  providers:
    openrouter:
      base_url: "https://openrouter.ai/api/v1"
      model: "openai/gpt-4o-mini"
      temperature: 0.7
      max_tokens: 4096
```

### MCP Servers (`servers_config.json`)

```json
{
  "mcpServers": {
    "demo": {
      "enabled": true,
      "command": "uv",
      "args": ["run", "python", "Servers/config_server.py"],
      "cwd": "/path/to/your/project"
    }
  }
}
```

## 🔧 Performance Tuning

### Streaming Optimization

```yaml
chat:
  service:
    streaming:
      persistence:
        persist_deltas: false   # Maximum speed (no DB writes during streaming)
        interval_ms: 200        # Flush every 200 ms
        min_chars: 1024         # Or when the buffer reaches 1024 chars
```

### HTTP/2 Support

```bash
uv add h2  # Required for HTTP/2 optimization
```

## 🛠️ Development

### Code Standards

- Use `uv` for package management
- Pydantic for data validation
- Type hints required
- Fail-fast error handling

### Available Scripts

```bash
uv run python src/main.py        # Start platform
uv run python scripts/format.py  # Format code
uv run mcp-reset-config          # Reset configuration
```

### Code Formatting

```bash
# Quick format (ignores line length issues)
./format.sh

# Full check including line length
uv run ruff check src/

# Format specific files
uv run ruff format src/chat/ src/clients/
```

## 📁 Project Structure

```
MCP_BACKEND_OPENROUTER/
├── src/                     # Main source code
│   ├── main.py              # Application entry point
│   ├── config.py            # Configuration management
│   ├── websocket_server.py  # WebSocket communication
│   ├── chat/                # Chat system modules
│   ├── clients/             # LLM and MCP clients
│   └── history/             # Storage and persistence
├── Servers/                 # MCP server implementations
├── config.yaml              # Default configuration
├── runtime_config.yaml      # Runtime overrides
├── servers_config.json      # MCP server config
└── uv.lock                  # Dependency lock file
```

## 🔑 Environment Variables

```bash
# Required for LLM APIs
export OPENAI_API_KEY="your-key"
export OPENROUTER_API_KEY="your-key"
export GROQ_API_KEY="your-key"
```
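## 🧪 Example Client

A minimal client sketch for the WebSocket API above. It uses the third-party `websockets` package (`uv add websockets`) as an assumption; any WebSocket client works. The endpoint and message shapes come straight from the API section, and since the framing of streamed deltas isn't specified here, the loop simply stops at the first `assistant_message`.

```python
import asyncio
import json

import websockets  # assumed client library: uv add websockets


async def main() -> None:
    async with websockets.connect("ws://localhost:8000/ws/chat") as ws:
        # Send a user message (shape from the "Send Messages" section)
        await ws.send(json.dumps({
            "type": "user_message",
            "message": "Hello, how can you help me today?",
        }))

        # Print events until the assistant's reply arrives
        async for raw in ws:
            event = json.loads(raw)
            if event.get("type") == "tool_execution":
                print(f"[tool] {event.get('tool_name')}: {event.get('status')}")
            elif event.get("type") == "assistant_message":
                print(event["message"])
                break


asyncio.run(main())
```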
## 🚨 Troubleshooting

### Common Issues

| Problem | Solution |
|---------|----------|
| Configuration not updating | Check file permissions on `runtime_config.yaml` |
| WebSocket connection fails | Verify the server is running and the port is correct |
| MCP server errors | Check `servers_config.json` and server availability |
| LLM API issues | Verify API keys and model configuration |

### Debug Mode

```yaml
# In runtime_config.yaml
logging:
  level: "DEBUG"
```

### Component Testing

```python
# Test configuration
from src.config import Configuration

config = Configuration()
print(config.get_config_dict())

# Test LLM client
from src.clients.llm_client import LLMClient

llm = LLMClient(config.get_llm_config())
```

## ✅ Features

- **Full MCP Protocol** - Tools, prompts, resources
- **High Performance** - SQLite with WAL mode, optimized indexes
- **Real-time Streaming** - WebSocket with delta persistence
- **Multi-Provider** - OpenRouter (100+ models), OpenAI, Groq
- **Type Safe** - Pydantic validation throughout
- **Dynamic Configuration** - Runtime changes without restart
- **Auto-Persistence** - Automatic conversation storage

## 📚 Quick Reference

| Command | Purpose |
|---------|---------|
| `uv run python src/main.py` | Start the platform |
| `uv run mcp-reset-config` | Reset to the default config |
| Edit `runtime_config.yaml` | Change settings (auto-reload) |
| Edit `servers_config.json` | Configure MCP servers |

## 🆘 Support

- Check logs for detailed error messages
- Verify configuration syntax with a YAML validator
- Test components individually to isolate failures
- Monitor WebSocket connections and database size (see the sketch at the end of this README)

---

**Requirements:** Python 3.13+, `uv` package manager
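## 🔍 Appendix: Inspecting the Chat Database

A minimal sketch for the "monitor database size" tip in Support, using only Python's standard library. The `chat_history.db` filename comes from `db_path` in the configuration above; table names are discovered via `sqlite_master` rather than assumed, since this project's schema isn't documented here.

```python
import os
import sqlite3

DB_PATH = "chat_history.db"  # db_path from runtime_config.yaml

# On-disk size (WAL mode can also create .db-wal / .db-shm side files)
print(f"{DB_PATH}: {os.path.getsize(DB_PATH) / 1024:.1f} KiB")

con = sqlite3.connect(DB_PATH)
try:
    # Enumerate tables instead of assuming the schema
    tables = [name for (name,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )]
    for table in tables:
        (count,) = con.execute(f'SELECT COUNT(*) FROM "{table}"').fetchone()
        print(f"{table}: {count} rows")
finally:
    con.close()
```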
