# OpenRouter MCP Multimodal Server

An MCP (Model Context Protocol) server that provides chat and image analysis capabilities through OpenRouter.ai's diverse model ecosystem, combining text chat functionality with powerful image analysis in a single server.
## Features

- Text Chat:
  - Direct access to all OpenRouter.ai chat models
  - Support for simple text and multimodal conversations
  - Configurable temperature and other parameters
- Image Analysis:
  - Analyze single images with custom questions
  - Process multiple images simultaneously
  - Automatic image resizing and optimization
  - Support for various image sources (local files, URLs, data URLs)
- Model Selection:
  - Search and filter available models
  - Validate model IDs
  - Get detailed model information
  - Support for default model configuration
- Performance Optimization:
  - Smart caching of model information
  - Exponential backoff for retries
  - Automatic rate-limit handling
## What's New in 1.5.0

- Improved OS Compatibility:
  - Enhanced path handling for Windows, macOS, and Linux
  - Better support for Windows-style paths with drive letters
  - Normalized path processing for consistent behavior across platforms
- MCP Configuration Support:
  - Cursor MCP integration without requiring environment variables
  - Direct configuration via MCP parameters
  - Flexible API key and model specification options
- Robust Error Handling:
  - Improved fallback mechanisms for image processing
  - Better error reporting with specific diagnostics
  - Multiple backup strategies for file reading
- Image Processing Enhancements:
  - More reliable base64 encoding for all image types
  - Fallback options when the Sharp module is unavailable
  - Better handling of large images with automatic optimization
## Installation
### Option 1: Install via npm
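A typical global install looks like the following; the package name shown here is an assumption based on the project title, so verify the exact name on the npm registry before installing:

```shell
# Install globally from npm (package name assumed; verify on npmjs.com)
npm install -g openrouter-mcp-multimodal
```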
### Option 2: Run via Docker
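A sketch of building and running the server as a container; the image name is assumed (built locally from the repository), and the API key is passed via an environment variable:

```shell
# Build the image from the repository root (image name assumed)
docker build -t openrouter-mcp-multimodal .

# Run with your OpenRouter API key in the environment
docker run -i --rm \
  -e OPENROUTER_API_KEY=your-api-key-here \
  openrouter-mcp-multimodal
```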
## Quick Start Configuration
### Prerequisites
- Get your OpenRouter API key from OpenRouter Keys
- Choose a default model (optional)
### MCP Configuration Options

Add one of the following configurations to your MCP settings file (e.g., `cline_mcp_settings.json` or `claude_desktop_config.json`):
#### Option 1: Using npx (Node.js)
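A sketch of an npx-based entry; the package name and the environment-variable names (`OPENROUTER_API_KEY`, `OPENROUTER_DEFAULT_MODEL`) are assumptions, so consult the package documentation for the authoritative names:

```json
{
  "mcpServers": {
    "openrouter-multimodal": {
      "command": "npx",
      "args": ["-y", "openrouter-mcp-multimodal"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here",
        "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet"
      }
    }
  }
}
```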
#### Option 2: Using uv (Python Package Manager)
#### Option 3: Using Docker
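A Docker-based entry might look like the following sketch; the image name is assumed, and the API key is forwarded from the client's `env` block into the container:

```json
{
  "mcpServers": {
    "openrouter-multimodal": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-e", "OPENROUTER_API_KEY",
        "openrouter-mcp-multimodal"
      ],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here"
      }
    }
  }
}
```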
#### Option 4: Using Smithery (recommended)
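Smithery installs are typically a single command through the Smithery CLI; the server identifier below is a placeholder, so substitute the ID shown on this server's Smithery page:

```shell
# Install for Claude Desktop via the Smithery CLI
# (replace <server-id> with the ID from the server's Smithery page)
npx -y @smithery/cli install <server-id> --client claude
```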
## Examples
For comprehensive examples of how to use this MCP server, check out the examples directory. We provide:
- JavaScript examples for Node.js applications
- Python examples with interactive chat capabilities
- Code snippets for integrating with various applications
Each example comes with clear documentation and step-by-step instructions.
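For a flavor of what the JavaScript examples cover, here is a minimal sketch that calls OpenRouter's OpenAI-compatible REST endpoint directly; the helper names and model ID are illustrative and not part of this server's API:

```javascript
// Minimal OpenRouter chat call (sketch). The endpoint and payload shape
// follow OpenRouter's OpenAI-compatible REST API.
const OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions";

// Build the JSON body for a simple text chat completion.
function buildChatRequest(model, prompt, temperature = 0.7) {
  return {
    model,
    temperature,
    messages: [{ role: "user", content: prompt }],
  };
}

// Send the request using the global fetch available in Node.js 18+.
async function chat(apiKey, model, prompt) {
  const res = await fetch(OPENROUTER_URL, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatRequest(model, prompt)),
  });
  if (!res.ok) throw new Error(`OpenRouter error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```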
## Dependencies

This project uses the following key dependencies:

- `@modelcontextprotocol/sdk`: ^1.8.0 - Latest MCP SDK for tool implementation
- `openai`: ^4.89.1 - OpenAI-compatible API client for OpenRouter
- `sharp`: ^0.33.5 - Fast image processing library
- `axios`: ^1.8.4 - HTTP client for API requests
- `node-fetch`: ^3.3.2 - Modern fetch implementation
Node.js 18 or later is required. All dependencies are regularly updated to ensure compatibility and security.
## Available Tools
### mcp_openrouter_chat_completion
Send text or multimodal messages to OpenRouter models:
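A plausible argument payload for a plain text conversation is sketched below; the parameter names follow OpenAI-style chat APIs and are assumptions, so check the tool's schema for the authoritative fields:

```json
{
  "model": "anthropic/claude-3.5-sonnet",
  "messages": [
    { "role": "user", "content": "Summarize the Model Context Protocol in two sentences." }
  ],
  "temperature": 0.7
}
```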
For multimodal messages with images:
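Image inputs in OpenAI-compatible APIs use content parts of type `image_url`; a hedged sketch of a multimodal payload (model ID and URL are illustrative) might look like:

```json
{
  "model": "google/gemini-2.0-flash-001",
  "messages": [
    {
      "role": "user",
      "content": [
        { "type": "text", "text": "What is shown in this image?" },
        { "type": "image_url", "image_url": { "url": "https://example.com/photo.jpg" } }
      ]
    }
  ]
}
```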