OpenRouter MCP Server

A Model Context Protocol (MCP) server providing seamless integration with OpenRouter.ai's diverse model ecosystem. Access various AI models through a unified, type-safe interface with built-in caching, rate limiting, and error handling.

Features

  • Model Access
    • Direct access to all OpenRouter.ai models
    • Automatic model validation and capability checking
    • Default model configuration support
  • Performance Optimization
    • Smart model information caching (1-hour expiry)
    • Automatic rate limit management
    • Exponential backoff for failed requests
  • Unified Response Format
    • Consistent ToolResult structure for all responses
    • Clear error identification with isError flag
    • Structured error messages with context
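
The retry behavior described above can be sketched roughly as follows. This is a minimal illustration, assuming doubling delays per attempt; `computeBackoffDelay` and `withRetries` are hypothetical names, not the server's actual implementation:

```typescript
// Hypothetical sketch of exponential backoff for failed requests.
// The delay doubles with each retry: base, 2*base, 4*base, ...
function computeBackoffDelay(attempt: number, baseMs = 1000): number {
  return baseMs * 2 ** attempt;
}

async function withRetries<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Wait before the next attempt; the delay grows exponentially.
      await new Promise((resolve) => setTimeout(resolve, computeBackoffDelay(attempt)));
    }
  }
}
```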

Installation

```shell
pnpm install @mcpservers/openrouterai
```

Configuration

Prerequisites

  1. Get your OpenRouter API key from OpenRouter Keys (https://openrouter.ai/keys)
  2. Choose a default model (optional)

Environment Variables

```shell
OPENROUTER_API_KEY=your-api-key-here
OPENROUTER_DEFAULT_MODEL=optional-default-model
```

Setup

Add to your MCP settings configuration file (cline_mcp_settings.json or claude_desktop_config.json):

```json
{
  "mcpServers": {
    "openrouterai": {
      "command": "npx",
      "args": ["@mcpservers/openrouterai"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here",
        "OPENROUTER_DEFAULT_MODEL": "optional-default-model"
      }
    }
  }
}
```

Response Format

All tools return responses in a standardized structure:

```typescript
interface ToolResult {
  isError: boolean;
  content: Array<{
    type: "text";
    text: string; // JSON string or error message
  }>;
}
```

Success Example:

```json
{
  "isError": false,
  "content": [{
    "type": "text",
    "text": "{\"id\": \"gen-123\", ...}"
  }]
}
```

Error Example:

```json
{
  "isError": true,
  "content": [{
    "type": "text",
    "text": "Error: Model validation failed - 'invalid-model' not found"
  }]
}
```

Available Tools

chat_completion

Send messages to OpenRouter.ai models:

```typescript
interface ChatCompletionRequest {
  model?: string;
  messages: Array<{ role: "user" | "system" | "assistant"; content: string }>;
  temperature?: number; // 0-2
}
// Response: ToolResult with chat completion data or error
```
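
A request payload could be built like this. This is an illustrative sketch, not part of the server's API: `buildChatRequest` is a hypothetical client-side helper that mirrors the documented 0-2 temperature range:

```typescript
interface ChatMessage { role: "user" | "system" | "assistant"; content: string; }
interface ChatCompletionRequest { model?: string; messages: ChatMessage[]; temperature?: number; }

// Hypothetical helper: builds a request and rejects temperatures
// outside the documented 0-2 range before it reaches the server.
function buildChatRequest(messages: ChatMessage[], temperature = 1, model?: string): ChatCompletionRequest {
  if (temperature < 0 || temperature > 2) {
    throw new RangeError(`temperature must be between 0 and 2, got ${temperature}`);
  }
  return { model, messages, temperature };
}
```

If `model` is omitted, the server falls back to `OPENROUTER_DEFAULT_MODEL`.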

search_models

Search and filter available models:

```typescript
interface ModelSearchRequest {
  query?: string;
  provider?: string;
  minContextLength?: number;
  capabilities?: {
    functions?: boolean;
    vision?: boolean;
  };
}
// Response: ToolResult with model list or error
```
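
The filtering semantics can be pictured with a small client-side sketch. This is an assumption about how the filters combine (all criteria ANDed together), not the server's actual code; `ModelInfo` and `filterModels` are illustrative names:

```typescript
interface ModelInfo { id: string; provider: string; contextLength: number; }

// Illustrative filter: every supplied criterion must match.
function filterModels(
  models: ModelInfo[],
  req: { query?: string; provider?: string; minContextLength?: number }
): ModelInfo[] {
  return models.filter((m) =>
    (!req.query || m.id.toLowerCase().includes(req.query.toLowerCase())) &&
    (!req.provider || m.provider === req.provider) &&
    (!req.minContextLength || m.contextLength >= req.minContextLength)
  );
}
```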

get_model_info

Get detailed information about a specific model:

```typescript
{
  model: string; // Model identifier
}
```

validate_model

Check if a model ID is valid:

```typescript
interface ModelValidationRequest {
  model: string;
}
// Response:
// Success: { isError: false, valid: true }
// Error:   { isError: true, error: "Model not found" }
```

Error Handling

The server provides structured errors with contextual information:

```typescript
// Error response structure
{
  isError: true,
  content: [{
    type: "text",
    text: "Error: [Category] - Detailed message"
  }]
}
```

Common Error Categories:

  • Validation Error: Invalid input parameters
  • API Error: OpenRouter API communication issues
  • Rate Limit: Request throttling detection
  • Internal Error: Server-side processing failures
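
Given the `Error: [Category] - Detailed message` convention above, a caller could pull the category out of a message like this. This is a client-side sketch assuming that format; `errorCategory` is not part of the server's API:

```typescript
// Extract the "[Category]" prefix from a message such as
// "Error: Rate Limit - Request throttled" (format assumed from the docs).
function errorCategory(text: string): string | null {
  const match = /^Error:\s*(.+?)\s*-\s/.exec(text);
  return match ? match[1] : null;
}
```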

Handling Responses:

```typescript
async function handleResponse(result: ToolResult) {
  if (result.isError) {
    const errorMessage = result.content[0].text;
    if (errorMessage.startsWith("Error: Rate Limit")) {
      // Handle rate limiting
    }
    // Other error handling
  } else {
    const data = JSON.parse(result.content[0].text);
    // Process successful response
  }
}
```

Development

See CONTRIBUTING.md for detailed information about:

  • Development setup
  • Project structure
  • Feature implementation
  • Error handling guidelines
  • Tool usage examples

```shell
# Install dependencies
pnpm install

# Build project
pnpm run build

# Run tests
pnpm test
```

Changelog

See CHANGELOG.md for recent updates including:

  • Unified response format implementation
  • Enhanced error handling system
  • Type-safe interface improvements

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
