# OpenRouter MCP Server

A Model Context Protocol (MCP) server providing seamless integration with OpenRouter.ai's diverse model ecosystem. Access various AI models through a unified, type-safe interface with built-in caching, rate limiting, and error handling.

## Features

- **Model Access**
  - Direct access to all OpenRouter.ai models
  - Automatic model validation and capability checking
  - Default model configuration support
- **Performance Optimization**
  - Smart model information caching (1-hour expiry; see the sketch after this list)
  - Automatic rate limit management
  - Exponential backoff for failed requests
- **Unified Response Format**
  - Consistent `ToolResult` structure for all responses
  - Clear error identification with an `isError` flag
  - Structured error messages with context
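
The 1-hour model-information cache can be pictured as a simple TTL map. Below is a minimal illustrative sketch; the `ModelCache` class and its names are assumptions for explanation, not the server's actual implementation:

```typescript
// Illustrative TTL cache for model metadata (not the server's real code).
// Entries expire one hour after they are stored.
const ONE_HOUR_MS = 60 * 60 * 1000;

interface CacheEntry<T> {
  value: T;
  storedAt: number;
}

class ModelCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() - entry.storedAt > ONE_HOUR_MS) {
      this.entries.delete(key); // expired: evict and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, storedAt: Date.now() });
  }
}
```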

## Installation

```bash
pnpm install @mcpservers/openrouterai
```

## Configuration

### Prerequisites

  1. Get your OpenRouter API key from OpenRouter Keys
  2. Choose a default model (optional)

### Environment Variables

- `OPENROUTER_API_KEY`: **Required**. Your OpenRouter API key.
- `OPENROUTER_DEFAULT_MODEL`: Optional. The default model to use if not specified in the request (e.g., `openrouter/auto`).
- `OPENROUTER_MAX_TOKENS`: Optional. Default maximum number of tokens to generate if `max_tokens` is not provided in the request.
- `OPENROUTER_PROVIDER_QUANTIZATIONS`: Optional. Comma-separated list of default quantization levels to filter by (e.g., `fp16,int8`) if `provider.quantizations` is not provided in the request. (Phase 1)
- `OPENROUTER_PROVIDER_IGNORE`: Optional. Comma-separated list of default provider names to ignore (e.g., `mistralai,openai`) if `provider.ignore` is not provided in the request. (Phase 1)
- `OPENROUTER_PROVIDER_SORT`: Optional. Default sort order for providers (`"price"`, `"throughput"`, or `"latency"`). Overridden by the `provider.sort` argument. (Phase 2)
- `OPENROUTER_PROVIDER_ORDER`: Optional. Default prioritized list of provider IDs (JSON array string, e.g., `'["openai/gpt-4o", "anthropic/claude-3-opus"]'`). Overridden by the `provider.order` argument. (Phase 2)
- `OPENROUTER_PROVIDER_REQUIRE_PARAMETERS`: Optional. Default boolean (`true` or `false`) to only use providers that support all specified request parameters. Overridden by the `provider.require_parameters` argument. (Phase 2)
- `OPENROUTER_PROVIDER_DATA_COLLECTION`: Optional. Default data collection policy (`"allow"` or `"deny"`). Overridden by the `provider.data_collection` argument. (Phase 2)
- `OPENROUTER_PROVIDER_ALLOW_FALLBACKS`: Optional. Default boolean (`true` or `false`) controlling fallback behavior if preferred providers fail. Overridden by the `provider.allow_fallbacks` argument. (Phase 2)
```bash
# Example .env file content
OPENROUTER_API_KEY=your-api-key-here
OPENROUTER_DEFAULT_MODEL=openrouter/auto
OPENROUTER_MAX_TOKENS=1024
OPENROUTER_PROVIDER_QUANTIZATIONS=fp16,int8
OPENROUTER_PROVIDER_IGNORE=openai,anthropic
OPENROUTER_PROVIDER_SORT=price
OPENROUTER_PROVIDER_ORDER='["openai/gpt-4o", "anthropic/claude-3-opus"]'
OPENROUTER_PROVIDER_REQUIRE_PARAMETERS=true
OPENROUTER_PROVIDER_DATA_COLLECTION=deny
OPENROUTER_PROVIDER_ALLOW_FALLBACKS=false
```
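
For reference, here is one way such defaults could be derived at startup. This is an illustrative sketch only, not the server's actual parsing code; note that `OPENROUTER_PROVIDER_ORDER` holds a JSON array string, unlike the comma-separated lists:

```typescript
// Sketch: deriving provider defaults from the environment at startup.
// Illustrative only — the server's real parsing may differ.
const parseBool = (v?: string) => (v === undefined ? undefined : v === "true");
const parseList = (v?: string) => v?.split(",").map((s) => s.trim());

const env = process.env;

const providerDefaults = {
  quantizations: parseList(env.OPENROUTER_PROVIDER_QUANTIZATIONS),
  ignore: parseList(env.OPENROUTER_PROVIDER_IGNORE),
  sort: env.OPENROUTER_PROVIDER_SORT as "price" | "throughput" | "latency" | undefined,
  // ORDER is a JSON array string, unlike the comma-separated lists above.
  order: env.OPENROUTER_PROVIDER_ORDER
    ? (JSON.parse(env.OPENROUTER_PROVIDER_ORDER) as string[])
    : undefined,
  require_parameters: parseBool(env.OPENROUTER_PROVIDER_REQUIRE_PARAMETERS),
  data_collection: env.OPENROUTER_PROVIDER_DATA_COLLECTION as "allow" | "deny" | undefined,
  allow_fallbacks: parseBool(env.OPENROUTER_PROVIDER_ALLOW_FALLBACKS),
};
```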


### Setup

Add to your MCP settings configuration file (`cline_mcp_settings.json` or `claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "openrouterai": {
      "command": "npx",
      "args": ["@mcpservers/openrouterai"],
      "env": {
        "OPENROUTER_API_KEY": "your-api-key-here",
        "OPENROUTER_DEFAULT_MODEL": "optional-default-model",
        "OPENROUTER_MAX_TOKENS": "1024",
        "OPENROUTER_PROVIDER_QUANTIZATIONS": "fp16,int8",
        "OPENROUTER_PROVIDER_IGNORE": "openai,anthropic"
      }
    }
  }
}
```

## Response Format

All tools return responses in a standardized structure:

```typescript
interface ToolResult {
  isError: boolean;
  content: Array<{
    type: "text";
    text: string; // JSON string or error message
  }>;
}
```

Success Example:

{ "isError": false, "content": [{ "type": "text", "text": "{\"id\": \"gen-123\", ...}" }] }

Error Example:

{ "isError": true, "content": [{ "type": "text", "text": "Error: Model validation failed - 'invalid-model' not found" }] }

## Available Tools

### chat_completion

Sends a request to the OpenRouter Chat Completions API.

Input Schema:

- `model` (string, optional): The model to use (e.g., `openai/gpt-4o`, `google/gemini-pro`). Overrides `OPENROUTER_DEFAULT_MODEL`. Defaults to `openrouter/auto` if neither is set.
  - Model Suffixes: Append `:nitro` to a model ID (e.g., `openai/gpt-4o:nitro`) to prioritize the fastest (highest-throughput) providers, or append `:floor` (e.g., `mistralai/mistral-7b-instruct:floor`) to use the cheapest available variant of a model, often useful for testing or low-cost tasks. Note: availability of `:nitro` and `:floor` variants depends on OpenRouter.
- `messages` (array, required): An array of message objects conforming to the OpenAI chat completion format.
- `temperature` (number, optional): Sampling temperature. Defaults to 1.
- `max_tokens` (number, optional): Maximum number of tokens to generate in the completion. Overrides `OPENROUTER_MAX_TOKENS`.
- `provider` (object, optional): Provider routing configuration. Overrides the corresponding `OPENROUTER_PROVIDER_*` environment variables.
  - `quantizations` (array of strings, optional): List of quantization levels to filter by (e.g., `["fp16", "int8"]`). Only models matching one of these levels will be considered. Overrides `OPENROUTER_PROVIDER_QUANTIZATIONS`. (Phase 1)
  - `ignore` (array of strings, optional): List of provider names to exclude (e.g., `["openai", "anthropic"]`). Models from these providers will not be used. Overrides `OPENROUTER_PROVIDER_IGNORE`. (Phase 1)
  - `sort` (`"price" | "throughput" | "latency"`, optional): Sort providers by the specified criterion. Overrides `OPENROUTER_PROVIDER_SORT`. (Phase 2)
  - `order` (array of strings, optional): A prioritized list of provider IDs (e.g., `["openai/gpt-4o", "anthropic/claude-3-opus"]`). Overrides `OPENROUTER_PROVIDER_ORDER`. (Phase 2)
  - `require_parameters` (boolean, optional): If true, only use providers that support all specified request parameters (such as tools, functions, or temperature). Overrides `OPENROUTER_PROVIDER_REQUIRE_PARAMETERS`. (Phase 2)
  - `data_collection` (`"allow" | "deny"`, optional): Specify whether providers are allowed to collect data from the request. Overrides `OPENROUTER_PROVIDER_DATA_COLLECTION`. (Phase 2)
  - `allow_fallbacks` (boolean, optional): If true (the default), allows falling back to other providers if the preferred ones fail or are unavailable. If false, the request fails when the preferred providers cannot be used. Overrides `OPENROUTER_PROVIDER_ALLOW_FALLBACKS`. (Phase 2)

Example Usage:

{ "tool": "chat_completion", "arguments": { "model": "anthropic/claude-3-haiku", "messages": [ { "role": "user", "content": "Explain the concept of quantization in AI models." } ], "max_tokens": 500, "provider": { "quantizations": ["fp16"], "ignore": ["openai"], "sort": "price", "order": ["anthropic/claude-3-haiku", "google/gemini-pro"], "require_parameters": true, "allow_fallbacks": false } } }

This example requests a completion from `anthropic/claude-3-haiku` and limits the response to 500 tokens. It also specifies provider routing options: prefer `fp16`-quantized models, ignore `openai` providers, sort the remaining providers by price, prioritize `anthropic/claude-3-haiku` and then `google/gemini-pro`, require the chosen provider to support all request parameters (such as `max_tokens`), and disable fallbacks (the request fails if the prioritized providers cannot fulfill it).
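
The precedence rule throughout is simple: a field present in the request's `provider` object wins; otherwise the corresponding `OPENROUTER_PROVIDER_*` default applies. A minimal sketch of that merge (illustrative only; `defaults` stands for values parsed from the environment, as in the earlier sketch):

```typescript
// Sketch: request-level provider options override environment defaults
// field by field (illustrative, not the server's actual code).
interface ProviderOptions {
  quantizations?: string[];
  ignore?: string[];
  sort?: "price" | "throughput" | "latency";
  order?: string[];
  require_parameters?: boolean;
  data_collection?: "allow" | "deny";
  allow_fallbacks?: boolean;
}

function resolveProvider(
  requestProvider: ProviderOptions | undefined,
  defaults: ProviderOptions
): ProviderOptions {
  // Spread defaults first so any field set on the request wins.
  return { ...defaults, ...(requestProvider ?? {}) };
}
```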

### search_models

Search and filter available models:

```typescript
interface ModelSearchRequest {
  query?: string;
  provider?: string;
  minContextLength?: number;
  capabilities?: {
    functions?: boolean;
    vision?: boolean;
  };
}

// Response: ToolResult with model list or error
```
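
For example, a hypothetical search for vision-capable Anthropic models with a large context window could be typed against the interface above (the values here are illustrative):

```typescript
// Example search_models arguments (hypothetical values).
const searchRequest: ModelSearchRequest = {
  query: "claude",
  provider: "anthropic",
  minContextLength: 100000,
  capabilities: { vision: true },
};
```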

### get_model_info

Get detailed information about a specific model:

```typescript
{
  model: string; // Model identifier
}
```

### validate_model

Check if a model ID is valid:

```typescript
interface ModelValidationRequest {
  model: string;
}

// Response:
// Success: { isError: false, valid: true }
// Error:   { isError: true, error: "Model not found" }
```
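
Together, `validate_model` and `get_model_info` support a check-then-fetch pattern. A hedged sketch, assuming a generic MCP client with a `callTool(name, args)` method (the client API shown here is a placeholder, not something this server defines):

```typescript
// Sketch: validate a model ID before requesting its details.
// `client.callTool` is a placeholder for your MCP client's invocation API.
type ToolCaller = { callTool(name: string, args: object): Promise<ToolResult> };

async function describeModel(client: ToolCaller, model: string) {
  const validation = await client.callTool("validate_model", { model });
  if (validation.isError) {
    console.error(validation.content[0].text); // e.g. "Error: Model not found"
    return undefined;
  }
  const info = await client.callTool("get_model_info", { model });
  return JSON.parse(info.content[0].text);
}
```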

## Error Handling

The server provides structured errors with contextual information:

```typescript
// Error response structure
{
  isError: true,
  content: [{
    type: "text",
    text: "Error: [Category] - Detailed message"
  }]
}
```

Common Error Categories:

- **Validation Error**: Invalid input parameters
- **API Error**: OpenRouter API communication issues
- **Rate Limit**: The request was throttled by rate limiting
- **Internal Error**: Server-side processing failures

Handling Responses:

```typescript
async function handleResponse(result: ToolResult) {
  if (result.isError) {
    const errorMessage = result.content[0].text;
    if (errorMessage.startsWith('Error: Rate Limit')) {
      // Handle rate limiting
    }
    // Other error handling
  } else {
    const data = JSON.parse(result.content[0].text);
    // Process successful response
  }
}
```
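
Rate-limit errors in particular lend themselves to retrying with exponential backoff (which the server also applies internally, per the Features list). An illustrative client-side sketch, reusing the placeholder `callTool` shape from above:

```typescript
// Sketch: retry a tool call with exponential backoff on rate-limit errors.
async function callWithBackoff(
  client: { callTool(name: string, args: object): Promise<ToolResult> },
  name: string,
  args: object,
  maxAttempts = 4
): Promise<ToolResult> {
  let result = await client.callTool(name, args);
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    const rateLimited =
      result.isError && result.content[0].text.startsWith("Error: Rate Limit");
    if (!rateLimited) break;
    // Back off 1s, 2s, 4s, ... before retrying.
    await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    result = await client.callTool(name, args);
  }
  return result;
}
```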

## Development

See CONTRIBUTING.md for detailed information about:

- Development setup
- Project structure
- Feature implementation
- Error handling guidelines
- Tool usage examples
```bash
# Install dependencies
pnpm install

# Build project
pnpm run build

# Run tests
pnpm test
```

## Changelog

See CHANGELOG.md for recent updates including:

- Unified response format implementation
- Enhanced error handling system
- Type-safe interface improvements

## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


