
OpenRouter MCP Server

by lumishoang

Server Configuration

Describes the environment variables used to configure the server. No variable is strictly required; the API key is optional.

OPENROUTER_API_KEY (required: no, default: none)
  Optional API key for higher rate limits (200 req/min vs 20 req/min). Get your key at: https://openrouter.ai/keys
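Since the key is read from the environment, a typical way to supply it is to export it in the shell that launches the server. A minimal sketch (the key value below is a placeholder, and the exact launch command depends on how the server is installed):

```shell
# Optional: supply an OpenRouter API key to get the higher rate limit
# (200 req/min instead of 20 req/min without a key).
export OPENROUTER_API_KEY="sk-or-v1-your-key-here"  # placeholder value

# Confirm the variable is visible to child processes before launching the server.
printenv OPENROUTER_API_KEY
```

Without the export, the server still runs, just under the lower anonymous rate limit.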

Capabilities

Features and capabilities supported by this server

tools: { "listChanged": false }
prompts: { "listChanged": false }
resources: { "subscribe": false, "listChanged": false }
experimental: {}

Tools

Functions exposed to the LLM to take actions

list_models

List models available on OpenRouter.

Args:
  modality: Filter by output type. Options: text, image, audio, embeddings, all
  sort_by: Sort by: name, created, price, context_length

get_model

Get detailed info for one model.

Args:
  model_id: Model slug, e.g. 'anthropic/claude-sonnet-4.6'

search_models

Search and filter OpenRouter models.

Args:
  query: Free-text search in model name/id/description
  provider: Filter by provider (anthropic, google, openai, etc.)
  max_input_price: Max input price per 1M tokens, 0 = no limit
  min_context: Minimum context window size
  requires_tools: Only models supporting tool calling
  requires_vision: Only models with vision/image input
  free_only: Only free models

compare_models

Compare multiple models side by side.

Args:
  model_ids: Comma-separated model IDs

refresh_cache

Force refresh the model cache from OpenRouter.
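Over MCP, each of these tools is invoked with a standard `tools/call` JSON-RPC request, with the tool name and its arguments in `params`. As a sketch (the argument values below are illustrative, not taken from this listing), a `search_models` call might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_models",
    "arguments": {
      "query": "claude",
      "provider": "anthropic",
      "requires_tools": true,
      "free_only": false
    }
  }
}
```

In practice an MCP client (e.g. an editor or agent framework) constructs this request for you; the argument names correspond to the Args listed for each tool above.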

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lumishoang/openrouter-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.