# OpenRouter MCP Multimodal Server
## Server Configuration
The environment variables used to configure the server.
| Name | Required | Description | Default |
|---|---|---|---|
| OPENROUTER_API_KEY | Yes | Your OpenRouter API key. Get one free at https://openrouter.ai/keys | |
| OPENROUTER_INPUT_DIR | No | Sandbox root for input_images on generate_image. Falls back to OPENROUTER_OUTPUT_DIR. | |
| OPENROUTER_OUTPUT_DIR | No | Sandbox root for save_path on generate_* tools. Defaults to cwd. | |
| OPENROUTER_DEFAULT_MODEL | No | Default model for chat + analyze tools. | nvidia/nemotron-nano-12b-v2-vl:free |
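As a sketch, the variables above could be set in a shell session before launching the server. The key and directory values below are placeholders, not real credentials or required paths:

```shell
# Required: your OpenRouter API key (placeholder value; get one at https://openrouter.ai/keys)
export OPENROUTER_API_KEY="sk-or-..."

# Optional sandbox roots for reading reference images and writing generated files
export OPENROUTER_INPUT_DIR="$HOME/mcp/inputs"
export OPENROUTER_OUTPUT_DIR="$HOME/mcp/outputs"

# Optional: override the default model for the chat and analyze tools
export OPENROUTER_DEFAULT_MODEL="nvidia/nemotron-nano-12b-v2-vl:free"
```

If `OPENROUTER_INPUT_DIR` is unset, it falls back to `OPENROUTER_OUTPUT_DIR`, which in turn defaults to the current working directory.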
## Capabilities
Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | {} |
## Tools
Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| chat_completion | Send messages to an OpenRouter model and get a response |
| analyze_image | Analyze an image using a vision model |
| analyze_audio | Analyze or transcribe an audio file using a multimodal model |
| analyze_video | Analyze or transcribe a video file using a multimodal model. Accepts mp4, mpeg, mov, or webm from a local file path, HTTP(S) URL, or base64 data URL. Default model: google/gemini-2.5-flash. |
| search_models | Search available OpenRouter models |
| get_model_info | Get details about a specific model |
| validate_model | Check if a model ID exists |
| generate_image | Generate an image from a text prompt, optionally conditioned on one or more reference images (file paths, http(s) URLs, or data URLs) for character/style consistency |
| generate_audio | Generate audio from a text prompt. Conversational models (e.g. openai/gpt-audio) respond in spoken audio. Music models (e.g. google/lyria-3-clip-preview) need a structured prompt. Output format is auto-detected and the file extension is corrected automatically. |
| generate_video | Generate a video from a text prompt using an OpenRouter video-generation model (default: google/veo-3.1). Submits an async job, polls until completion or max_wait_ms, then downloads the result. Optionally conditioned on first/last-frame images or reference images. Large outputs are auto-saved when save_path is provided and path-sandboxed. |
| get_video_status | Resume a previously submitted video-generation job by ID. Returns the latest status; if completed, downloads the video (and saves it when save_path is provided). |
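An MCP client invokes these tools through JSON-RPC `tools/call` requests. As a minimal sketch, the helper below builds such a request; the argument names (`prompt`, `save_path`) are assumptions inferred from the tool descriptions above, not the server's published schema — consult each tool's `inputSchema` before use:

```python
import json

def tools_call(name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request as used by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Hypothetical invocation: argument names are assumptions, not the
# server's documented schema.
request = tools_call(
    "generate_image",
    {"prompt": "a lighthouse at dusk", "save_path": "lighthouse.png"},
)
print(json.dumps(request, indent=2))
```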
## Prompts
Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| No prompts | |
## Resources
Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| No resources | |
## MCP directory API
We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/stabgan/openrouter-mcp-multimodal'
```
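The same request can be made from Python with only the standard library. The URL pattern is taken from the curl command above; the shape of the JSON response is not documented here, so the fetch helper simply decodes whatever the API returns:

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner, repo):
    """Directory-API URL for a single server entry."""
    return f"{BASE}/{owner}/{repo}"

def fetch_server(owner, repo):
    """Fetch and decode a server's directory entry (makes a network call)."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Equivalent of the curl command above:
url = server_url("stabgan", "openrouter-mcp-multimodal")
```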
If you have feedback or need assistance with the MCP directory API, please join our Discord server.