# mcp_openrouter_analyze_image

Analyze images using OpenRouter vision models to answer questions about visual content. The image source can be a local file path, a URL, or a base64 data URL, and the model can optionally be overridden per call.

## Instructions

Analyze an image using OpenRouter vision models.

## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| image_path | Yes | Path to the image file to analyze (can be an absolute file path, URL, or base64 data URL starting with "data:") | |
| question | No | Question to ask about the image | |
| model | No | OpenRouter model to use (e.g., "anthropic/claude-3.5-sonnet") | |
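For illustration, here is a sketch of call arguments matching the schema above. The paths, question, and model values are placeholders, not values from the source:

```typescript
// Shape of the tool's arguments, mirroring the input schema above.
interface AnalyzeImageToolRequest {
  image_path: string;   // file path, URL, or "data:" base64 URL (required)
  question?: string;    // defaults to asking what is in the image
  model?: string;       // defaults to the server's configured model
}

// Minimal call: only the required field (placeholder URL).
const minimal: AnalyzeImageToolRequest = {
  image_path: "https://example.com/photo.jpg",
};

// Full call: explicit question and model (placeholder values).
const full: AnalyzeImageToolRequest = {
  image_path: "/tmp/chart.png",
  question: "What trend does this chart show?",
  model: "anthropic/claude-3.5-sonnet",
};
```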
## Implementation Reference
- **Handler** — The main handler function for `mcp_openrouter_analyze_image`. It validates input, prepares the image (converting it to base64 JPEG and resizing if needed), constructs an OpenAI vision chat-completion request, falls back to a backup model and then to a free model on failure, and returns the analysis response:

  ```typescript
  export async function handleAnalyzeImage(
    request: { params: { arguments: AnalyzeImageToolRequest } },
    openai: OpenAI,
    defaultModel?: string
  ) {
    const args = request.params.arguments;

    try {
      // Validate inputs
      if (!args.image_path) {
        throw new McpError(ErrorCode.InvalidParams, 'An image path, URL, or base64 data is required');
      }

      const question = args.question || "What's in this image?";

      console.error(`Processing image: ${args.image_path.substring(0, 100)}${args.image_path.length > 100 ? '...' : ''}`);

      // Convert the image to base64
      const { base64, mimeType } = await prepareImage(args.image_path);

      // Create the content array for the OpenAI API
      const content = [
        { type: 'text', text: question },
        { type: 'image_url', image_url: { url: `data:${mimeType};base64,${base64}` } }
      ];

      // Select model with priority:
      // 1. User-specified model
      // 2. Default model from environment (OPENROUTER_DEFAULT_MODEL_IMG)
      // 3. Built-in free fallback model (DEFAULT_FREE_MODEL)
      let model = args.model || defaultModel || DEFAULT_FREE_MODEL;
      console.error(`[Image Tool] Using IMAGE model: ${model}`);

      // Try primary model first
      try {
        const completion = await openai.chat.completions.create({
          model,
          messages: [{ role: 'user', content }] as any
        });

        return {
          content: [
            {
              type: 'text',
              text: completion.choices[0].message.content || '',
            },
          ],
          metadata: {
            model: completion.model,
            usage: completion.usage
          }
        };
      } catch (primaryError: any) {
        // If primary model fails and a backup exists, try the backup
        const backupModel = process.env.OPENROUTER_DEFAULT_MODEL_IMG_BACKUP;
        if (backupModel && backupModel !== model) {
          try {
            console.error(`Primary model failed, trying backup: ${backupModel}`);
            const completion = await openai.chat.completions.create({
              model: backupModel,
              messages: [{ role: 'user', content }] as any
            });

            return {
              content: [
                {
                  type: 'text',
                  text: completion.choices[0].message.content || '',
                },
              ],
              metadata: {
                model: completion.model,
                usage: completion.usage
              }
            };
          } catch (backupError: any) {
            console.error(`Backup model failed, searching for free models...`);
          }
        }

        // If both failed or there is no backup, try to find a free model
        try {
          const freeModel = await findSuitableFreeModel(openai);
          if (freeModel && freeModel !== model && freeModel !== backupModel) {
            console.error(`Trying free model: ${freeModel}`);
            const completion = await openai.chat.completions.create({
              model: freeModel,
              messages: [{ role: 'user', content }] as any
            });

            return {
              content: [
                {
                  type: 'text',
                  text: completion.choices[0].message.content || '',
                },
              ],
              metadata: {
                model: completion.model,
                usage: completion.usage
              }
            };
          }
        } catch (freeModelError: any) {
          console.error(`Free model search failed: ${freeModelError.message}`);
        }

        // All attempts failed, throw the original error
        throw primaryError;
      }
    } catch (error) {
      console.error('Error in image analysis:', error);
      if (error instanceof McpError) {
        throw error;
      }
      return {
        content: [
          {
            type: 'text',
            text: `Error analyzing image: ${error instanceof Error ? error.message : String(error)}`,
          },
        ],
        isError: true,
        metadata: {
          error_type: error instanceof Error ? error.constructor.name : 'Unknown',
          error_message: error instanceof Error ? error.message : String(error)
        }
      };
    }
  }
  ```
- **Input type** — TypeScript interface defining the input parameters for the analyze-image tool:

  ```typescript
  export interface AnalyzeImageToolRequest {
    image_path: string;
    question?: string;
    model?: string;
  }
  ```
- **Schema and registration** (`src/tool-handlers.ts:137-157`) — JSON schema definition for the tool input, declaring the parameters `image_path` (required), `question`, and `model`; this same object registers the tool in the ListToolsRequestSchema response:

  ```typescript
  name: 'mcp_openrouter_analyze_image',
  description: 'Analyze an image using OpenRouter vision models',
  inputSchema: {
    type: 'object',
    properties: {
      image_path: {
        type: 'string',
        description: 'Path to the image file to analyze (can be an absolute file path, URL, or base64 data URL starting with "data:")',
      },
      question: {
        type: 'string',
        description: 'Question to ask about the image',
      },
      model: {
        type: 'string',
        description: 'OpenRouter model to use (e.g., "anthropic/claude-3.5-sonnet")',
      },
    },
    required: ['image_path'],
  },
  ```
- **Dispatch** (`src/tool-handlers.ts:328-333`) — Switch case in the CallToolRequestSchema handler that routes the call to `handleAnalyzeImage`:

  ```typescript
  case 'mcp_openrouter_analyze_image':
    return handleAnalyzeImage(
      {
        params: {
          arguments: request.params.arguments as unknown as AnalyzeImageToolRequest
        }
      },
      this.openai,
      this.defaultModel
    );
  ```
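The handler's tiered model fallback (primary, then env backup, then a discovered free model, rethrowing the primary error when everything fails) can be reduced to a generic helper. This is an illustrative sketch, not code from the server; `completeWithFallback` and `tryModel` are hypothetical names:

```typescript
// Sketch of the tiered fallback used by handleAnalyzeImage: try each
// candidate model in order and return the first success. tryModel is a
// stand-in for the openai.chat.completions.create call.
async function completeWithFallback<T>(
  candidates: string[],
  tryModel: (model: string) => Promise<T>
): Promise<T> {
  let firstError: unknown;
  for (const model of candidates) {
    try {
      return await tryModel(model);
    } catch (err) {
      // Remember the first failure; like the handler, we surface the
      // primary model's error once every candidate is exhausted.
      firstError ??= err;
    }
  }
  // Note: an empty candidate list falls through here; the real handler
  // always has at least one model (args.model/defaultModel/free default).
  throw firstError;
}
```

In the real handler the candidate list is built from the selected primary model, `OPENROUTER_DEFAULT_MODEL_IMG_BACKUP`, and `findSuitableFreeModel`, with models equal to an already-tried candidate skipped.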