mcp_openrouter_multi_image_analysis
by hoangdn3

Analyze multiple images simultaneously with a single prompt to extract detailed insights from visual content.

Instructions

Analyze multiple images at once with a single prompt and receive detailed responses

Input Schema

Name               Required  Default  Description
images             Yes                Array of image objects to analyze
prompt             Yes                Prompt for analyzing the images
markdown_response  No        true     Whether to format the response in Markdown
model              No                 OpenRouter model to use. If not specified, the system will use a free model with vision capabilities or the default model.
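
As a rough illustration of this schema, a client call might pass arguments shaped like the following, typed with the MultiImageAnalysisToolRequest interface shown under Implementation Reference. The URLs, alt text, and prompt are placeholder values, not part of the server.

    // Illustrative arguments only; URLs, alt text, and prompt are placeholders.
    const exampleArguments: MultiImageAnalysisToolRequest = {
      images: [
        { url: 'https://example.com/chart.png', alt: 'Quarterly revenue chart' },
        { url: '/home/user/receipt.jpg', alt: 'Photo of a receipt' }
      ],
      prompt: 'Compare these two images and summarize the key figures in each.',
      markdown_response: true
      // model is optional; omit it to let the server pick a vision-capable model
    };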

Implementation Reference

  • Main handler function: validates the input, fetches each image and converts it to a base64 data URL, builds the multimodal message, calls the OpenRouter API (falling back to a backup or free vision model on failure), applies light Markdown formatting if requested, and returns a structured result with metadata. The image helpers it calls (getMimeType, fetchImageAsBuffer, processImage) are sketched after this list.
    export async function handleMultiImageAnalysis(
      request: { params: { arguments: MultiImageAnalysisToolRequest } },
      openai: OpenAI,
      defaultModel?: string
    ) {
      const args = request.params.arguments;

      try {
        // Validate inputs
        if (!args.images || !Array.isArray(args.images) || args.images.length === 0) {
          throw new McpError(ErrorCode.InvalidParams, 'At least one image is required');
        }
        if (!args.prompt) {
          throw new McpError(ErrorCode.InvalidParams, 'A prompt for analyzing the images is required');
        }

        console.error(`Processing ${args.images.length} images`);

        // Process each image and convert to base64 if needed
        const processedImages = await Promise.all(
          args.images.map(async (image, index) => {
            try {
              // Skip processing if already a data URL
              if (image.url.startsWith('data:')) {
                console.error(`Image ${index + 1} is already in base64 format`);
                return image;
              }

              console.error(`Processing image ${index + 1}: ${image.url.substring(0, 100)}${image.url.length > 100 ? '...' : ''}`);

              // Get MIME type
              const mimeType = getMimeType(image.url);

              // Fetch and process the image
              const buffer = await fetchImageAsBuffer(image.url);
              const base64 = await processImage(buffer, mimeType);

              return {
                url: `data:${mimeType === 'application/octet-stream' ? 'image/jpeg' : mimeType};base64,${base64}`,
                alt: image.alt
              };
            } catch (error: any) {
              console.error(`Error processing image ${index + 1}:`, error);
              throw new Error(`Failed to process image ${index + 1}: ${image.url}. Error: ${error.message}`);
            }
          })
        );

        // Select model with priority:
        // 1. User-specified model
        // 2. Default model from environment
        let model = args.model || defaultModel || DEFAULT_FREE_MODEL;
        console.error(`[Multi-Image Tool] Using IMAGE model: ${model}`);

        // Build content array for the API call
        const content: Array<{ type: string; text?: string; image_url?: { url: string } }> = [
          { type: 'text', text: args.prompt }
        ];

        // Add each processed image to the content array
        processedImages.forEach(image => {
          content.push({ type: 'image_url', image_url: { url: image.url } });
        });

        // Try primary model first
        let responseText: string;
        let responseId: string;
        let usedModel: string;
        let usage: any;

        try {
          const completion = await openai.chat.completions.create({
            model,
            messages: [{ role: 'user', content }] as any
          });
          const response = completion as any;
          responseText = completion.choices[0].message.content || '';
          responseId = response.id;
          usedModel = response.model;
          usage = response.usage;
        } catch (primaryError: any) {
          // If primary model fails and backup exists, try backup
          const backupModel = process.env.OPENROUTER_DEFAULT_MODEL_IMG_BACKUP;
          if (backupModel && backupModel !== model) {
            try {
              console.error(`Primary model failed, trying backup: ${backupModel}`);
              const completion = await openai.chat.completions.create({
                model: backupModel,
                messages: [{ role: 'user', content }] as any
              });
              const resp = completion as any;
              responseText = completion.choices[0].message.content || '';
              responseId = resp.id;
              usedModel = resp.model;
              usage = resp.usage;
            } catch (backupError: any) {
              console.error(`Backup model failed, searching for free models...`);
              // Try to find a free model
              const freeModel = await findSuitableFreeModel(openai);
              if (freeModel && freeModel !== model && freeModel !== backupModel) {
                console.error(`Trying free model: ${freeModel}`);
                const completion = await openai.chat.completions.create({
                  model: freeModel,
                  messages: [{ role: 'user', content }] as any
                });
                const resp = completion as any;
                responseText = completion.choices[0].message.content || '';
                responseId = resp.id;
                usedModel = resp.model;
                usage = resp.usage;
              } else {
                throw backupError;
              }
            }
          } else {
            // No backup, try free model directly
            console.error(`Primary model failed, searching for free models...`);
            const freeModel = await findSuitableFreeModel(openai);
            if (freeModel && freeModel !== model) {
              console.error(`Trying free model: ${freeModel}`);
              const completion = await openai.chat.completions.create({
                model: freeModel,
                messages: [{ role: 'user', content }] as any
              });
              const resp = completion as any;
              responseText = completion.choices[0].message.content || '';
              responseId = resp.id;
              usedModel = resp.model;
              usage = resp.usage;
            } else {
              throw primaryError;
            }
          }
        }

        // Format as markdown if requested
        if (args.markdown_response) {
          // Simple formatting enhancements
          responseText = responseText
            // Add horizontal rule after sections
            .replace(/^(#{1,3}.*)/gm, '$1\n\n---')
            // Ensure proper spacing for lists
            .replace(/^(\s*[-*•]\s.+)$/gm, '\n$1')
            // Convert plain URLs to markdown links
            .replace(/(https?:\/\/[^\s]+)/g, '[$1]($1)');
        }

        // Return the analysis result
        return {
          content: [
            {
              type: 'text',
              text: responseText,
            },
          ],
          metadata: {
            id: responseId,
            model: usedModel,
            usage: usage
          }
        };
      } catch (error: any) {
        console.error('Error in multi-image analysis:', error);
        if (error instanceof McpError) {
          throw error;
        }
        return {
          content: [
            {
              type: 'text',
              text: `Error analyzing images: ${error.message}`,
            },
          ],
          isError: true,
          metadata: {
            error_type: error.constructor.name,
            error_message: error.message
          }
        };
      }
    }
  • Tool schema definition, including the input schema for the images array, prompt, markdown_response flag, and optional model.
    {
      name: 'mcp_openrouter_multi_image_analysis',
      description: 'Analyze multiple images at once with a single prompt and receive detailed responses',
      inputSchema: {
        type: 'object',
        properties: {
          images: {
            type: 'array',
            description: 'Array of image objects to analyze',
            items: {
              type: 'object',
              properties: {
                url: {
                  type: 'string',
                  description: 'URL or data URL of the image (use http(s):// for web images, absolute file paths for local files, or data:image/xxx;base64,... for base64 encoded images)',
                },
                alt: {
                  type: 'string',
                  description: 'Optional alt text or description of the image',
                },
              },
              required: ['url'],
            },
          },
          prompt: {
            type: 'string',
            description: 'Prompt for analyzing the images',
          },
          markdown_response: {
            type: 'boolean',
            description: 'Whether to format the response in Markdown (default: true)',
            default: true,
          },
          model: {
            type: 'string',
            description: 'OpenRouter model to use. If not specified, the system will use a free model with vision capabilities or the default model.',
          },
        },
        required: ['images', 'prompt'],
      },
    },
  • Registration in the CallToolRequestSchema switch statement, which dispatches requests to the handleMultiImageAnalysis handler; the surrounding wiring is sketched after this list.
    case 'mcp_openrouter_multi_image_analysis':
      return handleMultiImageAnalysis(
        { params: { arguments: request.params.arguments as unknown as MultiImageAnalysisToolRequest } },
        this.openai,
        this.defaultModel
      );
  • TypeScript interface defining the tool request parameters, used for type safety in the handler.
    export interface MultiImageAnalysisToolRequest {
      images: Array<{
        url: string;
        alt?: string;
      }>;
      prompt: string;
      markdown_response?: boolean;
      model?: string;
    }
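
The handler in the first item calls getMimeType, fetchImageAsBuffer, and processImage, which are not reproduced on this page. Below is a minimal sketch of what such helpers could look like, assuming Node 18+ (global fetch) and no resizing or re-encoding of the image data; the server's actual implementations may differ.

    import { readFile } from 'fs/promises';
    import path from 'path';

    // Map a URL or file path to a MIME type based on its file extension.
    export function getMimeType(url: string): string {
      const ext = path.extname(url.split('?')[0]).toLowerCase();
      const types: Record<string, string> = {
        '.png': 'image/png',
        '.jpg': 'image/jpeg',
        '.jpeg': 'image/jpeg',
        '.gif': 'image/gif',
        '.webp': 'image/webp',
      };
      return types[ext] ?? 'application/octet-stream';
    }

    // Load image bytes from an http(s) URL or a local file path.
    export async function fetchImageAsBuffer(url: string): Promise<Buffer> {
      if (url.startsWith('http://') || url.startsWith('https://')) {
        const res = await fetch(url);
        if (!res.ok) {
          throw new Error(`HTTP ${res.status} while fetching ${url}`);
        }
        return Buffer.from(await res.arrayBuffer());
      }
      // Treat anything else as a local file path.
      return readFile(url);
    }

    // Convert raw image bytes to base64; a fuller implementation might also
    // downscale or re-encode large images here before returning the string.
    export async function processImage(buffer: Buffer, _mimeType: string): Promise<string> {
      return buffer.toString('base64');
    }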
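
For context on the schema and registration items above, the request-handler wiring in a typical MCP TypeScript server looks roughly like the following. This is a simplified sketch rather than the server's exact code: tools is assumed to be an array containing the tool definition shown above, and the calls are assumed to run inside the server class constructor.

    // Simplified sketch of MCP request-handler wiring (not the server's exact code).
    // `tools` is assumed to hold the tool definition object shown above.
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools,
    }));

    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      switch (request.params.name) {
        case 'mcp_openrouter_multi_image_analysis':
          return handleMultiImageAnalysis(
            { params: { arguments: request.params.arguments as unknown as MultiImageAnalysisToolRequest } },
            this.openai,
            this.defaultModel
          );
        default:
          throw new McpError(ErrorCode.MethodNotFound, `Unknown tool: ${request.params.name}`);
      }
    });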

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/hoangdn3/mcp-ocr-fallback'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.