extract_image_from_url

Extract and analyze images from web URLs for visual content analysis, text extraction, and object recognition, optimized for AI model processing.

Instructions

Extract and analyze images from web URLs. Perfect for analyzing web screenshots, online photos, diagrams, or any image accessible via HTTP/HTTPS for visual content analysis and text extraction.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | URL of the image to analyze for visual content, text extraction, or object recognition (supports web screenshots, photos, diagrams) | — |
| resize | No | For backward compatibility only. Images are always automatically resized to optimal dimensions (max 512x512) for LLM analysis | true |
| max_width | No | For backward compatibility only. Default maximum width is now 512px | 512 |
| max_height | No | For backward compatibility only. Default maximum height is now 512px | 512 |
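Per the schema above, only `url` is required; the legacy parameters fall back to their defaults. A minimal invocation payload might look like this (the example URL is illustrative):

```typescript
// Minimal example arguments for extract_image_from_url.
// Only `url` is required; resize/max_width/max_height are legacy no-ops
// that default to true/512/512.
const args = {
  url: "https://example.com/screenshot.png",
};

console.log(JSON.stringify(args));
```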

Implementation Reference

  • Core handler function that implements the tool logic: validates URL, fetches image with axios, resizes to optimal dimensions using sharp, compresses, base64 encodes, and returns MCP-formatted response with metadata and image data.
    export async function extractImageFromUrl(params: ExtractImageFromUrlParams): Promise<McpToolResponse> {
      try {
        const { url, resize, max_width, max_height } = params;
        
        // Validate URL
        if (!url.startsWith('http://') && !url.startsWith('https://')) {
          return {
            content: [{ type: "text", text: "Error: URL must start with http:// or https://" }],
            isError: true
          };
        }
    
        // Domain validation if ALLOWED_DOMAINS is set
        if (ALLOWED_DOMAINS.length > 0) {
          const urlObj = new URL(url);
          const domain = urlObj.hostname;
          const isAllowed = ALLOWED_DOMAINS.some((allowedDomain: string) => 
            domain === allowedDomain || domain.endsWith(`.${allowedDomain}`)
          );
    
          if (!isAllowed) {
            return {
              content: [{ type: "text", text: `Error: Domain ${domain} is not in the allowed domains list` }],
              isError: true
            };
          }
        }
    
        // Fetch the image
        const response = await axios.get(url, {
          responseType: 'arraybuffer',
          maxContentLength: MAX_IMAGE_SIZE,
        });
    
        // Process the image
        let imageBuffer = Buffer.from(response.data);
        let metadata = await sharp(imageBuffer).metadata();
        
        // Always resize to ensure the base64 representation is reasonable
        // This will help avoid consuming too much of the context window
        if (metadata.width && metadata.height) {
          // Use provided dimensions or fallback to defaults for optimal LLM context usage
          const targetWidth = Math.min(metadata.width, DEFAULT_MAX_WIDTH);
          const targetHeight = Math.min(metadata.height, DEFAULT_MAX_HEIGHT);
          
          // Only resize if needed
          if (metadata.width > targetWidth || metadata.height > targetHeight) {
            imageBuffer = await sharp(imageBuffer)
              .resize({
                width: targetWidth,
                height: targetHeight,
                fit: 'inside',
                withoutEnlargement: true
              })
              .toBuffer();
            
            // Update metadata after resize
            metadata = await sharp(imageBuffer).metadata();
          }
        }
    
        // Compress the image based on its format
        try {
          const format = metadata.format || 'jpeg';
          imageBuffer = await compressImage(imageBuffer, format);
        } catch (compressionError) {
          console.warn('Compression warning, using original image:', compressionError);
          // Continue with the original image if compression fails
        }
    
        // Convert to base64
        const base64 = imageBuffer.toString('base64');
        const mimeType = response.headers['content-type'] || 'image/jpeg';
    
        // Return both text and image content
        return {
          content: [
            { 
              type: "text", 
              text: JSON.stringify({
                width: metadata.width,
                height: metadata.height,
                format: metadata.format,
                size: imageBuffer.length
              })
            },
            {
              type: "image",
              data: base64,
              mimeType: mimeType
            }
          ]
        };
      } catch (error: unknown) {
        console.error('Error processing image from URL:', error);
        return {
          content: [{ type: "text", text: `Error: ${error instanceof Error ? error.message : String(error)}` }],
          isError: true
        };
      }
    }
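The domain allow-list check in the handler can be exercised in isolation. This is a minimal sketch assuming `ALLOWED_DOMAINS` is a plain string array, as the handler code suggests:

```typescript
// Standalone sketch of the allow-list logic from extractImageFromUrl.
// ALLOWED_DOMAINS here is an assumed stand-in for the module-level constant.
const ALLOWED_DOMAINS = ["example.com"];

function isDomainAllowed(url: string): boolean {
  const domain = new URL(url).hostname;
  // An empty list means no restriction; otherwise the hostname must match
  // an allowed domain exactly or be one of its subdomains.
  return (
    ALLOWED_DOMAINS.length === 0 ||
    ALLOWED_DOMAINS.some((d) => domain === d || domain.endsWith(`.${d}`))
  );
}

console.log(isDomainAllowed("https://img.example.com/photo.png")); // true: subdomain match
console.log(isDomainAllowed("https://notexample.com/photo.png")); // false: suffix alone is not enough
```

Note that the `.${d}` prefix check prevents `notexample.com` from matching an allowed `example.com`, which a bare `endsWith` would permit.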
  • src/index.ts:37-51 (registration)
    Registers the 'extract_image_from_url' tool with the MCP server, providing description, Zod input schema, and a thin wrapper handler that delegates to the core extractImageFromUrl function.
    // Add extract_image_from_url tool
    server.tool(
      "extract_image_from_url",
      "Extract and analyze images from web URLs. Perfect for analyzing web screenshots, online photos, diagrams, or any image accessible via HTTP/HTTPS for visual content analysis and text extraction.",
      {
        url: z.string().describe("URL of the image to analyze for visual content, text extraction, or object recognition (supports web screenshots, photos, diagrams)"),
        resize: z.boolean().default(true).describe("For backward compatibility only. Images are always automatically resized to optimal dimensions (max 512x512) for LLM analysis"),
        max_width: z.number().default(512).describe("For backward compatibility only. Default maximum width is now 512px"),
        max_height: z.number().default(512).describe("For backward compatibility only. Default maximum height is now 512px")
      },
      async (args, extra) => {
        const result = await extractImageFromUrl(args);
        return result;
      }
    );
  • TypeScript type definition for the input parameters of the extractImageFromUrl function.
    export type ExtractImageFromUrlParams = {
      url: string;
      resize: boolean;
      max_width: number;
      max_height: number;
    };
  • Type definition for the MCP tool response format used by extractImageFromUrl.
    export type McpToolResponse = {
      [x: string]: unknown;
      content: (
        | { [x: string]: unknown; type: "text"; text: string; }
        | { [x: string]: unknown; type: "image"; data: string; mimeType: string; }
        | { 
            [x: string]: unknown; 
            type: "resource"; 
            resource: { 
              [x: string]: unknown; 
              text: string; 
              uri: string; 
              mimeType?: string; 
            } | { 
              [x: string]: unknown; 
              uri: string; 
              blob: string; 
              mimeType?: string; 
            }; 
          }
      )[];
      _meta?: Record<string, unknown>;
      isError?: boolean;
    };
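As an illustration, a successful response from extractImageFromUrl conforms to the text-plus-image branch of this type. The values below are made up and the base64 payload is a truncated placeholder:

```typescript
// Hypothetical successful response conforming to McpToolResponse:
// a metadata text part followed by the encoded image part.
const response = {
  content: [
    {
      type: "text",
      text: JSON.stringify({ width: 512, height: 384, format: "jpeg", size: 20480 }),
    },
    {
      type: "image",
      data: "/9j/4AAQSkZJRg...", // truncated base64 placeholder
      mimeType: "image/jpeg",
    },
  ],
};

console.log(response.content.length);
```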
  • Helper function used by extractImageFromUrl to compress images based on detected format using sharp.
    async function compressImage(imageBuffer: Buffer, formatStr: string): Promise<Buffer> {
      const sharpInstance = sharp(imageBuffer);
      const format = formatStr.toLowerCase() as SupportedFormat;
      
      // Check if format is supported
      if (format in COMPRESSION_OPTIONS) {
        const options = COMPRESSION_OPTIONS[format];
        
        // Use specific methods based on format
        switch (format) {
          case 'jpeg':
          case 'jpg':
            return await sharpInstance.jpeg(options as any).toBuffer();
          case 'png':
            return await sharpInstance.png(options as any).toBuffer();
          case 'webp':
            return await sharpInstance.webp(options as any).toBuffer();
          case 'avif':
            return await sharpInstance.avif(options as any).toBuffer();
          case 'tiff':
            return await sharpInstance.tiff(options as any).toBuffer();
          // For formats without specific compression options
          case 'gif':
          case 'svg':
            return await sharpInstance.toBuffer();
        }
      }
      
      // Default to jpeg if format not supported
      return await sharpInstance.jpeg(COMPRESSION_OPTIONS.jpeg as any).toBuffer();
    }
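compressImage references `COMPRESSION_OPTIONS` and `SupportedFormat`, which are defined elsewhere in the source. A plausible shape, consistent with the switch cases above and with sharp's per-format output options, might look like this (the specific quality values are assumptions, not the project's actual settings):

```typescript
// Assumed shape of COMPRESSION_OPTIONS; the real values live elsewhere in src.
// Keys mirror the switch cases in compressImage, including gif/svg, which
// must be present for the `format in COMPRESSION_OPTIONS` guard to pass.
const COMPRESSION_OPTIONS = {
  jpeg: { quality: 80 },
  jpg: { quality: 80 },
  png: { compressionLevel: 9 },
  webp: { quality: 80 },
  avif: { quality: 50 },
  tiff: { compression: "lzw" },
  gif: {},
  svg: {},
} as const;

type SupportedFormat = keyof typeof COMPRESSION_OPTIONS;

console.log(Object.keys(COMPRESSION_OPTIONS).length);
```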
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions analysis purposes ('visual content analysis and text extraction') and that images are 'accessible via HTTP/HTTPS', but lacks details on permissions, rate limits, error handling, or output format. It adds some context but leaves significant behavioral traits unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific use cases. Every sentence earns its place by clarifying scope and applications without redundancy, making it efficiently structured and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 4 parameters, high schema coverage, but no annotations and no output schema, the description is moderately complete. It covers purpose and usage context well, but for a tool with behavioral complexities (network access, image processing, structured output) it omits details on permissions, error handling, and result format, leaving gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by implying the 'url' parameter is for 'web screenshots, photos, diagrams', but does not provide additional syntax, format, or usage details for parameters. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('extract and analyze images'), resource ('from web URLs'), and scope ('for visual content analysis and text extraction'). It distinguishes from sibling tools by specifying 'from web URLs' versus 'from_base64' or 'from_file', making the purpose unambiguous and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('for analyzing web screenshots, online photos, diagrams, or any image accessible via HTTP/HTTPS'), but does not explicitly state when not to use it or name alternatives like the sibling tools. It implies usage scenarios without explicit exclusions or comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
