extract_image_from_file

Extract and process images from file paths for visual content analysis, OCR text extraction, and object recognition. Supports screenshots, photos, diagrams, and documents in PNG, JPG, GIF, and WebP formats.

Instructions

Extract and analyze images from local file paths. Supports visual content understanding, OCR text extraction, and object recognition for screenshots, photos, diagrams, and documents.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| file_path | Yes | Path to the image file to analyze (supports screenshots, photos, diagrams, and documents in PNG, JPG, GIF, WebP formats) | |
| max_height | No | For backward compatibility only. Default maximum height is now 512px | 512 |
| max_width | No | For backward compatibility only. Default maximum width is now 512px | 512 |
| resize | No | For backward compatibility only. Images are always automatically resized to optimal dimensions (max 512x512) for LLM analysis | true |
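Only file_path does real work; the legacy parameters are accepted and ignored. A minimal sketch of how the defaults behave, using a hypothetical withDefaults helper (not part of the server) that mirrors the Zod defaults applied at registration:

```typescript
type ExtractArgs = {
  file_path: string;
  resize?: boolean;
  max_width?: number;
  max_height?: number;
};

// Fill in the documented defaults (512x512, resize always on),
// as the Zod schema does before the handler runs.
function withDefaults(args: ExtractArgs): Required<ExtractArgs> {
  return {
    file_path: args.file_path,
    resize: args.resize ?? true,
    max_width: args.max_width ?? 512,
    max_height: args.max_height ?? 512,
  };
}

const normalized = withDefaults({ file_path: "/tmp/screenshot.png" });
console.log(normalized.max_width, normalized.max_height); // 512 512
```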

Implementation Reference

  • Core implementation of the extractImageFromFile handler: reads file, checks size, resizes to max 512x512, determines MIME type, compresses, encodes to base64, returns MCP response with metadata and image.
    export async function extractImageFromFile(params: ExtractImageFromFileParams): Promise<McpToolResponse> {
      try {
        // resize, max_width and max_height are accepted for backward compatibility only; the handler ignores them
        const { file_path, resize, max_width, max_height } = params;
        
        // Check if file exists
        if (!fs.existsSync(file_path)) {
          return {
            content: [{ type: "text", text: `Error: File ${file_path} does not exist` }],
            isError: true
          };
        }
        
        // Read file
        let imageBuffer = fs.readFileSync(file_path);
        
        // Check size
        if (imageBuffer.length > MAX_IMAGE_SIZE) {
          return {
            content: [{ type: "text", text: `Error: Image size exceeds maximum allowed size of ${MAX_IMAGE_SIZE} bytes` }],
            isError: true
          };
        }
        
        // Process the image
        let metadata = await sharp(imageBuffer).metadata();
        
        // Always resize to ensure the base64 representation is reasonable
        // This will help avoid consuming too much of the context window
        if (metadata.width && metadata.height) {
          // Cap at the default maximums (512x512); the legacy max_width/max_height params are ignored
          const targetWidth = Math.min(metadata.width, DEFAULT_MAX_WIDTH);
          const targetHeight = Math.min(metadata.height, DEFAULT_MAX_HEIGHT);
          
          // Only resize if needed
          if (metadata.width > targetWidth || metadata.height > targetHeight) {
            imageBuffer = await sharp(imageBuffer)
              .resize({
                width: targetWidth,
                height: targetHeight,
                fit: 'inside',
                withoutEnlargement: true
              })
              .toBuffer();
            
            // Update metadata after resize
            metadata = await sharp(imageBuffer).metadata();
          }
        }
    
        // Determine mime type based on file extension
        const fileExt = path.extname(file_path).toLowerCase();
        let mimeType = 'image/jpeg';
        let format = 'jpeg';
        
        if (fileExt === '.png') {
          mimeType = 'image/png';
          format = 'png';
        }
        else if (fileExt === '.jpg' || fileExt === '.jpeg') {
          mimeType = 'image/jpeg';
          format = 'jpeg';
        }
        else if (fileExt === '.gif') {
          mimeType = 'image/gif';
          format = 'gif';
        }
        else if (fileExt === '.webp') {
          mimeType = 'image/webp';
          format = 'webp';
        }
        else if (fileExt === '.svg') {
          mimeType = 'image/svg+xml';
          format = 'svg';
        }
        else if (fileExt === '.avif') {
          mimeType = 'image/avif';
          format = 'avif';
        }
        
        // Compress the image based on its format
        try {
          imageBuffer = await compressImage(imageBuffer, format);
        } catch (compressionError) {
          console.warn('Compression warning, using original image:', compressionError);
          // Continue with the original image if compression fails
        }
        
        // Convert to base64
        const base64 = imageBuffer.toString('base64');
    
        // Return both text and image content
        return {
          content: [
            { 
              type: "text", 
              text: JSON.stringify({
                width: metadata.width,
                height: metadata.height,
                format: metadata.format,
                size: imageBuffer.length
              })
            },
            {
              type: "image",
              data: base64,
              mimeType: mimeType
            }
          ]
        };
      } catch (error: unknown) {
        console.error('Error processing image file:', error);
        return {
          content: [{ type: "text", text: `Error: ${error instanceof Error ? error.message : String(error)}` }],
          isError: true
        };
      }
    }
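The resize step above only ever shrinks, never enlarges (fit: 'inside' with withoutEnlargement). A standalone sketch of the dimension math sharp applies under those options, assuming the 512x512 defaults (fitInside is an invented name, not part of the server):

```typescript
const DEFAULT_MAX_WIDTH = 512;
const DEFAULT_MAX_HEIGHT = 512;

// Scale uniformly so both sides fit within the 512x512 box,
// preserving aspect ratio and never upscaling.
function fitInside(width: number, height: number): { width: number; height: number } {
  const scale = Math.min(
    DEFAULT_MAX_WIDTH / width,
    DEFAULT_MAX_HEIGHT / height,
    1 // withoutEnlargement: never scale up
  );
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

console.log(fitInside(1024, 768)); // { width: 512, height: 384 }
console.log(fitInside(300, 200));  // unchanged: { width: 300, height: 200 }
```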
  • TypeScript type definition for the input parameters of extractImageFromFile.
    export type ExtractImageFromFileParams = {
      file_path: string;
      resize: boolean;
      max_width: number;
      max_height: number;
    };
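On success the handler returns a two-part McpToolResponse: a JSON metadata text part followed by a base64 image part. A sketch of that shape (the type here is simplified, and buildResponse is an invented helper for illustration):

```typescript
// Simplified stand-in for the MCP response type used above.
type McpToolResponse = {
  content: Array<
    | { type: "text"; text: string }
    | { type: "image"; data: string; mimeType: string }
  >;
  isError?: boolean;
};

// Assemble the success shape the handler returns: metadata first, image second.
function buildResponse(meta: object, base64: string, mimeType: string): McpToolResponse {
  return {
    content: [
      { type: "text", text: JSON.stringify(meta) },
      { type: "image", data: base64, mimeType },
    ],
  };
}

const resp = buildResponse(
  { width: 512, height: 384, format: "png", size: 1234 },
  "aGVsbG8=", // placeholder base64 payload
  "image/png"
);
console.log(resp.content.length); // 2
```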
  • src/index.ts:22-35 (registration)
    MCP tool registration for 'extract_image_from_file': defines name, description, Zod input schema, and thin async handler that calls the core extractImageFromFile function.
    server.tool(
      "extract_image_from_file",
      "Extract and analyze images from local file paths. Supports visual content understanding, OCR text extraction, and object recognition for screenshots, photos, diagrams, and documents.",
      {
        file_path: z.string().describe("Path to the image file to analyze (supports screenshots, photos, diagrams, and documents in PNG, JPG, GIF, WebP formats)"),
        resize: z.boolean().default(true).describe("For backward compatibility only. Images are always automatically resized to optimal dimensions (max 512x512) for LLM analysis"),
        max_width: z.number().default(512).describe("For backward compatibility only. Default maximum width is now 512px"),
        max_height: z.number().default(512).describe("For backward compatibility only. Default maximum height is now 512px")
      },
      async (args, extra) => {
        const result = await extractImageFromFile(args);
        return result;
      }
    );
  • Supporting helper function for compressing images using sharp based on detected format.
    async function compressImage(imageBuffer: Buffer, formatStr: string): Promise<Buffer> {
      const sharpInstance = sharp(imageBuffer);
      const format = formatStr.toLowerCase() as SupportedFormat;
      
      // Check if format is supported
      if (format in COMPRESSION_OPTIONS) {
        const options = COMPRESSION_OPTIONS[format];
        
        // Use specific methods based on format
        switch (format) {
          case 'jpeg':
          case 'jpg':
            return await sharpInstance.jpeg(options as any).toBuffer();
          case 'png':
            return await sharpInstance.png(options as any).toBuffer();
          case 'webp':
            return await sharpInstance.webp(options as any).toBuffer();
          case 'avif':
            return await sharpInstance.avif(options as any).toBuffer();
          case 'tiff':
            return await sharpInstance.tiff(options as any).toBuffer();
          // For formats without specific compression options
          case 'gif':
          case 'svg':
            return await sharpInstance.toBuffer();
        }
      }
      
      // Default to jpeg if format not supported
      return await sharpInstance.jpeg(COMPRESSION_OPTIONS.jpeg as any).toBuffer();
    }
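The extension-to-MIME if/else chain in the handler can be expressed more compactly as a lookup table with the same image/jpeg fallback; a sketch (the EXT_TO_MIME table and lookupMime helper are invented names, not part of the server):

```typescript
// Lookup table equivalent to the if/else chain in extractImageFromFile,
// falling back to image/jpeg for unrecognized extensions.
const EXT_TO_MIME: Record<string, { mimeType: string; format: string }> = {
  ".png": { mimeType: "image/png", format: "png" },
  ".jpg": { mimeType: "image/jpeg", format: "jpeg" },
  ".jpeg": { mimeType: "image/jpeg", format: "jpeg" },
  ".gif": { mimeType: "image/gif", format: "gif" },
  ".webp": { mimeType: "image/webp", format: "webp" },
  ".svg": { mimeType: "image/svg+xml", format: "svg" },
  ".avif": { mimeType: "image/avif", format: "avif" },
};

function lookupMime(fileExt: string): { mimeType: string; format: string } {
  return EXT_TO_MIME[fileExt.toLowerCase()] ?? { mimeType: "image/jpeg", format: "jpeg" };
}

console.log(lookupMime(".WEBP").mimeType); // image/webp
console.log(lookupMime(".bmp").mimeType);  // fallback: image/jpeg
```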
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses analysis capabilities (visual understanding, OCR, object recognition) and supported file types, but doesn't mention performance characteristics, rate limits, authentication needs, error conditions, or output format. It provides basic behavioral context but lacks operational detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence states the core purpose and scope. The second sentence elaborates on capabilities and supported content types. Every word serves a purpose, and the description is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 4 parameters and 100% schema coverage, but no annotations or output schema, the description provides good purpose and usage context. However, it lacks information about what the tool returns (output format), error handling, and operational constraints. Given the absence of an output schema, more detail about return values would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete parameter documentation. The description adds value by mentioning supported file types (PNG, JPG, GIF, WebP) and analysis capabilities, which helps contextualize the file_path parameter. However, it doesn't provide additional semantic context beyond what the schema already documents well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('extract and analyze images'), the resource ('from local file paths'), and distinguishes from siblings by specifying 'local file paths' (vs. base64 or URL sources). It lists supported analysis types (visual content understanding, OCR, object recognition) and file types, providing comprehensive purpose differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly indicates when to use this tool vs. alternatives by specifying 'from local file paths' and listing supported file types/formats. This clearly distinguishes it from sibling tools extract_image_from_base64 and extract_image_from_url, providing perfect contextual guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ifmelate/mcp-image-extractor'

If you have feedback or need assistance with the MCP directory API, please join our Discord server