
generateImageFromReference

Create new images from existing reference images, applying text-described transformations (e.g., a cartoon version or a painting style) through configurable AI models.

Instructions

Generate a new image using an existing image as reference. User-configured settings in MCP config will be used as defaults unless specifically overridden.

Input Schema

| Name | Required | Description |
| --- | --- | --- |
| prompt | Yes | The text description of what to generate based on the reference image (e.g., "create a cartoon version", "make it look like a painting") |
| imageUrl | Yes | Public HTTP(S) URL(s) of reference images. Accepts a string or an array for multi-reference. Local file paths, file uploads, or base64/data URLs are not supported. |
| model | No | Model name to use for generation (default: user config or "kontext"). Available: "kontext", "nanobanana", "seedream" |
| seed | No | Seed for reproducible results (default: random) |
| width | No | Width of the generated image (default: 1024) |
| height | No | Height of the generated image (default: 1024) |
| enhance | No | Whether to enhance the prompt using an LLM before generating (default: true) |
| safe | No | Whether to apply content filtering (default: false) |
| outputPath | No | Directory path where to save the image (default: user config or "./mcpollinations-output") |
| fileName | No | Name of the file to save (without extension; default: generated from prompt) |
| format | No | Image format to save as (png, jpeg, jpg, webp; default: png) |
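For orientation, a minimal tool-call arguments payload might look like the following (the reference URL and values are illustrative, not defaults):

```json
{
  "prompt": "make it look like a watercolor painting",
  "imageUrl": "https://example.com/photo.jpg",
  "model": "kontext",
  "seed": 42,
  "format": "png"
}
```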

Implementation Reference

  • The core handler function that takes a prompt and reference image URL(s), constructs a Pollinations API URL for image-to-image generation, fetches the image, encodes it as base64, saves it to a file in outputPath, and returns the image data with metadata.
    export async function generateImageFromReference(prompt, imageUrl, model = 'kontext', seed = Math.floor(Math.random() * 1000000), width = 1024, height = 1024, enhance = true, safe = false, outputPath = './mcpollinations-output', fileName = '', format = 'png', authConfig = null) {
      if (!prompt || typeof prompt !== 'string') {
        throw new Error('Prompt is required and must be a string');
      }
    
      if (!imageUrl || (typeof imageUrl !== 'string' && !Array.isArray(imageUrl))) {
        throw new Error('Reference image URL(s) are required and must be a string or array of strings');
      }
    
      const imageList = Array.isArray(imageUrl)
        ? imageUrl.filter(Boolean)
        : (typeof imageUrl === 'string' && imageUrl.includes(','))
          ? imageUrl.split(',').map(s => s.trim()).filter(Boolean)
          : [imageUrl];
    
      // Build the query parameters
      const queryParams = new URLSearchParams();
      queryParams.append('model', model);
      for (const u of imageList) {
        queryParams.append('image', u);
      }
      if (seed !== undefined) queryParams.append('seed', seed);
      if (width !== 1024) queryParams.append('width', width);
      if (height !== 1024) queryParams.append('height', height);
    
      // Add enhance parameter if true
      if (enhance) queryParams.append('enhance', 'true');
    
      // Add parameters
      queryParams.append('nologo', 'true'); // Always set nologo to true
      queryParams.append('private', 'true'); // Always set private to true
      queryParams.append('safe', safe.toString()); // Use the customizable safe parameter
    
      // Construct the URL
      const encodedPrompt = encodeURIComponent(prompt);
      const baseUrl = 'https://image.pollinations.ai';
      let url = `${baseUrl}/prompt/${encodedPrompt}`;
    
      // Add query parameters
      const queryString = queryParams.toString();
      url += `?${queryString}`;
    
      try {
        // Prepare fetch options with optional auth headers
        const fetchOptions = {};
        if (authConfig) {
          fetchOptions.headers = {};
          if (authConfig.token) {
            fetchOptions.headers['Authorization'] = `Bearer ${authConfig.token}`;
          }
          if (authConfig.referrer) {
            fetchOptions.headers['Referer'] = authConfig.referrer;
          }
        }
    
        // Fetch the image from the URL
        const response = await fetch(url, fetchOptions);
    
        if (!response.ok) {
          throw new Error(`Failed to generate image from reference: ${response.status} ${response.statusText}`);
        }
    
        // Get the image data as an ArrayBuffer
        const imageBuffer = await response.arrayBuffer();
    
        // Convert the ArrayBuffer to a base64 string
        const base64Data = Buffer.from(imageBuffer).toString('base64');
    
        // Determine the mime type from the response headers or default to image/jpeg
        const contentType = response.headers.get('content-type') || 'image/jpeg';
    
        // Prepare the result object
        const result = {
          data: base64Data,
          mimeType: contentType,
          metadata: {
            prompt,
            referenceImageUrl: imageUrl,
            width,
            height,
            model,
            seed,
            enhance,
            private: true,
            nologo: true,
            safe
          }
        };
    
        // Always save the image to a file
        // Import required modules
        const fs = await import('fs');
        const path = await import('path');
    
        // Create the output directory if it doesn't exist
        if (!fs.existsSync(outputPath)) {
          fs.mkdirSync(outputPath, { recursive: true });
        }
    
        // Generate a filename if not provided
        let finalFileName = fileName;
        if (!finalFileName) {
          // Create a filename from the prompt (first 20 characters) and timestamp
          const sanitizedPrompt = prompt.replace(/[^a-zA-Z0-9]/g, '_').substring(0, 20);
          const timestamp = Date.now();
          const randomSuffix = Math.floor(Math.random() * 1000);
          finalFileName = `reference_${sanitizedPrompt}_${timestamp}_${randomSuffix}`;
        }
    
        // Ensure the filename has the correct extension
        const extension = format.toLowerCase();
        if (!finalFileName.endsWith(`.${extension}`)) {
          finalFileName += `.${extension}`;
        }
    
        // Check if file already exists and add a number suffix if needed
        let finalFilePath = path.join(outputPath, finalFileName);
        let counter = 1;
    while (fs.existsSync(finalFilePath)) {
      // Strip the trailing extension (not just the first match) before numbering
      const nameWithoutExt = finalFileName.slice(0, -(extension.length + 1));
      const numberedFileName = `${nameWithoutExt}_${counter}.${extension}`;
      finalFilePath = path.join(outputPath, numberedFileName);
      counter++;
    }
    
        // Write the image data to the file
        fs.writeFileSync(finalFilePath, Buffer.from(base64Data, 'base64'));
    
        // Add the file path to the result
        result.filePath = finalFilePath;
    
        return result;
    
      } catch (error) {
        log('Error generating image from reference:', error);
        throw error;
      }
    }
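The URL construction inside the handler can be isolated into a small helper for testing; the sketch below mirrors the parameter handling above (the function name `buildReferenceUrl` is illustrative, not part of the module):

```javascript
// Sketch of the Pollinations URL construction used by generateImageFromReference.
// Normalizes imageUrl (string, comma-separated string, or array) and appends
// the same query parameters as the handler above.
function buildReferenceUrl(prompt, imageUrl, { model = 'kontext', seed, width = 1024, height = 1024, enhance = true, safe = false } = {}) {
  const imageList = Array.isArray(imageUrl)
    ? imageUrl.filter(Boolean)
    : imageUrl.includes(',')
      ? imageUrl.split(',').map(s => s.trim()).filter(Boolean)
      : [imageUrl];

  const queryParams = new URLSearchParams();
  queryParams.append('model', model);
  for (const u of imageList) queryParams.append('image', u);
  if (seed !== undefined) queryParams.append('seed', seed);
  if (width !== 1024) queryParams.append('width', width);
  if (height !== 1024) queryParams.append('height', height);
  if (enhance) queryParams.append('enhance', 'true');
  queryParams.append('nologo', 'true');   // always suppress the logo
  queryParams.append('private', 'true');  // always keep the generation private
  queryParams.append('safe', safe.toString());

  return `https://image.pollinations.ai/prompt/${encodeURIComponent(prompt)}?${queryParams}`;
}
```

Note that `width` and `height` are only appended when they differ from 1024, so default-sized requests produce a shorter URL than explicit overrides.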
  • JSON schema defining the input parameters and validation for the generateImageFromReference tool.
    export const generateImageFromReferenceSchema = {
      name: 'generateImageFromReference',
      description: 'Generate a new image using an existing image as reference. User-configured settings in MCP config will be used as defaults unless specifically overridden.',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'The text description of what to generate based on the reference image (e.g., "create a cartoon version", "make it look like a painting")'
          },
          imageUrl: {
            oneOf: [
              { type: 'string' },
              { type: 'array', items: { type: 'string' } }
            ],
            description: 'Public HTTP(S) URL(s) of reference images. Accepts a string or an array for multi-reference. Local file paths, file uploads, or base64/data URLs are not supported.'
          },
          model: {
            type: 'string',
            description: 'Model name to use for generation (default: user config or "kontext"). Available: "kontext", "nanobanana", "seedream"'
          },
          seed: {
            type: 'number',
            description: 'Seed for reproducible results (default: random)'
          },
          width: {
            type: 'number',
            description: 'Width of the generated image (default: 1024)'
          },
          height: {
            type: 'number',
            description: 'Height of the generated image (default: 1024)'
          },
          enhance: {
            type: 'boolean',
            description: 'Whether to enhance the prompt using an LLM before generating (default: true)'
          },
          safe: {
            type: 'boolean',
            description: 'Whether to apply content filtering (default: false)'
          },
          outputPath: {
            type: 'string',
            description: 'Directory path where to save the image (default: user config or "./mcpollinations-output")'
          },
          fileName: {
            type: 'string',
            description: 'Name of the file to save (without extension, default: generated from prompt)'
          },
          format: {
            type: 'string',
            description: 'Image format to save as (png, jpeg, jpg, webp - default: png)'
          }
        },
        required: ['prompt', 'imageUrl']
      }
    };
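A server could enforce the schema's `required` list and the `imageUrl` oneOf before dispatching. Below is a minimal hand-rolled check; a real implementation would typically use a JSON Schema validator, and `validateArgs` is an illustrative name:

```javascript
// Mirrors the schema above: prompt and imageUrl are required, and imageUrl
// must be a string or an array of strings.
function validateArgs(args) {
  const errors = [];
  for (const key of ['prompt', 'imageUrl']) {
    if (args[key] === undefined) errors.push(`missing required argument: ${key}`);
  }
  const { imageUrl } = args;
  if (imageUrl !== undefined &&
      typeof imageUrl !== 'string' &&
      !(Array.isArray(imageUrl) && imageUrl.every(u => typeof u === 'string'))) {
    errors.push('imageUrl must be a string or an array of strings');
  }
  return errors;
}
```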
  • MCP server tool call dispatcher that handles calls to 'generateImageFromReference', extracts arguments with defaults, invokes the handler function, and formats the response as MCP content with image and text describing the result and file path.
    } else if (name === 'generateImageFromReference') {
      try {
        const { prompt, imageUrl, model = 'kontext', seed, width = defaultConfig.image.width, height = defaultConfig.image.height, enhance = defaultConfig.image.enhance, safe = defaultConfig.image.safe, outputPath = defaultConfig.resources.output_dir, fileName = '', format = 'png' } = args;
        const result = await generateImageFromReference(prompt, imageUrl, model, seed, width, height, enhance, safe, outputPath, fileName, format, finalAuthConfig);
    
        // Prepare the response content
        const content = [
          {
            type: 'image',
            data: result.data,
            mimeType: result.mimeType
          }
        ];
    
        // Prepare the response text
        let responseText = `Generated image from reference: "${prompt}"\nReference image: ${imageUrl}\n\nImage metadata: ${JSON.stringify(result.metadata, null, 2)}`;
    
        // Add file path information if the image was saved to a file
        if (result.filePath) {
          responseText += `\n\nImage saved to: ${result.filePath}`;
        }
    
        content.push({
          type: 'text',
          text: responseText
        });
    
        return { content };
      } catch (error) {
        return {
          content: [
            { type: 'text', text: `Error generating image from reference: ${error.message}` }
          ],
          isError: true
        };
      }
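The file-name collision handling in the handler further up can be sketched in isolation with an injected `exists` predicate, making it testable without touching the filesystem (`uniqueFileName` is an illustrative helper, not part of the project):

```javascript
// Appends .ext if missing, then adds _1, _2, ... until the name is free.
function uniqueFileName(baseName, extension, exists) {
  const fileName = baseName.endsWith(`.${extension}`) ? baseName : `${baseName}.${extension}`;
  const stem = fileName.slice(0, -(extension.length + 1));
  let candidate = fileName;
  let counter = 1;
  while (exists(candidate)) {
    candidate = `${stem}_${counter}.${extension}`;
    counter++;
  }
  return candidate;
}
```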

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions config defaults but fails to describe critical behavioral traits such as whether this is a read-only or destructive operation, what permissions or authentication might be required, rate limits, error handling, or what the output looks like (e.g., file saved locally vs. URL returned). This leaves significant gaps for an AI agent to understand how to invoke it safely and effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in the first sentence. The second sentence adds useful but non-essential context about config defaults. There's no wasted verbiage, and both sentences earn their place by providing distinct information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (11 parameters, no annotations, no output schema), the description is incomplete. It lacks crucial behavioral context (e.g., mutation effects, output format, error handling) and doesn't compensate for the absence of annotations or output schema. For a tool with this many parameters and no structured safety hints, the description should do more to guide usage and set expectations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, providing detailed documentation for all 11 parameters. The description adds minimal value beyond this, only hinting at config defaults. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't significantly enhance parameter understanding beyond what's already in the structured data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate a new image using an existing image as reference.' This specifies both the action (generate) and resource (image from reference). However, it doesn't explicitly differentiate from sibling tools like 'editImage' or 'generateImage', which likely have related but distinct purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance, mentioning only that 'user-configured settings in MCP config will be used as defaults unless specifically overridden.' It offers no explicit guidance on when to use this tool versus alternatives like 'editImage' or 'generateImage', nor does it mention prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
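Taken together, the review points above suggest pairing a fuller description with MCP tool annotations. One possible sketch, where the wording and hint values are suggestions rather than the project's actual metadata:

```json
{
  "description": "Generate a new image using an existing image as reference. Calls the external Pollinations API, saves the result to a local file under outputPath, and returns base64 image data plus the saved file path. Requires publicly reachable HTTP(S) reference URLs. Use generateImage for text-only generation.",
  "annotations": {
    "readOnlyHint": false,
    "destructiveHint": false,
    "idempotentHint": false,
    "openWorldHint": true
  }
}
```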

