
Jina AI Remote MCP Server

Official
by jina-ai

search_images

Find images across the web to illustrate concepts, locate specific pictures, or discover visual resources. Returns images as base64-encoded JPEGs or URLs with metadata.

Instructions

Search for images across the web, similar to Google Images. Use this when you need to find photos, illustrations, diagrams, charts, logos, or any visual content. Perfect for finding images to illustrate concepts, locating specific pictures, or discovering visual resources. Images are returned by default as small base64-encoded JPEG images.

Input Schema

  • query (required): Image search terms describing what you want to find (e.g., 'sunset over mountains', 'vintage car illustration', 'data visualization chart')
  • return_url (optional, default: false): Set to true to return image URLs, titles, shapes, and other metadata. By default, images are downloaded and returned as base64-encoded rendered images.
  • tbs (optional): Time-based search parameter, e.g., 'qdr:h' for past hour; one of qdr:h, qdr:d, qdr:w, qdr:m, qdr:y
  • location (optional): Location for search results, e.g., 'London', 'New York', 'Tokyo'
  • gl (optional): Country code, e.g., 'dz' for Algeria
  • hl (optional): Language code, e.g., 'zh-cn' for Simplified Chinese
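
Assuming a standard MCP client speaking JSON-RPC, a tools/call invocation of search_images against this schema might look like the following sketch; all argument values here are illustrative:

```typescript
// Illustrative sketch of the JSON-RPC payload an MCP client would send to
// invoke search_images. The argument values are made up for this example.
const callRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search_images",
    arguments: {
      query: "data visualization chart",
      return_url: true, // ask for URL/metadata text items instead of base64 images
      tbs: "qdr:w",     // restrict results to the past week
      gl: "us",
      hl: "en",
    },
  },
};

console.log(callRequest.params.arguments.query);
```

With return_url set, the response content will be a list of text items (YAML-serialized result metadata) rather than image items.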

Implementation Reference

  • Primary MCP tool handler for 'search_images': checks the bearer token, calls the executeImageSearch helper, optionally downloads and resizes images using the downloadImages utility, and formats the response as MCP content (text metadata or base64-encoded images).
    async ({ query, return_url, tbs, location, gl, hl }: SearchImageArgs) => {
    	try {
    		const props = getProps();
    
    		const tokenError = checkBearerToken(props.bearerToken);
    		if (tokenError) {
    			return tokenError;
    		}
    
    		const searchResult = await executeImageSearch({ query, return_url, tbs, location, gl, hl }, props.bearerToken);
    
    		if ('error' in searchResult) {
    			return createErrorResponse(searchResult.error);
    		}
    
    		const data = { results: searchResult.results };
    
    		// Prepare response content - always return as list structure for consistency
    		const contentItems: Array<{ type: 'text'; text: string } | { type: 'image'; data: string; mimeType: string }> = [];
    
    		if (return_url) {
    			// Return each result as individual text items
    			if (data.results && Array.isArray(data.results)) {
    				for (const result of data.results) {
    					contentItems.push({
    						type: "text" as const,
    						text: yamlStringify(result),
    					});
    				}
    			}
    		} else {
    			// Extract image URLs from search results
    			const imageUrls: string[] = [];
    			if (data.results && Array.isArray(data.results)) {
    				for (const result of data.results) {
    					if (result.imageUrl) {
    						imageUrls.push(result.imageUrl);
    					}
    				}
    			}
    
    			if (imageUrls.length === 0) {
    				throw new Error("No image URLs found in search results");
    			}
    
    			// Download and process images (resize to max 800px, convert to JPEG)
    			// 15 second timeout - returns partial results if timeout occurs
    			const downloadResults = await downloadImages(imageUrls, 3, 15000);
    
    			// Add successful downloads as images
    			for (const result of downloadResults) {
    				if (result.success && result.data) {
    					contentItems.push({
    						type: "image" as const,
    						data: result.data,
    						mimeType: result.mimeType,
    					});
    				}
    			}
    		}
    
    		return {
    			content: contentItems,
    		};
    	} catch (error) {
    		return createErrorResponse(`Error: ${error instanceof Error ? error.message : String(error)}`);
    	}
    },
  • Zod schema defining input parameters for search_images tool: query (required string), return_url (boolean default false), optional tbs/location/gl/hl for search refinement.
    {
    	query: z.string().describe("Image search terms describing what you want to find (e.g., 'sunset over mountains', 'vintage car illustration', 'data visualization chart')"),
    	return_url: z.boolean().default(false).describe("Set to true to return image URLs, title, shapes, and other metadata. By default, images are downloaded as base64 and returned as rendered images."),
    	tbs: z.string().optional().describe("Time-based search parameter, e.g., 'qdr:h' for past hour, can be qdr:h, qdr:d, qdr:w, qdr:m, qdr:y"),
    	location: z.string().optional().describe("Location for search results, e.g., 'London', 'New York', 'Tokyo'"),
    	gl: z.string().optional().describe("Country code, e.g., 'dz' for Algeria"),
    	hl: z.string().optional().describe("Language code, e.g., 'zh-cn' for Simplified Chinese")
    },
  • Registration of 'search_images' tool in registerJinaTools function using McpServer.tool(), conditionally enabled via isToolEnabled check.
    if (isToolEnabled("search_images")) {
    	server.tool(
    		"search_images",
    		"Search for images across the web, similar to Google Images. Use this when you need to find photos, illustrations, diagrams, charts, logos, or any visual content. Perfect for finding images to illustrate concepts, locating specific pictures, or discovering visual resources. Images are returned by default as small base64-encoded JPEG images.",
    		{
    			query: z.string().describe("Image search terms describing what you want to find (e.g., 'sunset over mountains', 'vintage car illustration', 'data visualization chart')"),
    			return_url: z.boolean().default(false).describe("Set to true to return image URLs, title, shapes, and other metadata. By default, images are downloaded as base64 and returned as rendered images."),
    			tbs: z.string().optional().describe("Time-based search parameter, e.g., 'qdr:h' for past hour, can be qdr:h, qdr:d, qdr:w, qdr:m, qdr:y"),
    			location: z.string().optional().describe("Location for search results, e.g., 'London', 'New York', 'Tokyo'"),
    			gl: z.string().optional().describe("Country code, e.g., 'dz' for Algeria"),
    			hl: z.string().optional().describe("Language code, e.g., 'zh-cn' for Simplified Chinese")
    		},
    		async ({ query, return_url, tbs, location, gl, hl }: SearchImageArgs) => {
    			try {
    				const props = getProps();
    
    				const tokenError = checkBearerToken(props.bearerToken);
    				if (tokenError) {
    					return tokenError;
    				}
    
    				const searchResult = await executeImageSearch({ query, return_url, tbs, location, gl, hl }, props.bearerToken);
    
    				if ('error' in searchResult) {
    					return createErrorResponse(searchResult.error);
    				}
    
    				const data = { results: searchResult.results };
    
    				// Prepare response content - always return as list structure for consistency
    				const contentItems: Array<{ type: 'text'; text: string } | { type: 'image'; data: string; mimeType: string }> = [];
    
    				if (return_url) {
    					// Return each result as individual text items
    					if (data.results && Array.isArray(data.results)) {
    						for (const result of data.results) {
    							contentItems.push({
    								type: "text" as const,
    								text: yamlStringify(result),
    							});
    						}
    					}
    				} else {
    					// Extract image URLs from search results
    					const imageUrls: string[] = [];
    					if (data.results && Array.isArray(data.results)) {
    						for (const result of data.results) {
    							if (result.imageUrl) {
    								imageUrls.push(result.imageUrl);
    							}
    						}
    					}
    
    					if (imageUrls.length === 0) {
    						throw new Error("No image URLs found in search results");
    					}
    
    					// Download and process images (resize to max 800px, convert to JPEG)
    					// 15 second timeout - returns partial results if timeout occurs
    					const downloadResults = await downloadImages(imageUrls, 3, 15000);
    
    					// Add successful downloads as images
    					for (const result of downloadResults) {
    						if (result.success && result.data) {
    							contentItems.push({
    								type: "image" as const,
    								data: result.data,
    								mimeType: result.mimeType,
    							});
    						}
    					}
    				}
    
    				return {
    					content: contentItems,
    				};
    			} catch (error) {
    				return createErrorResponse(`Error: ${error instanceof Error ? error.message : String(error)}`);
    			}
    		},
    	);
    }
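
The isToolEnabled gate is referenced but not shown in this listing. A plausible sketch, assuming it consults an optional allow-list (the allow-list source is an assumption, not the confirmed implementation):

```typescript
// Hypothetical sketch of an isToolEnabled gate. The real helper is not shown
// in the reference above; consulting an allow-list is an assumption here.
function isToolEnabled(name: string, allowList?: string[]): boolean {
  // With no allow-list configured, every tool is registered.
  if (!allowList || allowList.length === 0) return true;
  return allowList.includes(name);
}

console.log(isToolEnabled("search_images"));               // no list configured
console.log(isToolEnabled("search_images", ["read_url"])); // filtered out
```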
  • Core helper function executeImageSearch: performs an HTTP POST to the svip.jina.ai search endpoint with type='images', handles HTTP and network errors, and returns either search results or an error object.
    export async function executeImageSearch(
        searchArgs: SearchImageArgs,
        bearerToken: string
    ): Promise<SearchResultOrError> {
        try {
            const response = await fetch('https://svip.jina.ai/', {
                method: 'POST',
                headers: {
                    'Accept': 'application/json',
                    'Content-Type': 'application/json',
                    'Authorization': `Bearer ${bearerToken}`,
                },
                body: JSON.stringify({
                    q: searchArgs.query,
                    type: 'images',
                    ...(searchArgs.tbs && { tbs: searchArgs.tbs }),
                    ...(searchArgs.location && { location: searchArgs.location }),
                    ...(searchArgs.gl && { gl: searchArgs.gl }),
                    ...(searchArgs.hl && { hl: searchArgs.hl })
                }),
            });
    
            if (!response.ok) {
                return { error: `Image search failed for query "${searchArgs.query}": ${response.statusText}` };
            }
    
            const data = await response.json() as any;
            return { query: searchArgs.query, results: data.results || [] };
        } catch (error) {
            return { error: `Image search failed for query "${searchArgs.query}": ${error instanceof Error ? error.message : String(error)}` };
        }
    }
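
The conditional spreads in the request body above only include an optional field when it is set, so absent parameters are omitted from the JSON entirely rather than sent as null or undefined. The pattern can be isolated in a minimal sketch:

```typescript
// Minimal sketch of the conditional-spread pattern used above: optional
// fields are omitted from the body entirely when not provided.
function buildSearchBody(args: { query: string; tbs?: string; gl?: string; hl?: string }) {
  return {
    q: args.query,
    type: "images",
    ...(args.tbs && { tbs: args.tbs }),
    ...(args.gl && { gl: args.gl }),
    ...(args.hl && { hl: args.hl }),
  };
}

const body = buildSearchBody({ query: "vintage car illustration", gl: "us" });
console.log("tbs" in body); // tbs was not provided, so the key is absent
```

Spreading a falsy value (`...undefined`) contributes no properties, which is what makes the `...(x && { x })` idiom work.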
  • TypeScript interface SearchImageArgs defining typed inputs for image search, imported into jina-tools.ts handler.
    export interface SearchImageArgs {
        query: string;
        return_url?: boolean;
        tbs?: string;
        location?: string;
        gl?: string;
        hl?: string;
    }
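
The call downloadImages(imageUrls, 3, 15000) in the handler suggests a concurrency limit of 3 and a 15-second budget, but the helper itself is not shown. A generic concurrency-limited mapper, offered only as a sketch of one plausible approach (the real helper's timeout and partial-result handling are not reproduced here):

```typescript
// Hypothetical sketch of concurrency-limited mapping, as downloadImages
// appears to do. The real helper, including its timeout handling, is not
// shown in the reference above.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker claims the next unprocessed index until the queue drains.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Stub task standing in for an image download.
mapWithConcurrency(["a", "b", "c", "d"], 2, async (s) => s.toUpperCase())
  .then((out) => console.log(out.join(",")));
```

Results are written by index, so output order matches input order even though completions interleave.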
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the search scope ('across the web'), the default return format ('small base64-encoded JPEG images'), and the alternative return option ('return image URLs, title, shapes, and other metadata'). It doesn't mention rate limits, authentication needs, or pagination behavior, but covers the essential operational characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences that each serve a distinct purpose: stating the tool's function, providing usage guidelines, and describing return behavior. It's front-loaded with the core purpose and avoids unnecessary repetition. The final sentence about default return format could be slightly more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 6 parameters (1 required), 100% schema coverage, and no output schema, the description provides good contextual completeness. It covers the tool's purpose, usage scenarios, and key behavioral characteristics. The main gap is the lack of information about response format details beyond the base64/URL distinction, but given the schema coverage and tool complexity, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description mentions the default return behavior (base64 images vs URLs) which relates to the 'return_url' parameter, but doesn't add significant semantic value beyond what's already in the schema descriptions. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Search for images across the web') and distinguishes it from siblings by specifying it's for visual content like photos, illustrations, diagrams, etc. It explicitly mentions it's 'similar to Google Images' which provides clear context about its function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this when you need to find photos, illustrations, diagrams, charts, logos, or any visual content.' It gives three specific use cases: illustrating concepts, locating specific pictures, or discovering visual resources. This clearly distinguishes it from text-based search siblings like search_web or search_arxiv.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jina-ai/MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.