
Gemini MCP Server

by dakrin

generateWithGemini

Generate text content using Google's Gemini 2.5 Pro AI model with configurable parameters like temperature, token limits, safe mode, and search integration.

Instructions

Generate content with Gemini 2.5 Pro Experimental (beta API)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The prompt to send to Gemini | — |
| temperature | No | Temperature (0.0 to 1.0) | 0.9 |
| maxTokens | No | Maximum output tokens | 32000 |
| safeMode | No | Enable safe mode for sensitive topics | false |
| useSearch | No | Enable Google Search grounding tool | false |
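As a rough sketch (not part of the server's source), the optional parameters are resolved through destructuring defaults in the handler. The helper below is hypothetical; it only mirrors the default values visible in the handler code (temperature 0.9, maxTokens 32000, safeMode and useSearch false):

```typescript
// Mirrors the server's GenerationParams interface.
interface GenerationParams {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  safeMode?: boolean;
  useSearch?: boolean;
}

// Hypothetical helper reproducing the handler's destructuring defaults.
function withDefaults(args: GenerationParams): Required<GenerationParams> {
  const {
    prompt,
    temperature = 0.9,  // handler default
    maxTokens = 32000,  // handler default
    safeMode = false,
    useSearch = false,
  } = args;
  return { prompt, temperature, maxTokens, safeMode, useSearch };
}

// Only `prompt` is required; everything else falls back to the defaults.
const resolved = withDefaults({ prompt: "Explain grounding in two sentences." });
console.log(resolved.temperature, resolved.maxTokens);
```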

Implementation Reference

  • The core handler function that executes the tool. It builds the request to the Gemini 2.5 Pro Experimental beta API, attaches the optional Google Search grounding tool, fetches and parses the response, logs extensively, saves the raw and formatted responses to files under /tmp/gemini-responses for verification, appends token-usage and search-query summaries, and returns the result as a markdown-formatted text content block.
```typescript
async ({ prompt, temperature = 0.9, maxTokens = 32000, safeMode = false, useSearch = false }: GenerationParams, extra) => {
  console.log("Generating with Gemini 2.5 Pro, prompt:", prompt);
  try {
    log("Sending request to beta API: gemini-2.5-pro-exp-03-25");
    // Create request body with optional search tool
    const requestBody: RequestBody = {
      contents: [{ role: "user", parts: [{ text: prompt }] }],
      generationConfig: {
        temperature: temperature,
        topP: 1,
        topK: 64,
        maxOutputTokens: maxTokens
      }
    };
    // Add Google Search grounding tool if requested
    if (useSearch) {
      console.log("Adding Google Search grounding tool to request");
      requestBody.tools = [
        { googleSearch: {} } // Empty config means use default settings
      ];
    }
    // Enhanced logging for debugging
    console.log(`Request to beta API (${new Date().toISOString()}): ${betaApiEndpoint.substring(0, 100)}...`);
    try {
      // Use the non-streaming API
      const response = await fetch(betaApiEndpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(requestBody)
      });
      console.log(`Response received (${new Date().toISOString()}), status: ${response.status}`);
      if (response.ok) {
        // Capture the full raw response text for debugging and to ensure nothing is lost
        const rawResponseText = await response.text();
        console.log(`Raw API response length: ${rawResponseText.length} characters`);
        // Save the raw response to a file for verification
        try {
          const fs = await import('fs');
          const responseDir = '/tmp/gemini-responses';
          if (!fs.existsSync(responseDir)) {
            fs.mkdirSync(responseDir, { recursive: true });
          }
          const timestamp = new Date().toISOString().replace(/:/g, '-');
          const rawFilename = `${responseDir}/raw-response-${timestamp}.json`;
          fs.writeFileSync(rawFilename, rawResponseText);
          console.log(`Raw API response saved to ${rawFilename}`);
        } catch (fsError) {
          console.error("Error saving raw response to file:", fsError);
        }
        if (DEBUG) {
          console.log("First 1000 chars of raw response:", rawResponseText.substring(0, 1000));
        }
        // Parse the JSON from the raw text to avoid any automatic processing/truncation
        const data = JSON.parse(rawResponseText);
        // Log response structure for debugging
        if (DEBUG) {
          console.log("API response structure:", JSON.stringify(data).substring(0, 1000) +
            (JSON.stringify(data).length > 1000 ? "..." : ""));
        }
        if (data.candidates && data.candidates[0]?.content?.parts) {
          // Extract and log the full API response for debugging
          console.log("Full API response structure:", JSON.stringify(Object.keys(data)).substring(0, 500));
          if (DEBUG) {
            console.log("First candidate structure:", JSON.stringify(Object.keys(data.candidates[0])).substring(0, 500));
            console.log("Content parts structure:", JSON.stringify(data.candidates[0].content.parts).substring(0, 500));
          }
          // Ensure we correctly extract all text content
          let text = "";
          // Process all parts that might contain text
          for (const part of data.candidates[0].content.parts) {
            if (part.text) {
              text += part.text;
            }
          }
          log("Success with Gemini 2.5 Pro Experimental!");
          console.log("USING MODEL: gemini-2.5-pro-exp-03-25");
          // Create token usage information if available
          let tokenInfo = "";
          if (data.usageMetadata) {
            const { promptTokenCount, candidatesTokenCount, totalTokenCount } = data.usageMetadata;
            tokenInfo = `\n\n[Token usage: ${promptTokenCount} prompt, ${candidatesTokenCount || 0} response, ${totalTokenCount} total]`;
          }
          // Create search grounding data if available
          let searchInfo = "";
          if (useSearch && data.candidates[0].groundingMetadata?.webSearchQueries) {
            const searchQueries = data.candidates[0].groundingMetadata.webSearchQueries;
            searchInfo = `\n\n[Search queries: ${searchQueries.join(", ")}]`;
          }
          // Check text length for debugging
          console.log(`Response text length: ${text.length} characters`);
          const fullText = text + tokenInfo + searchInfo;
          // Save the full response to a file for verification
          let savedFilename = "";
          try {
            const fs = await import('fs');
            const responseDir = '/tmp/gemini-responses';
            // Create directory if it doesn't exist
            if (!fs.existsSync(responseDir)) {
              fs.mkdirSync(responseDir, { recursive: true });
            }
            // Save the full response to a timestamped file
            const timestamp = new Date().toISOString().replace(/:/g, '-');
            savedFilename = `${responseDir}/response-${timestamp}.txt`;
            fs.writeFileSync(savedFilename, fullText);
            console.log(`Full response saved to ${savedFilename}`);
            // Also save metadata to a JSON file
            const metadataFilename = `${responseDir}/metadata-${timestamp}.json`;
            fs.writeFileSync(metadataFilename, JSON.stringify({
              responseLength: fullText.length,
              promptLength: prompt.length,
              useSearch: useSearch,
              timestamp: new Date().toISOString(),
              tokenInfo: data.usageMetadata || null,
              searchQueries: useSearch ? (data.candidates[0]?.groundingMetadata?.webSearchQueries || null) : null
            }, null, 2));
          } catch (fsError) {
            console.error("Error saving response to file:", fsError);
          }
          // Include the file path info in the response
          const fileInfo = savedFilename ? `\n\n[Complete response saved to: ${savedFilename}]` : "";
          // Return the full text directly without chunking
          // This gives Claude a chance to display the entire response if it can
          // Also includes the file path as a backup
          console.log(`Sending full response (${fullText.length} characters) with file info`);
          return { content: [{ type: "text", text: fullText + fileInfo }] };
        } else {
          console.error("Invalid API response format:", JSON.stringify(data).substring(0, 500));
          throw new Error("Invalid response format from beta API");
        }
      } else {
        // Try to parse error response
        let errorMessage = `HTTP error ${response.status}`;
        try {
          const errorData = await response.json();
          console.error("API error response:", JSON.stringify(errorData).substring(0, 500));
          errorMessage = `API error: ${JSON.stringify(errorData)}`;
        } catch (e) {
          // If we can't parse JSON, use text
          const errorText = await response.text();
          console.error("API error text:", errorText.substring(0, 500));
          errorMessage = `API error: ${errorText}`;
        }
        throw new Error(errorMessage);
      }
    } catch (fetchError: any) {
      console.error("Fetch error:", fetchError.name, fetchError.message);
      throw fetchError; // Re-throw to be caught by outer catch
    }
  } catch (error: any) {
    // Handle the case where error is undefined
    if (!error) {
      console.error("Undefined error caught in Gemini handler");
      return {
        content: [{ type: "text", text: "The model took too long to respond or the connection was interrupted. This sometimes happens with complex topics. Please try again or rephrase your question." }],
        isError: true
      };
    }
    // Normal error handling for defined errors
    console.error("Error with Gemini 2.5 Pro:",
      error.name || "UnknownError",
      error.message || "No message",
      error.stack || "No stack"
    );
    return {
      content: [{ type: "text", text: `Error using Gemini 2.5 Pro Experimental: ${error.name || 'Unknown'} - ${error.message || 'No error message'}` }],
      isError: true
    };
  }
}
```
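The text-extraction step in the handler can be sketched in isolation. The stand-alone function below is a minimal illustration (not part of the server's source), assuming a parsed response shaped like the beta API's candidates/content/parts structure; the sample `data` object is illustrative, not a real API response:

```typescript
// Concatenate the text of every part in the first candidate,
// skipping parts that carry no text (mirrors the handler's loop).
interface Part { text?: string }
interface ParsedResponse {
  candidates?: Array<{ content?: { parts?: Part[] } }>;
}

function extractText(data: ParsedResponse): string {
  const parts = data.candidates?.[0]?.content?.parts;
  if (!parts) return "";
  let text = "";
  for (const part of parts) {
    if (part.text) text += part.text;
  }
  return text;
}

// Illustrative payload only.
const sample: ParsedResponse = {
  candidates: [{ content: { parts: [{ text: "Hello, " }, {}, { text: "world." }] } }],
};
console.log(extractText(sample)); // "Hello, world."
```

Iterating over all parts rather than reading only `parts[0].text` matters because the API may split a long answer across several parts.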
  • Zod input schema validation for the generateWithGemini tool parameters.
```typescript
{
  prompt: z.string().describe("The prompt to send to Gemini"),
  temperature: z.number().optional().describe("Temperature (0.0 to 1.0)"),
  maxTokens: z.number().optional().describe("Maximum output tokens"),
  safeMode: z.boolean().optional().describe("Enable safe mode for sensitive topics"),
  useSearch: z.boolean().optional().describe("Enable Google Search grounding tool")
}
```
  • src/index.ts:84-307 (registration)
    Registration of the generateWithGemini tool via the McpServer.tool() method, passing the name, description, input schema, and the inline handler function. The handler body is identical to the core handler listed above, so it is elided here:
```typescript
server.tool(
  "generateWithGemini",
  "Generate content with Gemini 2.5 Pro Experimental (beta API)",
  {
    prompt: z.string().describe("The prompt to send to Gemini"),
    temperature: z.number().optional().describe("Temperature (0.0 to 1.0)"),
    maxTokens: z.number().optional().describe("Maximum output tokens"),
    safeMode: z.boolean().optional().describe("Enable safe mode for sensitive topics"),
    useSearch: z.boolean().optional().describe("Enable Google Search grounding tool")
  },
  async ({ prompt, temperature = 0.9, maxTokens = 32000, safeMode = false, useSearch = false }: GenerationParams, extra) => {
    // ...inline handler body, identical to the core handler above...
  }
);
```
  • TypeScript interface defining the input parameters for the generateWithGemini handler, matching the Zod schema.
```typescript
export interface GenerationParams {
  prompt: string;
  temperature?: number;
  maxTokens?: number;
  safeMode?: boolean;
  useSearch?: boolean;
}
```
  • TypeScript interface for the JSON request body sent to the Gemini beta API, including optional tools for search grounding.
```typescript
export interface RequestBody {
  contents: Array<{
    role: string;
    parts: Array<{ text: string }>;
  }>;
  generationConfig: {
    temperature: number;
    topP: number;
    topK: number;
    maxOutputTokens: number;
  };
  tools?: Array<{ googleSearch?: Record<string, unknown> }>;
}
```
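To illustrate how the optional `tools` field comes into play, here is a small sketch (an assumed helper, not present in the source) that assembles a `RequestBody` the way the handler does, attaching the empty `googleSearch` config only when search grounding is requested; the interface is redeclared locally so the snippet is self-contained:

```typescript
// Local copy of the server's RequestBody interface.
interface RequestBody {
  contents: Array<{ role: string; parts: Array<{ text: string }> }>;
  generationConfig: { temperature: number; topP: number; topK: number; maxOutputTokens: number };
  tools?: Array<{ googleSearch?: Record<string, unknown> }>;
}

// Hypothetical helper mirroring the handler's request construction.
function buildRequestBody(prompt: string, useSearch: boolean): RequestBody {
  const body: RequestBody = {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
    generationConfig: { temperature: 0.9, topP: 1, topK: 64, maxOutputTokens: 32000 },
  };
  if (useSearch) {
    // Empty config means use the search tool's default settings.
    body.tools = [{ googleSearch: {} }];
  }
  return body;
}

console.log(JSON.stringify(buildRequestBody("hi", true)));
```

Leaving `tools` undefined when search is off (rather than sending an empty array) keeps the serialized request free of the field entirely.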
