ai_generate

Generates text with Google Gemini models from a supplied prompt, with optional model selection and a response token limit.

Instructions

Generate text using Google Gemini. Provide a prompt and optional model name.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Prompt to send to Gemini | — |
| model | No | Gemini model name | models/gemini-2.0-flash-exp |
| maxTokens | No | Maximum tokens in the response | — |
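
A call to this tool uses the standard MCP `tools/call` envelope; the argument values below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ai_generate",
    "arguments": {
      "prompt": "Summarize the MCP specification in one paragraph.",
      "model": "models/gemini-2.0-flash-exp",
      "maxTokens": 256
    }
  }
}
```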

Implementation Reference

  • The main handler function for the 'ai_generate' tool, which calls the Google Gemini API to generate text based on the provided prompt. Note that the schema's default model value is a full resource name (`models/…`), so the handler strips that prefix before appending the model to the `/v1beta/models/` path; otherwise the URL would contain a doubled `models/models%2F…` segment.

    ```typescript
    export default async function aiGenerate({
      prompt,
      model = 'models/gemini-2.0-flash-exp',
      maxTokens,
    }: {
      prompt: string;
      model?: string;
      maxTokens?: number;
    }): Promise<McpResponse> {
      const apiKey = process.env.GEMINI_API_KEY;
      if (!apiKey) {
        logger.error('GEMINI_API_KEY is not set in environment variables');
        return { content: [textContent('Error: Gemini API key not configured.')] };
      }
      try {
        // The model may be given as a full resource name ('models/…');
        // strip the prefix so it is not duplicated in the request path.
        const modelId = model.startsWith('models/') ? model.slice('models/'.length) : model;
        const response = await fetch(
          'https://generativelanguage.googleapis.com/v1beta/models/' +
            encodeURIComponent(modelId) +
            ':generateContent?key=' +
            apiKey,
          {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
              contents: [{ parts: [{ text: prompt }] }],
              ...(maxTokens ? { generationConfig: { maxOutputTokens: maxTokens } } : {}),
            }),
          }
        );
        if (!response.ok) {
          const errorText = await response.text();
          logger.error('Gemini API error', errorText);
          return { content: [textContent('Gemini API error: ' + errorText)] };
        }
        const data = await response.json();
        const text = data.candidates?.[0]?.content?.parts?.[0]?.text || '[No response]';
        return { content: [textContent(text)] };
      } catch (error) {
        logger.error('Failed to call Gemini API', error);
        return {
          content: [
            textContent(
              'Error calling Gemini: ' +
                (error instanceof Error ? error.message : String(error))
            ),
          ],
        };
      }
    }
    ```
  • Input schema for the 'ai_generate' tool using Zod validators for prompt, model, and maxTokens.

    ```typescript
    export const argSchema = {
      prompt: z.string().min(1).describe('Prompt to send to Gemini'),
      model: z
        .string()
        .optional()
        .default('models/gemini-2.0-flash-exp')
        .describe('Gemini model name'),
      maxTokens: z.number().optional().describe('Maximum tokens in the response'),
    };
    ```
  • src/server.ts:115-120 (registration)

    Registration of the 'ai_generate' tool on the MCP server, importing schema and handler from aiGenerate.ts.

    ```typescript
    server.tool(
      'ai_generate',
      'Generate text using Google Gemini. Provide a prompt and optional model name.',
      (await import('./tools/aiGenerate.ts')).argSchema,
      (await import('./tools/aiGenerate.ts')).default
    );
    ```
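
The handler's response parsing relies on optional chaining, so an empty or malformed Gemini response degrades to a fallback marker instead of throwing. That step can be isolated into a small helper for testing; `extractText` below is a hypothetical name, not part of the original source:

```typescript
// Minimal shape of the fields the handler reads from a
// generateContent response (other fields omitted).
interface GeminiResponse {
  candidates?: { content?: { parts?: { text?: string }[] } }[];
}

// Mirrors the handler's extraction: first candidate, first part,
// falling back to '[No response]' when anything along the path is missing.
function extractText(data: GeminiResponse): string {
  return data.candidates?.[0]?.content?.parts?.[0]?.text || '[No response]';
}
```

For example, `extractText({})` returns `'[No response]'` rather than throwing on the missing `candidates` array.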

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ssdeanx/node-code-sandbox-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.