generate_text
Generate coherent and contextually relevant text responses from a prompt. Adjust parameters such as temperature, maxOutputTokens, topK, and topP to control the creativity and length of the output.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| maxOutputTokens | No | Maximum number of tokens to generate (1–8192). | 8192 |
| prompt | Yes | The text prompt to send to the model; must be non-empty. | |
| stream | No | Whether to stream the response. Accepted by the schema but currently unused by the handler. | false |
| temperature | No | Sampling temperature controlling randomness (0–1). | 0.7 |
| topK | No | Restricts sampling to the K most likely tokens (1–40). | |
| topP | No | Nucleus-sampling probability mass cutoff (0–1). | |
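For reference, a hypothetical end-to-end invocation over stdio might look like the sketch below. It assumes the official TypeScript MCP SDK; the build output path, client name, and prompt are placeholders, not part of this project.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the server as a child process over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["dist/gemini_mcp_server.js"], // hypothetical build output path
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Only `prompt` is required; omitted parameters fall back to the handler
// defaults (temperature 0.7, maxOutputTokens 8192).
const result = await client.callTool({
  name: "generate_text",
  arguments: {
    prompt: "Summarize the Model Context Protocol in one paragraph.",
    temperature: 0.4,
    maxOutputTokens: 256,
  },
});

console.log(result.content);
```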
Implementation Reference
- src/gemini_mcp_server.ts:84-154 (handler): The handler function that executes the generate_text tool. It calls the Gemini API directly with the accumulated chat history, processes the response, and returns the text content.

```typescript
private async generateText(params: GenerateTextParams) {
  try {
    const {
      prompt,
      temperature = 0.7,
      maxOutputTokens = 8192,
      topK,
      topP,
      stream = false
    } = params;

    console.log('Sending message to Gemini:', prompt);

    // Add user message to history
    this.chatHistory.push({
      role: 'user',
      parts: [{ text: prompt }]
    });

    // Make a direct API call to Gemini
    const url = `https://generativelanguage.googleapis.com/v1beta/models/${MODEL_ID}:generateContent?key=${GEMINI_API_KEY}`;
    const body = {
      contents: this.chatHistory,
      generationConfig: {
        temperature,
        maxOutputTokens,
        topK,
        topP,
      }
    };

    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(body),
    });

    if (!response.ok) {
      const errorText = await response.text();
      throw new Error(`Gemini API error (${response.status}): ${errorText}`);
    }

    const responseData = await response.json();

    if (!responseData.candidates || !responseData.candidates[0] || !responseData.candidates[0].content) {
      throw new Error('Invalid response from Gemini API');
    }

    // Extract the text from the response
    const responseText = responseData.candidates[0].content.parts
      .map((part: any) => part.text || '')
      .join('');

    console.log('Received response from Gemini:', responseText);

    // Add assistant response to history
    this.chatHistory.push(responseData.candidates[0].content);

    return {
      content: [{ type: "text" as const, text: responseText }]
    };
  } catch (err) {
    console.error('Error generating content:', err);
    return {
      content: [{ type: "text" as const, text: err instanceof Error ? err.message : 'Internal error' }],
      isError: true
    };
  }
}
```
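For clarity, here is a minimal sketch of the response shape the handler depends on. The interface and helper names are hypothetical; only the fields the handler actually reads are modeled, and the real generateContent payload carries more.

```typescript
// Hypothetical types covering only the fields read by the handler above.
interface GeminiContent {
  role: string;
  parts: Array<{ text?: string }>;
}

interface GeminiResponse {
  candidates?: Array<{ content?: GeminiContent }>;
}

// Mirrors the handler's validation and extraction steps.
function extractText(data: GeminiResponse): string {
  const content = data.candidates?.[0]?.content;
  if (!content) {
    throw new Error('Invalid response from Gemini API');
  }
  return content.parts.map((part) => part.text ?? '').join('');
}
```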
- src/gemini_mcp_server.ts:48-55 (schema): Zod schema defining the input parameters for the generate_text tool, including the required prompt and the optional generation parameters.

```typescript
const generateTextSchema = z.object({
  prompt: z.string().min(1),
  temperature: z.number().min(0).max(1).optional(),
  maxOutputTokens: z.number().min(1).max(8192).optional(),
  topK: z.number().min(1).max(40).optional(),
  topP: z.number().min(0).max(1).optional(),
  stream: z.boolean().optional(),
});
```
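Because the schema bounds every numeric parameter, invalid calls fail validation before the handler runs. A quick sketch of this behavior, assuming zod is imported as z as in the source file:

```typescript
// Valid: prompt is non-empty and temperature is within [0, 1].
const ok = generateTextSchema.safeParse({ prompt: 'Hello', temperature: 0.5 });
console.log(ok.success); // true

// Invalid: empty prompt violates min(1), and topK exceeds the max of 40.
const bad = generateTextSchema.safeParse({ prompt: '', topK: 100 });
console.log(bad.success); // false
```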
- src/gemini_mcp_server.ts:160-165 (registration): Registration of the generate_text tool on the MCP server via server.tool(), wiring together the name, schema shape, and handler.

```typescript
// Register generate_text tool
this.server.tool(
  "generate_text",
  generateTextSchema.shape,
  async (args: GenerateTextParams) => this.generateText(args)
);
```
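Passing generateTextSchema.shape rather than the schema object itself appears deliberate: the server.tool() overload in the TypeScript MCP SDK accepts a raw Zod shape and derives the tool's published JSON Schema from it.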
- src/gemini_mcp_server.ts:73-77 (capabilities): Tool capability declaration in the server constructor, specifying the tool's description and streaming support.

```typescript
generate_text: {
  description: "Generate text using Gemini Pro model",
  streaming: true
}
```
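Note that although the capability declaration advertises streaming: true, the handler destructures stream but never uses it, so every call currently returns a single complete response.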