
generate_text

Generate coherent and contextually relevant text responses by providing a prompt. Adjust parameters like temperature, max tokens, and top-K to control output creativity and length.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| maxOutputTokens | No | Maximum number of tokens to generate (1–8192) | 8192 |
| prompt | Yes | Text prompt to send to the model | — |
| stream | No | Whether to stream the response | false |
| temperature | No | Sampling temperature (0–1); higher values increase creativity | 0.7 |
| topK | No | Top-K sampling parameter (1–40) | — |
| topP | No | Top-P (nucleus) sampling parameter (0–1) | — |
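
As an illustration, a client might invoke the tool with arguments such as the following (the values are examples, not the server's defaults):

```json
{
  "prompt": "Summarize the plot of Hamlet in two sentences.",
  "temperature": 0.7,
  "maxOutputTokens": 1024,
  "topK": 40,
  "topP": 0.95,
  "stream": false
}
```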

Implementation Reference

  • The handler function that executes the generate_text tool: calls Gemini API directly with chat history, processes response, and returns text content.
    private async generateText(params: GenerateTextParams) {
      try {
        const {
          prompt,
          temperature = 0.7,
          maxOutputTokens = 8192,
          topK,
          topP,
          stream = false
        } = params;

        console.log('Sending message to Gemini:', prompt);

        // Add user message to history
        this.chatHistory.push({ role: 'user', parts: [{ text: prompt }] });

        // Make a direct API call to Gemini
        const url = `https://generativelanguage.googleapis.com/v1beta/models/${MODEL_ID}:generateContent?key=${GEMINI_API_KEY}`;
        const body = {
          contents: this.chatHistory,
          generationConfig: {
            temperature,
            maxOutputTokens,
            topK,
            topP,
          }
        };

        const response = await fetch(url, {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
          },
          body: JSON.stringify(body),
        });

        if (!response.ok) {
          const errorText = await response.text();
          throw new Error(`Gemini API error (${response.status}): ${errorText}`);
        }

        const responseData = await response.json();

        if (!responseData.candidates || !responseData.candidates[0] || !responseData.candidates[0].content) {
          throw new Error('Invalid response from Gemini API');
        }

        // Extract the text from the response
        const responseText = responseData.candidates[0].content.parts
          .map((part: any) => part.text || '')
          .join('');

        console.log('Received response from Gemini:', responseText);

        // Add assistant response to history
        this.chatHistory.push(responseData.candidates[0].content);

        return {
          content: [{ type: "text" as const, text: responseText }]
        };
      } catch (err) {
        console.error('Error generating content:', err);
        return {
          content: [{
            type: "text" as const,
            text: err instanceof Error ? err.message : 'Internal error'
          }],
          isError: true
        };
      }
    }
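
The response-parsing step of the handler can be exercised in isolation. The following is a minimal sketch of the candidate-text extraction shown above; the interfaces and the mock payload are illustrative, not part of the server's code:

```typescript
// Minimal sketch of the text-extraction step from the handler above.
// The payload shape mirrors the Gemini generateContent REST response;
// the sample data below is illustrative, not a real API result.
interface Part { text?: string }
interface Candidate { content: { role: string; parts: Part[] } }
interface GenerateContentResponse { candidates?: Candidate[] }

function extractText(responseData: GenerateContentResponse): string {
  const candidate = responseData.candidates?.[0];
  if (!candidate || !candidate.content) {
    throw new Error('Invalid response from Gemini API');
  }
  // Concatenate the text of every part, skipping non-text parts
  return candidate.content.parts.map((part) => part.text ?? '').join('');
}

// Example with a mock payload:
const mock: GenerateContentResponse = {
  candidates: [{
    content: { role: 'model', parts: [{ text: 'Hello, ' }, { text: 'world!' }] }
  }],
};
console.log(extractText(mock)); // "Hello, world!"
```

Separating this step makes the error path (`Invalid response from Gemini API`) testable without a live API key.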
  • Zod schema defining input parameters for the generate_text tool, including prompt and optional generation parameters.
    const generateTextSchema = z.object({
      prompt: z.string().min(1),
      temperature: z.number().min(0).max(1).optional(),
      maxOutputTokens: z.number().min(1).max(8192).optional(),
      topK: z.number().min(1).max(40).optional(),
      topP: z.number().min(0).max(1).optional(),
      stream: z.boolean().optional(),
    });
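
As a dependency-free illustration of what those Zod constraints enforce, the same rules can be written as explicit range checks. This plain-TypeScript validator is a sketch for clarity, not the server's actual code:

```typescript
// Dependency-free sketch of the constraints enforced by generateTextSchema.
// Illustrative only; the server uses Zod for this.
interface GenerateTextParams {
  prompt: string;
  temperature?: number;
  maxOutputTokens?: number;
  topK?: number;
  topP?: number;
  stream?: boolean;
}

function validateParams(p: GenerateTextParams): string[] {
  const errors: string[] = [];
  if (p.prompt.length < 1) errors.push('prompt must be non-empty');
  if (p.temperature !== undefined && (p.temperature < 0 || p.temperature > 1))
    errors.push('temperature must be in [0, 1]');
  if (p.maxOutputTokens !== undefined && (p.maxOutputTokens < 1 || p.maxOutputTokens > 8192))
    errors.push('maxOutputTokens must be in [1, 8192]');
  if (p.topK !== undefined && (p.topK < 1 || p.topK > 40))
    errors.push('topK must be in [1, 40]');
  if (p.topP !== undefined && (p.topP < 0 || p.topP > 1))
    errors.push('topP must be in [0, 1]');
  return errors; // empty array means the parameters are valid
}
```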
  • Registration of the generate_text tool on the MCP server using server.tool() with name, schema, and handler.
    // Register generate_text tool
    this.server.tool(
      "generate_text",
      generateTextSchema.shape,
      async (args: GenerateTextParams) => this.generateText(args)
    );
  • Tool capability declaration in server constructor, specifying description and streaming support.
    generate_text: {
      description: "Generate text using Gemini Pro model",
      streaming: true
    }


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/IA-Entertainment-git-organization/gemini-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.