generate_text

Generate text content using Google's Gemini 2.5 Pro AI model. Provide a text prompt to create responses, articles, or creative writing with customizable token limits and temperature settings.

Instructions

Generate text using Gemini 2.5 Pro model

Input Schema

Name         Required  Description                                    Default
prompt       Yes       The text prompt to send to Gemini              —
maxTokens    No        Maximum number of tokens to generate           1000
temperature  No        Temperature for text generation (0.0 to 2.0)   1.0
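A tool call supplies arguments matching this schema. The payloads below are illustrative sketches (the prompt strings and numeric values are made up); only `prompt` is required, and the optional fields fall back to their schema defaults:

```typescript
// Illustrative generate_text argument payloads (field names from the schema above).
const minimalCall = {
  name: "generate_text",
  arguments: { prompt: "Write a haiku about the sea" },
};

const tunedCall = {
  name: "generate_text",
  arguments: {
    prompt: "Summarize the MCP protocol in two sentences",
    maxTokens: 256, // optional; schema default is 1000
    temperature: 0.4, // optional; valid range 0.0–2.0, schema default is 1.0
  },
};
```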

Implementation Reference

  • The main handler function for the generate_text tool. It destructures the prompt and temperature from args, configures generation parameters, calls the Gemini model to generate content, extracts the text response, and returns it in the expected MCP format.
    private async handleTextGeneration(args: any) {
      const { prompt, temperature = 1.0 } = args;
      const generationConfig = {
        temperature: Math.max(0, Math.min(2, temperature)),
        maxOutputTokens: args.maxTokens || 1000,
      };
      const result = await this.model.generateContent({
        contents: [{ role: "user", parts: [{ text: prompt }] }],
        generationConfig,
      });
      const response = result.response;
      const text = response.text();
      return {
        content: [
          {
            type: "text",
            text: text,
          },
        ],
      };
    }
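The handler's parameter handling can be isolated into a small sketch. `buildGenerationConfig` is a hypothetical helper mirroring the handler's logic: temperature is clamped to the documented 0.0–2.0 range, and `maxTokens` falls back to the schema default of 1000:

```typescript
// Hypothetical helper mirroring how handleTextGeneration builds generationConfig.
interface GenerateTextArgs {
  prompt: string;
  maxTokens?: number;
  temperature?: number;
}

function buildGenerationConfig(args: GenerateTextArgs) {
  // Same default as the destructuring default in the handler.
  const temperature = args.temperature ?? 1.0;
  return {
    // Clamp to the schema's documented 0.0–2.0 range.
    temperature: Math.max(0, Math.min(2, temperature)),
    // Falls back to the schema default of 1000 tokens.
    maxOutputTokens: args.maxTokens || 1000,
  };
}

console.log(buildGenerationConfig({ prompt: "Hi", temperature: 3.5 }));
// → { temperature: 2, maxOutputTokens: 1000 }
```

Note that out-of-range temperatures are silently clamped rather than rejected, so a caller passing 3.5 gets the model's maximum of 2.0.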
  • Defines the input schema for the generate_text tool, including properties for prompt (required), maxTokens, and temperature with descriptions and defaults.
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The text prompt to send to Gemini",
        },
        maxTokens: {
          type: "number",
          description: "Maximum number of tokens to generate (optional)",
          default: 1000,
        },
        temperature: {
          type: "number",
          description: "Temperature for text generation (0.0 to 2.0)",
          default: 1.0,
        },
      },
      required: ["prompt"],
    },
  • src/index.ts:49-72 (registration)
    The tool registration entry returned by listTools, defining the name, description, and input schema for generate_text.
    {
      name: "generate_text",
      description: "Generate text using Gemini 2.5 Pro model",
      inputSchema: {
        type: "object",
        properties: {
          prompt: {
            type: "string",
            description: "The text prompt to send to Gemini",
          },
          maxTokens: {
            type: "number",
            description: "Maximum number of tokens to generate (optional)",
            default: 1000,
          },
          temperature: {
            type: "number",
            description: "Temperature for text generation (0.0 to 2.0)",
            default: 1.0,
          },
        },
        required: ["prompt"],
      },
    },
  • src/index.ts:102-103 (routing)
    Switch case in the CallToolRequest handler that routes generate_text calls to the handleTextGeneration method.
    case "generate_text":
      return await this.handleTextGeneration(args);
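The routing step can be sketched as a plain switch over the tool name. The stub return value and the unknown-tool error below are assumptions for illustration, not the server's actual code (the real case awaits `this.handleTextGeneration(args)`):

```typescript
// Sketch of CallToolRequest routing. The stub body and the unknown-tool
// error message are assumptions, not the server's actual implementation.
type ToolArgs = Record<string, unknown>;

function routeToolCall(name: string, args: ToolArgs): string {
  switch (name) {
    case "generate_text":
      // Real server: return await this.handleTextGeneration(args);
      return `handleTextGeneration(prompt=${String(args.prompt)})`;
    default:
      throw new Error(`Unknown tool: ${name}`);
  }
}
```

Rejecting unrecognized tool names in the default branch gives MCP clients an explicit error instead of a silent no-op.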