# generate_text

Generate text content using Google's Gemini 2.5 Pro AI model. Provide a text prompt to create responses, articles, or creative writing, with configurable token limits and temperature settings.

## Instructions

Generate text using the Gemini 2.5 Pro model.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The text prompt to send to Gemini | |
| maxTokens | No | Maximum number of tokens to generate | 1000 |
| temperature | No | Temperature for text generation (0.0 to 2.0) | 1.0 |
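As a sketch, a call to this tool might pass arguments shaped like the following (the values are illustrative, not from the source):

```typescript
// Illustrative arguments for a generate_text call. Field names match
// the input schema above; the prompt text and values are made up.
const args = {
  prompt: "Write a two-line poem about the sea.",
  maxTokens: 200, // optional, defaults to 1000
  temperature: 0.7, // optional, clamped to [0, 2], defaults to 1.0
};
```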
## Implementation Reference
- **src/index.ts:123-147 (handler)** — The main handler for the `generate_text` tool. It destructures `prompt` and `temperature` from `args`, clamps the temperature into the 0–2 range, calls the Gemini model to generate content, extracts the text response, and returns it in the expected MCP format.

  ```typescript
  private async handleTextGeneration(args: any) {
    const { prompt, temperature = 1.0 } = args;

    const generationConfig = {
      temperature: Math.max(0, Math.min(2, temperature)),
      maxOutputTokens: args.maxTokens || 1000,
    };

    const result = await this.model.generateContent({
      contents: [{ role: "user", parts: [{ text: prompt }] }],
      generationConfig,
    });

    const response = result.response;
    const text = response.text();

    return {
      content: [
        {
          type: "text",
          text: text,
        },
      ],
    };
  }
  ```
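The handler's parameter handling can be sketched in isolation (the helper name `resolveGenerationConfig` is hypothetical, not part of the source): `temperature` defaults to 1.0 and is clamped to [0, 2], while `maxTokens` falls back to 1000 when absent.

```typescript
// Hypothetical standalone version of the handler's config logic.
// Note that `args.maxTokens || 1000` also replaces an explicit 0
// with 1000, since 0 is falsy.
function resolveGenerationConfig(args: { maxTokens?: number; temperature?: number }) {
  const { temperature = 1.0 } = args;
  return {
    temperature: Math.max(0, Math.min(2, temperature)),
    maxOutputTokens: args.maxTokens || 1000,
  };
}
```

For example, `resolveGenerationConfig({ temperature: 5 })` yields `{ temperature: 2, maxOutputTokens: 1000 }`.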
- **src/index.ts:52-71 (schema)** — Defines the input schema for the `generate_text` tool: `prompt` (required), `maxTokens`, and `temperature`, each with a description and default.

  ```typescript
  inputSchema: {
    type: "object",
    properties: {
      prompt: {
        type: "string",
        description: "The text prompt to send to Gemini",
      },
      maxTokens: {
        type: "number",
        description: "Maximum number of tokens to generate (optional)",
        default: 1000,
      },
      temperature: {
        type: "number",
        description: "Temperature for text generation (0.0 to 2.0)",
        default: 1.0,
      },
    },
    required: ["prompt"],
  },
  ```
- **src/index.ts:49-72 (registration)** — The tool registration entry returned by `listTools`, defining the name, description, and input schema for `generate_text`.

  ```typescript
  {
    name: "generate_text",
    description: "Generate text using Gemini 2.5 Pro model",
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "The text prompt to send to Gemini",
        },
        maxTokens: {
          type: "number",
          description: "Maximum number of tokens to generate (optional)",
          default: 1000,
        },
        temperature: {
          type: "number",
          description: "Temperature for text generation (0.0 to 2.0)",
          default: 1.0,
        },
      },
      required: ["prompt"],
    },
  },
  ```
- **src/index.ts:102-103 (routing)** — Switch case in the `CallToolRequest` handler that routes `generate_text` calls to the `handleTextGeneration` method.

  ```typescript
  case "generate_text":
    return await this.handleTextGeneration(args);
  ```