# generate_text
Generate text using Google Gemini AI models with options for model selection, system instructions, temperature control, JSON output, and safety settings.
## Instructions
Generate text using Google Gemini with advanced features
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to Gemini | |
| model | No | Specific Gemini model to use | gemini-2.5-flash |
| systemInstruction | No | System instruction to guide model behavior | |
| temperature | No | Temperature for generation (0-2) | |
| maxTokens | No | Maximum tokens to generate | |
| topK | No | Top-k sampling parameter | |
| topP | No | Top-p (nucleus) sampling parameter | |
| jsonMode | No | Enable JSON mode for structured output | |
| jsonSchema | No | JSON schema for structured output (when jsonMode is true) | |
| grounding | No | Enable Google Search grounding for up-to-date information | |
| safetySettings | No | Safety settings for content filtering | |
| conversationId | No | ID for maintaining conversation context | |
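The parameters above are passed as the `arguments` object of a standard MCP `tools/call` request. A minimal sketch of such a request, assuming a JSON-RPC transport; the `id`, prompt text, and chosen values are illustrative only:

```typescript
// Hypothetical JSON-RPC request invoking generate_text.
// Argument names match the input schema table above.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_text",
    arguments: {
      prompt: "Summarize the MCP protocol in one sentence.",
      model: "gemini-2.5-flash", // optional; this is the default
      temperature: 0.4,          // must be within the schema's 0-2 range
    },
  },
};

console.log(JSON.stringify(request.params.arguments));
```

Only `prompt` is required; every other field falls back to the defaults listed in the table.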
## Implementation Reference
- **src/enhanced-stdio-server.ts:504-617 (handler)** — Primary handler for the `generate_text` tool: validates the model, builds the Gemini API request (generation config, system instruction, safety settings, grounding, conversation history), calls `generateContent`, and returns the response or a JSON-RPC error.

```typescript
private async generateText(id: any, args: any): Promise<MCPResponse> {
  try {
    const model = args.model || 'gemini-2.5-flash';
    const modelInfo = GEMINI_MODELS[model as keyof typeof GEMINI_MODELS];
    if (!modelInfo) {
      throw new Error(`Unknown model: ${model}`);
    }

    // Build generation config
    const generationConfig: any = {
      temperature: args.temperature || 0.7,
      maxOutputTokens: args.maxTokens || 2048,
      topK: args.topK || 40,
      topP: args.topP || 0.95
    };

    // Add JSON mode if requested
    if (args.jsonMode) {
      generationConfig.responseMimeType = 'application/json';
      if (args.jsonSchema) {
        generationConfig.responseSchema = args.jsonSchema;
      }
    }

    // Build the request
    const requestBody: any = {
      model,
      contents: [{ parts: [{ text: args.prompt }], role: 'user' }],
      generationConfig
    };

    // Add system instruction if provided
    if (args.systemInstruction) {
      requestBody.systemInstruction = { parts: [{ text: args.systemInstruction }] };
    }

    // Add safety settings if provided
    if (args.safetySettings) {
      requestBody.safetySettings = args.safetySettings;
    }

    // Add grounding if requested and supported
    if (args.grounding && modelInfo.features.includes('grounding')) {
      requestBody.tools = [{ googleSearch: {} }];
    }

    // Handle conversation context
    if (args.conversationId) {
      const history = this.conversations.get(args.conversationId) || [];
      if (history.length > 0) {
        requestBody.contents = [...history, ...requestBody.contents];
      }
    }

    // Call the API using the new SDK format
    const result = await this.genAI.models.generateContent({
      model,
      ...requestBody
    });
    const text = result.text || '';

    // Update conversation history if needed
    if (args.conversationId) {
      const history = this.conversations.get(args.conversationId) || [];
      history.push(...requestBody.contents);
      history.push({ parts: [{ text: text }], role: 'model' });
      this.conversations.set(args.conversationId, history);
    }

    return {
      jsonrpc: '2.0',
      id,
      result: {
        content: [{ type: 'text', text: text }],
        metadata: {
          model,
          tokensUsed: result.usageMetadata?.totalTokenCount,
          candidatesCount: result.candidates?.length || 1,
          finishReason: result.candidates?.[0]?.finishReason
        }
      }
    };
  } catch (error) {
    console.error('Error in generateText:', error);
    return {
      jsonrpc: '2.0',
      id,
      error: {
        code: -32603,
        message: error instanceof Error ? error.message : 'Internal error'
      }
    };
  }
}
```
- **src/enhanced-stdio-server.ts:195-270 (schema)** — Input schema defining all parameters for the `generate_text` tool: prompt, model selection, generation config, JSON mode, grounding, safety settings, and conversation ID.

```typescript
inputSchema: {
  type: 'object',
  properties: {
    prompt: { type: 'string', description: 'The prompt to send to Gemini' },
    model: {
      type: 'string',
      description: 'Specific Gemini model to use',
      enum: Object.keys(GEMINI_MODELS),
      default: 'gemini-2.5-flash'
    },
    systemInstruction: {
      type: 'string',
      description: 'System instruction to guide model behavior'
    },
    temperature: {
      type: 'number',
      description: 'Temperature for generation (0-2)',
      default: 0.7,
      minimum: 0,
      maximum: 2
    },
    maxTokens: { type: 'number', description: 'Maximum tokens to generate', default: 2048 },
    topK: { type: 'number', description: 'Top-k sampling parameter', default: 40 },
    topP: { type: 'number', description: 'Top-p (nucleus) sampling parameter', default: 0.95 },
    jsonMode: {
      type: 'boolean',
      description: 'Enable JSON mode for structured output',
      default: false
    },
    jsonSchema: {
      type: 'object',
      description: 'JSON schema for structured output (when jsonMode is true)'
    },
    grounding: {
      type: 'boolean',
      description: 'Enable Google Search grounding for up-to-date information',
      default: false
    },
    safetySettings: {
      type: 'array',
      description: 'Safety settings for content filtering',
      items: {
        type: 'object',
        properties: {
          category: {
            type: 'string',
            enum: [
              'HARM_CATEGORY_HARASSMENT',
              'HARM_CATEGORY_HATE_SPEECH',
              'HARM_CATEGORY_SEXUALLY_EXPLICIT',
              'HARM_CATEGORY_DANGEROUS_CONTENT'
            ]
          },
          threshold: {
            type: 'string',
            enum: ['BLOCK_NONE', 'BLOCK_ONLY_HIGH', 'BLOCK_MEDIUM_AND_ABOVE', 'BLOCK_LOW_AND_ABOVE']
          }
        }
      }
    },
    conversationId: { type: 'string', description: 'ID for maintaining conversation context' }
  },
  required: ['prompt']
}
```
- **src/enhanced-stdio-server.ts:192-272 (registration)** — Tool registration in the `getAvailableTools()` method, defining the tool's name and description and attaching the input schema (identical to the schema reference above, so it is elided here).

```typescript
{
  name: 'generate_text',
  description: 'Generate text using Google Gemini with advanced features',
  inputSchema: { /* full schema as shown in the schema reference above */ }
},
```
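As an illustration of the `safetySettings` parameter, the following value conforms to the `category`/`threshold` enums in the schema above. The specific category-threshold pairings are arbitrary examples, not recommended settings:

```typescript
// Example safetySettings argument: each entry pairs a harm category
// with a blocking threshold, per the enums in the input schema.
const safetySettings = [
  { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
  { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_ONLY_HIGH" },
];

console.log(safetySettings.map((s) => s.category).join(", "));
```

Categories omitted from the array keep the API's default filtering behavior.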