generate_text

Generate text content using Google Gemini AI models with options for structured JSON output, conversation context, and search grounding.

Instructions

Generate text using Google Gemini with advanced features

Input Schema

| Name              | Required | Description                                                          | Default          |
|-------------------|----------|----------------------------------------------------------------------|------------------|
| prompt            | Yes      | The prompt to send to Gemini                                         |                  |
| model             | No       | Specific Gemini model to use                                         | gemini-2.5-flash |
| systemInstruction | No       | System instruction to guide model behavior                           |                  |
| temperature       | No       | Temperature for generation (0-2)                                     | 0.7              |
| maxTokens         | No       | Maximum tokens to generate                                           | 2048             |
| topK              | No       | Top-k sampling parameter                                             | 40               |
| topP              | No       | Top-p (nucleus) sampling parameter                                   | 0.95             |
| jsonMode          | No       | Enable JSON mode for structured output                               | false            |
| jsonSchema        | No       | JSON schema as a string for structured output (when jsonMode is true)|                  |
| grounding         | No       | Enable Google Search grounding for up-to-date information            | false            |
| safetySettings    | No       | Safety settings as JSON string for content filtering                 |                  |
| conversationId    | No       | ID for maintaining conversation context                              |                  |
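For orientation, a `tools/call` request for this tool might look like the following sketch. The field names come from the schema above; the argument values are illustrative only, and the exact transport envelope depends on your MCP client.

```typescript
// Hypothetical MCP tools/call request for generate_text.
// Only `prompt` is required; the other arguments shown are optional.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'generate_text',
    arguments: {
      prompt: 'Summarize the MCP protocol in two sentences.', // required
      model: 'gemini-2.5-flash', // optional; this is also the default
      temperature: 0.7,          // optional; valid range is 0-2
      jsonMode: false            // optional; true requests structured JSON output
    }
  }
};

console.log(request.params.name);
```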

Implementation Reference

  • The handler function that executes the generate_text tool. It validates inputs using the schema, configures the Gemini API request with optional features like system instructions, JSON mode, grounding, and conversation history, calls the API, and returns the response.
    private async generateText(id: any, args: any): Promise<MCPResponse> {
      try {
        // Validate parameters
        const validatedArgs = Validator.validateToolParams(ToolSchemas.generateText, args);
        const model = validatedArgs.model || 'gemini-2.5-flash';

        logger.api(`Generating text with model: ${model}`);

        const modelInfo = GEMINI_MODELS[model as keyof typeof GEMINI_MODELS];
        if (!modelInfo) {
          throw new Error(`Unknown model: ${model}`);
        }

        // Build generation config
        const generationConfig: any = {
          temperature: validatedArgs.temperature || 0.7,
          maxOutputTokens: validatedArgs.maxTokens || 2048,
          topK: validatedArgs.topK || 40,
          topP: validatedArgs.topP || 0.95
        };

        // Add JSON mode if requested
        if (validatedArgs.jsonMode) {
          generationConfig.responseMimeType = 'application/json';
          if (validatedArgs.jsonSchema) {
            try {
              generationConfig.responseSchema = Validator.validateJSON(validatedArgs.jsonSchema);
            } catch (error) {
              logger.error('Invalid JSON schema provided:', error);
              throw new ValidationError('Invalid JSON schema format');
            }
          }
        }

        // Build the request
        const requestBody: any = {
          model,
          contents: [
            { parts: [{ text: Validator.sanitizeString(validatedArgs.prompt) }], role: 'user' }
          ],
          generationConfig
        };

        // Add system instruction if provided
        if (validatedArgs.systemInstruction) {
          requestBody.systemInstruction = {
            parts: [{ text: Validator.sanitizeString(validatedArgs.systemInstruction) }]
          };
        }

        // Add safety settings if provided
        if (args.safetySettings) {
          try {
            requestBody.safetySettings = typeof args.safetySettings === 'string'
              ? JSON.parse(args.safetySettings)
              : args.safetySettings;
          } catch (error) {
            console.error('Invalid safety settings JSON provided:', error);
          }
        }

        // Add grounding if requested and supported
        if (args.grounding && modelInfo.features.includes('grounding')) {
          requestBody.tools = [{ googleSearch: {} }];
        }

        // Handle conversation context
        if (args.conversationId) {
          const history = this.conversations.get(args.conversationId) || [];
          if (history.length > 0) {
            requestBody.contents = [...history, ...requestBody.contents];
          }
        }

        // Call the API using the new SDK format
        const result = await this.genAI.models.generateContent({ model, ...requestBody });
        const text = result.text || '';

        // Update conversation history if needed
        if (args.conversationId) {
          const history = this.conversations.get(args.conversationId) || [];
          // Append only the new user turn and the model reply;
          // requestBody.contents may already contain the prior history,
          // so pushing it wholesale would duplicate earlier turns.
          history.push({ parts: [{ text: Validator.sanitizeString(validatedArgs.prompt) }], role: 'user' });
          history.push({ parts: [{ text }], role: 'model' });
          this.conversations.set(args.conversationId, history);
        }

        return {
          jsonrpc: '2.0',
          id,
          result: {
            content: [{ type: 'text', text }],
            metadata: {
              model,
              tokensUsed: result.usageMetadata?.totalTokenCount,
              candidatesCount: result.candidates?.length || 1,
              finishReason: result.candidates?.[0]?.finishReason
            }
          }
        };
      } catch (error) {
        console.error('Error in generateText:', error);
        return {
          jsonrpc: '2.0',
          id,
          error: {
            code: -32603,
            message: error instanceof Error ? error.message : 'Internal error'
          }
        };
      }
    }
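The handler keeps per-conversation history in a Map keyed by `conversationId`, replays prior turns on each call, and records the new exchange afterwards. A minimal standalone sketch of that bookkeeping (the helper names `buildContents` and `recordTurn` are assumptions for illustration, modeled on the handler above):

```typescript
type Part = { text: string };
type Content = { parts: Part[]; role: 'user' | 'model' };

// Per-conversation history, keyed by conversationId.
const conversations = new Map<string, Content[]>();

// Build the contents array for a request: prior turns followed by the new user turn.
function buildContents(conversationId: string, prompt: string): Content[] {
  const history = conversations.get(conversationId) ?? [];
  const userTurn: Content = { parts: [{ text: prompt }], role: 'user' };
  return [...history, userTurn];
}

// After the API call, store the new user turn and the model reply.
function recordTurn(conversationId: string, prompt: string, reply: string): void {
  const history = conversations.get(conversationId) ?? [];
  history.push({ parts: [{ text: prompt }], role: 'user' });
  history.push({ parts: [{ text: reply }], role: 'model' });
  conversations.set(conversationId, history);
}

recordTurn('demo', 'Hello', 'Hi there!');
const contents = buildContents('demo', 'What did I just say?');
console.log(contents.length); // 3: two stored turns plus the new user turn
```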
  • Zod schema for input validation of generate_text tool parameters, used by Validator.validateToolParams.
    generateText: z.object({
      prompt: z.string().min(1, 'Prompt is required'),
      model: CommonSchemas.geminiModel.optional(),
      systemInstruction: z.string().optional(),
      temperature: CommonSchemas.temperature.optional(),
      maxTokens: CommonSchemas.maxTokens.optional(),
      topK: CommonSchemas.topK.optional(),
      topP: CommonSchemas.topP.optional(),
      jsonMode: z.boolean().optional(),
      jsonSchema: CommonSchemas.jsonSchema.optional(),
      grounding: z.boolean().optional(),
      safetySettings: CommonSchemas.safetySettings.optional(),
      conversationId: CommonSchemas.conversationId.optional()
    }),
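The range checks themselves live in the shared `CommonSchemas` helpers, which are not shown in this document. As a rough plain-TypeScript illustration of the two constraints that are visible here (a non-empty `prompt`, and the 0-2 `temperature` range from the tool definition); the function name is hypothetical:

```typescript
// Hypothetical equivalent of the visible checks the Zod schema performs:
// prompt must be a non-empty string; temperature, if present, must lie in [0, 2].
function checkGenerateTextArgs(args: { prompt?: unknown; temperature?: unknown }): string[] {
  const errors: string[] = [];
  if (typeof args.prompt !== 'string' || args.prompt.length === 0) {
    errors.push('Prompt is required');
  }
  if (args.temperature !== undefined) {
    const t = args.temperature;
    if (typeof t !== 'number' || t < 0 || t > 2) {
      errors.push('Temperature must be between 0 and 2');
    }
  }
  return errors;
}

console.log(checkGenerateTextArgs({ prompt: '' }));                   // ['Prompt is required']
console.log(checkGenerateTextArgs({ prompt: 'hi', temperature: 3 })); // temperature out of range
console.log(checkGenerateTextArgs({ prompt: 'hi', temperature: 1 })); // [] — valid
```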
  • Tool definition/registration in getAvailableTools() method, returned for tools/list MCP call, including name, description, and input schema.
    {
      name: 'generate_text',
      description: 'Generate text using Google Gemini with advanced features',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: { type: 'string', description: 'The prompt to send to Gemini' },
          model: {
            type: 'string',
            description: 'Specific Gemini model to use',
            enum: Object.keys(GEMINI_MODELS),
            default: 'gemini-2.5-flash'
          },
          systemInstruction: { type: 'string', description: 'System instruction to guide model behavior' },
          temperature: { type: 'number', description: 'Temperature for generation (0-2)', default: 0.7, minimum: 0, maximum: 2 },
          maxTokens: { type: 'number', description: 'Maximum tokens to generate', default: 2048 },
          topK: { type: 'number', description: 'Top-k sampling parameter', default: 40 },
          topP: { type: 'number', description: 'Top-p (nucleus) sampling parameter', default: 0.95 },
          jsonMode: { type: 'boolean', description: 'Enable JSON mode for structured output', default: false },
          jsonSchema: { type: 'string', description: 'JSON schema as a string for structured output (when jsonMode is true)' },
          grounding: { type: 'boolean', description: 'Enable Google Search grounding for up-to-date information', default: false },
          safetySettings: { type: 'string', description: 'Safety settings as JSON string for content filtering' },
          conversationId: { type: 'string', description: 'ID for maintaining conversation context' }
        },
        required: ['prompt']
      }
    },
  • Dispatch case in handleToolCall switch statement that routes 'generate_text' tool calls to the handler method.
    case 'generate_text': return await this.generateText(request.id, args);
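In context, this case sits inside a switch over the requested tool name. A stripped-down, self-contained sketch of that dispatch pattern (the function and type names here are placeholders, not the server's actual identifiers):

```typescript
// Sketch of the handleToolCall dispatch: route by tool name and return a
// JSON-RPC "method not found" error (-32601) for unknown tools. The response
// shape mirrors the MCPResponse objects built by the handler above.
type SketchResponse = {
  jsonrpc: '2.0';
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
};

function dispatchTool(name: string, id: number): SketchResponse {
  switch (name) {
    case 'generate_text':
      // The real server awaits this.generateText(request.id, args) here.
      return { jsonrpc: '2.0', id, result: { routed: 'generate_text' } };
    default:
      return { jsonrpc: '2.0', id, error: { code: -32601, message: `Unknown tool: ${name}` } };
  }
}

console.log(dispatchTool('generate_text', 1).result !== undefined); // true
console.log(dispatchTool('no_such_tool', 2).error?.code);           // -32601
```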
