generate_text

Generate text using Google's Gemini AI models, with customizable parameters such as temperature and token limits, plus optional features including JSON mode, Google Search grounding, and conversation context.

Instructions

Generate text using Google Gemini with advanced features

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt to send to Gemini | |
| model | No | Specific Gemini model to use | gemini-2.5-flash |
| systemInstruction | No | System instruction to guide model behavior | |
| temperature | No | Temperature for generation (0-2) | 0.7 |
| maxTokens | No | Maximum tokens to generate | 2048 |
| topK | No | Top-k sampling parameter | 40 |
| topP | No | Top-p (nucleus) sampling parameter | 0.95 |
| jsonMode | No | Enable JSON mode for structured output | false |
| jsonSchema | No | JSON schema as a string for structured output (when jsonMode is true) | |
| grounding | No | Enable Google Search grounding for up-to-date information | false |
| safetySettings | No | Safety settings as JSON string for content filtering | |
| conversationId | No | ID for maintaining conversation context | |
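Putting the schema together, a `tools/call` request for this tool might look like the following sketch. Only `prompt` is required; the `conversationId` and argument values here are illustrative, not taken from the server:

```typescript
// Illustrative MCP `tools/call` request body for generate_text.
// Only `prompt` is required; everything else falls back to the defaults above.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'generate_text',
    arguments: {
      prompt: 'Summarize the latest MCP spec changes in two sentences.',
      model: 'gemini-2.5-flash',
      temperature: 0.7,
      grounding: true,            // let the model consult Google Search
      conversationId: 'chat-001'  // reuse this ID to keep context across calls
    }
  }
};
```

Reusing the same `conversationId` on a later call makes the server prepend the earlier turns to the new request.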

Implementation Reference

  • The core handler function for the 'generate_text' tool. Validates inputs, constructs the Gemini API request with support for system instructions, JSON mode, grounding, conversation history, and safety settings, calls the API, and returns the generated text with metadata.
```typescript
private async generateText(id: any, args: any): Promise<MCPResponse> {
  try {
    // Validate parameters
    const validatedArgs = Validator.validateToolParams(ToolSchemas.generateText, args);
    const model = validatedArgs.model || 'gemini-2.5-flash';
    logger.api(`Generating text with model: ${model}`);

    const modelInfo = GEMINI_MODELS[model as keyof typeof GEMINI_MODELS];
    if (!modelInfo) {
      throw new Error(`Unknown model: ${model}`);
    }

    // Build generation config (?? rather than || so explicit 0 values survive)
    const generationConfig: any = {
      temperature: validatedArgs.temperature ?? 0.7,
      maxOutputTokens: validatedArgs.maxTokens ?? 2048,
      topK: validatedArgs.topK ?? 40,
      topP: validatedArgs.topP ?? 0.95
    };

    // Add JSON mode if requested
    if (validatedArgs.jsonMode) {
      generationConfig.responseMimeType = 'application/json';
      if (validatedArgs.jsonSchema) {
        try {
          generationConfig.responseSchema = Validator.validateJSON(validatedArgs.jsonSchema);
        } catch (error) {
          logger.error('Invalid JSON schema provided:', error);
          throw new ValidationError('Invalid JSON schema format');
        }
      }
    }

    // Build the request
    const requestBody: any = {
      model,
      contents: [
        {
          parts: [{ text: Validator.sanitizeString(validatedArgs.prompt) }],
          role: 'user'
        }
      ],
      generationConfig
    };

    // Add system instruction if provided
    if (validatedArgs.systemInstruction) {
      requestBody.systemInstruction = {
        parts: [{ text: Validator.sanitizeString(validatedArgs.systemInstruction) }]
      };
    }

    // Add safety settings if provided
    if (validatedArgs.safetySettings) {
      try {
        requestBody.safetySettings = typeof validatedArgs.safetySettings === 'string'
          ? JSON.parse(validatedArgs.safetySettings)
          : validatedArgs.safetySettings;
      } catch (error) {
        logger.error('Invalid safety settings JSON provided:', error);
      }
    }

    // Add grounding if requested and supported by the model
    if (validatedArgs.grounding && modelInfo.features.includes('grounding')) {
      requestBody.tools = [{ googleSearch: {} }];
    }

    // Prepend conversation history if a conversation ID was given
    if (validatedArgs.conversationId) {
      const history = this.conversations.get(validatedArgs.conversationId) || [];
      if (history.length > 0) {
        requestBody.contents = [...history, ...requestBody.contents];
      }
    }

    // Call the API using the new SDK format
    const result = await this.genAI.models.generateContent({ model, ...requestBody });
    const text = result.text || '';

    // Record only the new user turn and the model reply, so earlier
    // history is not duplicated on subsequent calls
    if (validatedArgs.conversationId) {
      const history = this.conversations.get(validatedArgs.conversationId) || [];
      history.push({ parts: [{ text: Validator.sanitizeString(validatedArgs.prompt) }], role: 'user' });
      history.push({ parts: [{ text }], role: 'model' });
      this.conversations.set(validatedArgs.conversationId, history);
    }

    return {
      jsonrpc: '2.0',
      id,
      result: {
        content: [{ type: 'text', text }],
        metadata: {
          model,
          tokensUsed: result.usageMetadata?.totalTokenCount,
          candidatesCount: result.candidates?.length || 1,
          finishReason: result.candidates?.[0]?.finishReason
        }
      }
    };
  } catch (error) {
    logger.error('Error in generateText:', error);
    return {
      jsonrpc: '2.0',
      id,
      error: {
        code: -32603,
        message: error instanceof Error ? error.message : 'Internal error'
      }
    };
  }
}
```
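The conversation handling above boils down to a per-ID array of turns. A minimal standalone sketch of that bookkeeping, with the `Content` shape and store simplified relative to the server's internals:

```typescript
// Simplified sketch of the handler's conversation store: each ID maps to an
// ordered list of user/model turns that gets prepended to the next request.
type Content = { role: 'user' | 'model'; parts: { text: string }[] };

const conversations = new Map<string, Content[]>();

// Build the `contents` array for a request: prior history first, new prompt last.
function buildContents(conversationId: string | undefined, prompt: string): Content[] {
  const turn: Content = { role: 'user', parts: [{ text: prompt }] };
  const history = conversationId ? conversations.get(conversationId) ?? [] : [];
  return [...history, turn];
}

// After a successful call, append the user turn and the model reply.
function recordTurn(conversationId: string, prompt: string, reply: string): void {
  const history = conversations.get(conversationId) ?? [];
  history.push({ role: 'user', parts: [{ text: prompt }] });
  history.push({ role: 'model', parts: [{ text: reply }] });
  conversations.set(conversationId, history);
}

// Two turns in the same conversation:
recordTurn('chat-001', 'Hello', 'Hi there!');
const contents = buildContents('chat-001', 'What did I just say?');
```

Note that the store is in-memory, so conversation context only survives as long as the server process.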
  • Tool registration in getAvailableTools() method, defining the name, description, and inputSchema for the 'generate_text' tool advertised to MCP clients.
```typescript
{
  name: 'generate_text',
  description: 'Generate text using Google Gemini with advanced features',
  inputSchema: {
    type: 'object',
    properties: {
      prompt: { type: 'string', description: 'The prompt to send to Gemini' },
      model: {
        type: 'string',
        description: 'Specific Gemini model to use',
        enum: Object.keys(GEMINI_MODELS),
        default: 'gemini-2.5-flash'
      },
      systemInstruction: { type: 'string', description: 'System instruction to guide model behavior' },
      temperature: { type: 'number', description: 'Temperature for generation (0-2)', default: 0.7, minimum: 0, maximum: 2 },
      maxTokens: { type: 'number', description: 'Maximum tokens to generate', default: 2048 },
      topK: { type: 'number', description: 'Top-k sampling parameter', default: 40 },
      topP: { type: 'number', description: 'Top-p (nucleus) sampling parameter', default: 0.95 },
      jsonMode: { type: 'boolean', description: 'Enable JSON mode for structured output', default: false },
      jsonSchema: { type: 'string', description: 'JSON schema as a string for structured output (when jsonMode is true)' },
      grounding: { type: 'boolean', description: 'Enable Google Search grounding for up-to-date information', default: false },
      safetySettings: { type: 'string', description: 'Safety settings as JSON string for content filtering' },
      conversationId: { type: 'string', description: 'ID for maintaining conversation context' }
    },
    required: ['prompt']
  }
},
```
  • Zod schema definition for 'generate_text' tool parameters used for runtime validation in the handler.
```typescript
generateText: z.object({
  prompt: z.string().min(1, 'Prompt is required'),
  model: CommonSchemas.geminiModel.optional(),
  systemInstruction: z.string().optional(),
  temperature: CommonSchemas.temperature.optional(),
  maxTokens: CommonSchemas.maxTokens.optional(),
  topK: CommonSchemas.topK.optional(),
  topP: CommonSchemas.topP.optional(),
  jsonMode: z.boolean().optional(),
  jsonSchema: CommonSchemas.jsonSchema.optional(),
  grounding: z.boolean().optional(),
  safetySettings: CommonSchemas.safetySettings.optional(),
  conversationId: CommonSchemas.conversationId.optional()
}),
```
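The `CommonSchemas` helpers are not shown in this reference, but judging from the input schema (temperature bounded to 0-2, defaults applied in the handler), the numeric checks behave roughly like this hand-rolled sketch. The function names and exact behavior here are assumptions for illustration, not the server's code:

```typescript
// Hand-rolled sketch of the numeric validation the Zod schema performs.
// The 0-2 temperature range comes from the inputSchema above; the rest
// of this sketch (names, defaulting) is a hypothetical reconstruction.
function validateTemperature(value: number): number {
  if (Number.isNaN(value) || value < 0 || value > 2) {
    throw new Error('temperature must be between 0 and 2');
  }
  return value;
}

// Validate what the caller sent and fill in the documented defaults.
function applyDefaults(args: { temperature?: number; maxTokens?: number }) {
  return {
    temperature: args.temperature !== undefined ? validateTemperature(args.temperature) : 0.7,
    maxOutputTokens: args.maxTokens ?? 2048
  };
}

const config = applyDefaults({ temperature: 1.2 });
```

Centralizing these bounds in shared schemas keeps the runtime validation and the advertised `inputSchema` from drifting apart.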
