
generate_text

Generate text for a range of content-creation tasks using Google Gemini AI models, with customizable parameters such as temperature, token limits, JSON output, and safety settings.

Instructions

Generate text using Google Gemini with advanced features

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The prompt to send to Gemini | |
| model | No | Specific Gemini model to use | gemini-2.5-flash |
| systemInstruction | No | System instruction to guide model behavior | |
| temperature | No | Temperature for generation (0-2) | 0.7 |
| maxTokens | No | Maximum tokens to generate | 2048 |
| topK | No | Top-k sampling parameter | 40 |
| topP | No | Top-p (nucleus) sampling parameter | 0.95 |
| jsonMode | No | Enable JSON mode for structured output | false |
| jsonSchema | No | JSON schema for structured output (when jsonMode is true) | |
| grounding | No | Enable Google Search grounding for up-to-date information | false |
| safetySettings | No | Safety settings for content filtering | |
| conversationId | No | ID for maintaining conversation context | |
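
For example, a client could invoke the tool with a JSON-RPC `tools/call` request along the lines of the following sketch. The argument values are illustrative, and only `prompt` is required:

```typescript
// Hypothetical MCP tools/call request invoking generate_text.
// All argument values are illustrative; only `prompt` is required.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'generate_text',
    arguments: {
      prompt: 'List three facts about the Moon.',
      model: 'gemini-2.5-flash',
      temperature: 0.4,
      maxTokens: 512,
      jsonMode: true,
      jsonSchema: {
        type: 'object',
        properties: {
          facts: { type: 'array', items: { type: 'string' } }
        }
      }
    }
  }
};
```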

Implementation Reference

  • The core handler function for the 'generate_text' tool. It parses the input arguments, builds a request to the Google Gemini API (via the genAI SDK), handles advanced features such as JSON mode, grounding, system instructions, and conversation history, and returns the generated text response. (A client-side usage sketch follows this list.)

```typescript
private async generateText(id: any, args: any): Promise<MCPResponse> {
  try {
    const model = args.model || 'gemini-2.5-flash';
    const modelInfo = GEMINI_MODELS[model as keyof typeof GEMINI_MODELS];
    if (!modelInfo) {
      throw new Error(`Unknown model: ${model}`);
    }

    // Build generation config (?? rather than || so an explicit 0 is honored)
    const generationConfig: any = {
      temperature: args.temperature ?? 0.7,
      maxOutputTokens: args.maxTokens ?? 2048,
      topK: args.topK ?? 40,
      topP: args.topP ?? 0.95
    };

    // Add JSON mode if requested
    if (args.jsonMode) {
      generationConfig.responseMimeType = 'application/json';
      if (args.jsonSchema) {
        generationConfig.responseSchema = args.jsonSchema;
      }
    }

    // Build the request
    const requestBody: any = {
      model,
      contents: [{ parts: [{ text: args.prompt }], role: 'user' }],
      generationConfig
    };

    // Add system instruction if provided
    if (args.systemInstruction) {
      requestBody.systemInstruction = { parts: [{ text: args.systemInstruction }] };
    }

    // Add safety settings if provided
    if (args.safetySettings) {
      requestBody.safetySettings = args.safetySettings;
    }

    // Add grounding if requested and supported by the selected model
    if (args.grounding && modelInfo.features.includes('grounding')) {
      requestBody.tools = [{ googleSearch: {} }];
    }

    // Prepend conversation history, if any
    if (args.conversationId) {
      const history = this.conversations.get(args.conversationId) || [];
      if (history.length > 0) {
        requestBody.contents = [...history, ...requestBody.contents];
      }
    }

    // Call the API using the new SDK format
    const result = await this.genAI.models.generateContent({ model, ...requestBody });
    const text = result.text || '';

    // Update conversation history: append only the new exchange (earlier turns
    // are already stored, so re-pushing requestBody.contents would duplicate them)
    if (args.conversationId) {
      const history = this.conversations.get(args.conversationId) || [];
      history.push({ parts: [{ text: args.prompt }], role: 'user' });
      history.push({ parts: [{ text }], role: 'model' });
      this.conversations.set(args.conversationId, history);
    }

    return {
      jsonrpc: '2.0',
      id,
      result: {
        content: [{ type: 'text', text }],
        metadata: {
          model,
          tokensUsed: result.usageMetadata?.totalTokenCount,
          candidatesCount: result.candidates?.length || 1,
          finishReason: result.candidates?.[0]?.finishReason
        }
      }
    };
  } catch (error) {
    console.error('Error in generateText:', error);
    return {
      jsonrpc: '2.0',
      id,
      error: {
        code: -32603,
        message: error instanceof Error ? error.message : 'Internal error'
      }
    };
  }
}
```
  • The input schema defining all parameters for the generate_text tool, including the required prompt, optional model selection, generation parameters, and advanced features such as JSON mode and grounding.

```typescript
inputSchema: {
  type: 'object',
  properties: {
    prompt: { type: 'string', description: 'The prompt to send to Gemini' },
    model: {
      type: 'string',
      description: 'Specific Gemini model to use',
      enum: Object.keys(GEMINI_MODELS),
      default: 'gemini-2.5-flash'
    },
    systemInstruction: { type: 'string', description: 'System instruction to guide model behavior' },
    temperature: {
      type: 'number',
      description: 'Temperature for generation (0-2)',
      default: 0.7,
      minimum: 0,
      maximum: 2
    },
    maxTokens: { type: 'number', description: 'Maximum tokens to generate', default: 2048 },
    topK: { type: 'number', description: 'Top-k sampling parameter', default: 40 },
    topP: { type: 'number', description: 'Top-p (nucleus) sampling parameter', default: 0.95 },
    jsonMode: { type: 'boolean', description: 'Enable JSON mode for structured output', default: false },
    jsonSchema: { type: 'object', description: 'JSON schema for structured output (when jsonMode is true)' },
    grounding: {
      type: 'boolean',
      description: 'Enable Google Search grounding for up-to-date information',
      default: false
    },
    safetySettings: {
      type: 'array',
      description: 'Safety settings for content filtering',
      items: {
        type: 'object',
        properties: {
          category: {
            type: 'string',
            enum: [
              'HARM_CATEGORY_HARASSMENT',
              'HARM_CATEGORY_HATE_SPEECH',
              'HARM_CATEGORY_SEXUALLY_EXPLICIT',
              'HARM_CATEGORY_DANGEROUS_CONTENT'
            ]
          },
          threshold: {
            type: 'string',
            enum: ['BLOCK_NONE', 'BLOCK_ONLY_HIGH', 'BLOCK_MEDIUM_AND_ABOVE', 'BLOCK_LOW_AND_ABOVE']
          }
        }
      }
    },
    conversationId: { type: 'string', description: 'ID for maintaining conversation context' }
  },
  required: ['prompt']
}
```
  • The dispatch case in the handleToolCall switch statement that routes 'generate_text' calls to the generateText handler.

```typescript
case 'generate_text':
  return await this.generateText(request.id, args);
```
  • The tool registration object returned by getAvailableTools(): the name, description, and inputSchema exposed through the tools/list endpoint. Its inputSchema duplicates the Input Schema shown above.

```typescript
{
  name: 'generate_text',
  description: 'Generate text using Google Gemini with advanced features',
  inputSchema: { /* identical to the Input Schema shown above */ }
},
```
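
To make the pieces above concrete, here is a client-side sketch. The `callTool` helper is hypothetical, standing in for any MCP client that can issue `tools/call` requests and unwrap the returned text; the sketch shows how reusing a `conversationId` lets the handler prepend earlier turns, and passes an illustrative `safetySettings` value conforming to the schema above:

```typescript
// Hypothetical helper standing in for any MCP client that can issue
// tools/call requests and extract the text content from the response.
declare function callTool(name: string, args: Record<string, unknown>): Promise<string>;

async function demo(): Promise<void> {
  const conversationId = 'demo-123';

  // Illustrative safetySettings value conforming to the input schema above.
  const safetySettings = [
    { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_MEDIUM_AND_ABOVE' },
    { category: 'HARM_CATEGORY_DANGEROUS_CONTENT', threshold: 'BLOCK_ONLY_HIGH' }
  ];

  const first = await callTool('generate_text', {
    prompt: 'My name is Ada. Please remember it.',
    conversationId,
    safetySettings
  });

  // Because the same conversationId is reused, the handler prepends the
  // first exchange as history before calling Gemini.
  const second = await callTool('generate_text', {
    prompt: 'What is my name?',
    conversationId,
    safetySettings
  });

  console.log(first, second);
}
```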

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/aliargun/mcp-server-gemini'
```
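
The same endpoint can be queried programmatically. A minimal sketch, assuming a Node 18+ runtime with a global fetch; the response shape is not documented here:

```typescript
// Minimal sketch: fetch this server's directory entry via the Glama API.
// Assumes Node 18+ (global fetch); the response shape is not documented here.
async function fetchServerEntry(): Promise<unknown> {
  const res = await fetch('https://glama.ai/api/mcp/v1/servers/aliargun/mcp-server-gemini');
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}
```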

If you have feedback or need assistance with the MCP directory API, please join our Discord server.