Glama
georgejeffers

Gemini MCP Server

generate_text

Generate text using Google Gemini AI models with configurable parameters like temperature, model selection, and system instructions for tailored responses.

Instructions

Generate text using Google Gemini models with configurable model, temperature, and system instructions.

Input Schema

Name               Required  Description                                  Default
prompt             Yes       The text prompt to send to Gemini            —
model              No        Gemini model to use                          gemini-2.5-flash
temperature        No        Sampling temperature (0-2)                   —
maxOutputTokens    No        Maximum number of output tokens              —
systemInstruction  No        System instruction to guide model behavior   —
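Given the schema above, a typical set of call arguments might look like the following. The values here are illustrative only; `prompt` is the sole required field, and `model` falls back to gemini-2.5-flash when omitted.

```typescript
// Hypothetical arguments for a generate_text tool call (illustrative values only).
// Only `prompt` is required; the other fields are optional per the schema above.
const args = {
  prompt: 'Summarize the Model Context Protocol in one sentence.',
  model: 'gemini-2.5-flash',
  temperature: 0.7,
  maxOutputTokens: 256,
  systemInstruction: 'Answer concisely.',
};
```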

Implementation Reference

  • The main handler function that executes the generate_text tool. It takes the prompt, model, temperature, maxOutputTokens, and systemInstruction parameters, calls Google Gemini's generateContent API, and returns the text response or a formatted error.
    async ({ prompt, model, temperature, maxOutputTokens, systemInstruction }) => {
      try {
        const response = await ai.models.generateContent({
          model,
          contents: prompt,
          config: { temperature, maxOutputTokens, systemInstruction },
        });
        return { content: [{ type: 'text' as const, text: response.text ?? '' }] };
      } catch (error) {
        return formatToolError(error);
      }
    },
  • Input schema definition using zod for the generate_text tool. Defines the required prompt field, the model field with a default of gemini-2.5-flash, and the optional fields (temperature, maxOutputTokens, systemInstruction) with their validation rules and descriptions.
    inputSchema: {
      prompt: z.string().min(1).describe('The text prompt to send to Gemini'),
      model: TextModel.default('gemini-2.5-flash').describe('Gemini model to use'),
      temperature: z.number().min(0).max(2).optional().describe('Sampling temperature (0-2)'),
      maxOutputTokens: z.number().min(1).optional().describe('Maximum number of output tokens'),
      systemInstruction: z.string().optional().describe('System instruction to guide model behavior'),
    },
  • src/index.ts:25 (registration)
    Registration of the generate_text tool with the MCP server. Calls the register function from generate-text.ts passing the server and AI client instances.
    registerGenerateText(server, ai);
  • Helper utility function used by generate_text handler to format errors. Converts errors to the standard MCP response format with isError flag.
    export function formatToolError(error: unknown) {
      const text = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: 'text' as const, text }],
        isError: true,
      };
    }
  • Type definition for TextModel enum used by generate_text schema. Defines valid Gemini model names including 'gemini-2.5-flash', 'gemini-2.5-pro', and various preview models.
    export const TextModel = z.enum([
      'gemini-2.5-flash',
      'gemini-2.5-pro',
      'gemini-3-flash-preview',
      'gemini-3-pro-preview',
      'gemini-3.1-pro-preview',
    ]);
    export type TextModel = z.infer<typeof TextModel>;
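To see the handler and the error helper working together, here is a self-contained sketch of the success and error paths. The Gemini call is replaced by a hypothetical stub, fakeGenerateContent, which is not part of the real server; the formatToolError body is reproduced from the snippet above.

```typescript
// Self-contained sketch: the error-formatting helper plus a simplified handler.
// fakeGenerateContent is a hypothetical stand-in for ai.models.generateContent;
// it always throws, so the catch branch below is exercised.
type ToolResult = {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
};

function formatToolError(error: unknown): ToolResult {
  const text = error instanceof Error ? error.message : String(error);
  return { content: [{ type: 'text' as const, text }], isError: true };
}

async function fakeGenerateContent(): Promise<{ text?: string }> {
  throw new Error('quota exceeded');
}

async function handler(): Promise<ToolResult> {
  try {
    const response = await fakeGenerateContent();
    return { content: [{ type: 'text' as const, text: response.text ?? '' }] };
  } catch (error) {
    return formatToolError(error);
  }
}
```

Because the handler returns formatToolError's result instead of rethrowing, a failed Gemini call still produces a well-formed MCP response, with isError set so clients can distinguish it from model output.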
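Registration can also be sketched against a minimal structural server interface. Everything here (McpServerLike, the tool method signature, GenAiLike) is an illustrative assumption, not the actual @modelcontextprotocol/sdk or @google/genai API.

```typescript
// Hedged sketch of registerGenerateText. McpServerLike and GenAiLike are
// illustrative stand-ins for the real MCP server and Gemini client types.
type GenArgs = { prompt: string; model?: string };
type ToolHandler = (args: GenArgs) => Promise<{ text: string }>;

interface McpServerLike {
  tool(name: string, description: string, handler: ToolHandler): void;
}

interface GenAiLike {
  generate(model: string, prompt: string): Promise<string>;
}

function registerGenerateText(server: McpServerLike, ai: GenAiLike): void {
  server.tool(
    'generate_text',
    'Generate text using Google Gemini models',
    // Apply the schema's default model when the caller omits one.
    async ({ prompt, model = 'gemini-2.5-flash' }) => ({
      text: await ai.generate(model, prompt),
    }),
  );
}
```

In the real server, the registration call would come from the MCP SDK, ai would be the @google/genai client, and the zod inputSchema shown above would be attached to the tool definition.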
