
gemini_generateContent

Generate complete text responses using Google Gemini models. Process single-turn prompts with optional control over token limits, creativity, and safety settings for tailored content creation.

Instructions

Generates non-streaming text content using a specified Google Gemini model. This tool takes a text prompt and returns the complete generated response from the model. It's suitable for single-turn generation tasks where the full response is needed at once. Optional parameters allow control over generation (temperature, max tokens, etc.) and safety settings.
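A minimal call sketch, assuming typical argument values: only `prompt` is required, and the `modelName` and `generationConfig` fields shown here are illustrative, not prescribed by the tool.

```typescript
// Hypothetical arguments for a gemini_generateContent call.
// Only `prompt` is required; the other fields are illustrative.
const args = {
  prompt: "Summarize the key trade-offs between REST and gRPC in three bullet points.",
  modelName: "gemini-1.5-flash",
  generationConfig: {
    temperature: 0.4,     // lower values give more deterministic output
    maxOutputTokens: 512, // cap on the length of the generated response
  },
};

console.log(JSON.stringify(args.generationConfig));
```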

Input Schema

  • prompt (required): The text prompt to send to the Gemini model for content generation.
  • modelName (optional): The name of the Gemini model to use (e.g., 'gemini-1.5-flash'). If omitted, the server's default model (from the GOOGLE_GEMINI_MODEL env var) is used.
  • generationConfig (optional): Configuration for controlling the generation process (temperature, max tokens, etc.).
  • safetySettings (optional): A list of safety settings to apply, overriding the model's default safety settings. Each setting specifies a harm category and a blocking threshold.
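Each safetySettings entry pairs a harm category with a blocking threshold. A small sketch, using category and threshold names from the public Gemini API enums:

```typescript
// Each entry overrides the model's default for one harm category.
// Names follow the public Gemini API HarmCategory / HarmBlockThreshold enums.
const safetySettings = [
  { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
  { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_ONLY_HIGH" },
];

console.log(safetySettings.length); // 2
```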

Implementation Reference

  • The core handler function `processRequest` that executes the `gemini_generate_content` tool logic. It parses arguments, invokes `GeminiService.generateContent` or `generateContentStream`, handles streaming by accumulating chunks, processes function calls from the model, formats responses in MCP content format, and maps errors.
    const processRequest = async (args: unknown) => {
      const typedArgs = args as GeminiGenerateContentArgs;
      logger.debug(`Received ${GEMINI_GENERATE_CONTENT_TOOL_NAME} request:`, {
        model: typedArgs.modelName,
        stream: typedArgs.stream,
        hasFunctionDeclarations: !!typedArgs.functionDeclarations,
      }); // Deliberately avoid logging the full prompt
    
      try {
        // Extract arguments - Zod parsing happens automatically via server.tool
        const {
          modelName,
          prompt,
          stream,
          functionDeclarations,
          toolConfig,
          generationConfig,
          safetySettings,
          systemInstruction,
          cachedContentName,
          urlContext,
          modelPreferences,
        } = typedArgs;
    
        // Calculate URL context metrics for model selection
        let urlCount = 0;
        let estimatedUrlContentSize = 0;
    
        if (urlContext?.urls) {
          urlCount = urlContext.urls.length;
          // Estimate content size based on configured limits
          const maxContentKb = urlContext.fetchOptions?.maxContentKb || 100;
          estimatedUrlContentSize = urlCount * maxContentKb * 1024; // Convert to bytes
        }
    
        // Prepare parameters object
        const contentParams: GenerateContentParams & {
          functionDeclarations?: unknown;
          toolConfig?: unknown;
        } = {
          prompt,
          modelName,
          generationConfig,
          safetySettings: safetySettings?.map((setting) => ({
            category: setting.category as HarmCategory,
            threshold: setting.threshold as HarmBlockThreshold,
          })),
          systemInstruction,
          cachedContentName,
          urlContext: urlContext?.urls
            ? {
                urls: urlContext.urls,
                fetchOptions: urlContext.fetchOptions,
              }
            : undefined,
          preferQuality: modelPreferences?.preferQuality,
          preferSpeed: modelPreferences?.preferSpeed,
          preferCost: modelPreferences?.preferCost,
          complexityHint: modelPreferences?.complexityHint,
          taskType: modelPreferences?.taskType,
          urlCount,
          estimatedUrlContentSize,
        };
    
        // Add function-related parameters if provided
        if (functionDeclarations) {
          contentParams.functionDeclarations = functionDeclarations;
        }
        if (toolConfig) {
          contentParams.toolConfig = toolConfig;
        }
    
        // Handle streaming vs non-streaming generation
        if (stream) {
          // Use streaming generation
          logger.debug(
            `Using streaming generation for ${GEMINI_GENERATE_CONTENT_TOOL_NAME}`
          );
          let fullText = ""; // Accumulator for chunks
    
          // Call the service's streaming method
          const sdkStream = serviceInstance.generateContentStream(contentParams);
    
          // Iterate over the async generator from the service and collect chunks
          // The StreamableHTTPServerTransport will handle the actual streaming for HTTP transport
          for await (const chunkText of sdkStream) {
            fullText += chunkText; // Append chunk to the accumulator
          }
    
          logger.debug(
            `Stream collected successfully for ${GEMINI_GENERATE_CONTENT_TOOL_NAME}`
          );
    
          // Return the complete text in the standard MCP format
          return {
            content: [
              {
                type: "text" as const,
                text: fullText,
              },
            ],
          };
        } else {
          // Use standard non-streaming generation
          logger.debug(
            `Using standard generation for ${GEMINI_GENERATE_CONTENT_TOOL_NAME}`
          );
          const result = await serviceInstance.generateContent(contentParams);
    
          // Handle function call responses if function declarations were provided
          if (
            functionDeclarations &&
            typeof result === "object" &&
            result !== null
          ) {
            // It's an object response, could be a function call
            const resultObj = result as FunctionCallResponse;
    
            if (
              resultObj.functionCall &&
              typeof resultObj.functionCall === "object"
            ) {
              // It's a function call request
              logger.debug(
                `Function call requested by model: ${resultObj.functionCall.name}`
              );
              // Serialize the function call details into a JSON string
              const functionCallJson = JSON.stringify(resultObj.functionCall);
              return {
                content: [
                  {
                    type: "text" as const, // Return as text type
                    text: functionCallJson, // Embed JSON string in text field
                  },
                ],
              };
            } else if (resultObj.text && typeof resultObj.text === "string") {
              // It's a regular text response
              return {
                content: [
                  {
                    type: "text" as const,
                    text: resultObj.text,
                  },
                ],
              };
            }
          }
    
          // Standard text response
          if (typeof result === "string") {
            return {
              content: [
                {
                  type: "text" as const,
                  text: result,
                },
              ],
            };
          } else {
            // Unexpected response structure from the service
            logger.error(
              `Unexpected response structure from generateContent:`,
              result
            );
            throw new Error(
              "Invalid response structure received from Gemini service."
            );
          }
        }
      } catch (error: unknown) {
        logger.error(
          `Error processing ${GEMINI_GENERATE_CONTENT_TOOL_NAME}:`,
          error
        );
    
        // Use the central error mapping utility
        throw mapAnyErrorToMcpError(error, GEMINI_GENERATE_CONTENT_TOOL_NAME);
      }
    };
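The streaming branch above drains the service's async generator and concatenates the chunks before returning. A minimal sketch of that accumulation pattern, with a stubbed generator standing in for `generateContentStream`:

```typescript
// Stub async generator standing in for GeminiService.generateContentStream.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world!";
}

// Drain the stream and concatenate chunks into one string, as the
// streaming branch of processRequest does.
async function collectStream(stream: AsyncGenerator<string>): Promise<string> {
  let fullText = "";
  for await (const chunkText of stream) {
    fullText += chunkText;
  }
  return fullText;
}

collectStream(fakeStream()).then((text) => console.log(text)); // prints "Hello, world!"
```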
  • Zod schema definition for the tool inputs via `GEMINI_GENERATE_CONTENT_PARAMS` and `geminiGenerateContentSchema`, covering model selection, prompt, streaming, function declarations, generation config, safety settings, system instructions, caching, URL context, and model preferences.
    export const GEMINI_GENERATE_CONTENT_PARAMS = {
      modelName: ModelNameSchema,
      prompt: z
        .string()
        .min(1)
        .describe(
          "Required. The text prompt to send to the Gemini model for content generation."
        ),
      stream: z
        .boolean()
        .optional()
        .default(false)
        .describe(
          "Optional. Whether to use streaming generation. Note: Due to SDK limitations, the full response is still returned at once."
        ),
      functionDeclarations: z
        .array(FunctionDeclarationSchema)
        .optional()
        .describe(
          "Optional. An array of function declarations (schemas) that the model can choose to call based on the prompt."
        ),
      toolConfig: toolConfigSchema,
      generationConfig: generationConfigSchema,
      safetySettings: z
        .array(safetySettingSchema)
        .optional()
        .describe(
          "Optional. A list of safety settings to apply, overriding default model safety settings. Each setting specifies a harm category and a blocking threshold."
        ),
      systemInstruction: z
        .string()
        .optional()
        .describe(
          "Optional. A system instruction to guide the model's behavior. Acts as context for how the model should respond."
        ),
      cachedContentName: z
        .string()
        .min(1)
        .optional()
        .describe(
          "Optional. Identifier for cached content in format 'cachedContents/...' to use with this request."
        ),
      urlContext: urlContextSchema,
      modelPreferences: ModelPreferencesSchema,
    };
    
    // Define the complete schema for validation
    export const geminiGenerateContentSchema = z.object(
      GEMINI_GENERATE_CONTENT_PARAMS
    );
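The core required/optional semantics can be mirrored in a few lines. This is a hand-rolled sketch of the two key constraints (non-empty `prompt`, `stream` defaulting to false), not the actual Zod parse:

```typescript
// Sketch mirroring the key GEMINI_GENERATE_CONTENT_PARAMS constraints:
// prompt is a required non-empty string, stream defaults to false.
// This imitates the Zod schema's behavior; it is not the schema itself.
function checkArgs(args: { prompt?: unknown; stream?: boolean }) {
  if (typeof args.prompt !== "string" || args.prompt.length < 1) {
    throw new Error("prompt is required and must be a non-empty string");
  }
  return { prompt: args.prompt, stream: args.stream ?? false };
}

console.log(checkArgs({ prompt: "hi" }).stream); // false
```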
  • Direct registration of the tool with the MCP server inside `geminiGenerateContentConsolidatedTool` function using `server.tool(name, description, schema, handler)`.
    server.tool(
      GEMINI_GENERATE_CONTENT_TOOL_NAME,
      GEMINI_GENERATE_CONTENT_TOOL_DESCRIPTION,
      GEMINI_GENERATE_CONTENT_PARAMS, // Pass the Zod schema object directly
      processRequest
    );
  • Higher-level registration of the consolidated tool function via `ToolRegistry` in the central `registerAllTools` function.
    registry.registerTool(
      adaptGeminiServiceTool(
        geminiGenerateContentConsolidatedTool,
        "geminiGenerateContentConsolidatedTool"
      )
    );

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a 'non-streaming' generation tool that 'returns the complete generated response,' which covers the basic operation mode. However, it doesn't mention important behavioral aspects like rate limits, authentication requirements, error conditions, or response format details that would be crucial for an AI agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured with four focused sentences: purpose statement, input/output behavior, usage context, and parameter overview. Every sentence earns its place with zero wasted words, and the most important information (what the tool does) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a content generation tool with no annotations and no output schema, the description provides adequate basic information about purpose and usage context. However, it lacks details about the response format, error handling, and operational constraints that would be important for an AI agent to use this tool effectively in production scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description mentions 'Optional parameters allow control over generation (temperature, max tokens, etc.) and safety settings,' which adds some high-level context about parameter categories but doesn't provide additional semantic meaning beyond what's in the detailed schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Generates non-streaming text content'), resource ('using a specified Google Gemini model'), and scope ('single-turn generation tasks where the full response is needed at once'). It effectively distinguishes from sibling tools like 'gemini_generateContentStream' (streaming) and 'gemini_sendMessage' (chat context).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('suitable for single-turn generation tasks where the full response is needed at once'), which implicitly distinguishes it from streaming and chat-based alternatives. However, it doesn't explicitly name when NOT to use it or mention specific sibling alternatives by name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
