MCP Gemini Server

by bsmi021

gemini_generateContentStream

Streams text content in real time using Google's Gemini models, ideal for interactive applications or for handling lengthy responses incrementally. Accepts a text prompt and offers optional parameters to customize generation and safety settings.

Instructions

Generates text content as a stream using a specified Google Gemini model. This tool takes a text prompt and streams back chunks of the generated response as they become available. It's suitable for interactive use cases or handling long responses. Optional parameters allow control over generation and safety settings.
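For orientation, here is a minimal sketch of calling this tool from a TypeScript MCP client over stdio. The launch command, server path, and API-key variable are placeholder assumptions, not part of this server's documented interface; depending on the client, streamed chunks may arrive as notifications or be aggregated into the final result.

```typescript
// Minimal sketch: invoking gemini_generateContentStream from a TypeScript MCP
// client over stdio. The launch command, server path, and API-key variable
// below are placeholder assumptions, not documented by this server.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main(): Promise<void> {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/server.js"], // hypothetical build output path
    env: { GOOGLE_GEMINI_API_KEY: process.env.GOOGLE_GEMINI_API_KEY ?? "" },
  });

  const client = new Client(
    { name: "example-client", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(transport);

  // Only `prompt` is required; omitted fields fall back to server defaults.
  const result = await client.callTool({
    name: "gemini_generateContentStream",
    arguments: {
      prompt: "Explain how HTTP chunked transfer encoding works.",
      modelName: "gemini-1.5-flash",
    },
  });

  console.log(result.content);
  await client.close();
}

main().catch(console.error);
```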

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `generationConfig` | No | Configuration for controlling the generation process. | |
| `modelName` | No | Name of the Gemini model to use (e.g., `gemini-1.5-flash`). | Server default (`GOOGLE_GEMINI_MODEL` env var) |
| `prompt` | Yes | The text prompt to send to the Gemini model for content generation. | |
| `safetySettings` | No | A list of safety settings to apply, overriding the model's default safety settings. | |
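These parameters map to a plain argument object. A hypothetical, fully-specified call might look like the following (all values are illustrative only):

```typescript
// Illustrative argument object for gemini_generateContentStream.
// Only `prompt` is required; the rest override server defaults.
const args = {
  prompt: "Write a haiku about the sea.",
  modelName: "gemini-1.5-flash",
  generationConfig: {
    temperature: 0.7,       // lower = more deterministic, higher = more creative (schema range 0-1)
    maxOutputTokens: 256,   // hard cap on generated tokens
    topK: 40,               // sample from the 40 most probable tokens
    topP: 0.95,             // nucleus sampling probability mass
    stopSequences: ["END"], // generation halts if this string appears
  },
  safetySettings: [
    { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_ONLY_HIGH" },
  ],
};
```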

Input Schema (JSON Schema)

{ "$schema": "http://json-schema.org/draft-07/schema#", "additionalProperties": false, "properties": { "generationConfig": { "additionalProperties": false, "description": "Optional configuration for controlling the generation process.", "properties": { "maxOutputTokens": { "description": "Maximum number of tokens to generate in the response.", "minimum": 1, "type": "integer" }, "stopSequences": { "description": "Sequences where the API will stop generating further tokens.", "items": { "type": "string" }, "type": "array" }, "temperature": { "description": "Controls randomness. Lower values (~0.2) make output more deterministic, higher values (~0.8) make it more creative. Default varies by model.", "maximum": 1, "minimum": 0, "type": "number" }, "topK": { "description": "Top-k sampling parameter. The model considers the k most probable tokens. Default varies by model.", "minimum": 1, "type": "integer" }, "topP": { "description": "Nucleus sampling parameter. The model considers only tokens with probability mass summing to this value. Default varies by model.", "maximum": 1, "minimum": 0, "type": "number" } }, "type": "object" }, "modelName": { "description": "Optional. The name of the Gemini model to use (e.g., 'gemini-1.5-flash'). If omitted, the server's default model (from GOOGLE_GEMINI_MODEL env var) will be used.", "minLength": 1, "type": "string" }, "prompt": { "description": "Required. The text prompt to send to the Gemini model for content generation.", "minLength": 1, "type": "string" }, "safetySettings": { "description": "Optional. A list of safety settings to apply, overriding default model safety settings.", "items": { "additionalProperties": false, "description": "Setting for controlling content safety for a specific harm category.", "properties": { "category": { "description": "Category of harmful content to apply safety settings for.", "enum": [ "HARM_CATEGORY_UNSPECIFIED", "HARM_CATEGORY_HATE_SPEECH", "HARM_CATEGORY_SEXUALLY_EXPLICIT", "HARM_CATEGORY_HARASSMENT", "HARM_CATEGORY_DANGEROUS_CONTENT" ], "type": "string" }, "threshold": { "description": "Threshold for blocking harmful content. Higher thresholds block more content.", "enum": [ "HARM_BLOCK_THRESHOLD_UNSPECIFIED", "BLOCK_LOW_AND_ABOVE", "BLOCK_MEDIUM_AND_ABOVE", "BLOCK_ONLY_HIGH", "BLOCK_NONE" ], "type": "string" } }, "required": [ "category", "threshold" ], "type": "object" }, "type": "array" } }, "required": [ "prompt" ], "type": "object" }