generate_text
Generate text using Google's Gemini AI models. Generation parameters such as temperature, top-k/top-p sampling, and token limits are customizable, and optional features include JSON mode for structured output, Google Search grounding, system instructions, and multi-turn conversation context.
Instructions
Generate text using Google Gemini with advanced features
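A minimal call only needs `prompt`; every other field falls back to the defaults listed in the schema below (model `gemini-2.5-flash`, temperature 0.7, 2048 max tokens). The TypeScript sketch below simply builds and prints hypothetical `arguments` payloads for this tool, with shapes taken directly from the input schema; the surrounding MCP client plumbing is omitted.

```typescript
// Minimal arguments payload for the generate_text tool.
// Only `prompt` is required; omitted fields use the schema defaults
// (model: gemini-2.5-flash, temperature: 0.7, maxTokens: 2048, topK: 40, topP: 0.95).
const minimalArgs = {
  prompt: "Summarize the main differences between TCP and UDP in three bullet points.",
};

// A call that overrides the sampling defaults for more deterministic output.
const tunedArgs = {
  prompt: "List the SI base units.",
  model: "gemini-2.5-flash-lite", // any value from the schema's model enum
  temperature: 0.2,               // lower temperature -> less random output
  maxTokens: 512,
};

// Print the payloads as the JSON a client would send as the tool's arguments.
console.log(JSON.stringify(minimalArgs, null, 2));
console.log(JSON.stringify(tunedArgs, null, 2));
```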
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| conversationId | No | ID for maintaining conversation context | |
| grounding | No | Enable Google Search grounding for up-to-date information | false |
| jsonMode | No | Enable JSON mode for structured output (example below) | false |
| jsonSchema | No | JSON schema as a string for structured output (when jsonMode is true) | |
| maxTokens | No | Maximum tokens to generate | 2048 |
| model | No | Specific Gemini model to use | gemini-2.5-flash |
| prompt | Yes | The prompt to send to Gemini | |
| safetySettings | No | Safety settings as a JSON string for content filtering | |
| systemInstruction | No | System instruction to guide model behavior | |
| temperature | No | Temperature for generation (0-2) | 0.7 |
| topK | No | Top-k sampling parameter | 40 |
| topP | No | Top-p (nucleus) sampling parameter | 0.95 |
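To get structured output, set `jsonMode` to true and pass the desired schema through `jsonSchema` as a string; the field is typed as a string, so the schema must be serialized rather than nested. A sketch, using a made-up book schema purely for illustration:

```typescript
// Structured-output request: jsonMode enables JSON output and jsonSchema
// (a *string*, per the input schema) describes the shape we want back.
// The book schema here is just an illustration, not part of the tool.
const bookSchema = {
  type: "object",
  properties: {
    title: { type: "string" },
    author: { type: "string" },
    year: { type: "number" },
  },
  required: ["title", "author"],
};

const structuredArgs = {
  prompt: "Give me one classic science-fiction novel as structured data.",
  jsonMode: true,
  jsonSchema: JSON.stringify(bookSchema), // must be serialized to a string
  temperature: 0.3,
};

console.log(JSON.stringify(structuredArgs, null, 2));
```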
Input Schema (JSON Schema)
{
  "properties": {
    "conversationId": {
      "description": "ID for maintaining conversation context",
      "type": "string"
    },
    "grounding": {
      "default": false,
      "description": "Enable Google Search grounding for up-to-date information",
      "type": "boolean"
    },
    "jsonMode": {
      "default": false,
      "description": "Enable JSON mode for structured output",
      "type": "boolean"
    },
    "jsonSchema": {
      "description": "JSON schema as a string for structured output (when jsonMode is true)",
      "type": "string"
    },
    "maxTokens": {
      "default": 2048,
      "description": "Maximum tokens to generate",
      "type": "number"
    },
    "model": {
      "default": "gemini-2.5-flash",
      "description": "Specific Gemini model to use",
      "enum": [
        "gemini-2.5-pro",
        "gemini-2.5-flash",
        "gemini-2.5-flash-lite",
        "gemini-2.0-flash",
        "gemini-2.0-flash-lite",
        "gemini-2.0-pro-experimental",
        "gemini-1.5-pro",
        "gemini-1.5-flash"
      ],
      "type": "string"
    },
    "prompt": {
      "description": "The prompt to send to Gemini",
      "type": "string"
    },
    "safetySettings": {
      "description": "Safety settings as JSON string for content filtering",
      "type": "string"
    },
    "systemInstruction": {
      "description": "System instruction to guide model behavior",
      "type": "string"
    },
    "temperature": {
      "default": 0.7,
      "description": "Temperature for generation (0-2)",
      "maximum": 2,
      "minimum": 0,
      "type": "number"
    },
    "topK": {
      "default": 40,
      "description": "Top-k sampling parameter",
      "type": "number"
    },
    "topP": {
      "default": 0.95,
      "description": "Top-p (nucleus) sampling parameter",
      "type": "number"
    }
  },
  "required": [
    "prompt"
  ],
  "type": "object"
}
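Grounding, conversation context, and safety filtering are all plain fields on the same arguments object. The sketch below assumes the Gemini API's native safety-setting shape (category/threshold pairs) for the `safetySettings` string; the schema only says it is a JSON string, so check the server's documentation for the exact structure it expects. The `conversationId` value is an arbitrary identifier you reuse across calls.

```typescript
// Grounded, multi-turn request with a system instruction.
// "support-session-42" is an ID invented for this example; reusing it on
// later calls is what maintains the conversation context.
const groundedArgs = {
  prompt: "What changed in the most recent TypeScript release?",
  grounding: true,                      // enable Google Search grounding
  conversationId: "support-session-42", // arbitrary; reuse on follow-up calls
  systemInstruction: "Answer concisely and cite the sources you were grounded on.",
  // Assumed to follow the Gemini API's category/threshold format; the schema
  // only specifies that this field is a JSON string.
  safetySettings: JSON.stringify([
    { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_ONLY_HIGH" },
  ]),
};

console.log(JSON.stringify(groundedArgs, null, 2));
```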