AI Studio MCP Server

by eternnoir

generate_content

Generate AI content using Gemini with file analysis, code execution, and web search. Process images, videos, audio, PDFs, and documents to create responses, run code, and research information.

Instructions

Generate content using Gemini with optional file inputs, code execution, and Google search. Supports multiple files: images (JPG, PNG, GIF, WebP, SVG, BMP, TIFF), video (MP4, AVI, MOV, WebM, FLV, MPG, WMV), audio (MP3, WAV, AIFF, AAC, OGG, FLAC), documents (PDF), and text files (TXT, MD, JSON, XML, CSV, HTML). MIME type is auto-detected from file extension.
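Since MIME types are auto-detected from file extensions, a detection step of this kind can be sketched with Python's standard-library `mimetypes` module (an illustrative sketch only; the server's actual mapping may differ):

```python
import mimetypes

def detect_mime(path: str) -> str:
    """Guess a MIME type from the file extension, as the server does.

    Falls back to application/octet-stream when the extension is unknown.
    (Hypothetical helper for illustration, not the server's implementation.)
    """
    mime, _ = mimetypes.guess_type(path)
    return mime or "application/octet-stream"

print(detect_mime("/path/to/video.mp4"))  # video/mp4
print(detect_mime("/document.pdf"))       # application/pdf
```

Passing an explicit `type` in a file object overrides detection, which is useful when a path has no extension or you supply base64 `content` instead of a `path`.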

Example usage:

{ "user_prompt": "Analyze this video", "files": [ { "path": "/path/to/video.mp4" } ] }

PDF to Markdown conversion:

{ "user_prompt": "Convert this PDF to well-formatted Markdown, preserving structure and formatting", "files": [ {"path": "/document.pdf"} ] }

With Google Search:

{ "user_prompt": "What are the latest AI breakthroughs in 2024?", "enable_google_search": true }

With Code Execution:

{ "user_prompt": "Write and run a Python script to calculate prime numbers up to 100", "enable_code_execution": true }

Combining features with thinking mode:

{
  "user_prompt": "Research quantum computing and create a Python simulation",
  "model": "gemini-2.5-pro",
  "enable_google_search": true,
  "enable_code_execution": true,
  "thinking_budget": -1
}
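The examples above are plain JSON argument objects passed to the tool. If you assemble them programmatically, a hypothetical helper like the following (the name `build_request` is illustrative, not part of the server) keeps unset optionals out of the payload so the server's defaults apply:

```python
def build_request(user_prompt, *, model=None, files=None,
                  enable_google_search=False, enable_code_execution=False,
                  thinking_budget=None, temperature=None, system_prompt=None):
    """Assemble a generate_content argument object, omitting unset optionals.

    Only user_prompt is required; everything else falls back to the
    server-side defaults when absent.
    """
    args = {"user_prompt": user_prompt}
    if model is not None:
        args["model"] = model
    if files:
        args["files"] = files
    if enable_google_search:
        args["enable_google_search"] = True
    if enable_code_execution:
        args["enable_code_execution"] = True
    if thinking_budget is not None:
        args["thinking_budget"] = thinking_budget
    if temperature is not None:
        args["temperature"] = temperature
    if system_prompt is not None:
        args["system_prompt"] = system_prompt
    return args

# Reproduces the combined-features example above.
req = build_request(
    "Research quantum computing and create a Python simulation",
    model="gemini-2.5-pro",
    enable_google_search=True,
    enable_code_execution=True,
    thinking_budget=-1,
)
```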

Input Schema

Name | Required | Description | Default
enable_code_execution | No | Enable code execution capability for the model | false
enable_google_search | No | Enable Google search capability for the model | false
files | No | Array of files to include in generation (optional). Supports images, video, audio, PDFs, and text files. |
model | No | Gemini model to use (optional) | gemini-2.5-flash
system_prompt | No | System prompt to guide the AI behavior (optional) |
temperature | No | Temperature for generation (0-2) | 0.2
thinking_budget | No | Thinking budget for models that support thinking mode (-1 for unlimited) | -1
user_prompt | Yes | User prompt for generation |

Input Schema (JSON Schema)

{
  "type": "object",
  "required": ["user_prompt"],
  "properties": {
    "enable_code_execution": {
      "type": "boolean",
      "default": false,
      "description": "Enable code execution capability for the model"
    },
    "enable_google_search": {
      "type": "boolean",
      "default": false,
      "description": "Enable Google search capability for the model"
    },
    "files": {
      "type": "array",
      "maxItems": 10,
      "description": "Array of files to include in generation (optional). Supports images, video, audio, PDFs, and text files.",
      "items": {
        "type": "object",
        "oneOf": [
          { "required": ["path"] },
          { "required": ["content"] }
        ],
        "required": [],
        "properties": {
          "path": {
            "type": "string",
            "description": "Path to file"
          },
          "content": {
            "type": "string",
            "description": "Base64 encoded file content (alternative to path)"
          },
          "type": {
            "type": "string",
            "description": "MIME type of the file (optional, auto-detected from file extension if path provided)"
          }
        }
      }
    },
    "model": {
      "type": "string",
      "default": "gemini-2.5-flash",
      "description": "Gemini model to use (optional)"
    },
    "system_prompt": {
      "type": "string",
      "description": "System prompt to guide the AI behavior (optional)"
    },
    "temperature": {
      "type": "number",
      "minimum": 0,
      "maximum": 2,
      "default": 0.2,
      "description": "Temperature for generation (0-2, default 0.2)"
    },
    "thinking_budget": {
      "type": "number",
      "default": -1,
      "description": "Thinking budget for models that support thinking mode (-1 for unlimited)"
    },
    "user_prompt": {
      "type": "string",
      "description": "User prompt for generation"
    }
  }
}
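The key constraints this schema implies (required `user_prompt`, `temperature` within 0-2, at most 10 files, and each file needing either `path` or `content`) can be checked client-side before calling the tool. A stdlib-only sketch, assuming a hypothetical `validate_args` helper rather than the server's own validator:

```python
def validate_args(args: dict) -> list[str]:
    """Return a list of generate_content input-schema violations (empty if valid)."""
    errors = []
    if "user_prompt" not in args:
        errors.append("user_prompt is required")
    # temperature must lie in [0, 2]; 0.2 is the schema default.
    t = args.get("temperature", 0.2)
    if not (0 <= t <= 2):
        errors.append("temperature must be between 0 and 2")
    files = args.get("files", [])
    if len(files) > 10:
        errors.append("at most 10 files are allowed")
    # Each file object must satisfy the oneOf: a path or base64 content.
    for i, f in enumerate(files):
        if "path" not in f and "content" not in f:
            errors.append(f"files[{i}] needs either 'path' or 'content'")
    return errors

print(validate_args({"user_prompt": "hi", "temperature": 3}))
# ['temperature must be between 0 and 2']
```

For stricter checks (type validation, the `oneOf` semantics exactly as written), the schema above can be fed to a full JSON Schema validator library instead.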


    MCP directory API

    We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/eternnoir/aistudio-mcp-server'

    If you have feedback or need assistance with the MCP directory API, please join our Discord server.