
Gemini MCP Server

by mintmcqueen

batch_create

Create batch content generation jobs using Gemini AI models at 50% lower cost, with a ~24-hour turnaround. Submit large-scale request sets via JSONL files or inline inputs for automated workflows.

Instructions

CREATE BATCH JOB - Create an async content generation batch job with Gemini.

COST: 50% cheaper than the standard API. TURNAROUND: ~24 hours target.

WORKFLOW:
1. Prepare a JSONL file with requests (or use batch_ingest_content first).
2. Upload the file with upload_file.
3. Call batch_create with the file URI.
4. Use batch_get_status to monitor progress.
5. Use batch_download_results when complete.

SUPPORTS: Inline requests (<20MB) or file-based input (JSONL for large batches). Returns the batch job ID and initial status.
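Step 1 of the workflow can be sketched as follows. The prompt texts, the `batch_requests.jsonl` file name, and the `contents`/`parts` request shape are illustrative assumptions (the exact `request` body should match the Gemini `generateContent` payload); the tool only requires that each line carry a 'key' and a 'request' field:

```python
import json

# Hypothetical prompts to batch; each JSONL line pairs a unique "key"
# with a "request" payload, as the batch tool expects.
prompts = {
    "doc-001": "Summarize the attached release notes.",
    "doc-002": "Translate this paragraph into French.",
}

lines = []
for key, prompt in prompts.items():
    lines.append(json.dumps({
        "key": key,
        "request": {"contents": [{"parts": [{"text": prompt}]}]},
    }))

# Write one JSON object per line -- the JSONL format batch_create consumes.
with open("batch_requests.jsonl", "w") as f:
    f.write("\n".join(lines) + "\n")
```

The resulting file is what you would pass to upload_file in step 2; its returned URI then becomes `inputFileUri` in step 3.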

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| model | No | Gemini model for content generation | gemini-2.5-flash |
| requests | No | Inline batch requests (for small batches <20MB). Each request should have 'key' and 'request' fields. | |
| inputFileUri | No | URI of an uploaded JSONL file (from the upload_file tool). Use for large batches or when requests exceed 20MB. | |
| displayName | No | Optional display name for the batch job | |
| outputLocation | No | Output directory for results | current working directory |
| config | No | Optional generation config (temperature, maxOutputTokens, etc.) | |
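Putting the parameters together, a batch_create call with inline requests might carry arguments shaped like the following. All values are illustrative: the display name and prompt text are made up, and only the field names and bounds come from the schema:

```python
# Illustrative batch_create arguments for a small inline batch.
# Field names follow the input schema; values are assumptions.
batch_args = {
    "model": "gemini-2.5-flash",
    "displayName": "nightly-summaries",  # hypothetical label
    "requests": [
        {
            "key": "item-1",
            "request": {"contents": [{"parts": [{"text": "Summarize X."}]}]},
        },
    ],
    "config": {"temperature": 0.7, "maxOutputTokens": 2048},
}
```

For batches over 20MB, drop `requests` and pass `inputFileUri` instead, pointing at the uploaded JSONL file.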

Input Schema (JSON Schema)

{
  "type": "object",
  "properties": {
    "model": {
      "type": "string",
      "description": "Gemini model for content generation",
      "enum": ["gemini-2.5-pro", "gemini-2.5-flash", "gemini-2.0-flash-exp"],
      "default": "gemini-2.5-flash"
    },
    "requests": {
      "type": "array",
      "description": "Inline batch requests (for small batches <20MB). Each request should have 'key' and 'request' fields."
    },
    "inputFileUri": {
      "type": "string",
      "description": "URI of uploaded JSONL file (from upload_file tool). Use for large batches or when requests exceed 20MB."
    },
    "displayName": {
      "type": "string",
      "description": "Optional display name for the batch job"
    },
    "outputLocation": {
      "type": "string",
      "description": "Output directory for results (defaults to current working directory)"
    },
    "config": {
      "type": "object",
      "description": "Optional generation config (temperature, maxOutputTokens, etc.)",
      "properties": {
        "temperature": { "type": "number", "minimum": 0, "maximum": 2, "default": 1 },
        "maxOutputTokens": { "type": "number", "minimum": 1, "maximum": 500000 }
      }
    }
  }
}
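The numeric bounds on `config` can be checked client-side before submitting a job. This helper is a sketch based only on the schema above, not part of the server:

```python
def validate_config(config: dict) -> list:
    """Return a list of violations of the config bounds in the schema."""
    errors = []
    t = config.get("temperature", 1)  # schema default is 1
    if not (0 <= t <= 2):
        errors.append("temperature %s outside [0, 2]" % t)
    m = config.get("maxOutputTokens")
    if m is not None and not (1 <= m <= 500000):
        errors.append("maxOutputTokens %s outside [1, 500000]" % m)
    return errors
```

An empty return list means the config satisfies the schema's numeric constraints.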

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mintmcqueen/gemini-mcp'
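The same lookup can be done from Python with the standard library. This is a sketch of the curl call above; it makes no assumption about the response layout beyond it being JSON:

```python
import json
import urllib.request

# Same endpoint as the curl example, assembled from its path segments.
base = "https://glama.ai/api/mcp/v1/servers"
url = base + "/mintmcqueen/gemini-mcp"

def fetch_server_info(endpoint=url):
    """GET the directory entry and parse the JSON response."""
    with urllib.request.urlopen(endpoint) as resp:
        return json.loads(resp.read().decode("utf-8"))
```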

If you have feedback or need assistance with the MCP directory API, please join our Discord server.