batch_create

Create async batch content generation jobs with Gemini AI at reduced cost. Upload JSONL files or use inline requests to process large-scale content tasks with ~24-hour turnaround.

Instructions

Create an async content generation batch job with Gemini.

- Cost: 50% cheaper than the standard API.
- Turnaround: ~24-hour target.
- Supports: inline requests (<20MB) or file-based input (JSONL for large batches).
- Returns: the batch job ID and initial status.

Workflow:

1. Prepare a JSONL file of requests (or use batch_ingest_content first).
2. Upload the file with upload_file.
3. Call batch_create with the file URI.
4. Monitor progress with batch_get_status.
5. Download results with batch_download_results when complete.
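Step 1 of the workflow can be sketched as follows. The `key`/`request` fields come from the input schema above; the inner `contents` layout mirrors the standard Gemini generateContent request body, which is an assumption here, and the filename and prompts are purely illustrative.

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical prompts to batch-process; each request gets a unique key
// so results can be matched back after batch_download_results.
const prompts = ["Summarize the water cycle.", "Explain photosynthesis."];

// One JSON object per line, following the 'key' + 'request' shape the
// input schema describes.
const lines = prompts.map((text, i) =>
  JSON.stringify({
    key: `request-${i}`,
    request: { contents: [{ parts: [{ text }] }] },
  })
);

// This file would then be passed to the upload_file tool (step 2).
writeFileSync("batch-requests.jsonl", lines.join("\n"));
```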

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| model | No | Gemini model for content generation | gemini-2.5-flash |
| requests | No | Inline batch requests (for small batches <20MB). Each request should have 'key' and 'request' fields. | |
| inputFileUri | No | URI of uploaded JSONL file (from upload_file tool). Use for large batches or when requests exceed 20MB. | |
| displayName | No | Optional display name for the batch job | |
| outputLocation | No | Output directory for results (defaults to current working directory) | |
| config | No | Optional generation config (temperature, maxOutputTokens, etc.) | |
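For reference, here are two hypothetical argument objects conforming to the schema above, one inline and one file-based. The display name, file URI, and config values are illustrative, not required values.

```typescript
// Inline variant: small batches (<20MB) passed directly in `requests`.
const inlineArgs = {
  model: "gemini-2.5-flash",
  displayName: "daily-summaries", // hypothetical name
  requests: [
    { key: "r1", request: { contents: [{ parts: [{ text: "Hello" }] }] } },
  ],
  config: { temperature: 0.2, maxOutputTokens: 1024 },
};

// File-based variant: large batches uploaded ahead of time.
const fileArgs = {
  model: "gemini-2.5-flash",
  inputFileUri: "files/abc123", // hypothetical URI returned by upload_file
  outputLocation: "./batch-output",
};
```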

Implementation Reference

  • Type definition for the input parameters (schema) of the batch_create tool, defining the structure of batch creation requests to the Gemini API:

    ```typescript
    export interface BatchCreateParams {
      model: string;
      requests?: any[];
      inputFileUri?: string;
      displayName?: string;
      outputLocation?: string;
      config?: {
        systemInstruction?: any;
        temperature?: number;
        maxOutputTokens?: number;
        [key: string]: any;
      };
    }
    ```
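A minimal validation sketch built on this interface: the schema implies that exactly one of `requests` or `inputFileUri` should be supplied per call. The helper name is hypothetical, and the interface is redeclared so the snippet stands alone.

```typescript
interface BatchCreateParams {
  model: string;
  requests?: any[];
  inputFileUri?: string;
  displayName?: string;
  outputLocation?: string;
  config?: { temperature?: number; maxOutputTokens?: number; [k: string]: any };
}

// Hypothetical guard: accept inline XOR file-based input, never both or neither.
function hasValidInput(params: BatchCreateParams): boolean {
  const inline = Array.isArray(params.requests) && params.requests.length > 0;
  const file = typeof params.inputFileUri === "string";
  return inline !== file;
}
```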

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/mintmcqueen/gemini-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.