Gemini MCP Server

by mintmcqueen

batch_process_embeddings

Process large content batches into embeddings through automated workflow: ingest content, convert to JSONL, upload, create batch job, poll completion, and download 1536-dimensional vector results with metadata.

Instructions

COMPLETE EMBEDDINGS WORKFLOW - End-to-end embeddings batch processing. WORKFLOW: 1) Ingests content, 2) Queries user for task type (or auto-recommends), 3) Converts to JSONL, 4) Uploads, 5) Creates batch job, 6) Polls until complete, 7) Downloads results. BEST FOR: Simple one-call embeddings generation. RETURNS: Embeddings array (1536-dimensional vectors) with metadata.
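Steps 6–7 (poll until complete, then download) can be sketched as a simple polling loop. This is an illustrative sketch, not the server's actual implementation: the `get_status` callable and the state names (`SUCCEEDED`, `FAILED`, `CANCELLED`) are assumptions standing in for whatever the Gemini batch API actually reports.

```python
import time

def poll_until_complete(get_status, poll_interval_seconds=30, max_polls=1000):
    """Poll a batch job until it reaches a terminal state.

    `get_status` is a hypothetical callable returning the job's current
    state string; the real state names depend on the Gemini batch API.
    """
    for _ in range(max_polls):
        state = get_status()
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(poll_interval_seconds)
    raise TimeoutError("batch job did not finish within the polling budget")
```

The `pollIntervalSeconds` parameter above maps onto `poll_interval_seconds` here; the schema enforces a minimum of 10 seconds so the status endpoint is not hammered.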

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| inputFile | Yes | Path to content file | |
| taskType | No | Embedding task type (omit to get interactive prompt) | |
| model | No | Embedding model | gemini-embedding-001 |
| outputLocation | No | Output directory for results | |
| pollIntervalSeconds | No | Seconds between status checks | 30 |
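Step 3 of the workflow converts ingested content into JSONL, one embedding request per line. The sketch below shows the general shape of such a conversion; the per-line field names (`key`, `request`, `task_type`, `content`) are assumptions for illustration, as the real request format is defined by the Gemini batch embeddings API.

```python
import json

def content_to_jsonl(chunks, task_type="RETRIEVAL_DOCUMENT",
                     model="gemini-embedding-001"):
    """Convert content chunks into JSONL embedding requests (sketch)."""
    lines = []
    for i, text in enumerate(chunks):
        request = {
            "key": f"chunk-{i}",  # hypothetical per-item identifier
            "request": {
                "model": model,
                "task_type": task_type,
                "content": text,
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)
```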

Input Schema (JSON Schema)

{
  "properties": {
    "inputFile": {
      "description": "Path to content file",
      "type": "string"
    },
    "model": {
      "default": "gemini-embedding-001",
      "description": "Embedding model",
      "enum": ["gemini-embedding-001"],
      "type": "string"
    },
    "outputLocation": {
      "description": "Output directory for results",
      "type": "string"
    },
    "pollIntervalSeconds": {
      "default": 30,
      "description": "Seconds between status checks",
      "minimum": 10,
      "type": "number"
    },
    "taskType": {
      "description": "Embedding task type (omit to get interactive prompt)",
      "enum": [
        "SEMANTIC_SIMILARITY",
        "CLASSIFICATION",
        "CLUSTERING",
        "RETRIEVAL_DOCUMENT",
        "RETRIEVAL_QUERY",
        "CODE_RETRIEVAL_QUERY",
        "QUESTION_ANSWERING",
        "FACT_VERIFICATION"
      ],
      "type": "string"
    }
  },
  "required": ["inputFile"],
  "type": "object"
}
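A client can pre-check arguments against this schema before calling the tool. The sketch below hand-rolls the three constraints the schema actually imposes (required `inputFile`, the `taskType` enum, the `pollIntervalSeconds` minimum); it is a minimal illustration, not a full JSON Schema validator.

```python
VALID_TASK_TYPES = {
    "SEMANTIC_SIMILARITY", "CLASSIFICATION", "CLUSTERING",
    "RETRIEVAL_DOCUMENT", "RETRIEVAL_QUERY", "CODE_RETRIEVAL_QUERY",
    "QUESTION_ANSWERING", "FACT_VERIFICATION",
}

def validate_args(args):
    """Return a list of human-readable errors for a tool-call argument dict."""
    errors = []
    if "inputFile" not in args:
        errors.append("inputFile is required")
    task_type = args.get("taskType")
    if task_type is not None and task_type not in VALID_TASK_TYPES:
        errors.append(f"invalid taskType: {task_type}")
    if args.get("model", "gemini-embedding-001") != "gemini-embedding-001":
        errors.append("model must be gemini-embedding-001")
    if args.get("pollIntervalSeconds", 30) < 10:
        errors.append("pollIntervalSeconds must be >= 10")
    return errors
```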

MCP directory API

We provide information about all MCP servers through our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mintmcqueen/gemini-mcp'
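The endpoint follows a `servers/{author}/{server}` pattern, as the curl example shows. A small helper to build such URLs might look like this (the path layout beyond this one documented endpoint is an assumption):

```python
def server_endpoint(author, server, base="https://glama.ai/api/mcp/v1"):
    """Build the MCP directory API URL for a given server (sketch)."""
    return f"{base}/servers/{author}/{server}"
```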

If you have feedback or need assistance with the MCP directory API, please join our Discord server.