Gemini MCP Server

by mintmcqueen

Server Configuration

Describes the environment variables required to run the server.

Name | Required | Description | Default
GEMINI_API_KEY | Yes | Your Google Gemini API key (starts with 'AIza...'). Get it from Google AI Studio at https://aistudio.google.com/app/apikey | (none)

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

Name | Description
Available Gemini Models | List of available Gemini models and their capabilities
Active Conversations | List of active conversation sessions
Uploaded Files | List of currently uploaded files

Tools

Functions exposed to the LLM to take actions

chat

SEND MESSAGE TO GEMINI (with optional files) - Chat with Gemini, optionally including uploaded files for multimodal analysis. TYPICAL USE: 0-2 files for most tasks (code review, document analysis, image description). SCALES TO: 40+ files when needed for comprehensive analysis. WORKFLOW: 1) Upload files first using upload_file (single) or upload_multiple_files (multiple), 2) Pass returned URIs in fileUris array, 3) Include your text prompt in message. The server handles file object caching and proper API formatting. Supports conversation continuity via conversationId. RETURNS: response text, token usage, conversation ID. Files are passed as direct objects to Gemini (not fileData structures). Auto-retrieves missing files from API if not cached.
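The workflow above can be sketched as the JSON-RPC `tools/call` request an MCP client might send to invoke `chat`. The argument names (`message`, `fileUris`, `conversationId`) come from the description; the envelope follows the standard MCP tool-call shape, and the URIs and IDs here are made up for illustration.

```python
import json

def build_chat_request(message, file_uris=None, conversation_id=None, request_id=1):
    args = {"message": message}
    if file_uris:            # 0-2 files for typical tasks, scales to 40+ if needed
        args["fileUris"] = file_uris
    if conversation_id:      # reuse an ID from start_conversation for continuity
        args["conversationId"] = conversation_id
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "chat", "arguments": args},
    }

req = build_chat_request(
    "Summarize these two reports",
    file_uris=["files/abc123", "files/def456"],  # hypothetical URIs from upload_file
    conversation_id="conv-42",                   # hypothetical ID from start_conversation
)
print(json.dumps(req, indent=2))
```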

upload_multiple_files

UPLOAD MULTIPLE FILES EFFICIENTLY - Handles 2-40+ files with smart parallel processing. TYPICAL USE: 2-10 files for multi-document analysis, code reviews, or comparative tasks. SCALES TO: 40+ files for comprehensive dataset processing. FEATURES: Automatic retry (3 attempts), parallel uploads (5 concurrent default), processing state monitoring (waits for ACTIVE state). WORKFLOW: 1) Provide array of file paths, 2) System uploads in optimized batches, 3) Returns URIs for use in chat tool. PERFORMANCE: 2 files = ~30 seconds, 10 files = ~1-2 minutes, 40 files = ~2-3 minutes. Each successful upload returns: originalPath, file object, URI. Failed uploads include error details. Use upload_file for single files instead.
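A minimal sketch of the upload strategy the description implies (up to 5 concurrent uploads, 3 attempts each). This is not the server's actual code; `upload_fn` stands in for a real file-upload call, and the fake uploader below only exists to demonstrate the retry path.

```python
import concurrent.futures as cf
import time

def upload_with_retry(path, upload_fn, attempts=3, delay=0.0):
    last_err = None
    for _ in range(attempts):
        try:
            return {"originalPath": path, "uri": upload_fn(path)}
        except Exception as err:  # retry on any transient upload failure
            last_err = err
            time.sleep(delay)
    return {"originalPath": path, "error": str(last_err)}

def upload_many(paths, upload_fn, concurrency=5):
    # Bounded parallelism: at most `concurrency` uploads in flight at once.
    with cf.ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(lambda p: upload_with_retry(p, upload_fn), paths))

# Fake uploader that fails once for one file, then succeeds.
calls = {}
def fake_upload(path):
    calls[path] = calls.get(path, 0) + 1
    if path == "b.txt" and calls[path] == 1:
        raise RuntimeError("transient")
    return f"files/{path}"

results = upload_many(["a.txt", "b.txt"], fake_upload)
```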

start_conversation

INITIALIZE CONVERSATION SESSION - Creates new conversation context for multi-turn chat with Gemini. Generates unique ID if not provided. Stores message history for context continuity. Returns conversationId to use in subsequent chat calls. USAGE: Call before first chat or to start fresh context. Pass returned ID to chat tool's conversationId parameter for continuation.

clear_conversation

CLEAR CONVERSATION HISTORY - Deletes specified conversation session and all associated message history. Frees memory and resets context. USAGE: Pass conversationId from start_conversation or chat response. Returns confirmation or 'not found' message. Use when switching topics or cleaning up after completion.
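The session bookkeeping described by `start_conversation` and `clear_conversation` can be sketched as a toy in-memory store: create an ID with empty history, append turns, and delete on clear. This mirrors the behavior described above, not the server's implementation.

```python
import uuid

conversations = {}  # conversationId -> list of message turns

def start_conversation(conversation_id=None):
    # Generates a unique ID if none is provided, as the tool description says.
    cid = conversation_id or str(uuid.uuid4())
    conversations.setdefault(cid, [])
    return cid

def record_turn(cid, role, text):
    conversations[cid].append({"role": role, "text": text})

def clear_conversation(cid):
    # Returns True on success, False for the 'not found' case.
    return conversations.pop(cid, None) is not None

cid = start_conversation()
record_turn(cid, "user", "hello")
record_turn(cid, "model", "hi there")
cleared = clear_conversation(cid)
```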

upload_file

UPLOAD SINGLE FILE - Standard method for uploading one file to Gemini. BEST FOR: Single documents, images, or code files for immediate analysis. Includes automatic retry and state monitoring until file is ready. WORKFLOW: 1) Upload with auto-detected MIME type, 2) Wait for processing to complete (usually 10-30 seconds), 3) Returns URI for chat tool. RETURNS: fileUri (pass to chat tool), displayName, mimeType, sizeBytes, state. Files auto-delete after 48 hours. For 2+ files, consider upload_multiple_files for efficiency.
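The "wait for processing" step can be sketched as a polling loop that checks file state until it reaches ACTIVE (or FAILED). `get_state` is a stand-in for a real metadata call such as `get_file`; the state names follow the description.

```python
import time

def wait_until_active(get_state, timeout_s=60.0, interval_s=0.01):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state in ("ACTIVE", "FAILED"):  # terminal states for an upload
            return state
        time.sleep(interval_s)
    return "TIMEOUT"

# Simulated state sequence: two PROCESSING checks, then ready.
states = iter(["PROCESSING", "PROCESSING", "ACTIVE"])
result = wait_until_active(lambda: next(states))
```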

list_files

LIST ALL UPLOADED FILES - Retrieves metadata for all files currently in Gemini File API (associated with API key). Updates internal cache with latest file states. RETURNS: Array of files with uri, displayName, mimeType, sizeBytes, createTime, expirationTime, state. Also shows cachedCount indicating files ready for immediate use. USAGE: Check file availability before chat, monitor upload status, audit storage usage (20GB project limit).

get_file

GET FILE METADATA & UPDATE CACHE - Retrieves current metadata for specific file from Gemini API and updates cache. USAGE: Pass fileUri from upload response or list_files. RETURNS: Complete file info including uri, displayName, mimeType, sizeBytes, create/update/expiration times, sha256Hash, state. Automatically adds to cache if missing. USE CASE: Verify file state, check expiration, refresh cache entry.

delete_file

DELETE FILE FROM GEMINI - Permanently removes file from Gemini File API and clears from cache. USAGE: Pass fileUri from upload or list_files. Immediate deletion, cannot be undone. USE CASE: Clean up after processing, manage storage quota, remove sensitive data. NOTE: Files auto-delete after 48 hours if not manually removed.

cleanup_all_files

BULK DELETE ALL FILES - Removes ALL files from Gemini File API associated with current API key. Clears entire cache. RETURNS: Count of deleted vs failed deletions with detailed lists. USE CASE: Complete cleanup after batch processing, reset environment, clear storage quota. WARNING: Irreversible operation affecting all uploaded files.

batch_create

CREATE BATCH JOB - Create async content generation batch job with Gemini. COST: 50% cheaper than standard API. TURNAROUND: ~24 hours target. WORKFLOW: 1) Prepare JSONL file with requests (or use batch_ingest_content first), 2) Upload file with upload_file, 3) Call batch_create with file URI, 4) Use batch_get_status to monitor progress, 5) Use batch_download_results when complete. SUPPORTS: Inline requests (<20MB) or file-based (JSONL for large batches). Returns batch job ID and initial status.
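The JSONL file mentioned in step 1 holds one request per line. A hedged sketch of what a line might look like, modeled on the Gemini `generateContent` content shape; the exact field layout (`key`, `request`, `contents`) is an assumption and may differ from what the server emits.

```python
import json

def make_batch_line(key, prompt):
    return json.dumps({
        "key": key,  # identifier echoed back alongside the result
        "request": {"contents": [{"parts": [{"text": prompt}]}]},
    })

lines = [make_batch_line(f"row-{i}", p) for i, p in enumerate(["one", "two"])]
jsonl = "\n".join(lines)  # write this to a file, then upload_file + batch_create
```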

batch_process

COMPLETE BATCH WORKFLOW - End-to-end content generation batch processing. WORKFLOW: 1) Ingests content file (CSV, JSON, TXT, etc.), 2) Converts to JSONL, 3) Uploads to Gemini, 4) Creates batch job, 5) Polls until complete, 6) Downloads and parses results. BEST FOR: Users who want simple one-call solution. RETURNS: Final results with metadata. For more control, use individual tools (batch_ingest_content, batch_create, batch_get_status, batch_download_results).

batch_ingest_content

INTELLIGENT CONTENT INGESTION - Analyzes content file, converts to JSONL for batch processing. WORKFLOW: 1) Detects format (CSV, JSON, TXT, MD), 2) Analyzes structure/complexity, 3) Writes analysis scripts if needed, 4) Converts to proper JSONL format, 5) Validates JSONL structure. SUPPORTS: CSV (converts rows), JSON (wraps objects), TXT/MD (splits by lines/sections). RETURNS: Conversion report with outputFile path, validation status, and any generated scripts.
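The CSV path ("converts rows") can be illustrated with a minimal row-to-JSONL converter. The column name and prompt layout are hypothetical; the real tool analyzes structure and may generate scripts for more complex inputs.

```python
import csv
import io
import json

def csv_to_jsonl(csv_text, prompt_column):
    out = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        out.append(json.dumps({
            "key": f"row-{i}",
            "request": {"contents": [{"parts": [{"text": row[prompt_column]}]}]},
        }))
    return "\n".join(out)

sample = "id,question\n1,What is MCP?\n2,What is JSONL?\n"
jsonl = csv_to_jsonl(sample, "question")
```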

batch_get_status

GET BATCH JOB STATUS - Check status of running batch job with optional auto-polling. STATES: PENDING (queued), RUNNING (processing), SUCCEEDED (complete), FAILED (error), CANCELLED (user stopped), EXPIRED (timeout). WORKFLOW: 1) Call with batch job name/ID, 2) Optionally enable polling to wait for completion, 3) Returns current state, progress stats, and completion info. USAGE: Pass job name from batch_create response. Enable autoPoll for hands-off waiting.
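The autoPoll behavior can be sketched as a loop that keeps checking until one of the terminal states listed above is reached. `fetch_status` stands in for a real `batch_get_status` call.

```python
import time

TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED", "EXPIRED"}

def poll_until_done(fetch_status, interval_s=0.01, max_checks=1000):
    for _ in range(max_checks):
        state = fetch_status()
        if state in TERMINAL_STATES:
            return state
        time.sleep(interval_s)  # PENDING / RUNNING: keep waiting
    return "TIMEOUT"

# Simulated status sequence for a job that completes normally.
states = iter(["PENDING", "RUNNING", "RUNNING", "SUCCEEDED"])
final = poll_until_done(lambda: next(states))
```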

batch_download_results

DOWNLOAD BATCH RESULTS - Download and parse results from completed batch job. WORKFLOW: 1) Checks job status (must be SUCCEEDED), 2) Downloads result file from Gemini API, 3) Parses JSONL results, 4) Saves to local file, 5) Returns parsed results array. RETURNS: Array of results with original keys, responses, and metadata. Also saves to file in outputLocation.
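Parsing the downloaded JSONL (step 3) amounts to reading one JSON object per line and keying it back to the original request. The field names (`key`, `response`) are assumptions for illustration.

```python
import json

def parse_results(jsonl_text):
    # One result object per line; skip any blank lines.
    return {
        rec["key"]: rec["response"]
        for rec in (json.loads(line) for line in jsonl_text.splitlines() if line.strip())
    }

raw = '{"key": "row-0", "response": "answer A"}\n{"key": "row-1", "response": "answer B"}\n'
results = parse_results(raw)
```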

batch_create_embeddings

CREATE EMBEDDINGS BATCH JOB - Create async embeddings generation batch job. COST: 50% cheaper than standard API. MODEL: gemini-embedding-001 (1536 dimensions). WORKFLOW: 1) Prepare content (use batch_ingest_embeddings for conversion), 2) Select task type (use batch_query_task_type if unsure), 3) Upload file, 4) Call batch_create_embeddings, 5) Monitor with batch_get_status, 6) Download with batch_download_results. TASK TYPES: See batch_query_task_type for descriptions and recommendations.

batch_process_embeddings

COMPLETE EMBEDDINGS WORKFLOW - End-to-end embeddings batch processing. WORKFLOW: 1) Ingests content, 2) Queries user for task type (or auto-recommends), 3) Converts to JSONL, 4) Uploads, 5) Creates batch job, 6) Polls until complete, 7) Downloads results. BEST FOR: Simple one-call embeddings generation. RETURNS: Embeddings array (1536-dimensional vectors) with metadata.

batch_ingest_embeddings

EMBEDDINGS CONTENT INGESTION - Specialized ingestion for embeddings batch processing. WORKFLOW: 1) Analyzes content structure, 2) Extracts text for embedding, 3) Formats as JSONL with proper embedContent structure including task_type, 4) Validates format. OPTIMIZED FOR: Text extraction from various formats (CSV columns, JSON fields, TXT lines, MD sections). RETURNS: JSONL file ready for batch_create_embeddings with task_type embedded in each request.
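A hedged sketch of one embeddings JSONL line, following the description above (an embedContent-style request with a task_type in each line). The exact field layout is an assumption and may differ from the server's generated format.

```python
import json

def embedding_line(key, text, task_type):
    return json.dumps({
        "key": key,  # identifier echoed back with the 1536-dimensional vector
        "request": {
            "content": {"parts": [{"text": text}]},
            "task_type": task_type,
        },
    })

line = embedding_line(
    "doc-0",
    "MCP servers expose tools to LLM clients.",
    "RETRIEVAL_DOCUMENT",  # indexing documents for search
)
```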

batch_query_task_type

INTERACTIVE TASK TYPE SELECTOR - Helps choose optimal embedding task type with recommendations. WORKFLOW: 1) Optionally analyzes sample content, 2) Shows all 8 task types with descriptions, 3) Provides AI recommendation based on context, 4) Returns selected task type. TASK TYPES: SEMANTIC_SIMILARITY (compare text similarity), CLASSIFICATION (categorize text), CLUSTERING (group similar items), RETRIEVAL_DOCUMENT (index for search), RETRIEVAL_QUERY (search queries), CODE_RETRIEVAL_QUERY (code search), QUESTION_ANSWERING (Q&A systems), FACT_VERIFICATION (check claims).
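The eight task types above can be kept as a simple lookup a client might show when prompting a user; the descriptions are taken directly from the tool description.

```python
TASK_TYPES = {
    "SEMANTIC_SIMILARITY": "compare text similarity",
    "CLASSIFICATION": "categorize text",
    "CLUSTERING": "group similar items",
    "RETRIEVAL_DOCUMENT": "index documents for search",
    "RETRIEVAL_QUERY": "embed search queries",
    "CODE_RETRIEVAL_QUERY": "code search",
    "QUESTION_ANSWERING": "Q&A systems",
    "FACT_VERIFICATION": "check claims",
}

def describe(task_type):
    return TASK_TYPES.get(task_type, "unknown task type")
```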

batch_cancel

CANCEL BATCH JOB - Request cancellation of running batch job. WORKFLOW: 1) Sends cancel request to Gemini API, 2) Job transitions to CANCELLED state, 3) Processing stops (may take a few seconds), 4) Partial results may be available. USE CASE: Stop long-running job due to errors, changed requirements, or cost management. NOTE: Cannot cancel SUCCEEDED or FAILED jobs.

batch_delete

DELETE BATCH JOB - Permanently delete batch job and associated data. WORKFLOW: 1) Validates job exists, 2) Deletes job metadata from Gemini API, 3) Removes from internal tracking. USE CASE: Clean up completed/failed jobs, manage job history, free storage. WARNING: Irreversible operation. Results will be lost if not downloaded first. Recommended to download results before deletion.

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mintmcqueen/gemini-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.