Server Configuration

Describes the environment variables used to configure the server; all are optional.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `GEMINI_API_KEY` | No | Your Gemini API key for authentication | (none) |
| `GEMINI_TIMEOUT` | No | Command timeout in seconds (10-3600) | 300 |
| `PROMETHEUS_PORT` | No | Prometheus metrics endpoint port | 8000 |
| `RETRY_MAX_DELAY` | No | Maximum delay between retries in seconds (5.0-300.0) | 30.0 |
| `GEMINI_LOG_LEVEL` | No | Logging level (DEBUG, INFO, WARNING, ERROR) | INFO |
| `RETRY_BASE_DELAY` | No | Base delay for exponential backoff in seconds (0.1-10.0) | 1.0 |
| `ENABLE_MONITORING` | No | Master control for all monitoring features | false |
| `ENABLE_PROMETHEUS` | No | Enable Prometheus metrics collection | false |
| `GEMINI_EVAL_LIMIT` | No | `gemini_eval_plan` character limit | 500000 |
| `GEMINI_REDIS_HOST` | No | Redis host for conversation storage | localhost |
| `GEMINI_REDIS_PORT` | No | Redis port | 6479 |
| `ENABLE_STDIN_DEBUG` | No | Enable stdin debugging | (none) |
| `OPENROUTER_API_KEY` | No | OpenRouter API key for 400+ models | (none) |
| `RETRY_MAX_ATTEMPTS` | No | Maximum retry attempts (1-10) | 3 |
| `GEMINI_COMMAND_PATH` | No | Path to Gemini CLI executable | gemini |
| `GEMINI_PROMPT_LIMIT` | No | `gemini_prompt` character limit | 100000 |
| `GEMINI_REVIEW_LIMIT` | No | `gemini_review_code` character limit | 300000 |
| `GEMINI_VERIFY_LIMIT` | No | `gemini_verify_solution` character limit | 800000 |
| `JSONRPC_STRICT_MODE` | No | Enable strict JSON-RPC validation | true |
| `ENABLE_HEALTH_CHECKS` | No | Enable health check system | false |
| `ENABLE_OPENTELEMETRY` | No | Enable OpenTelemetry distributed tracing | false |
| `GEMINI_OUTPUT_FORMAT` | No | Response format (json, text) | json |
| `GEMINI_SANDBOX_LIMIT` | No | `gemini_sandbox` character limit | 200000 |
| `CLOUDFLARE_ACCOUNT_ID` | No | Cloudflare Account ID | (none) |
| `CLOUDFLARE_GATEWAY_ID` | No | Cloudflare Gateway ID | (none) |
| `GEMINI_ENABLE_FALLBACK` | No | Enable automatic model fallback | true |
| `GEMINI_SUMMARIZE_LIMIT` | No | `gemini_summarize` character limit | 400000 |
| `OPENTELEMETRY_ENDPOINT` | No | OpenTelemetry endpoint | https://otel-collector:4317 |
| `GEMINI_RATE_LIMIT_WINDOW` | No | Time window in seconds | 60 |
| `JSONRPC_MAX_REQUEST_SIZE` | No | Max JSON-RPC request size in bytes | 1048576 (1 MB) |
| `OPENROUTER_DEFAULT_MODEL` | No | Default OpenRouter model | openai/gpt-4.1-nano |
| `JSONRPC_MAX_NESTING_DEPTH` | No | Max object/array nesting depth | 10 |
| `GEMINI_RATE_LIMIT_REQUESTS` | No | Requests per time window | 100 |
| `OPENROUTER_MAX_FILE_TOKENS` | No | Per-file token limit for `@filename` | 50000 |
| `OPENTELEMETRY_SERVICE_NAME` | No | Service name for tracing | gemini-cli-mcp-server |
| `GEMINI_CONVERSATION_ENABLED` | No | Enable conversation history | true |
| `GEMINI_CONVERSATION_STORAGE` | No | Storage backend (redis, memory, auto) | redis |
| `OPENROUTER_ENABLE_STREAMING` | No | Enable streaming responses | true |
| `OPENROUTER_MAX_TOTAL_TOKENS` | No | Total prompt token limit | 150000 |
| `GEMINI_SUMMARIZE_FILES_LIMIT` | No | `gemini_summarize_files` character limit | 800000 |
| `CLOUDFLARE_AI_GATEWAY_ENABLED` | No | Enable Cloudflare AI Gateway | false |
| `CLOUDFLARE_AI_GATEWAY_TIMEOUT` | No | Gateway timeout in seconds | 300 |
| `OPENROUTER_COST_LIMIT_PER_DAY` | No | Daily cost limit in USD | 10.0 |
| `GEMINI_CONVERSATION_MAX_TOKENS` | No | Token history limit | 20000 |
| `GEMINI_SUBPROCESS_MAX_CPU_TIME` | No | Subprocess CPU time limit in seconds | 300 |
| `GEMINI_SUBPROCESS_MAX_MEMORY_MB` | No | Subprocess memory limit in MB | 512 |
| `GEMINI_CONVERSATION_MAX_MESSAGES` | No | Message history limit | 10 |
| `CLOUDFLARE_AI_GATEWAY_MAX_RETRIES` | No | Maximum retry attempts | 3 |
| `GEMINI_CONVERSATION_EXPIRATION_HOURS` | No | Auto-cleanup time in hours | 24 |
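As a minimal sketch of how these variables might be set before launching the server, the snippet below exports a handful of them. The variable names and allowed ranges come from the table above; the specific values chosen here are illustrative only, not recommended defaults.

```shell
# Illustrative configuration; every variable is optional and falls back
# to the default shown in the table above when unset.
export GEMINI_API_KEY="your-gemini-api-key"  # enables Gemini authentication
export GEMINI_TIMEOUT=600                    # command timeout, range 10-3600 s
export GEMINI_LOG_LEVEL=DEBUG                # DEBUG, INFO, WARNING, or ERROR
export RETRY_MAX_ATTEMPTS=5                  # retry attempts, range 1-10
export GEMINI_CONVERSATION_STORAGE=memory    # redis, memory, or auto
```

The same variables can instead be supplied through the `env` block of an MCP client configuration, if your client supports one.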

Tools

Functions exposed to the LLM to take actions

No tools

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/centminmod/gemini-cli-mcp-server'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.