
Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| AI_PROVIDER | No | The AI provider to use: either 'vertex' for Vertex AI or 'gemini' for the Gemini API | |
| AI_MAX_RETRIES | No | Maximum number of retries for transient API errors | 3 |
| AI_TEMPERATURE | No | The temperature parameter for AI generation | 0.0 |
| GEMINI_API_KEY | No | Your Gemini API key (required if AI_PROVIDER='gemini') | |
| GEMINI_MODEL_ID | No | The Gemini model ID to use (if AI_PROVIDER='gemini') | gemini-2.5-pro |
| VERTEX_MODEL_ID | No | The Vertex AI model ID to use (if AI_PROVIDER='vertex') | gemini-2.5-pro |
| AI_USE_STREAMING | No | Whether to use the streaming API for AI responses | true |
| AI_RETRY_DELAY_MS | No | Delay in milliseconds between retries | 1000 |
| AI_MAX_OUTPUT_TOKENS | No | Maximum number of output tokens for AI generation | 65536 |
| GOOGLE_CLOUD_PROJECT | No | Your GCP project ID (required if AI_PROVIDER='vertex') | |
| GOOGLE_CLOUD_LOCATION | No | The Google Cloud location/region (specific to Vertex AI) | us-central1 |
| GOOGLE_APPLICATION_CREDENTIALS | No | Path to your service account key JSON file (if using a service account key for Vertex authentication) | |
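
As a concrete illustration, these variables could be placed in a local `.env` file (or exported in the shell) before starting the server. This is a minimal sketch assuming the Gemini API provider; the file name and the placeholder values are examples only, while the variable names and defaults come from the table above.

```bash
# Example .env — illustrative values only (assumes AI_PROVIDER='gemini')
AI_PROVIDER=gemini
GEMINI_API_KEY=your-gemini-api-key        # required when AI_PROVIDER='gemini'
GEMINI_MODEL_ID=gemini-2.5-pro            # default from the table above
AI_TEMPERATURE=0.0
AI_MAX_OUTPUT_TOKENS=65536
AI_USE_STREAMING=true
AI_MAX_RETRIES=3
AI_RETRY_DELAY_MS=1000

# For Vertex AI instead, the table suggests something like:
# AI_PROVIDER=vertex
# GOOGLE_CLOUD_PROJECT=your-gcp-project-id
# GOOGLE_CLOUD_LOCATION=us-central1
# GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
# VERTEX_MODEL_ID=gemini-2.5-pro
```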

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM to take actions

No tools

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/shariqriazz/google-ai-search-mcp'
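
For programmatic access, the same request can be made from code. This is a hedged sketch in TypeScript (Node 18+ with the global `fetch`): only the URL comes from this page, and the response shape is not documented here, so it is treated as opaque JSON.

```typescript
// Fetch this server's entry from the MCP directory API (sketch, not an official client).
const url =
  "https://glama.ai/api/mcp/v1/servers/shariqriazz/google-ai-search-mcp";

async function fetchServerInfo(): Promise<unknown> {
  const res = await fetch(url, { method: "GET" });
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  // The API returns JSON; its exact schema is not described on this page.
  return res.json();
}

fetchServerInfo().then((info) => console.log(info));
```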

If you have feedback or need assistance with the MCP directory API, please join our Discord server.