## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| AI_PROVIDER | No | The AI provider to use: 'vertex' for Vertex AI or 'gemini' for the Gemini API | |
| GEMINI_API_KEY | No | Your Gemini API key (required if AI_PROVIDER='gemini') | |
| GEMINI_MODEL_ID | No | The Gemini model ID to use (when AI_PROVIDER='gemini') | gemini-2.5-pro |
| GOOGLE_CLOUD_PROJECT | No | Your GCP project ID (required if AI_PROVIDER='vertex') | |
| GOOGLE_CLOUD_LOCATION | No | The Google Cloud location/region for Vertex AI | us-central1 |
| GOOGLE_APPLICATION_CREDENTIALS | No | Path to your service account key JSON file (only needed when authenticating to Vertex AI with a service account key) | |
| VERTEX_MODEL_ID | No | The Vertex AI model ID to use (when AI_PROVIDER='vertex') | gemini-2.5-pro |
| AI_TEMPERATURE | No | The temperature parameter for AI generation | 0.0 |
| AI_MAX_OUTPUT_TOKENS | No | Maximum number of output tokens for AI generation | 65536 |
| AI_USE_STREAMING | No | Whether to use the streaming API for AI responses | true |
| AI_MAX_RETRIES | No | Maximum number of retries for transient API errors | 3 |
| AI_RETRY_DELAY_MS | No | Delay in milliseconds between retries | 1000 |
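As an illustration of how these variables fit together, below is a minimal TypeScript sketch of a startup helper that reads them with the documented defaults and enforces the provider-specific requirements. The `loadAiConfig` function and `AiConfig` shape are hypothetical, and the fallback to 'vertex' when AI_PROVIDER is unset is an assumption; only the variable names, defaults, and requirements come from the table above.

```typescript
// Minimal sketch (not the server's actual startup code) of reading the
// variables above with the documented defaults and provider-specific checks.

interface AiConfig {
  provider: "vertex" | "gemini";
  modelId: string;
  temperature: number;
  maxOutputTokens: number;
  useStreaming: boolean;
  maxRetries: number;
  retryDelayMs: number;
}

function loadAiConfig(
  env: Record<string, string | undefined> = process.env
): AiConfig {
  // The table documents no default for AI_PROVIDER; falling back to
  // 'vertex' here is an assumption made for this example only.
  const provider = (env.AI_PROVIDER ?? "vertex") as AiConfig["provider"];

  // Provider-specific requirements from the table.
  if (provider === "gemini" && !env.GEMINI_API_KEY) {
    throw new Error("GEMINI_API_KEY is required when AI_PROVIDER='gemini'");
  }
  if (provider === "vertex" && !env.GOOGLE_CLOUD_PROJECT) {
    throw new Error("GOOGLE_CLOUD_PROJECT is required when AI_PROVIDER='vertex'");
  }

  return {
    provider,
    modelId:
      provider === "gemini"
        ? env.GEMINI_MODEL_ID ?? "gemini-2.5-pro"
        : env.VERTEX_MODEL_ID ?? "gemini-2.5-pro",
    temperature: Number(env.AI_TEMPERATURE ?? "0.0"),
    maxOutputTokens: Number(env.AI_MAX_OUTPUT_TOKENS ?? "65536"),
    useStreaming: (env.AI_USE_STREAMING ?? "true") !== "false",
    maxRetries: Number(env.AI_MAX_RETRIES ?? "3"),
    retryDelayMs: Number(env.AI_RETRY_DELAY_MS ?? "1000"),
  };
}
```

GOOGLE_CLOUD_LOCATION (default us-central1) and GOOGLE_APPLICATION_CREDENTIALS are omitted from the sketch; in a typical Vertex AI setup they would be passed to, or picked up automatically by, the Google Cloud client libraries rather than handled by application code.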
## Capabilities
Server capabilities have not been inspected yet.
### Tools
Functions exposed to the LLM so it can take actions
| Name | Description |
|---|---|
| No tools | |
### Prompts
Interactive templates invoked by user choice
| Name | Description |
|---|---|
| No prompts | |
### Resources
Contextual data attached and managed by the client
| Name | Description |
|---|---|
| No resources | |