## Server Configuration

Environment variables used to configure the server. All are optional and fall back to the defaults shown.
| Name | Required | Description | Default |
|---|---|---|---|
| GEMINI_API_KEY | No | API key for the Gemini backend; leave unset if the endpoint does not require one | |
| GEMINI_API_BASE_URL | No | AIStudioProxyAPI endpoint | http://127.0.0.1:2048 |
| GEMINI_PROJECT_ROOT | No | Root directory for file resolution | $PWD |
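The resolution order in the table above can be sketched as follows. This is a minimal illustration, not the server's actual startup code; the `load_config` helper name is ours, but the variable names and defaults match the table.

```python
import os

def load_config(env=os.environ):
    """Resolve server settings from environment variables (sketch)."""
    return {
        "api_key": env.get("GEMINI_API_KEY"),  # optional, may stay None
        "base_url": env.get("GEMINI_API_BASE_URL", "http://127.0.0.1:2048"),
        "project_root": env.get("GEMINI_PROJECT_ROOT", os.getcwd()),  # $PWD
    }

config = load_config({})  # nothing set -> defaults apply
print(config["base_url"])  # http://127.0.0.1:2048
```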
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
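A client can use this capabilities object to gate optional behavior; since `resources.subscribe` is `false` here, a client should not attempt resource subscriptions. A minimal sketch (the dict literal mirrors the table; the helper name is ours):

```python
# Capabilities object as advertised in the table above.
capabilities = {
    "tools": {"listChanged": False},
    "prompts": {"listChanged": False},
    "resources": {"subscribe": False, "listChanged": False},
    "experimental": {},
}

def can_subscribe_to_resources(caps):
    """Return True only if the server advertises resource subscriptions."""
    return caps.get("resources", {}).get("subscribe", False)

print(can_subscribe_to_resources(capabilities))  # False
```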
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| gemini_chat | Send a message to Google Gemini and get a response; returns the reply with a `SESSION_ID` for continuing the conversation. |
| gemini_list_models | List available Gemini models. |

### gemini_chat

Args: `params` (`GeminiChatInput`), with the following fields:

- `prompt` (str): The prompt to send.
- `file` (Optional[list[str]]): Files to include (text, code, images).
- `session_id` (Optional[str]): Session ID for multi-turn chat; use `last` for the most recent session.
- `model` (Optional[str]): Override model selection.
- `system_prompt` (Optional[str]): System context.
- `temperature` (Optional[float]): Creativity (0.0-2.0).
- `max_tokens` (Optional[int]): Maximum response length.
- `response_format`: Output format, `markdown` or `json`.

Returns: `str`, the response including a `SESSION_ID` for continuation.

Examples:

- Simple: `prompt="What is AI?"`
- With a file: `prompt="Review", file=["main.py"]`
- With an image: `prompt="Describe", file=["photo.jpg"]`
- Continue a session: `prompt="Tell me more", session_id="last"`
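The parameters above can be assembled client-side before invoking `gemini_chat`. A hedged sketch, not the server's actual input model: the field names and the documented 0.0-2.0 temperature range come from the tool description, while the `build_chat_params` helper and its validation logic are ours.

```python
def build_chat_params(prompt, file=None, session_id=None, model=None,
                      system_prompt=None, temperature=None,
                      max_tokens=None, response_format="markdown"):
    """Build a gemini_chat params dict, dropping unset optional fields."""
    if temperature is not None and not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature must be within 0.0-2.0")
    if response_format not in ("markdown", "json"):
        raise ValueError("response_format must be 'markdown' or 'json'")
    params = {"prompt": prompt, "response_format": response_format}
    for key, value in [("file", file), ("session_id", session_id),
                       ("model", model), ("system_prompt", system_prompt),
                       ("temperature", temperature),
                       ("max_tokens", max_tokens)]:
        if value is not None:
            params[key] = value
    return params

# Continue the most recent conversation, as in the examples above:
params = build_chat_params("Tell me more", session_id="last")
```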
## Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| No resources | |