# MCP Server GLM Vision
A Model Context Protocol (MCP) server that integrates GLM-4.5V from Z.AI with Claude Code.
## Features

- **Image Analysis**: Analyze images using GLM-4.5V's vision capabilities
- **Local File Support**: Analyze local image files or URLs
- **Configurable**: Easy setup with environment variables
## Installation

### Prerequisites

- Python 3.10 or higher
- A GLM API key from Z.AI
- Claude Code installed
### Setup

1. Clone or create the project directory:

   ```bash
   cd /path/to/your/project
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv env
   source env/bin/activate  # On Windows: env\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   # or with uv (recommended)
   uv pip install -r requirements.txt
   ```

4. Set up environment variables:

   ```bash
   cp .env.example .env
   # Edit .env with your GLM API key from Z.AI
   ```

5. Add the server to Claude Code:

   ```bash
   # Using uv (recommended)
   uv run mcp install -e . --name "GLM Vision Server"

   # Or register it with the Claude Code CLI:
   claude mcp add-json --scope user glm-vision '{
     "type": "stdio",
     "command": "/path/to/your/project/env/bin/python",
     "args": ["/path/to/your/project/glm-vision.py"],
     "env": {"GLM_API_KEY": "your_api_key_here"}
   }'
   ```
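After registering the server, you can confirm it shows up in Claude Code (this assumes the `claude` CLI is on your PATH):

```bash
claude mcp list
```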
## Configuration
Set these environment variables in your `.env` file:

| Variable | Description | Default |
|----------|-------------|---------|
| `GLM_API_KEY` | Your GLM API key from Z.AI | (required) |
| `GLM_BASE_URL` | GLM API base URL | Z.AI API endpoint |
| `GLM_MODEL` | Model name to use | `glm-4.5v` |
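For example, a minimal `.env` might look like the following. The values are illustrative; the `GLM_BASE_URL` name and the `glm-4.5v` model string are assumptions, so check Z.AI's documentation for the current values:

```bash
# .env (illustrative values)
GLM_API_KEY=your_api_key_here
# Optional overrides; defaults are used if these are omitted (names/values assumed)
GLM_BASE_URL=<Z.AI API endpoint URL>
GLM_MODEL=glm-4.5v
```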
## Usage

### Available Tools

#### glm-vision
Analyze an image file using GLM-4.5V's vision capabilities. Supports both local files and URLs.
**Parameters:**

- `image_path` (required): Local file path or URL of the image to analyze
- `prompt` (required): What to ask about the image
- `temperature` (optional): Response randomness (0.0-1.0, default: 0.7)
- `thinking` (optional): Enable thinking mode to see the model's reasoning process (default: false)
- `max_tokens` (optional): Maximum tokens in the response (max 64K, default: 2048)
**Example:**
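Once the server is registered, you can invoke the tool conversationally from Claude Code. The file path and prompt below are illustrative:

```
Use glm-vision to analyze ./screenshots/dashboard.png and describe what the chart shows.
```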
## Testing
Test the server using the MCP Inspector:
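A typical invocation, assuming the paths from the setup steps above (adjust them to wherever you created the project):

```bash
npx @modelcontextprotocol/inspector \
  /path/to/your/project/env/bin/python \
  /path/to/your/project/glm-vision.py
```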
## Development

### Running Tests
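If a test suite is included, it can typically be run with pytest from the activated virtual environment (adjust to the project's actual test runner):

```bash
source env/bin/activate
pip install pytest
pytest
```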
## Troubleshooting
- **API Key Issues**: Make sure your `GLM_API_KEY` is correctly set in the environment
- **Connection Problems**: Check your internet connection and the API endpoint
- **Model Errors**: Verify that the model name (`GLM_MODEL`) is correct and available
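A quick sanity check that the key is actually available, assuming the `.env` file from the setup steps:

```bash
# Check that the key is present in .env
grep GLM_API_KEY .env

# Check that the key is exported in the current shell (if you rely on shell env rather than .env)
python -c "import os; print(bool(os.getenv('GLM_API_KEY')))"
```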
## License
MIT License - see LICENSE file for details.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## Support
For issues related to the GLM API, contact Z.AI support. For MCP server issues, please create an issue in the repository.