# MCP HydroCoder Vision
## Server Configuration

Environment variables used to run the server:
| Name | Required | Description | Default |
|---|---|---|---|
| VISION_MODEL | No | Model name to use | Qwen3-VL-4B-Instruct |
| LM_STUDIO_URL | No | LM Studio API endpoint | http://localhost:1234/v1/chat/completions |
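Neither variable is required, so they only need to be set when overriding the defaults. A minimal setup sketch (the values shown are the defaults from the table above):

```shell
# Override these only if your LM Studio setup differs from the defaults.
export VISION_MODEL="Qwen3-VL-4B-Instruct"
export LM_STUDIO_URL="http://localhost:1234/v1/chat/completions"
```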
## Capabilities

Features and capabilities supported by this server:
| Capability | Details |
|---|---|
| tools | {} |
## Tools

Functions exposed to the LLM so it can take actions:
| Name | Description |
|---|---|
| analyzeImageB | Analyze an image and return a detailed description. Uses the local Qwen3-VL-4B model via LM Studio. |
| extractTextC | Extract text from an image (OCR). Supports multiple languages. |
| describeForCodeC | Analyze a UI/design image and generate corresponding code (HTML/CSS/JS, Vue, React, etc.). |
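As MCP tools, these are invoked by the client over JSON-RPC rather than called directly. A hypothetical `tools/call` request for `analyzeImageB` could look like the following; note that the argument name `imagePath` is an assumption, since the server's actual input schema is not documented here.

```shell
# Build a hypothetical tools/call request for analyzeImageB.
# The argument name "imagePath" is an assumption; check the server's schema.
REQ='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"analyzeImageB","arguments":{"imagePath":"./screenshot.png"}}}'

# An MCP client would write this line to the server's stdin. Here we just
# validate the payload and print the tool name being called.
printf '%s\n' "$REQ" | python3 -c "import json,sys; print(json.load(sys.stdin)['params']['name'])"
```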
## Prompts

Interactive templates invoked by user choice:
| Name | Description |
|---|---|
| No prompts | |
## Resources

Contextual data attached to and managed by the client:
| Name | Description |
|---|---|
| No resources | |
## MCP directory API

We provide all the information about MCP servers via our MCP directory API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/hydroCoderClaud/mcp-hydrocoder-vision'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.