MCP Vision Relay

by ah-wq

Server Configuration

Environment variables recognized by the server. All are optional.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `QWEN_CLI_COMMAND` | No | Path to the Qwen CLI executable | |
| `GEMINI_CLI_COMMAND` | No | Path to the Gemini CLI executable | |
| `MCP_IMAGE_TEMP_DIR` | No | Directory for storing downloaded/decoded temporary image files | |
| `QWEN_DEFAULT_MODEL` | No | Default model name for Qwen (e.g., `qwen2.5-omni-medium`) | |
| `MCP_MAX_IMAGE_BYTES` | No | Maximum allowed image size in bytes | |
| `QWEN_DEFAULT_PROMPT` | No | Default prompt for Qwen image analysis | |
| `GEMINI_DEFAULT_MODEL` | No | Default model name for Gemini (e.g., `gemini-2.0-flash`) | |
| `GEMINI_OUTPUT_FORMAT` | No | Controls Gemini output format (`text` or `json`) | |
| `GEMINI_DEFAULT_PROMPT` | No | Default prompt for Gemini image analysis | |
| `MCP_COMMAND_TIMEOUT_MS` | No | Global timeout in milliseconds for CLI commands | |
| `MCP_ALLOWED_IMAGE_EXTENSIONS` | No | List of allowed image file extensions | |
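A minimal sketch of how these variables might be set before launching the server. All values below are illustrative assumptions (paths, limits, and the comma-separated extension format are not specified by the project); adjust them for your environment.

```shell
# Illustrative configuration only -- every value here is an assumption.
export QWEN_CLI_COMMAND="/usr/local/bin/qwen"
export GEMINI_CLI_COMMAND="/usr/local/bin/gemini"
export MCP_IMAGE_TEMP_DIR="/tmp/mcp-images"
export QWEN_DEFAULT_MODEL="qwen2.5-omni-medium"
export GEMINI_DEFAULT_MODEL="gemini-2.0-flash"
export GEMINI_OUTPUT_FORMAT="text"
export MCP_MAX_IMAGE_BYTES=10485760      # 10 MiB cap on image files (assumed limit)
export MCP_COMMAND_TIMEOUT_MS=60000      # 60 s timeout for CLI invocations (assumed)
export MCP_ALLOWED_IMAGE_EXTENSIONS=".png,.jpg,.jpeg,.webp"  # assumed list format
```

Since every variable is optional, any of these lines can be omitted to fall back to the server's built-in defaults.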

Schema

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

Tools

Functions exposed to the LLM to take actions

| Name | Description |
|------|-------------|
| `gemini_analyze_image` | Use Google Gemini CLI to describe or analyze an image using multimodal capabilities. |
| `qwen_analyze_image` | Use Qwen CLI to describe or analyze an image with its multimodal capabilities. |
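MCP clients invoke these tools with a JSON-RPC `tools/call` request over the server's transport. The sketch below builds such a payload for `gemini_analyze_image`; the argument names (`image_path`, `prompt`) are hypothetical placeholders, since the actual parameter names come from the input schema the server advertises.

```shell
# Hypothetical tools/call payload for gemini_analyze_image.
# The "arguments" keys below are assumptions -- check the tool's input
# schema (via tools/list) for the real parameter names.
request='{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "gemini_analyze_image",
    "arguments": {
      "image_path": "/tmp/example.png",
      "prompt": "Describe this image."
    }
  }
}'

# Verify the payload is well-formed JSON before sending it to the server.
printf '%s\n' "$request" | python3 -m json.tool > /dev/null && echo "payload OK"
```

In practice the client library constructs and sends this request for you; the JSON shape is shown here only to illustrate what a tool invocation looks like on the wire.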

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ah-wq/mcp-vision-relay'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.