
VLLM MCP Server

by StanleyChanH
.env.example (633 B)

# OpenAI Configuration
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_BASE_URL=https://api.openai.com/v1
OPENAI_DEFAULT_MODEL=gpt-4o
OPENAI_SUPPORTED_MODELS=gpt-4o,gpt-4o-mini,gpt-4-turbo,gpt-4-vision-preview

# Dashscope Configuration (Alibaba Cloud)
DASHSCOPE_API_KEY=your_dashscope_api_key_here
DASHSCOPE_DEFAULT_MODEL=qwen-vl-plus
DASHSCOPE_SUPPORTED_MODELS=qwen-vl-plus,qwen-vl-max,qwen-vl-chat,qwen2-vl-7b-instruct,qwen2-vl-72b-instruct

# Server Configuration
VLLM_MCP_HOST=localhost
VLLM_MCP_PORT=8080
VLLM_MCP_TRANSPORT=stdio
VLLM_MCP_LOG_LEVEL=INFO

# Optional: Custom configuration file path
VLLM_MCP_CONFIG_PATH=./config.json
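As a minimal sketch of how a server process might consume these variables, the helper below reads each setting with `os.getenv`, applies the defaults shown in `.env.example`, and splits the comma-separated model lists. The function name and dict layout are illustrative assumptions, not the actual vllm-mcp loading code (which may instead read the JSON file pointed to by `VLLM_MCP_CONFIG_PATH`).

```python
import os


def load_vllm_mcp_config() -> dict:
    """Hypothetical helper: collect vllm-mcp settings from the environment.

    Defaults mirror the values in .env.example; the real server may load
    configuration differently (e.g. from VLLM_MCP_CONFIG_PATH).
    """
    return {
        "openai": {
            "api_key": os.getenv("OPENAI_API_KEY", ""),
            "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
            "default_model": os.getenv("OPENAI_DEFAULT_MODEL", "gpt-4o"),
            # Comma-separated list -> Python list of model names
            "supported_models": os.getenv(
                "OPENAI_SUPPORTED_MODELS",
                "gpt-4o,gpt-4o-mini,gpt-4-turbo,gpt-4-vision-preview",
            ).split(","),
        },
        "dashscope": {
            "api_key": os.getenv("DASHSCOPE_API_KEY", ""),
            "default_model": os.getenv("DASHSCOPE_DEFAULT_MODEL", "qwen-vl-plus"),
            "supported_models": os.getenv(
                "DASHSCOPE_SUPPORTED_MODELS", "qwen-vl-plus,qwen-vl-max"
            ).split(","),
        },
        "server": {
            "host": os.getenv("VLLM_MCP_HOST", "localhost"),
            "port": int(os.getenv("VLLM_MCP_PORT", "8080")),
            "transport": os.getenv("VLLM_MCP_TRANSPORT", "stdio"),
            "log_level": os.getenv("VLLM_MCP_LOG_LEVEL", "INFO"),
        },
    }


config = load_vllm_mcp_config()
```

Exporting the variables from a copied `.env` file (or using a loader such as python-dotenv) before starting the server would make them visible to `os.getenv`.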

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp'
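The same GET request can be issued from Python with the standard library. The response schema is not documented here, so this sketch simply parses whatever JSON the endpoint returns; `fetch_server_info` is an illustrative name, not part of any published client.

```python
import json
import urllib.request

# Build the same request the curl command above issues.
url = "https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp"
req = urllib.request.Request(url, method="GET")


def fetch_server_info(request: urllib.request.Request) -> dict:
    """Fetch and decode the directory entry for one MCP server."""
    with urllib.request.urlopen(request) as resp:
        return json.load(resp)

# fetch_server_info(req) returns the server's directory entry as a dict.
```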

If you have feedback or need assistance with the MCP directory API, please join our Discord server.