
VLLM MCP Server

by StanleyChanH
config.json
{ "host": "localhost", "port": 8080, "transport": "stdio", "log_level": "INFO", "max_connections": 100, "request_timeout": 120, "providers": [ { "provider_type": "openai", "api_key": "${OPENAI_API_KEY}", "base_url": "${OPENAI_BASE_URL}", "default_model": "gpt-4o", "max_tokens": 4000, "temperature": 0.7, "timeout": 60 }, { "provider_type": "dashscope", "api_key": "${DASHSCOPE_API_KEY}", "default_model": "qwen-vl-plus", "max_tokens": 4000, "temperature": 0.7, "timeout": 60 } ] }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp'
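The same lookup can be done from code. A minimal Python sketch using only the standard library; beyond the response being JSON, the exact shape of the returned metadata is an assumption:

import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp"
with urllib.request.urlopen(url) as resp:
    server_info = json.load(resp)  # JSON metadata describing this MCP server
print(server_info)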

If you have feedback or need assistance with the MCP directory API, please join our Discord server.