## Server Configuration

The server is configured through the following environment variables. Only `DASHSCOPE_API_KEY` is required; all others fall back to the defaults shown.
| Name | Required | Description | Default |
|---|---|---|---|
| PYTHONPATH | No | Python path for module resolution | src |
| VLLM_MCP_HOST | No | Host address the server binds to | localhost |
| VLLM_MCP_PORT | No | Port the server listens on | 8080 |
| OPENAI_API_KEY | No | Your OpenAI API key | |
| OPENAI_BASE_URL | No | Base URL for the OpenAI API | https://api.openai.com/v1 |
| DASHSCOPE_API_KEY | Yes | Your Dashscope API key | |
| VLLM_MCP_LOG_LEVEL | No | Logging level (e.g. DEBUG, INFO, WARNING, ERROR) | INFO |
| VLLM_MCP_TRANSPORT | No | Transport type | stdio |
| OPENAI_DEFAULT_MODEL | No | Default OpenAI model to use | gpt-4o |
| DASHSCOPE_DEFAULT_MODEL | No | Default Dashscope model to use | qwen-vl-plus |
| OPENAI_SUPPORTED_MODELS | No | Comma-separated list of supported OpenAI models | gpt-4o,gpt-4o-mini,gpt-4-turbo,gpt-4-vision-preview |
| DASHSCOPE_SUPPORTED_MODELS | No | Comma-separated list of supported Dashscope models | qwen-vl-plus,qwen-vl-max,qwen-vl-chat,qwen2-vl-7b-instruct,qwen2-vl-72b-instruct |
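As a minimal sketch of how these variables might be set before launching the server, the snippet below exports the required Dashscope key alongside a few optional overrides. The key value is a placeholder, and the overrides simply restate the documented defaults:

```shell
# Required: Dashscope API key (placeholder value, replace with your own)
export DASHSCOPE_API_KEY="sk-your-dashscope-key"

# Optional overrides, shown here set to their documented defaults
export VLLM_MCP_HOST="localhost"
export VLLM_MCP_PORT="8080"
export VLLM_MCP_TRANSPORT="stdio"
export VLLM_MCP_LOG_LEVEL="INFO"
export DASHSCOPE_DEFAULT_MODEL="qwen-vl-plus"
```

Variables not exported here (such as `OPENAI_API_KEY` or the `*_SUPPORTED_MODELS` lists) keep the defaults from the table above.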
## Schema

### Prompts
Interactive templates invoked by user choice
| Name | Description |
|---|---|
| *No prompts* | |
### Resources
Contextual data attached and managed by the client
| Name | Description |
|---|---|
| *No resources* | |
### Tools
Functions exposed to the LLM to take actions
| Name | Description |
|---|---|
| *No tools* | |