
Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| PYTHONPATH | No | Python path for module resolution | src |
| VLLM_MCP_HOST | No | Server host | localhost |
| VLLM_MCP_PORT | No | Server port | 8080 |
| OPENAI_API_KEY | No | Your OpenAI API key | |
| OPENAI_BASE_URL | No | OpenAI base URL | https://api.openai.com/v1 |
| DASHSCOPE_API_KEY | Yes | Your Dashscope API key | |
| VLLM_MCP_LOG_LEVEL | No | Log level | INFO |
| VLLM_MCP_TRANSPORT | No | Transport type | stdio |
| OPENAI_DEFAULT_MODEL | No | Default OpenAI model to use | gpt-4o |
| DASHSCOPE_DEFAULT_MODEL | No | Default Dashscope model to use | qwen-vl-plus |
| OPENAI_SUPPORTED_MODELS | No | Comma-separated list of supported OpenAI models | gpt-4o,gpt-4o-mini,gpt-4-turbo,gpt-4-vision-preview |
| DASHSCOPE_SUPPORTED_MODELS | No | Comma-separated list of supported Dashscope models | qwen-vl-plus,qwen-vl-max,qwen-vl-chat,qwen2-vl-7b-instruct,qwen2-vl-72b-instruct |
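As an illustration, a launcher or client script could read this configuration with Python's standard library. The snippet below is a minimal sketch using the documented defaults — the names `load_config` and `config` are hypothetical, not part of the server's actual code.

```python
import os

# Sketch: read the documented environment variables, falling back to the
# defaults listed in the table above.
def load_config() -> dict:
    return {
        "host": os.environ.get("VLLM_MCP_HOST", "localhost"),
        "port": int(os.environ.get("VLLM_MCP_PORT", "8080")),
        "transport": os.environ.get("VLLM_MCP_TRANSPORT", "stdio"),
        "log_level": os.environ.get("VLLM_MCP_LOG_LEVEL", "INFO"),
        # The model lists are comma-separated strings; split them into lists.
        "openai_models": os.environ.get(
            "OPENAI_SUPPORTED_MODELS",
            "gpt-4o,gpt-4o-mini,gpt-4-turbo,gpt-4-vision-preview",
        ).split(","),
    }

config = load_config()
```

Because every variable except DASHSCOPE_API_KEY has a default, a script like this runs unmodified in a clean environment.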

Capabilities


Tools

Functions exposed to the LLM to take actions

generate_multimodal_response

Generate a response from a multimodal model.

Arguments:
- model: Model name to use
- prompt: Text prompt
- image_urls: Optional list of image URLs
- file_paths: Optional list of file paths
- system_prompt: Optional system prompt
- max_tokens: Maximum tokens to generate
- temperature: Generation temperature
- provider: Optional provider name (openai or dashscope)

Returns: the generated response text.
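To illustrate the tool's interface, the snippet below builds the kind of argument payload an MCP client would pass when invoking generate_multimodal_response. The field names follow the docstring above; the values (model, URL, prompts) are made-up examples.

```python
import json

# Hypothetical argument payload for the generate_multimodal_response tool.
# Keys mirror the documented parameters; values are illustrative only.
arguments = {
    "model": "qwen-vl-plus",
    "prompt": "Describe the chart in this image.",
    "image_urls": ["https://example.com/chart.png"],
    "system_prompt": "You are a concise data analyst.",
    "max_tokens": 512,
    "temperature": 0.2,
    "provider": "dashscope",
}

# MCP tool arguments travel as JSON, so the payload must serialize cleanly.
payload = json.dumps(arguments)
```

Optional parameters such as file_paths can simply be omitted from the dictionary.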
list_available_providers

List available model providers and their configurations.

Returns: a JSON string of the available providers and their models.
validate_multimodal_request

Validate whether a multimodal request is supported.

Arguments:
- model: Model name to validate
- image_count: Number of images in the request
- file_count: Number of files in the request
- provider: Optional provider name

Returns: the validation result.
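The supported-model check that validate_multimodal_request implies can be sketched locally. The function below is a hypothetical reconstruction built from the supported-model lists in the configuration table; the server's actual validation (which also considers image and file counts) may differ.

```python
# Supported-model lists, taken from the defaults of OPENAI_SUPPORTED_MODELS
# and DASHSCOPE_SUPPORTED_MODELS in the configuration table.
SUPPORTED = {
    "openai": ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-vision-preview"],
    "dashscope": ["qwen-vl-plus", "qwen-vl-max", "qwen-vl-chat",
                  "qwen2-vl-7b-instruct", "qwen2-vl-72b-instruct"],
}

def validate(model, provider=None):
    """Return True if `model` is supported, checking one provider if given,
    otherwise all providers. Hypothetical sketch, not the server's code."""
    providers = [provider] if provider else list(SUPPORTED)
    return any(model in SUPPORTED.get(p, []) for p in providers)
```

For example, `validate("qwen-vl-max")` succeeds without naming a provider, while pinning `provider="openai"` restricts the check to OpenAI's list.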

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/StanleyChanH/vllm-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.