VLLM MCP Server
A Model Context Protocol (MCP) server that enables text models to call multimodal models. This server supports both OpenAI and Dashscope (Alibaba Cloud) multimodal models, allowing text-only models to process images and other media formats through standardized MCP tools.
GitHub Repository: https://github.com/StanleyChanH/vllm-mcp
Features
Multi-Provider Support: OpenAI GPT-4 Vision and Dashscope Qwen-VL models
Multiple Transport Options: STDIO, HTTP, and Server-Sent Events (SSE)
Flexible Deployment: Docker, Docker Compose, and local development
Easy Configuration: JSON configuration files and environment variables
Comprehensive Tooling: MCP tools for model interaction, validation, and provider management
Quick Start
Prerequisites
Python 3.11+
uv package manager
API keys for OpenAI and/or Dashscope (Alibaba Cloud)
Installation & Setup
Clone the repository:
```bash
git clone https://github.com/StanleyChanH/vllm-mcp.git
cd vllm-mcp
```

Set up environment:

```bash
cp .env.example .env
# Edit .env with your API keys
nano .env  # or use your preferred editor
```

Configure API keys (in the .env file):

```bash
# Dashscope (Alibaba Cloud) - Required for basic functionality
DASHSCOPE_API_KEY=sk-your-dashscope-api-key

# OpenAI - Optional
OPENAI_API_KEY=sk-your-openai-api-key
```

Install dependencies:

```bash
uv sync
```

Verify setup:

```bash
uv run python test_simple.py
```
Running the Server
Start the server (STDIO transport - default):
```bash
./scripts/start.sh
```

Start with HTTP transport:

```bash
./scripts/start.sh --transport http --host 0.0.0.0 --port 8080
```

Development mode with hot reload:

```bash
./scripts/start-dev.sh
```
Testing & Verification
List available models:
```bash
uv run python examples/list_models.py
```

Run basic tests:

```bash
uv run python test_simple.py
```

Test MCP tools:

```bash
uv run python examples/client_example.py
```
Docker Deployment
Build and run with Docker Compose:
```bash
# Create .env file with your API keys
cp .env.example .env

# Start the service
docker-compose up -d
```

Build manually:

```bash
docker build -t vllm-mcp .
docker run -p 8080:8080 --env-file .env vllm-mcp
```
Configuration
Environment Variables
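At minimum, the two API keys from the Quick Start apply here:

DASHSCOPE_API_KEY: Dashscope (Alibaba Cloud) API key; required for basic functionality
OPENAI_API_KEY: OpenAI API key; optional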
Configuration File
Create a config.json file:
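This README does not spell out the schema, so the following is a minimal sketch with assumed field names; treat it as illustrative, not authoritative:

```json
{
  "transport": "stdio",
  "host": "0.0.0.0",
  "port": 8080,
  "providers": {
    "openai": {
      "default_model": "gpt-4o"
    },
    "dashscope": {
      "default_model": "qwen-vl-plus"
    }
  }
}
```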
MCP Tools
The server provides the following MCP tools:
generate_multimodal_response
Generate responses from multimodal models.
Parameters:
model (string): Model name to use
prompt (string): Text prompt
image_urls (array, optional): List of image URLs
file_paths (array, optional): List of file paths
system_prompt (string, optional): System prompt
max_tokens (integer, optional): Maximum tokens to generate
temperature (number, optional): Generation temperature
provider (string, optional): Provider name (auto-detected if not specified)
Example:
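A hypothetical tool call; the argument values below are illustrative:

```json
{
  "model": "qwen-vl-plus",
  "prompt": "What is shown in this image?",
  "image_urls": ["https://example.com/photo.jpg"],
  "max_tokens": 512,
  "temperature": 0.7
}
```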
list_available_providers
List available model providers and their supported models.
Example:
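The tool takes no parameters. A response might look like the following (the exact shape is an assumption; the model lists mirror the Supported Models section below):

```json
{
  "openai": ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-vision-preview"],
  "dashscope": ["qwen-vl-plus", "qwen-vl-max", "qwen-vl-chat", "qwen2-vl-7b-instruct", "qwen2-vl-72b-instruct"]
}
```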
validate_multimodal_request
Validate if a multimodal request is supported by the specified provider.
Parameters:
model (string): Model name to validate
image_count (integer, optional): Number of images
file_count (integer, optional): Number of files
provider (string, optional): Provider name
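For example, checking whether a two-image request is acceptable for qwen-vl-plus (values illustrative):

```json
{
  "model": "qwen-vl-plus",
  "image_count": 2,
  "provider": "dashscope"
}
```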
Supported Models
OpenAI
gpt-4o
gpt-4o-mini
gpt-4-turbo
gpt-4-vision-preview
Dashscope
qwen-vl-plus
qwen-vl-max
qwen-vl-chat
qwen2-vl-7b-instruct
qwen2-vl-72b-instruct
Model Selection
Using Environment Variables
You can configure default models and supported models through environment variables:
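A hypothetical .env fragment; the actual variable names are not documented in this README, so confirm them against .env.example:

```bash
# Hypothetical names - confirm against .env.example
OPENAI_DEFAULT_MODEL=gpt-4o
DASHSCOPE_DEFAULT_MODEL=qwen-vl-plus
```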
Listing Available Models
Use the list_available_providers tool to see all available models:
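The repository ships a ready-made script for this:

```bash
uv run python examples/list_models.py
```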
Model Selection Examples
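The provider can be left implicit and inferred from the model name, or pinned explicitly (values illustrative):

Auto-detected provider:

```json
{ "model": "gpt-4o", "prompt": "Summarize this chart." }
```

Explicit provider:

```json
{ "model": "qwen-vl-max", "provider": "dashscope", "prompt": "Summarize this chart." }
```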
Model Configuration File
You can also configure models in config.json:
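A sketch with assumed field names; the model IDs come from the Supported Models section above:

```json
{
  "providers": {
    "openai": {
      "default_model": "gpt-4o",
      "supported_models": ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4-vision-preview"]
    },
    "dashscope": {
      "default_model": "qwen-vl-plus",
      "supported_models": ["qwen-vl-plus", "qwen-vl-max", "qwen-vl-chat"]
    }
  }
}
```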
Client Integration
Python Client
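A minimal sketch using the official MCP Python SDK over STDIO. The launch command and argument shapes are assumptions based on this README; examples/client_example.py is the repository's own reference client:

```python
# Minimal sketch using the official MCP Python SDK (pip install mcp).
# The server launch command and tool arguments are assumptions from this README.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main():
    server = StdioServerParameters(command="./scripts/start.sh")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Ask a multimodal model to describe an image.
            result = await session.call_tool(
                "generate_multimodal_response",
                {
                    "model": "qwen-vl-plus",
                    "prompt": "Describe this image.",
                    "image_urls": ["https://example.com/photo.jpg"],
                },
            )
            print(result.content)


asyncio.run(main())
```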
MCP Client Configuration
Add to your MCP client configuration:
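A sketch in the common mcpServers format; the exact command and args for launching this server are assumptions, so adapt them to your setup:

```json
{
  "mcpServers": {
    "vllm-mcp": {
      "command": "uv",
      "args": ["run", "vllm-mcp"],
      "env": {
        "DASHSCOPE_API_KEY": "sk-your-dashscope-api-key",
        "OPENAI_API_KEY": "sk-your-openai-api-key"
      }
    }
  }
}
```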
Development
Project Structure
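The paths below are the ones referenced elsewhere in this README; the full tree may contain more:

```
vllm-mcp/
├── src/vllm_mcp/
│   ├── server.py
│   └── providers/
├── scripts/
│   ├── start.sh
│   └── start-dev.sh
├── examples/
│   ├── list_models.py
│   └── client_example.py
├── test_simple.py
├── .env.example
├── Dockerfile
└── docker-compose.yml
```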
Adding New Providers
Create a new provider class in src/vllm_mcp/providers/
Implement the required methods: generate_response(), is_model_supported(), validate_request() (a sketch follows this list)
Register the provider in src/vllm_mcp/server.py
Update the configuration schema
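A hypothetical provider skeleton; the actual base class and method signatures in src/vllm_mcp/providers/ may differ:

```python
# Hypothetical skeleton - check the existing providers for the real interface.
from typing import Optional


class MyProvider:
    """Example provider implementing the three required methods."""

    SUPPORTED_MODELS = {"my-model-v1"}

    def is_model_supported(self, model: str) -> bool:
        # True if this provider can serve the requested model.
        return model in self.SUPPORTED_MODELS

    def validate_request(self, model: str, image_count: int = 0,
                         file_count: int = 0) -> bool:
        # Example policy: enforce a per-request image limit.
        return self.is_model_supported(model) and image_count <= 10

    async def generate_response(self, model: str, prompt: str,
                                image_urls: Optional[list[str]] = None,
                                max_tokens: int = 1024,
                                temperature: float = 0.7) -> str:
        # Call the upstream multimodal API here and return its text output.
        raise NotImplementedError
```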
Running Tests
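The repository's basic checks, both referenced earlier in this README:

```bash
uv run python test_simple.py
uv run python examples/client_example.py
```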
Deployment Options
STDIO Transport (Default)
Best for MCP client integrations and local development.
HTTP Transport
Suitable for web service deployments.
SSE Transport
For real-time streaming responses.
Troubleshooting
Common Issues
Import Error: No module named 'vllm_mcp'
```bash
# Make sure you're in the project root and run:
uv sync
export PYTHONPATH="src:$PYTHONPATH"
```

API Key Not Found

```bash
# Ensure your .env file is properly configured:
cp .env.example .env
# Edit .env with your actual API keys
```

Dashscope API Errors
Verify your API key is valid and active
Check if you have sufficient quota
Ensure network connectivity to Dashscope services
Server Startup Issues
```bash
# Check for port conflicts:
lsof -i :8080

# Try a different port:
./scripts/start.sh --port 8081
```

Docker Issues

```bash
# Rebuild Docker image:
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```
Debug Mode
Enable debug logging for troubleshooting:
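The exact switch is not documented in this README; a common pattern is a log-level environment variable, so the name below is an assumption:

```bash
# Hypothetical variable name - check ./scripts/start.sh for the real option
LOG_LEVEL=DEBUG ./scripts/start.sh
```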
Getting Help
Check SETUP_GUIDE.md for detailed setup instructions
Run uv run python test_simple.py to verify basic functionality
Review logs for error messages and warnings
License
MIT License
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
Support
Issues: GitHub Issues
Documentation: Wiki
Acknowledgments