MCP vLLM Benchmarking Tool
This is a proof of concept of how to use MCP to interactively benchmark vLLM: an interactive tool that lets you run performance tests against vLLM endpoints, with customizable models and parameters, directly from an MCP client.
We are not new to benchmarking; read our blog:
This is simply an exploration of what is possible with MCP.
Usage
- Clone the repository
- Add it to your MCP servers:
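A minimal sketch of what the MCP server entry could look like, assuming the repository was cloned to /path/to/mcp-vllm-benchmarking-tool and that the server is started with uv; the exact command, path, and server filename depend on your setup and are assumptions here:

```json
{
  "mcpServers": {
    "mcp-vllm": {
      "command": "uv",
      "args": [
        "run",
        "/path/to/mcp-vllm-benchmarking-tool/server.py"
      ]
    }
  }
}
```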
Then you can prompt it like this, for example:
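An illustrative prompt (the endpoint URL, model name, and numbers below are placeholders, not taken from the original documentation):

```
Please benchmark the vLLM endpoint at http://localhost:8000 using the model
meta-llama/Llama-3.1-8B-Instruct, with 32 random prompts of 512 input tokens
and 128 output tokens, and report the throughput and latency.
```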
Todo:
- Due to occasional randomness in vLLM's output, the tool may report that it found invalid JSON. This has not been investigated yet.