
MCP vLLM Benchmarking Tool

by Eliovp-BV


This is a proof of concept showing how to use MCP to interactively benchmark vLLM.

We are not new to benchmarking; read our blog:

Benchmarking vLLM

This is just an exploration of possibilities with MCP.

Usage

  1. Clone the repository
  2. Add it to your MCP servers:
{ "mcpServers": { "mcp-vllm": { "command": "uv", "args": [ "run", "/Path/TO/mcp-vllm-benchmarking-tool/server.py" ] } } }

Then you can prompt it, for example, like this:

Do a vLLM benchmark for this endpoint: http://10.0.101.39:8888. Benchmark the following model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B. Run the benchmark 3 times with 32 prompts each, then compare the results, but ignore the first iteration as that is just a warmup.
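
For orientation, here is a minimal sketch of what an MCP server exposing such a benchmark tool can look like. It uses FastMCP from the official MCP Python SDK; the tool name, parameters, naive latency metric, and the OpenAI-compatible /v1/completions route are illustrative assumptions, not the actual contents of server.py.

    import time

    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("mcp-vllm")

    @mcp.tool()
    def benchmark(endpoint: str, model: str, num_prompts: int = 32) -> dict:
        """Send num_prompts completion requests to a vLLM endpoint and time them."""
        latencies = []
        with httpx.Client(base_url=endpoint, timeout=120.0) as client:
            for i in range(num_prompts):
                start = time.perf_counter()
                # Assumption: the target vLLM instance serves the default
                # OpenAI-compatible completions route.
                client.post(
                    "/v1/completions",
                    json={"model": model, "prompt": f"Benchmark prompt {i}", "max_tokens": 64},
                ).raise_for_status()
                latencies.append(time.perf_counter() - start)
        return {
            "num_prompts": num_prompts,
            "avg_latency_s": sum(latencies) / len(latencies),
            "max_latency_s": max(latencies),
        }

    if __name__ == "__main__":
        mcp.run()  # defaults to the stdio transport used by MCP clients

A real benchmark would also track throughput and token counts; the point here is only the MCP wiring.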

Todo:

  • Due to occasional stray output from vLLM, the tool may report that it found invalid JSON. This has not been investigated in depth yet; one possible mitigation is sketched below.
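
A hedged sketch of such a mitigation: instead of handing the whole model output to json.loads(), scan it for the first balanced JSON object. The helper below is hypothetical and not part of the current server.py.

    import json

    def extract_first_json(text: str):
        """Return the first parseable JSON object embedded in text, or None.

        Hypothetical helper: stray tokens around the model's JSON break a
        plain json.loads(), so try every '{' as a candidate start and let
        the decoder report where a valid object ends.
        """
        decoder = json.JSONDecoder()
        for start, char in enumerate(text):
            if char != "{":
                continue
            try:
                obj, _end = decoder.raw_decode(text, start)
                return obj
            except json.JSONDecodeError:
                continue
        return None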
Security: not tested · License: not found · Quality: not tested

Remote-capable server

The server can be hosted and run remotely because it primarily relies on remote services or has no dependency on the local environment.

An interactive tool that enables users to benchmark vLLM endpoints through MCP, allowing performance testing of LLM models with customizable parameters.


Related MCP Servers

  • An MCP server that provides LLMs access to other LLMs. (JavaScript, MIT License; security: A, license: A, quality: A)
  • An MCP server that allows Claude to interact with local LLMs running in LM Studio, providing access to list models, generate text, and use chat completions through local models. (Python; security: not tested, license: not found, quality: not tested)
  • An MCP server that allows agents to test and compare LLM prompts across OpenAI and Anthropic models, supporting single tests, side-by-side comparisons, and multi-turn conversations. (Python, MIT License; security: not tested, license: A, quality: not tested)
  • A lightweight MCP server that provides a unified interface to various LLM providers including OpenAI, Anthropic, Google Gemini, Groq, DeepSeek, and Ollama. (Python; security: A, license: not found, quality: A)

View all related MCP servers

MCP directory API

We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/Eliovp-BV/mcp-vllm-benchmark'
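
The same call from Python, as a small sketch; it assumes the endpoint returns JSON, which should be verified before relying on specific fields:

    import httpx

    # Fetch this server's directory entry from the Glama MCP API.
    # Assumption: the endpoint returns a JSON document describing the server.
    resp = httpx.get("https://glama.ai/api/mcp/v1/servers/Eliovp-BV/mcp-vllm-benchmark")
    resp.raise_for_status()
    print(resp.json())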

If you have feedback or need assistance with the MCP directory API, please join our Discord server.