MCP Doctor

by Jiansen

Problem

Most MCP servers are built with only one audience in mind (usually human developers reading a README). But a successful MCP server needs to satisfy three audiences simultaneously:

  • Distribution platforms (Registry, Smithery, PulseMCP, Glama) need structured metadata

  • Human users need clear purpose, trust signals, and low install friction

  • AI agents need unambiguous tool descriptions, declared side effects, and token-efficient responses

MCP Doctor checks all six dimensions of "contract quality" and gives you actionable recommendations.

Quick Start

pip install mcp-doctor
mcp-doctor check /path/to/your-mcp-server

What It Checks

| Dimension | Question |
| --- | --- |
| Task Clarity | Is the server's purpose immediately clear? |
| Trust & Safety | Are side effects, permissions, and safety boundaries declared? |
| Interface Quality | Are tools well-named, well-described, and well-typed? |
| Token Efficiency | Will tool responses fit an agent's context budget? |
| Install Friction | How fast can someone go from discovery to first use? |
| Cross-platform Readiness | Is metadata complete for Registry, Smithery, PulseMCP, Glama? |

Usage

CLI

# Terminal output (default, rule-based)
mcp-doctor check /path/to/server

# JSON output (for CI/CD)
mcp-doctor check /path/to/server --format json

# Markdown output (for README badges or reports)
mcp-doctor check /path/to/server --format markdown

# AI-enhanced evaluation (requires OPENAI_API_KEY)
mcp-doctor check /path/to/server --mode ai

# AI mode with a specific model
mcp-doctor check /path/to/server --mode ai --model gpt-4o
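
In CI, the JSON report can be parsed and turned into a pass/fail gate. A minimal sketch, assuming a hypothetical report shape with a `dimensions` list of `{name, score}` objects (check the JSON output of your installed version for the actual field names):

```python
import json
import subprocess

def run_check(path: str) -> dict:
    """Run mcp-doctor and parse its JSON report."""
    result = subprocess.run(
        ["mcp-doctor", "check", path, "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def gate(report: dict, min_score: int = 75) -> bool:
    """Fail the build if any dimension scores below min_score."""
    return all(d["score"] >= min_score for d in report["dimensions"])

# Example with a stubbed report (a real run would use run_check):
sample = {"dimensions": [{"name": "Task Clarity", "score": 92},
                         {"name": "Token Efficiency", "score": 70}]}
print(gate(sample))  # prints False: one dimension is below 75
```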

As an MCP Server

MCP Doctor is also an MCP server, so AI agents can use it to check other servers:

{
  "mcpServers": {
    "mcp-doctor": {
      "command": "python",
      "args": ["-m", "mcp_doctor.mcp_server"]
    }
  }
}

Tools:

  • check_server(path, format, mode, model) — Run all 6 checks. Use mode="ai" for LLM-enhanced review.

  • list_dimensions() — List what MCP Doctor evaluates

For AI Agents: Quick Install

One-liner (paste this to your AI agent):

Install mcp-doctor (pip install mcp-doctor) and add it to your MCP config with command python -m mcp_doctor.mcp_server. Then use the check_server tool to check any MCP server's quality.

Step by step:

  1. Install: pip install mcp-doctor

  2. Add to your MCP configuration:

    {
      "mcpServers": {
        "mcp-doctor": {
          "command": "python",
          "args": ["-m", "mcp_doctor.mcp_server"]
        }
      }
    }
  3. Use the check_server tool with a path to any MCP server repo

  4. If MCP is not available, fall back to the CLI: mcp-doctor check /path/to/server --format json

Agent workflow: check server -> review scores -> identify lowest dimensions -> follow recommendations
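
The workflow above can be sketched in Python, assuming the same hypothetical report shape (a `dimensions` list with `name` and `score` fields):

```python
def weakest_dimensions(report: dict, n: int = 2) -> list[str]:
    """Return the n lowest-scoring dimensions to prioritize first."""
    ranked = sorted(report["dimensions"], key=lambda d: d["score"])
    return [d["name"] for d in ranked[:n]]

report = {"dimensions": [
    {"name": "Task Clarity", "score": 92},
    {"name": "Install Friction", "score": 58},
    {"name": "Token Efficiency", "score": 71},
]}
print(weakest_dimensions(report))  # ['Install Friction', 'Token Efficiency']
```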

Scoring

Each dimension is scored 0-100 and graded A/B/C/D:

  • A (90+): Exemplary

  • B (75-89): Good, minor gaps

  • C (55-74): Adequate, notable gaps

  • D (<55): Needs significant improvement
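
The banding above maps directly onto a small function; a sketch (the implementation inside mcp-doctor may differ):

```python
def grade(score: int) -> str:
    """Map a 0-100 dimension score to a letter grade per the bands above."""
    if score >= 90:
        return "A"
    if score >= 75:
        return "B"
    if score >= 55:
        return "C"
    return "D"

print(grade(92), grade(80), grade(60), grade(40))  # prints: A B C D
```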

Theoretical Foundation

MCP Doctor is based on the Contract Quality Framework — the principle that a successful MCP product's promises (what it does, what it costs, what risks it carries) should be equally parseable by platforms, humans, and agents.

The framework draws from:

  • Anthropic's "Writing effective tools for agents" (tool description engineering, namespacing, token efficiency)

  • Official MCP Registry requirements (server.json schema, namespace verification)

  • Cross-platform analysis of Smithery, PulseMCP, Glama ranking signals

Evaluation Modes

| Mode | Flag | Deterministic | Network | API Key |
| --- | --- | --- | --- | --- |
| Rule-based (default) | --mode rule | Yes | No | No |
| AI-enhanced | --mode ai | No | Yes (LLM API) | OPENAI_API_KEY |

Rule-based mode is fully offline and deterministic. No network calls, no LLM, no API keys needed.

AI mode sends a metadata summary (server name, tool definitions, README preview, rule-based scores) to an OpenAI-compatible API for qualitative review. No source code is sent. Reports include model name and version for reproducibility.

AI mode supports any OpenAI-compatible provider:

  • OPENAI_API_KEY — API key (required for AI mode)

  • OPENAI_BASE_URL — Custom endpoint (for Ollama, vLLM, LM Studio, Azure, etc.)

  • MCP_DOCTOR_MODEL — Model name (default: gpt-4o-mini)

Install AI support: pip install mcp-doctor[ai]
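
The three variables resolve roughly like this (a sketch; only the variable names and the gpt-4o-mini default come from the documentation above):

```python
import os

def resolve_ai_config() -> dict:
    """Read AI-mode settings from the environment, as documented above."""
    api_key = os.environ.get("OPENAI_API_KEY")
    if not api_key:
        raise RuntimeError("AI mode requires OPENAI_API_KEY")
    return {
        "api_key": api_key,
        "base_url": os.environ.get("OPENAI_BASE_URL"),  # None -> provider default
        "model": os.environ.get("MCP_DOCTOR_MODEL", "gpt-4o-mini"),
    }

os.environ["OPENAI_API_KEY"] = "sk-example"
print(resolve_ai_config()["model"])  # gpt-4o-mini unless MCP_DOCTOR_MODEL is set
```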

Trust & Permissions

MCP Doctor is read-only. It never modifies, deletes, or creates files.

| What it accesses | Why |
| --- | --- |
| server.json | Parse MCP metadata |
| README.md | Check documentation quality |
| LICENSE | Verify license presence |
| *.py, *.json in src/ | Detect tool definitions |

All MCP tools have readOnlyHint: true and destructiveHint: false annotations.

Feedback

Found a bug? Have a suggestion? We'd love to hear from you — open an issue on the GitHub repository.

If you're an agent, include the JSON output of check_server in your report for context.

Contributing

git clone https://github.com/Jiansen/mcp-doctor.git
cd mcp-doctor
pip install -e ".[dev,ai]"
ruff check src/ tests/
pytest tests/ -v

If MCP Doctor helped you improve your server, consider giving it a star on GitHub — it helps others discover the tool.

Star on GitHub

License

MIT
