MindBridge MCP Server ⚡ The AI Router for Big Brain Moves

MindBridge is your AI command hub — a Model Context Protocol (MCP) server built to unify, organize, and supercharge your LLM workflows.

Forget vendor lock-in. Forget juggling a dozen APIs.
MindBridge connects your apps to any model, from OpenAI and Anthropic to Ollama and DeepSeek — and lets them talk to each other like a team of expert consultants.

Need raw speed? Grab a cheap model.
Need complex reasoning? Route it to a specialist.
Want a second opinion? MindBridge has that built in.

This isn't just model aggregation. It's model orchestration.


Core Features 🔥

| What it does | Why you should use it |
| --- | --- |
| Multi-LLM Support | Instantly switch between OpenAI, Anthropic, Google, DeepSeek, OpenRouter, Ollama (local models), and OpenAI-compatible APIs. |
| Reasoning Engine Aware | Smart routing to models built for deep reasoning, like Claude, GPT-4o, DeepSeek Reasoner, etc. |
| getSecondOpinion Tool | Ask multiple models the same question and compare responses side by side. |
| OpenAI-Compatible API Layer | Drop MindBridge into any tool expecting OpenAI endpoints (Azure, Together.ai, Groq, etc.). |
| Auto-Detects Providers | Just add your keys. MindBridge handles setup and discovery automagically. |
| Flexible as Hell | Configure everything via env vars, MCP config, or JSON. It's your call. |


Why MindBridge?

"Every LLM is good at something. MindBridge makes them work together."

Perfect for:

  • Agent builders

  • Multi-model workflows

  • AI orchestration engines

  • Reasoning-heavy tasks

  • Building smarter AI dev environments

  • LLM-powered backends

  • Anyone tired of vendor walled gardens


Installation 🛠️

Option 1: Install from npm (Recommended)

# Install globally
npm install -g @pinkpixel/mindbridge

# Or use with npx
npx @pinkpixel/mindbridge

Option 2: Install from source

  1. Clone the repository:

    git clone https://github.com/pinkpixel-dev/mindbridge.git
    cd mindbridge
  2. Install dependencies:

    chmod +x install.sh
    ./install.sh
  3. Configure environment variables:

    cp .env.example .env

    Edit .env and add your API keys for the providers you want to use.

Configuration ⚙️

Environment Variables

The server supports the following environment variables:

  • OPENAI_API_KEY: Your OpenAI API key

  • ANTHROPIC_API_KEY: Your Anthropic API key

  • DEEPSEEK_API_KEY: Your DeepSeek API key

  • GOOGLE_API_KEY: Your Google AI API key

  • OPENROUTER_API_KEY: Your OpenRouter API key

  • OLLAMA_BASE_URL: Ollama instance URL (default: http://localhost:11434)

  • OPENAI_COMPATIBLE_API_KEY: (Optional) API key for OpenAI-compatible services

  • OPENAI_COMPATIBLE_API_BASE_URL: Base URL for OpenAI-compatible services

  • OPENAI_COMPATIBLE_API_MODELS: Comma-separated list of available models
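
For example, a minimal .env enabling two cloud providers plus a local Ollama instance might look like this (the key values below are placeholders; set only the providers you plan to use):

OPENAI_API_KEY=sk-your-key-here
ANTHROPIC_API_KEY=sk-ant-your-key-here
OLLAMA_BASE_URL=http://localhost:11434

Providers without a key are simply not registered, so an incomplete .env is fine.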

MCP Configuration

For use with MCP-compatible IDEs like Cursor or Windsurf, you can use the following configuration in your mcp.json file:

{ "mcpServers": { "mindbridge": { "command": "npx", "args": [ "-y", "@pinkpixel/mindbridge" ], "env": { "OPENAI_API_KEY": "OPENAI_API_KEY_HERE", "ANTHROPIC_API_KEY": "ANTHROPIC_API_KEY_HERE", "GOOGLE_API_KEY": "GOOGLE_API_KEY_HERE", "DEEPSEEK_API_KEY": "DEEPSEEK_API_KEY_HERE", "OPENROUTER_API_KEY": "OPENROUTER_API_KEY_HERE" }, "provider_config": { "openai": { "default_model": "gpt-4o" }, "anthropic": { "default_model": "claude-3-5-sonnet-20241022" }, "google": { "default_model": "gemini-2.0-flash" }, "deepseek": { "default_model": "deepseek-chat" }, "openrouter": { "default_model": "openai/gpt-4o" }, "ollama": { "base_url": "http://localhost:11434", "default_model": "llama3" }, "openai_compatible": { "api_key": "API_KEY_HERE_OR_REMOVE_IF_NOT_NEEDED", "base_url": "FULL_API_URL_HERE", "available_models": ["MODEL1", "MODEL2"], "default_model": "MODEL1" } }, "default_params": { "temperature": 0.7, "reasoning_effort": "medium" }, "alwaysAllow": [ "getSecondOpinion", "listProviders", "listReasoningModels" ] } } }

Replace the API keys with your actual keys. For the OpenAI-compatible configuration, you can remove the api_key field if the service doesn't require authentication.

Usage 💫

Starting the Server

Development mode with auto-reload:

npm run dev

Production mode:

npm run build
npm start

When installed globally:

mindbridge
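
However you start it, the server speaks MCP over stdio. If you want to poke at the tools interactively before wiring up an IDE, one option (a general-purpose tool, not part of this project) is the MCP Inspector:

npx @modelcontextprotocol/inspector npx -y @pinkpixel/mindbridge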

Available Tools

  1. getSecondOpinion

    {
      provider: string;        // LLM provider name
      model: string;           // Model identifier
      prompt: string;          // Your question or prompt
      systemPrompt?: string;   // Optional system instructions
      temperature?: number;    // Response randomness (0-1)
      maxTokens?: number;      // Maximum response length
      reasoning_effort?: 'low' | 'medium' | 'high'; // For reasoning models
    }
  2. listProviders

    • Lists all configured providers and their available models

    • No parameters required

  3. listReasoningModels

    • Lists models optimized for reasoning tasks

    • No parameters required

Example Usage 📝

// Get an opinion from GPT-4o
{
  "provider": "openai",
  "model": "gpt-4o",
  "prompt": "What are the key considerations for database sharding?",
  "temperature": 0.7,
  "maxTokens": 1000
}

// Get a reasoned response from OpenAI's o1 model
{
  "provider": "openai",
  "model": "o1",
  "prompt": "Explain the mathematical principles behind database indexing",
  "reasoning_effort": "high",
  "maxTokens": 4000
}

// Get a reasoned response from DeepSeek
{
  "provider": "deepseek",
  "model": "deepseek-reasoner",
  "prompt": "What are the tradeoffs between microservices and monoliths?",
  "reasoning_effort": "high",
  "maxTokens": 2000
}

// Use an OpenAI-compatible provider
{
  "provider": "openaiCompatible",
  "model": "YOUR_MODEL_NAME",
  "prompt": "Explain the concept of eventual consistency in distributed systems",
  "temperature": 0.5,
  "maxTokens": 1500
}
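
These payloads are the arguments your MCP client passes to getSecondOpinion. For a standalone script, a minimal sketch using the official MCP TypeScript SDK might look like the following (the client name and the choice to forward the whole environment are illustrative assumptions, not part of this project):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn MindBridge over stdio, the same way an MCP-compatible IDE would.
// Forwarding process.env passes your provider API keys to the server.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@pinkpixel/mindbridge"],
  env: process.env as Record<string, string>,
});

const client = new Client(
  { name: "mindbridge-example", version: "1.0.0" },
  { capabilities: {} }
);
await client.connect(transport);

// Same shape as the first JSON example above.
const result = await client.callTool({
  name: "getSecondOpinion",
  arguments: {
    provider: "openai",
    model: "gpt-4o",
    prompt: "What are the key considerations for database sharding?",
    temperature: 0.7,
    maxTokens: 1000,
  },
});

console.log(result.content);
await client.close();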

Development 🔧

  • npm run lint: Run ESLint

  • npm run format: Format code with Prettier

  • npm run clean: Clean build artifacts

  • npm run build: Build the project

Contributing

PRs welcome! Help us make AI workflows less dumb.


License

MIT — do whatever, just don't be evil.


Made with ❤️ by Pink Pixel

Related MCP Servers

  • A
    security
    -
    license
    A
    quality
    Provides integration with OpenRouter.ai, allowing access to various AI models through a unified interface.
    Last updated -
    58
    58
    Apache 2.0
  • A
    security
    A
    license
    A
    quality
    Enables AI agents to interact with multiple LLM providers (OpenAI, Anthropic, Google, DeepSeek) through a standardized interface, making it easy to switch between models or use multiple models in the same application.
    Last updated -
    1
    6
    MIT License
    • Linux
    • Apple
  • -
    security
    -
    license
    -
    quality
    A unified Model Context Protocol Gateway that bridges LLM interfaces with various tools and services, providing OpenAI API compatibility and supporting both synchronous and asynchronous tool execution.
    Last updated -
    1
  • -
    security
    A
    license
    -
    quality
    A middleware system that connects large language models (LLMs) with various tool services through an OpenAI-compatible API, enabling enhanced AI assistant capabilities with features like file operations, web browsing, and database management.
    Last updated -
    3
    MIT License

View all related MCP servers

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pinkpixel-dev/mindbridge-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server