
Claude Concilium


Multi-agent AI consultation framework for Claude Code via MCP.

Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.

Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ─► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate

The Problem

Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.

The Solution

Concilium runs parallel consultations with multiple LLMs through the standard MCP protocol. Each MCP server wraps a CLI tool, so no API keys are needed for the primary providers; they authenticate via OAuth.

Key features:

  • Parallel consultation with 2+ AI agents

  • Production-grade fallback chains with error detection

  • Each MCP server works standalone or as part of Concilium

  • Plug & play: clone, npm install, add to .mcp.json

Architecture

┌─────────────────────────────────────────────────────────┐
│                       Claude Code                       │
│                                                         │
│   "Review this code for race conditions"                │
│                                                         │
│   ┌──────────────┐   ┌──────────────┐                   │
│   │  MCP Call #1 │   │  MCP Call #2 │    (parallel)     │
│   └──────┬───────┘   └──────┬───────┘                   │
│          │                  │                           │
└──────────┼──────────────────┼───────────────────────────┘
           │                  │
           ▼                  ▼
    ┌──────────────┐   ┌──────────────┐
    │  mcp-openai  │   │  mcp-gemini  │     Primary agents
    │ (codex exec) │   │  (gemini -p) │
    └──────┬───────┘   └──────┬───────┘
           │                  │
           ▼                  ▼
    ┌──────────────┐   ┌──────────────┐
    │    OpenAI    │   │    Google    │     LLM providers
    │    (OAuth)   │   │    (OAuth)   │
    └──────────────┘   └──────────────┘

   Fallback chain (on quota/error):
   OpenAI → Qwen → DeepSeek
   Gemini → Qwen → DeepSeek

Quickstart

1. Clone and install

git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium

# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..

# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs

Expected output:

PASS mcp-openai  (Tools: openai_chat, openai_review)
PASS mcp-gemini  (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen    (Tools: qwen_chat)
All tests passed.

2. Set up providers

Pick at least 2 providers:

| Provider | Auth                  | Free Tier                   | Setup       |
|----------|-----------------------|-----------------------------|-------------|
| OpenAI   | `codex login` (OAuth) | ChatGPT Plus weekly credits | Setup guide |
| Gemini   | Google OAuth          | 1,000 req/day               | Setup guide |
| Qwen     | OAuth or API key      | Varies                      | Setup guide |
| DeepSeek | API key               | Pay-per-use (cheap)         | Setup guide |

3. Add to Claude Code

Copy config/mcp.json.example and update paths:

# Edit the example with your actual paths
cp config/mcp.json.example .mcp.json
# Update "/path/to/claude-concilium" with actual path

Or add servers individually to your existing .mcp.json:

{
  "mcpServers": {
    "mcp-openai": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"],
      "env": {
        "CODEX_HOME": "~/.codex-minimal"
      }
    },
    "mcp-gemini": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    }
  }
}
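
Claude Code launches each server with the literal command and args from .mcp.json, so relative paths or typos fail silently at startup. As an illustrative sanity check (this helper is not part of the repo), a config object can be validated like this:

```javascript
// Hypothetical helper: check that every entry in an .mcp.json-style object
// is a stdio server with a command and an absolute (POSIX-style) script path.
function checkMcpConfig(config) {
  const problems = [];
  for (const [name, srv] of Object.entries(config.mcpServers ?? {})) {
    if (srv.type !== "stdio") problems.push(`${name}: type should be "stdio"`);
    if (!srv.command) problems.push(`${name}: missing "command"`);
    const script = srv.args?.[0] ?? "";
    if (!script.startsWith("/")) problems.push(`${name}: server path is not absolute`);
  }
  return problems; // empty array means the config looks sane
}
```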

4. Install the skill (optional)

Copy the Concilium skill to your Claude Code commands:

cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md

Now use /ai-concilium in Claude Code to trigger a multi-agent consultation.

MCP Servers

Each server can be used independently — you don't need all of them.

| Server     | CLI Tool | Auth                 | Tools                       |
|------------|----------|----------------------|-----------------------------|
| mcp-openai | codex    | OAuth (ChatGPT Plus) | openai_chat, openai_review  |
| mcp-gemini | gemini   | Google OAuth         | gemini_chat, gemini_analyze |
| mcp-qwen   | qwen     | OAuth / API key      | qwen_chat                   |

DeepSeek uses the existing deepseek-mcp-server npm package — no custom server needed.

How It Works

Consultation Flow

  1. Formulate — describe the problem concisely (under 500 chars)

  2. Send in parallel — OpenAI + Gemini get the same prompt

  3. Handle errors — if a provider fails, fallback chain kicks in (Qwen → DeepSeek)

  4. Synthesize — compare responses, find consensus

  5. Iterate (optional) — resolve disagreements with follow-up questions

  6. Decide — apply the synthesized solution
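
Step 2 can be sketched as a single `Promise.allSettled` over the agent calls, keeping whichever opinions come back even if one agent fails. This is an illustrative sketch, not the skill's actual code; `askOpenAI` and `askGemini` stand in for the MCP tool calls:

```javascript
// Send the same prompt to two agents in parallel and collect the
// successful responses; a single failing agent does not abort the run.
async function consult(prompt, askOpenAI, askGemini) {
  const results = await Promise.allSettled([askOpenAI(prompt), askGemini(prompt)]);
  const opinions = results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
  if (opinions.length === 0) throw new Error("All agents failed");
  return opinions; // hand these to the synthesis step
}
```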

Error Detection

All servers detect provider-specific errors and return structured responses:

| Error Type                   | Meaning                   | Action                         |
|------------------------------|---------------------------|--------------------------------|
| QUOTA_EXCEEDED               | Rate/credit limit hit     | Use fallback provider          |
| AUTH_EXPIRED / AUTH_REQUIRED | Token needs refresh       | Re-authenticate the CLI        |
| AUTH_NOT_CONFIGURED          | Qwen auth type not set    | Set the QWEN_AUTH_TYPE env var |
| MODEL_NOT_SUPPORTED          | Model unavailable on plan | Use the default model          |
| Timeout                      | Process hung              | Auto-killed; use fallback      |
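
The detection itself amounts to mapping CLI stderr output to one of the structured types above. The patterns below are illustrative, not the servers' actual regexes (though the "no auth type is selected" string for AUTH_NOT_CONFIGURED is documented in the changelog):

```javascript
// Hypothetical sketch: classify raw stderr into a structured error type.
// The real patterns are provider-specific and live in each server.
const ERROR_PATTERNS = [
  { type: "QUOTA_EXCEEDED", re: /quota|rate limit|credit/i },
  { type: "AUTH_EXPIRED", re: /token expired|re-?authenticate/i },
  { type: "AUTH_NOT_CONFIGURED", re: /no auth type is selected/i },
  { type: "MODEL_NOT_SUPPORTED", re: /model .* (not supported|unavailable)/i },
];

function classifyError(stderr) {
  for (const { type, re } of ERROR_PATTERNS) {
    if (re.test(stderr)) return { error: type, raw: stderr };
  }
  return { error: "UNKNOWN", raw: stderr };
}
```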

Fallback Chain

Primary:    OpenAI ─────────────► Response
            (QUOTA_EXCEEDED?)
                   │
Fallback 1: Qwen ──┴────────────► Response
            (timeout?)
                   │
Fallback 2: DeepSeek ───────────► Response (always available)
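
The chain above is just an ordered walk over providers, advancing whenever one returns a structured error. A minimal sketch (the function and the `{ error, text }` result shape are illustrative, not the servers' actual API):

```javascript
// Try each provider in order; stop at the first success, otherwise
// fall through and report every failure encountered along the chain.
// `providers` is an array of async functions returning { error?, text? }.
async function consultWithFallback(providers, prompt) {
  const failures = [];
  for (const provider of providers) {
    const result = await provider(prompt);
    if (!result.error) return result;   // success: stop walking the chain
    failures.push(result.error);        // e.g. QUOTA_EXCEEDED, Timeout
  }
  return { error: "ALL_PROVIDERS_FAILED", failures };
}
```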

When to Use Concilium

| Scenario                 | Recommended Agents                         |
|--------------------------|--------------------------------------------|
| Code review              | OpenAI + Gemini (parallel)                 |
| Architecture decision    | OpenAI + Gemini → iterate if they disagree |
| Stuck bug (3+ attempts)  | All available agents                       |
| Performance optimization | Gemini (1M context) + OpenAI               |
| Security review          | OpenAI + Gemini + manual verification      |

Docker

Run any server in a container:

# Build
docker build -t claude-concilium .

# Run a specific server (mcp-openai | mcp-gemini | mcp-qwen)
docker run -i --rm -e SERVER=mcp-openai claude-concilium
docker run -i --rm -e SERVER=mcp-gemini claude-concilium

Note: The servers wrap CLI tools (codex, gemini, qwen) that require local authentication. Mount your auth credentials when running:

# OpenAI (Codex)
docker run -i --rm -e SERVER=mcp-openai \
  -v ~/.codex:/root/.codex:ro \
  claude-concilium

# Gemini
docker run -i --rm -e SERVER=mcp-gemini \
  -v ~/.config/gemini:/root/.config/gemini:ro \
  claude-concilium

Customization

See docs/customization.md for:

  • Adding your own LLM provider

  • Modifying the fallback chain

  • MCP server template

  • Custom prompt strategies

Documentation

Changelog

v2.0.0 (2026-03-02)

mcp-qwen:

  • Prompt delivery via stdin (-p -) instead of command argument — safe for any content, no length limits

  • OAuth auth-type support via QWEN_AUTH_TYPE env var (e.g., qwen-oauth)

  • New error detection: AUTH_NOT_CONFIGURED (catches "no auth type is selected")

  • Graceful shutdown handler (SIGTERM)

mcp-openai:

  • Default timeout increased from 90s to 180s (codex exec can be slow on complex prompts)

All servers:

  • Version bumped to 2.0.0

  • Updated documentation and setup guides

v0.1.0 (2025-12-15)

  • Initial release with 3 MCP servers (OpenAI, Gemini, Qwen)

  • Concilium skill with fallback chains

  • Smoke test suite

  • Docker support

License

MIT
