claude-concilium
This server integrates with OpenAI via the Codex CLI, enabling general chat interactions and automated code reviews as part of a multi-agent consultation workflow.
openai_chat: Send arbitrary text prompts to OpenAI with configurable working directory (cwd), model override, and timeout (default 90s). Returns structured error responses for quota limits (e.g., QUOTA_EXCEEDED), enabling fallback to other providers.
openai_review: Perform automated code reviews on Git repositories with flexible targeting:
Review uncommitted changes (default)
Review changes against a specific base branch
Review a specific commit by SHA
Provide custom review instructions (e.g., "Focus on error handling and race conditions")
Configurable timeout (default 120s) and working directory
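For illustration, an MCP tools/call request for openai_chat has roughly the following shape. The argument names here (prompt, cwd, timeout_ms) are assumptions based on the description above, not the server's verified schema:

```javascript
// Sketch of the JSON-RPC message an MCP client sends to invoke openai_chat
// over stdio. Argument names are illustrative; check the server's tool
// schema for the real ones.
function buildToolCall(id, name, args) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

const msg = buildToolCall(1, "openai_chat", {
  prompt: "Summarize the tradeoffs of optimistic locking",
  cwd: "/tmp/myrepo",     // working directory (illustrative)
  timeout_ms: 90_000,     // timeout override (illustrative)
});

// MCP stdio transport is newline-delimited JSON
console.log(JSON.stringify(msg));
```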
Multi-agent use: Works alongside other LLM servers (Gemini, Qwen) within the Claude Concilium framework for diverse, multi-perspective AI consultations, or operates standalone as an MCP server.
Claude Concilium
Multi-agent AI consultation framework for Claude Code via MCP.
Get a second (and third) opinion from other LLMs when Claude Code alone isn't enough.
```
Claude Code ──┬── OpenAI (Codex CLI) ──► Opinion A
              ├── Gemini (gemini-cli) ─► Opinion B
              │
              └── Synthesis ◄── Consensus or iterate
```

The Problem
Claude Code is powerful, but one brain can miss bugs, overlook edge cases, or get stuck in a local optimum. Critical decisions benefit from diverse perspectives.
The Solution
Concilium runs parallel consultations with multiple LLMs through standard MCP protocol. Each LLM server wraps a CLI tool — no API keys needed for the primary providers (they use OAuth).
Key features:
Parallel consultation with 2+ AI agents
Production-grade fallback chains with error detection
Each MCP server works standalone or as part of Concilium
Plug & play: clone, npm install, add to .mcp.json
Architecture
```
┌───────────────────────────────────────────────────────┐
│                      Claude Code                      │
│                                                       │
│        "Review this code for race conditions"         │
│                                                       │
│   ┌──────────────┐     ┌──────────────┐               │
│   │ MCP Call #1  │     │ MCP Call #2  │  (parallel)   │
│   └──────┬───────┘     └──────┬───────┘               │
│          │                    │                       │
└──────────┼────────────────────┼───────────────────────┘
           │                    │
           ▼                    ▼
    ┌──────────────┐     ┌──────────────┐
    │  mcp-openai  │     │  mcp-gemini  │   Primary agents
    │ (codex exec) │     │ (gemini -p)  │
    └──────┬───────┘     └──────┬───────┘
           │                    │
           ▼                    ▼
    ┌──────────────┐     ┌──────────────┐
    │    OpenAI    │     │    Google    │   LLM providers
    │   (OAuth)    │     │   (OAuth)    │
    └──────────────┘     └──────────────┘
```
Fallback chain (on quota/error):

```
OpenAI → Qwen → DeepSeek
Gemini → Qwen → DeepSeek
```

Quickstart
1. Clone and install
```shell
git clone https://github.com/spyrae/claude-concilium.git
cd claude-concilium

# Install dependencies for each server
cd servers/mcp-openai && npm install && cd ../..
cd servers/mcp-gemini && npm install && cd ../..
cd servers/mcp-qwen && npm install && cd ../..

# Verify all servers work (no CLI tools required)
node test/smoke-test.mjs
```

Expected output:
```
PASS mcp-openai (Tools: openai_chat, openai_review)
PASS mcp-gemini (Tools: gemini_chat, gemini_analyze)
PASS mcp-qwen (Tools: qwen_chat)
All tests passed.
```

2. Set up providers
Pick at least 2 providers:
| Provider | Auth | Free Tier |
| --- | --- | --- |
| OpenAI | OAuth (ChatGPT Plus) | ChatGPT Plus weekly credits |
| Gemini | Google OAuth | 1000 req/day |
| Qwen | OAuth or API key | Varies |
| DeepSeek | API key | Pay-per-use (cheap) |

Per-provider setup guides are listed under Documentation below.
3. Add to Claude Code
Copy config/mcp.json.example and update paths:
```shell
# Edit the example with your actual paths
cp config/mcp.json.example .mcp.json
# Update "/path/to/claude-concilium" with the actual path
```

Or add servers individually to your existing .mcp.json:
```json
{
  "mcpServers": {
    "mcp-openai": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-openai/server.js"],
      "env": {
        "CODEX_HOME": "~/.codex-minimal"
      }
    },
    "mcp-gemini": {
      "type": "stdio",
      "command": "node",
      "args": ["/absolute/path/to/servers/mcp-gemini/server.js"]
    }
  }
}
```

4. Install the skill (optional)
Copy the Concilium skill to your Claude Code commands:
```shell
cp skill/ai-concilium.md ~/.claude/commands/ai-concilium.md
```

Now use /ai-concilium in Claude Code to trigger a multi-agent consultation.
MCP Servers
Each server can be used independently — you don't need all of them.
| Server | CLI Tool | Auth | Tools |
| --- | --- | --- | --- |
| mcp-openai | codex | OAuth (ChatGPT Plus) | openai_chat, openai_review |
| mcp-gemini | gemini | Google OAuth | gemini_chat, gemini_analyze |
| mcp-qwen | qwen | OAuth / API key | qwen_chat |
DeepSeek uses the existing deepseek-mcp-server npm package — no custom server needed.
How It Works
Consultation Flow
Formulate — describe the problem concisely (under 500 chars)
Send in parallel — OpenAI + Gemini get the same prompt
Handle errors — if a provider fails, fallback chain kicks in (Qwen → DeepSeek)
Synthesize — compare responses, find consensus
Iterate (optional) — resolve disagreements with follow-up questions
Decide — apply the synthesized solution
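The "send in parallel" step can be sketched with Promise.allSettled. This is a minimal illustration, not the skill's actual code; askOpenAI and askGemini are stand-ins for the real MCP tool calls:

```javascript
// Minimal sketch of the consult-in-parallel step. askOpenAI/askGemini are
// hypothetical stand-ins for the real MCP tool calls (openai_chat, gemini_chat).
async function consult(prompt, agents) {
  // Same prompt goes to every agent at once; a failed agent does not
  // block the others (that is what allSettled buys over Promise.all)
  const results = await Promise.allSettled(agents.map((ask) => ask(prompt)));
  const opinions = results
    .filter((r) => r.status === "fulfilled")
    .map((r) => r.value);
  const failures = results.filter((r) => r.status === "rejected").length;
  return { opinions, failures };
}

// Stub agents for demonstration
const askOpenAI = async (p) => `OpenAI on: ${p}`;
const askGemini = async (p) => `Gemini on: ${p}`;

consult("Review this code for race conditions", [askOpenAI, askGemini]).then(
  ({ opinions, failures }) => {
    // The synthesis step would compare opinions and look for consensus here
    console.log(opinions.length, "opinions,", failures, "failures");
  }
);
```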
Error Detection
All servers detect provider-specific errors and return structured responses:
| Error Type | Meaning | Action |
| --- | --- | --- |
| QUOTA_EXCEEDED | Rate/credit limit hit | Use fallback provider |
|  | Token needs refresh | Re-authenticate CLI |
| AUTH_NOT_CONFIGURED | Qwen auth type not set | Set QWEN_AUTH_TYPE |
|  | Model unavailable on plan | Use default model |
| Timeout | Process hung | Auto-killed, use fallback |
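Combining the structured errors above with the fallback chain, a simplified sketch (not the servers' actual implementation) might look like:

```javascript
// Illustrative fallback chain. Each provider function is assumed to reject
// with an error whose .code matches the structured responses above
// (e.g. "QUOTA_EXCEEDED"); the stub codes below are only for demonstration.
async function withFallback(prompt, providers) {
  let lastError;
  for (const { name, ask } of providers) {
    try {
      return { provider: name, answer: await ask(prompt) };
    } catch (err) {
      lastError = err; // quota, timeout, auth: try the next provider
    }
  }
  throw lastError; // every provider in the chain failed
}

// Stub chain: OpenAI over quota, Qwen times out, DeepSeek answers
const chain = [
  { name: "openai", ask: async () => { throw Object.assign(new Error("quota"), { code: "QUOTA_EXCEEDED" }); } },
  { name: "qwen", ask: async () => { throw Object.assign(new Error("timeout"), { code: "TIMEOUT" }); } },
  { name: "deepseek", ask: async (p) => `DeepSeek on: ${p}` },
];

withFallback("hello", chain).then((r) => console.log(r.provider)); // "deepseek"
```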
Fallback Chain
```
Primary:    OpenAI ──────────────► Response
            (QUOTA_EXCEEDED?)
                   │
Fallback 1: Qwen ──┴────────────► Response
            (timeout?)
                   │
Fallback 2: DeepSeek ────────────► Response  (always available)
```

When to Use Concilium
| Scenario | Recommended Agents |
| --- | --- |
| Code review | OpenAI + Gemini (parallel) |
| Architecture decision | OpenAI + Gemini → iterate if disagree |
| Stuck bug (3+ attempts) | All available agents |
| Performance optimization | Gemini (1M context) + OpenAI |
| Security review | OpenAI + Gemini + manual verification |
Docker
Run any server in a container:
```shell
# Build
docker build -t claude-concilium .

# Run a specific server (mcp-openai | mcp-gemini | mcp-qwen)
docker run -i --rm -e SERVER=mcp-openai claude-concilium
docker run -i --rm -e SERVER=mcp-gemini claude-concilium
```

Note: The servers wrap CLI tools (codex, gemini, qwen) that require local authentication. Mount your auth credentials when running:
```shell
# OpenAI (Codex)
docker run -i --rm -e SERVER=mcp-openai \
  -v ~/.codex:/root/.codex:ro \
  claude-concilium

# Gemini
docker run -i --rm -e SERVER=mcp-gemini \
  -v ~/.config/gemini:/root/.config/gemini:ro \
  claude-concilium
```

Customization
See docs/customization.md for:
Adding your own LLM provider
Modifying the fallback chain
MCP server template
Custom prompt strategies
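As a rough starting point (the real template is in docs/customization.md), a bare-bones stdio MCP dispatcher can be written without any SDK. The tool name myllm_chat and its schema are hypothetical:

```javascript
// Simplified sketch of an MCP stdio server's request dispatch. Not the
// project's actual template; the tool name and schema are hypothetical.
const TOOLS = [
  {
    name: "myllm_chat",
    description: "Send a prompt to MyLLM",
    inputSchema: {
      type: "object",
      properties: { prompt: { type: "string" } },
      required: ["prompt"],
    },
  },
];

function handleMessage(msg) {
  switch (msg.method) {
    case "initialize":
      return reply(msg.id, {
        protocolVersion: "2024-11-05",
        capabilities: { tools: {} },
        serverInfo: { name: "mcp-myllm", version: "0.1.0" },
      });
    case "tools/list":
      return reply(msg.id, { tools: TOOLS });
    case "tools/call":
      // A real server would spawn its CLI here (cf. codex exec / gemini -p)
      return reply(msg.id, {
        content: [{ type: "text", text: `echo: ${msg.params.arguments.prompt}` }],
      });
    default:
      return null; // ignore notifications
  }
}

const reply = (id, result) => ({ jsonrpc: "2.0", id, result });

// Wire the dispatcher to stdin/stdout (newline-delimited JSON)
process.stdin.setEncoding("utf8");
let buf = "";
process.stdin.on("data", (chunk) => {
  buf += chunk;
  let nl;
  while ((nl = buf.indexOf("\n")) !== -1) {
    const line = buf.slice(0, nl);
    buf = buf.slice(nl + 1);
    if (!line.trim()) continue;
    const res = handleMessage(JSON.parse(line));
    if (res) process.stdout.write(JSON.stringify(res) + "\n");
  }
});
```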
Documentation
Architecture — flow diagrams, error handling, design decisions
OpenAI Setup — Codex CLI, ChatGPT Plus, minimal config
Gemini Setup — gemini-cli, Google OAuth
Qwen Setup — Qwen CLI, DashScope
DeepSeek Setup — API key, npm package
Customization — add your own LLM, modify chains
Changelog
v2.0.0 (2026-03-02)
mcp-qwen:
Prompt delivery via stdin (-p -) instead of a command argument — safe for any content, no length limits
OAuth auth-type support via the QWEN_AUTH_TYPE env var (e.g., qwen-oauth)
New error detection: AUTH_NOT_CONFIGURED (catches "no auth type is selected")
Graceful shutdown handler (SIGTERM)
mcp-openai:
Default timeout increased from 90s to 180s (codex exec can be slow on complex prompts)
All servers:
Version bumped to 2.0.0
Updated documentation and setup guides
v0.1.0 (2025-12-15)
Initial release with 3 MCP servers (OpenAI, Gemini, Qwen)
Concilium skill with fallback chains
Smoke test suite
Docker support
License
MIT