brainstorm-mcp orchestrates multi-round AI brainstorming debates between multiple language models (GPT, Gemini, DeepSeek, Groq, Ollama, etc.) with Claude as an active participant, delivering diverse perspectives and a synthesized conclusion.
Run multi-round debates (brainstorm): Submit a topic and have AI models debate, critique, and refine ideas across 1–10 rounds, culminating in a final synthesized output from a designated synthesizer model.
Specify which models to include (e.g. openai:gpt-4o,deepseek:deepseek-chat)
Configure number of rounds (default: 3, max: 10)
Choose a synthesizer model for final consolidated output (with fallback)
Provide a custom system prompt to guide debate style or constraints
Enable/disable Claude's active participation as a debater via brainstorm_respond
List configured providers (list_providers): View all configured AI providers, their default models, and API key status.
Add providers dynamically (add_provider): Register any OpenAI-compatible API at runtime — including custom or self-hosted models like Ollama.
Additional capabilities: parallel model execution per round, per-model timeouts and fault tolerance (debate continues if a model fails), automatic context truncation near limits, cost/token estimation, and session management with 10-minute TTL and automatic cleanup.
Enables local LLMs to participate in multi-round brainstorming debates, allowing them to critique other models' ideas and refine their own positions within the debate workflow.
Integrates OpenAI models like GPT-4o, o3, and o4 to participate in structured brainstorming debates and serve as synthesizers for final consolidated outputs.
brainstorm-mcp
An MCP server that runs multi-round brainstorming debates between AI models. Connect it to Claude Code (or any MCP client) and let GPT, Gemini, DeepSeek, Groq, Ollama, and others debate your ideas — with Claude as an active participant in every round.
No more single-perspective answers. brainstorm-mcp pits multiple LLMs against each other so you get diverse viewpoints, critiques, and a consolidated synthesis.
Features
Claude as participant — Claude debates alongside external models, bringing full conversation context
Multi-round debates — Models see and critique each other's responses across rounds
Parallel execution — All models respond concurrently within each round
Per-model timeouts — 2-minute timeout per API call, one slow model won't block others
Context truncation — Automatically truncates history when approaching context limits
Cost estimation — Shows estimated token usage and cost per debate
Resilient — One model failing doesn't abort the debate
Synthesizer fallback — If the primary synthesizer fails, tries other models
Session management — Interactive sessions with 10-minute TTL, automatic cleanup
GPT-5.x / o3 / o4 compatible — Automatically uses max_completion_tokens for newer OpenAI models
Cross-platform — Works on macOS, Windows, and Linux
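The per-model timeout and fault-tolerance behavior listed above can be sketched roughly as follows. This is an illustration only, not the server's actual code: `runRound`, `ModelCall`, and the helper names are hypothetical, and only the 2-minute budget comes from the feature list.

```typescript
// Sketch: run all models in parallel with a per-model timeout, keeping
// successful responses even when some models fail or time out.
// Illustrative only -- these names are not the server's real internals.

type ModelCall = (prompt: string) => Promise<string>;

const PER_MODEL_TIMEOUT_MS = 2 * 60 * 1000; // the 2-minute per-call budget

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("model timed out")), ms)
    ),
  ]);
}

async function runRound(
  models: Record<string, ModelCall>,
  prompt: string,
  timeoutMs: number = PER_MODEL_TIMEOUT_MS
): Promise<Record<string, string>> {
  const names = Object.keys(models);
  // allSettled keeps the round alive even if some models reject or time out
  const results = await Promise.allSettled(
    names.map((n) => withTimeout(models[n](prompt), timeoutMs))
  );
  const responses: Record<string, string> = {};
  results.forEach((r, i) => {
    if (r.status === "fulfilled") responses[names[i]] = r.value;
    // failed models are simply skipped; the debate continues without them
  });
  return responses;
}
```

The key design point is `Promise.allSettled` rather than `Promise.all`: a single rejection (failure or timeout) does not abort the whole round.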
How It Works
You ask Claude: "Brainstorm the best architecture for a real-time app"
The tool sends the topic to all configured AI models in parallel (Round 1)
Claude reads their responses and contributes its own perspective
All models (including Claude) see each other's responses and refine their positions (Rounds 2-N)
A synthesizer model produces a final consolidated output
You get back a structured debate with the synthesis
Claude doesn't just orchestrate — it debates alongside GPT, Gemini, DeepSeek, and others.
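The flow above can be sketched as a simple loop. This is a minimal illustration of the round structure, not the server's implementation; `debate`, `Model`, and the prompt strings are all stand-ins.

```typescript
// Sketch of the multi-round flow: N rounds where every model sees the
// growing transcript, then one model synthesizes. Names are illustrative.

type Model = (prompt: string) => Promise<string>;

async function debate(
  topic: string,
  models: Record<string, Model>,
  rounds: number,
  synthesizerName?: string
): Promise<string> {
  let transcript = `Topic: ${topic}`;
  for (let round = 1; round <= rounds; round++) {
    // All models respond in parallel to the transcript so far
    // (in Round 1 that is just the topic)
    const replies = await Promise.all(
      Object.entries(models).map(async ([name, call]) => {
        const reply = await call(transcript);
        return `[Round ${round}] ${name}: ${reply}`;
      })
    );
    // Next round, every model sees everyone's previous responses
    transcript += "\n" + replies.join("\n");
  }
  // A designated synthesizer model produces the final consolidated output
  const synth = models[synthesizerName ?? Object.keys(models)[0]];
  return synth(`Synthesize the following debate into one answer:\n${transcript}`);
}
```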
Quick Start
With Claude Code
Add to your project's .mcp.json:
{
"mcpServers": {
"brainstorm": {
"command": "npx",
"args": ["-y", "brainstorm-mcp"],
"env": {
"OPENAI_API_KEY": "sk-...",
"DEEPSEEK_API_KEY": "sk-..."
}
}
}
}

With Claude Desktop
Add to your Claude Desktop config (claude_desktop_config.json):
{
"mcpServers": {
"brainstorm": {
"command": "npx",
"args": ["-y", "brainstorm-mcp"],
"env": {
"OPENAI_API_KEY": "sk-...",
"DEEPSEEK_API_KEY": "sk-..."
}
}
}
}

Manual install
npm install -g brainstorm-mcp
brainstorm-mcp

Then just ask Claude:
"Brainstorm the best way to handle authentication in a microservices architecture"
Interactive Mode (Claude as Participant)
By default, Claude actively participates in every round of the debate:
Round 1: External models respond to the topic independently
Claude's turn: Claude reads their responses and contributes its own perspective via brainstorm_respond
Round 2: External models see Claude's response alongside everyone else's, and refine their positions
Claude's turn: Claude refines its position based on the new responses
Repeat until all rounds are complete, then synthesis runs automatically
This means Claude brings its full conversation context into the debate — it knows what you've been working on, what you've discussed, and can contribute meaningfully rather than just passing messages.
To run a non-interactive debate (external models only, no Claude participation):
"Brainstorm with participate=false about..."
Configuration
Option 1: Environment Variables (simplest)
Just set API keys as env vars — the server auto-detects providers:
OPENAI_API_KEY=sk-...
OPENAI_DEFAULT_MODEL=gpt-4o
GEMINI_API_KEY=AIza...
GEMINI_DEFAULT_MODEL=gemini-2.5-flash
DEEPSEEK_API_KEY=sk-...
DEEPSEEK_DEFAULT_MODEL=deepseek-chat
GROQ_API_KEY=gsk_...

Option 2: Config File (full control)
Set BRAINSTORM_CONFIG to point to a JSON config file:
{
"providers": {
"openai": {
"model": "gpt-4o",
"apiKeyEnv": "OPENAI_API_KEY"
},
"gemini": {
"model": "gemini-2.5-flash",
"apiKeyEnv": "GEMINI_API_KEY"
},
"deepseek": {
"model": "deepseek-chat",
"apiKeyEnv": "DEEPSEEK_API_KEY"
},
"groq": {
"model": "llama-3.3-70b-versatile",
"apiKeyEnv": "GROQ_API_KEY"
},
"ollama": {
"model": "llama3.1",
"baseURL": "http://localhost:11434/v1"
}
}
}

Known providers (openai, gemini, deepseek, groq, mistral, together) don't need a baseURL — it's auto-detected.
Field | Required | Description |
model | Yes | Default model ID to use |
apiKeyEnv | No | Environment variable name for the API key. Omit for local models (Ollama) |
baseURL | No | API endpoint. Auto-detected for known providers |
Tools
Tool | Description |
brainstorm | Run a multi-round debate between configured AI models |
brainstorm_respond | Submit Claude's response for the current round of an interactive session |
list_providers | Show all configured providers, models, and API key status |
add_provider | Dynamically add a provider at runtime |
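As an illustration, registering a self-hosted Ollama endpoint via add_provider could pass an argument object along these lines. The field names here are assumptions that mirror the config-file schema above; check the tool's actual parameter schema before relying on them.

```typescript
// Hypothetical argument shape for the add_provider tool; field names
// (name, model, baseURL) mirror the config-file schema and may differ
// from the tool's real parameters.
const addProviderArgs = {
  name: "ollama",
  model: "llama3.1",
  baseURL: "http://localhost:11434/v1", // local endpoint, no API key needed
};
```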
brainstorm Parameters
Parameter | Type | Default | Description |
topic | string | required | What to brainstorm about |
models | string[] | all providers | Specific models as provider:model pairs |
rounds | number | 3 | Number of debate rounds (1-10) |
synthesizer | string | first model | Model for final synthesis |
system_prompt | string | — | Custom system prompt for all models |
participate | boolean | true | Whether Claude joins as an active debater |
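For example, a targeted five-round debate could be invoked with arguments shaped like this. The parameter names follow the table above; treat them as illustrative and defer to the tool's actual schema.

```typescript
// Hypothetical brainstorm tool arguments; names follow the parameter
// table above and should be checked against the tool's real schema.
const brainstormArgs = {
  topic: "Best database strategy for a social media app with 10M users",
  models: ["openai:gpt-4o", "deepseek:deepseek-chat"], // provider:model pairs
  rounds: 5, // within the allowed 1-10 range
  participate: true, // Claude joins as an active debater
};
```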
brainstorm_respond Parameters
Parameter | Type | Description |
session_id | string | Session ID from the brainstorm tool |
response | string | Claude's contribution (min 50 chars) |
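A round contribution might look like the following. Both field names are assumptions (the session ID is whatever the brainstorm tool returns); the only documented constraint is the 50-character minimum on the contribution.

```typescript
// Hypothetical brainstorm_respond arguments. The session ID comes from
// the brainstorm tool's output; the field names here are assumptions.
const respondArgs = {
  session_id: "<session id from the brainstorm tool>", // placeholder
  response:
    "Sharding alone will not solve read fan-out at this scale; " +
    "I would pair it with a cache layer and async timeline generation.",
};
```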
Usage Examples
Basic brainstorm
"Brainstorm the pros and cons of microservices vs monolith for a startup"
Targeted models
"Use brainstorm with models openai:gpt-4o and deepseek:deepseek-chat to debate whether React or Vue is better for enterprise apps"
Deep dive with more rounds
"Brainstorm with 5 rounds: what's the best database strategy for a social media app with 10M users?"
Privacy Policy
brainstorm-mcp itself does not collect any user data. It acts as a proxy to the AI providers you configure. Your prompts and debate content are sent to the respective provider APIs (OpenAI, DeepSeek, Groq, etc.) according to their privacy policies. For local models (Ollama), all data stays on your machine.
Development
git clone https://github.com/spranab/brainstorm-mcp.git
cd brainstorm-mcp
npm install
npm run build
npm start

Support
Repository: https://github.com/spranab/brainstorm-mcp
License
MIT