Provides AI-powered question answering and plan review capabilities by connecting to OpenAI's GPT-4 API, allowing users to ask contextual questions and get structured feedback on planning documents with multiple analysis depths
# brain-trust - AI-Powered Q&A and Plan Review MCP Server

Your trusted brain trust for getting AI help with questions and plan reviews.

A simple, powerful FastMCP server with just 3 tools that connect Cursor to OpenAI for intelligent question answering and plan analysis.
## What is brain-trust?
brain-trust is a Model Context Protocol (MCP) server that gives your AI agents direct access to OpenAI for:

- Asking questions with optional context
- Reviewing planning documents with multiple analysis depths
- Getting expert answers tailored to your specific situation
Think of it as phoning a friend (OpenAI) when you need help!
## The 3 Simple Tools

### 1. phone_a_friend

Ask OpenAI any question, with optional context for better answers.

### 2. review_plan

Get AI-powered feedback on planning documents with structured analysis.
Review Levels:

- `quick` - Basic structure and completeness check
- `standard` - Detailed analysis with suggestions
- `comprehensive` - Deep analysis with alternatives
- `expert` - Professional-level review with best practices
Returns:

- Overall score (0.0-1.0)
- Strengths (list)
- Weaknesses (list)
- Suggestions (list)
- Detailed feedback (text)
### 3. health_check

Check server status and configuration.
## Quick Start

### Prerequisites

- Python 3.12+
- OpenAI API key
- Docker (optional, but recommended)

### Option 1: Docker (Recommended)

The server starts immediately without requiring an OpenAI API key. Configure the API key in your MCP client (see below).
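A minimal start sequence, assuming a `docker-compose.yml` at the repo root (the commands are guarded so they degrade gracefully if Docker is unavailable):

```shell
# Build and start the server in the background; the fallback covers the
# legacy docker-compose CLI name.
if command -v docker >/dev/null 2>&1; then
  docker compose up -d --build \
    || docker-compose up -d --build \
    || echo "compose failed -- is the Docker daemon running?"
else
  echo "Docker not found -- install Docker first"
fi
```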
### Option 2: Local Python
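A local run sketch; the package names are assumptions based on the stack this README describes (FastMCP plus the OpenAI client), so check the repo's own requirements for exact pins:

```shell
# Install dependencies (names assumed from the README's stack description)
pip install fastmcp openai || echo "install failed -- check your network/virtualenv"

# Run the server from the repo root; it listens on port 8000
[ -f server.py ] && python3 server.py || echo "server.py not found -- run from the repo root"
```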
## Configure in Cursor

### Quick Install Button

Click the install button, or install manually:

1. Go to Cursor Settings -> MCP -> Add new MCP Server.
2. Name it "brain-trust" and use HTTP transport:
   - URL: `http://localhost:8000/mcp`
   - Transport: `http`
   - Environment Variables: add `OPENAI_API_KEY` with your OpenAI API key.

Alternatively, add the server directly to `~/.cursor/mcp.json`.
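An illustrative `~/.cursor/mcp.json` entry matching the settings above (the exact schema may vary with your Cursor version):

```json
{
  "mcpServers": {
    "brain-trust": {
      "url": "http://localhost:8000/mcp",
      "transport": "http",
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```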
How it works:

- The `OPENAI_API_KEY` from the MCP client configuration is automatically passed to each tool call.
- The server receives the API key with each request and uses it to authenticate with OpenAI.
- Optional: you can override the model and `max_tokens` per tool call.

Important: make sure Docker is running and the server is started before using in Cursor!
## Usage Examples
### Example 1: Quick Question

Ask OpenAI directly:
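For instance, a chat prompt along these lines (wording illustrative):

```
Use phone_a_friend to ask: "What are the tradeoffs between WebSockets and Server-Sent Events?"
```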
### Example 2: Context-Aware Question

Get answers specific to your situation:
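Something like the following (the parameters follow the tool description above; the exact phrasing is up to you):

```
Use phone_a_friend with:
  question: "How should I handle database migrations?"
  context: "Python service using SQLAlchemy, deployed in Docker"
```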
### Example 3: Plan Review

Get feedback on a planning document:
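For example (the file path is illustrative):

```
Use review_plan on plans/my-feature.md with review level "standard"
```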
### Example 4: Comprehensive Plan Analysis

Get deep analysis with a specific focus:
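For example (the focus areas are illustrative):

```
Use review_plan at the "comprehensive" level, focusing on security and scalability
```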
## Architecture

Flow:

1. Agent calls MCP tool with API key from MCP client config
2. brain-trust server receives request with API key via HTTP
3. Server creates OpenAI client with provided API key
4. Server formats prompt and calls OpenAI API
5. OpenAI returns AI-generated response
6. Server returns structured response to agent
## Docker Setup

The server runs in Docker with:

- FastMCP Server: Python 3.12, running on port 8000
- Nginx: reverse proxy for HTTP requests
- Health Checks: every 30 seconds
- Non-root User: security best practice
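A sketch of what the compose file behind this setup might look like; service names, build paths, and the nginx wiring are assumptions, not the repo's actual file:

```yaml
services:
  brain-trust:
    build: .                     # FastMCP server image, Python 3.12
    expose:
      - "8000"
    healthcheck:                 # matches the 30-second check described above
      test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
      interval: 30s
  nginx:
    image: nginx:alpine          # reverse proxy in front of the server
    ports:
      - "8000:80"
    depends_on:
      - brain-trust
```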
## Configuration

### Environment Variables

The server requires minimal configuration. Create a `.env` file if needed:
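If you do create one, keep it to non-secret settings; these variable names are purely illustrative, not ones the server is documented to read:

```
# .env -- optional, no secrets required
LOG_LEVEL=INFO
PORT=8000
```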
Note: OpenAI API key is NOT required as an environment variable. The API key is passed directly from the MCP client with each tool call.
### MCP Client Configuration (Required)

Configure your OpenAI API key in the MCP client settings (e.g., Cursor's `~/.cursor/mcp.json`; see "Configure in Cursor" above).
How it works:

1. You configure the API key in your MCP client
2. The MCP client automatically passes the key to tool calls
3. The server uses the key to authenticate with OpenAI per-request
4. No API key storage on the server side

Benefits:

- No API keys in Docker containers or environment files
- Secure key management via MCP client
- Different clients can use different API keys
- Per-request authentication
## API Endpoints

When running locally:

- MCP Endpoint: `http://localhost:8000/mcp`
- Health Check: `http://localhost:8000/health`

Test the health endpoint:
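A guarded version of that check (assumes the server from the Quick Start is running on port 8000):

```shell
# Small helper that reports unreachable endpoints instead of failing hard
check() { curl -fsS "$1" 2>/dev/null || echo "unreachable: $1"; }
check http://localhost:8000/health
```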
## Testing

Test that the server is working:
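One way to smoke-test both endpoints (note the `/mcp` probe may need MCP-specific headers to return 200, so a failure there is inconclusive):

```shell
# Probe each endpoint and report status without aborting the script
for url in http://localhost:8000/health http://localhost:8000/mcp; do
  if curl -fsS -o /dev/null "$url" 2>/dev/null; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done
```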
## Project Structure
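A sketch assembled from the files this README mentions; the actual layout may differ:

```
.
├── server.py                         # the 3 MCP tools
├── Dockerfile
├── docker-compose.yml
├── plans/
│   └── compare-options-tool.md
├── FINAL_TOOLS.md
├── SIMPLIFIED_TOOLS.md
├── PARAMETER_DESCRIPTIONS_ADDED.md
└── LICENSE
```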
## Security

- No API keys in Docker - API keys are passed per-request from MCP client
- No environment file secrets - no `.env` file with API keys required
- Per-request authentication - each request uses client-provided credentials
- Non-root Docker user - runs as `mcpuser` in container
- Input validation - Pydantic models validate all inputs
- Error handling - comprehensive error handling and logging
- Client-side key management - keys managed securely by MCP client
## Troubleshooting

### Server won't start
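Typical first checks (guarded so each step degrades gracefully):

```shell
# Is the container up, and what do the logs say?
docker compose ps 2>/dev/null || docker-compose ps 2>/dev/null || echo "Docker not available"
docker compose logs --tail 50 2>/dev/null || echo "no logs -- is the stack running?"

# Is something else already using port 8000?
lsof -i :8000 2>/dev/null || echo "port 8000 looks free (or lsof is unavailable)"
```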
### Cursor can't connect

- Verify server is running: `curl http://localhost:8000/health`
- Check MCP config in `~/.cursor/mcp.json`
- Restart Cursor after config changes
- Ensure `OPENAI_API_KEY` is set in MCP client config
### OpenAI API errors

- Verify API key is correct and active in `~/.cursor/mcp.json`
- Check OpenAI account has credits
- Ensure API key has proper permissions
- View logs: `docker-compose logs -f`
"API key required" errors
The API key must be configured in your MCP client (not in Docker):
Open
~/.cursor/mcp.json
Add
OPENAI_API_KEY
to theenv
sectionRestart Cursor
The API key is automatically passed with each tool call
### Tools not showing in Cursor

- Restart Docker: `docker-compose restart`
- Restart Cursor completely
- Check MCP settings are correct
## Development

### Local Development

Note: the server starts without requiring an OpenAI API key. The API key is provided by the MCP client when calling tools.
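A possible dev setup, with package names assumed from the stack described above rather than taken from the repo:

```shell
# Create an isolated environment and run the server directly
python3 -m venv .venv && . .venv/bin/activate || echo "venv setup failed"
pip install fastmcp openai || echo "install failed -- check your network"
[ -f server.py ] && python3 server.py || echo "run this from the repo root"
```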
### Making Changes

1. Edit `server.py` for tool changes
2. Rebuild Docker: `docker-compose up -d --build`
3. Restart Cursor to pick up changes
### Adding New Tools

See `plans/compare-options-tool.md` for an example of how to propose and plan new tools.
## Documentation

- `FINAL_TOOLS.md` - Complete tool documentation with examples
- `SIMPLIFIED_TOOLS.md` - Notes on simplification from original design
- `PARAMETER_DESCRIPTIONS_ADDED.md` - Parameter documentation details
- `plans/` - Detailed planning documents and proposals
## Why brain-trust?

### Simple

- Only 3 tools to learn
- Direct, straightforward usage
- No complex context management

### Powerful

- Full OpenAI GPT-4 capabilities
- Context-aware answers
- Multiple review levels

### Practical

- Solves real problems (questions, plan reviews)
- Easy to integrate with Cursor
- Production-ready with Docker

### Extensible

- Easy to add new tools
- Clean, maintainable codebase
- Well-documented for contributions
## Contributing

We welcome contributions! To add a new tool:

1. Create a plan in `plans/your-tool-name.md`
2. Implement the tool in `server.py`
3. Add tests and documentation
4. Submit a pull request

See `plans/compare-options-tool.md` for an example plan.
## License

MIT License - see the LICENSE file for details.
## Acknowledgments

- Built with FastMCP
- Inspired by the Model Context Protocol specification
- Uses OpenAI's GPT-4 for intelligent responses

Questions? Issues? Feedback? Open an issue or reach out! We're here to help.