Ask MCP - Hosted OpenAI MCP Server (v0.3.0)
Connect your IDE to OpenAI for intelligent question answering and structured plan reviews.
A hosted FastMCP server with 3 simple tools that connect your IDE directly to OpenAI. No local installation needed.
Try it instantly in your browser, with setup guides for 8+ IDEs!
What's New in v0.1.2
DEEP_DIVE Review Level - Technical FMEA-style analysis for implementation planning
Master Review Framework - 10-point structured evaluation across all review levels
Comprehensive Logging - Full request/response tracing with environment-aware API key masking
Professional Test Suite - 18 pytest tests with 92% code coverage
Pre-commit Hooks - Automated code quality with black, isort, flake8, mypy
Enhanced Docker Config - Environment variable passthrough for easier configuration
Complete Documentation - Logging guide, testing guide, header configuration examples
See Release Notes v0.1.2 for full details.
What is brain-trust?
brain-trust is a Model Context Protocol (MCP) server that gives your AI agents direct access to OpenAI for:
Asking questions with optional context
Reviewing planning documents with multiple analysis depths
Getting expert answers tailored to your specific situation
Think of it as phoning a friend (OpenAI) when you need help!
The 3 Simple Tools
1. phone_a_friend
Ask OpenAI any question, with optional context for better answers.
2. review_plan
Get AI-powered feedback on planning documents using the Master Review Framework - a structured 10-point evaluation system.
Master Review Framework Dimensions:
Structure & Organization
Completeness
Clarity
Assumptions & Dependencies
Risks
Feasibility
Alternatives
Validation
Stakeholders
Long-term Sustainability
Review Levels (Progressive Depth):
quick - Basic checklist (1-2 suggestions)
standard - Standard analysis (2-3 questions)
comprehensive - Detailed coverage (3-5 questions)
deep_dive - NEW! Technical FMEA-style analysis (4-6 questions)
expert - Professional enterprise-level review (5-7 strategic questions)
Returns:
Overall score (0.0-1.0)
Strengths (list)
Weaknesses (list)
Suggestions (list)
Detailed feedback (structured analysis)
Review level used
Timestamp
3. health_check
Check server status and configuration.
Quick Start
Prerequisites
Python 3.12+
OpenAI API key
Docker (optional, but recommended)
Option 1: Docker (Recommended)
The server starts immediately without requiring an OpenAI API key. Configure the API key in your MCP client (see below).
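A minimal sketch, assuming the repository ships a docker-compose configuration (the docker-compose commands used elsewhere in this README suggest it does):

```shell
# Clone and start the server
git clone https://github.com/bernierllc/brain-trust-mcp.git
cd brain-trust-mcp
docker-compose up -d

# Verify the server is up
curl http://localhost:8000/health
```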
Option 2: Local Python
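A sketch of a local run; the requirements.txt name is an assumption, and server.py is the entry point referenced later in this README:

```shell
# Python 3.12+ is required (see Prerequisites)
python3.12 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt   # dependency file name assumed
python server.py
```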
Configure in Cursor
Quick Install Button
Use the one-click install button in the repository README, or install manually:
Go to Cursor Settings -> MCP -> Add new MCP Server. Name it "brain-trust", use HTTP transport:
URL: http://localhost:8000/mcp
Transport: http
Environment Variables: Add OPENAI_API_KEY with your OpenAI API key
Add to ~/.cursor/mcp.json
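A sketch of the entry (the exact schema depends on your Cursor version; the key names follow the URL, transport, and env settings described above):

```json
{
  "mcpServers": {
    "brain-trust": {
      "url": "http://localhost:8000/mcp",
      "env": {
        "OPENAI_API_KEY": "sk-your-key-here"
      }
    }
  }
}
```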
How it works:
The OPENAI_API_KEY from the MCP client configuration is set as an environment variable for the server
The server reads the API key from the environment and uses it to authenticate with OpenAI
Optional: you can override the model and max_tokens per tool call
Important: Make sure Docker is running and the server is started before using in Cursor!
Usage Examples
Example 1: Quick Question
Ask OpenAI directly:
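An illustrative call; the question text is a placeholder and the parameter name is whatever the tool schema exposes:

```
phone_a_friend(
  question="What are the tradeoffs between WebSockets and server-sent events?"
)
```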
Example 2: Context-Aware Question
Get answers specific to your situation:
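For example (parameter names illustrative):

```
phone_a_friend(
  question="How should I structure retries for this API client?",
  context="Python service calling a rate-limited third-party API, deployed with Docker"
)
```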
Example 3: Plan Review
Get feedback on a planning document:
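A sketch, using the level names listed above (the plan text and parameter names are placeholders):

```
review_plan(
  plan="<contents of your planning document>",
  review_level="standard"
)
```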
Example 4: Comprehensive Plan Analysis
Get deep analysis with specific focus:
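For example, pairing the comprehensive level with a focus area (the focus parameter name is hypothetical; check the tool schema):

```
review_plan(
  plan="<contents of your planning document>",
  review_level="comprehensive",
  focus="security and scalability"
)
```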
Architecture
Flow:
Agent calls MCP tool with API key from MCP client config
brain-trust server receives request with API key via HTTP
Server creates OpenAI client with provided API key
Server formats prompt and calls OpenAI API
OpenAI returns AI-generated response
Server returns structured response to agent
Docker Setup
The server runs in Docker with:
FastMCP Server: Python 3.12, running on port 8000
Nginx: Reverse proxy for HTTP requests
Health Checks: Every 30 seconds
Non-root User: Security best practice
Configuration
Environment Variables
The server supports environment-based configuration. Create a .env file:
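An illustrative .env (the variable name is an assumption; docs/LOGGING.md documents the real settings):

```
LOG_LEVEL=DEBUG   # DEBUG for development logging, INFO for production
```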
Logging Modes:
Development (DEBUG):
Full API keys visible in logs (for debugging)
All request/response details logged
Complete header information
Production (INFO):
API keys masked (first 8 + last 4 chars only)
Essential information only
Reduced sensitive data logging
See docs/LOGGING.md for comprehensive logging documentation.
Note: OpenAI API key is NOT required as an environment variable for production. The API key is passed directly from the MCP client with each tool call.
MCP Client Configuration (Required)
Configure your OpenAI API key in the MCP client settings (e.g., Cursor's ~/.cursor/mcp.json):
How it works:
You configure the API key in your MCP client
The MCP client automatically passes the key to tool calls
The server uses the key to authenticate with OpenAI per-request
No API key storage on the server side
Benefits:
No API keys in Docker containers or environment files
Secure key management via the MCP client
Different clients can use different API keys
Per-request authentication
API Endpoints
When running locally:
MCP Endpoint: http://localhost:8000/mcp
Health Check: http://localhost:8000/health
Test the health endpoint:
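For example, from a terminal:

```shell
curl http://localhost:8000/health
```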
Testing
Quick Test
Test that the server is working:
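One quick check from your IDE's agent is to call the health_check tool (a sketch; the tool takes no arguments as far as this README indicates):

```
health_check()
```

It should report server status and configuration, per the tool description above.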
Test Suite
Run the comprehensive pytest test suite:
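A sketch of typical invocations (the pytest-cov plugin is an assumption based on the coverage numbers reported below; tests/README.md has the authoritative commands):

```shell
pytest tests/ -v
pytest tests/ --cov --cov-report=term-missing   # requires pytest-cov (assumed)
```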
Test Coverage:
18 tests total
8 unit tests (logging, utilities)
10 integration tests (real OpenAI API calls)
92% code coverage
All MCP tools tested
All 5 review levels tested
Requirements:
Integration tests require OPENAI_API_KEY in a .env file
Unit tests run without an API key
Tests automatically skip if the API key is not available
See tests/README.md for complete testing documentation.
Project Structure
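A sketch of the layout, reconstructed from the paths referenced in this README (actual contents may differ):

```
brain-trust-mcp/
├── server.py
├── tests/
│   ├── README.md
│   └── test_tools.py
├── docs/
│   ├── LOGGING.md
│   ├── HEADER_IMPLEMENTATION.md
│   └── MCP_CLIENT_HEADERS.md
├── examples/
│   └── server_with_headers.py
├── plans/
└── release_notes/
```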
Security
No API keys in Docker - API keys are passed per-request from the MCP client
No environment file secrets - No .env file with API keys required
Per-request authentication - Each request uses client-provided credentials
Non-root Docker user - Runs as mcpuser in the container
Input validation - Pydantic models validate all inputs
Error handling - Comprehensive error handling and logging
Client-side key management - Keys managed securely by the MCP client
Troubleshooting
Server won't start
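Some useful first checks, assuming the Docker setup described above:

```shell
docker-compose ps        # is the container up?
docker-compose logs -f   # inspect startup errors
lsof -i :8000            # is port 8000 already taken?
```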
Cursor can't connect
Verify the server is running: curl http://localhost:8000/health
Check the MCP config in ~/.cursor/mcp.json
Restart Cursor after config changes
Ensure OPENAI_API_KEY is set in the MCP client config
OpenAI API errors
Verify the API key is correct and active in ~/.cursor/mcp.json
Check that your OpenAI account has credits
Ensure the API key has proper permissions
View logs: docker-compose logs -f
"API key required" errors
The API key must be configured in your MCP client (not in Docker):
Open ~/.cursor/mcp.json
Add OPENAI_API_KEY to the env section
Restart Cursor
The API key is automatically passed with each tool call
Tools not showing in Cursor
Restart Docker: docker-compose restart
Restart Cursor completely
Check MCP settings are correct
Development
Local Development
Note: The server starts without requiring an OpenAI API key. The API key is provided by the MCP client when calling tools.
Code Quality
Pre-commit Hooks:
Automated code quality checks run on every commit:
Commits are blocked if any check fails. The hook is automatically set up in .git/hooks/pre-commit.
Manual Quality Checks:
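The individual tools can also be run by hand; a sketch using the tools named above (exact targets and flags may differ from the repo's configuration):

```shell
black .
isort .
flake8 .
mypy .
```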
Making Changes
Create a feature branch
Make your changes to server.py
Run tests: pytest tests/
Pre-commit hooks will run automatically on commit
Rebuild Docker: docker-compose up -d --build
Restart Cursor to pick up changes
Adding New Tools
Create a plan in plans/your-tool-name.md
Implement the tool in server.py with the @mcp.tool() decorator
Add tests in tests/test_tools.py
Update documentation
Submit a pull request
See plans/compare-options-tool.md for an example plan.
Documentation
Core Documentation
README.md (this file) - Overview and quick start
docs/LOGGING.md - Comprehensive logging system guide
docs/HEADER_IMPLEMENTATION.md - Header-based configuration guide
docs/MCP_CLIENT_HEADERS.md - Client configuration options
tests/README.md - Testing documentation and examples
Release Notes
release_notes/RELEASE_NOTES_v0.1.2.md - Latest release (current)
release_notes/RELEASE_NOTES_v0.1.1.md - Previous release
Examples
examples/server_with_headers.py - HTTP header configuration example
Planning Documents
plans/ - Detailed planning documents and proposals
contextual-qa-mcp-server.md
technical-implementation.md
quick-start-guide.md
compare-options-tool.md
Features
Master Review Framework
10-point structured evaluation for comprehensive plan analysis
5 progressive review levels from quick to expert
FMEA-style failure analysis in deep_dive mode
Enterprise-grade reviews with RACI, TCO, SLOs
Comprehensive Logging
Full request/response tracing for debugging
Environment-aware masking (debug vs production)
5+ log events per request with structured JSON output
API key validation at every step
Professional Testing
92% code coverage with 18 pytest tests
10 integration tests with real OpenAI API calls
Automatic skipping if API key not available
Type-safe with full mypy compliance
Development Tools
Pre-commit hooks enforce code quality automatically
Auto-activate venv in VS Code/Cursor workspace
Docker support for easy deployment
HTTP header config support (optional)
Why brain-trust?
Simple
Only 3 tools to learn
Direct, straightforward usage
No complex context management
Clear, comprehensive documentation
Powerful
Use your favorite GPT Model
Context-aware answers
5 progressive review levels
Master Review Framework with 10-point analysis
Practical
Solves real problems (questions, plan reviews)
Easy to integrate with Cursor
Production-ready with Docker
92% test coverage ensures reliability
Extensible
Easy to add new tools
Clean, maintainable codebase
Well-documented for contributions
Professional testing infrastructure
Contributing
We welcome contributions! Here's how to contribute:
Adding a New Tool
Plan: Create a plan in plans/your-tool-name.md
Implement: Add the tool to server.py with the @mcp.tool() decorator
Test: Add tests in tests/test_tools.py
Document: Update README and add to docs/ if needed
Quality: Pre-commit hooks will run automatically
Submit: Create a pull request
See plans/compare-options-tool.md for an example plan.
Code Standards
Python 3.12+ with type hints
Black formatting (line length 88)
isort for import sorting
flake8 for linting
mypy for type checking
pytest for testing (aim for >80% coverage)
Conventional commits for commit messages
Running Tests
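For example (integration tests skip automatically when OPENAI_API_KEY is absent, per the Testing section above):

```shell
pytest tests/
```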
Documentation Standards
Add docstrings to all public functions
Update README.md for user-facing changes
Add examples for new features
Keep docs/ up to date
Follow existing documentation style
License
MIT License - see LICENSE file for details
Acknowledgments
Built with FastMCP - Fast, Pythonic MCP framework
Inspired by the Model Context Protocol specification
Uses whichever OpenAI models you prefer for intelligent responses
Testing powered by pytest and pytest-asyncio
Logging with structlog
Thanks to all contributors who provided feedback on the review framework and logging system!
Project Stats
Tools: 3 (phone_a_friend, review_plan, health_check)
Review Levels: 5 (quick, standard, comprehensive, deep_dive, expert)
Links
Repository: https://github.com/bernierllc/brain-trust-mcp
Issues: https://github.com/bernierllc/brain-trust-mcp/issues
FastMCP Docs: https://gofastmcp.com
MCP Specification: https://modelcontextprotocol.io/
Questions? Issues? Feedback?
Open an issue or reach out! We're here to help.