claude-code-review-mcp
An MCP (Model Context Protocol) server that provides code review functionality using OpenAI, Google, and Anthropic models. It serves as a "second opinion" tool for code review that can be used with any MCP client, including Claude Code, Claude Desktop, Cursor, and Windsurf.
Features
- Multi-Provider Support: Leverages OpenAI, Google's Gemini, and Anthropic's Claude models for code reviews
- Two Review Types: Choose between structured review (with categorized feedback) or freeform narrative review
- Context-Aware: Include project structure, related files, commit messages, and dependencies for more relevant reviews
- Intelligent Code Processing: Automatically detects programming languages, handles large files, and formats output appropriately
- Robust Error Handling: Includes retry logic for API failures and graceful error recovery
- MCP Compatible: Works with any MCP client (Claude Code, Claude Desktop, Cursor, Windsurf)
- Easy Setup: Simple configuration via environment variables
Installation
Global Installation
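Assuming the package is published to npm under the name claude-code-review-mcp, a global install is:

```bash
npm install -g claude-code-review-mcp
```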
Usage with npx (no installation)
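Alternatively, run it on demand with npx, passing at least one provider key (the key value below is a placeholder):

```bash
OPENAI_API_KEY=your-openai-key npx claude-code-review-mcp
```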
Configuration
The server requires at least one of the following API keys:
- OPENAI_API_KEY: Your OpenAI API key
- GOOGLE_API_KEY: Your Google Gemini API key
- ANTHROPIC_API_KEY: Your Anthropic API key
Optional configuration:
- PORT: Server port (default: dynamic - an available port will be chosen)
- HOST: Server host (default: 127.0.0.1)
- LOG_LEVEL: Log level (0=DEBUG, 1=INFO, 2=WARN, 3=ERROR; default: 1)
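For example, to enable two providers and turn on debug logging (key values are placeholders):

```bash
OPENAI_API_KEY=your-openai-key \
GOOGLE_API_KEY=your-google-key \
LOG_LEVEL=0 \
npx claude-code-review-mcp
```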
Available Models
OpenAI Models (requires OPENAI_API_KEY)
- gpt-4.1 - OpenAI GPT-4.1
- o4-mini - OpenAI O4 Mini
- o3-mini - OpenAI O3 Mini
Google Models (requires GOOGLE_API_KEY)
- gemini-2.5-pro-preview-05-06 - Google Gemini 2.5 Pro
- gemini-2.5-flash-preview-04-17 - Google Gemini 2.5 Flash
Anthropic Models (requires ANTHROPIC_API_KEY)
- claude-3-opus-20240229 - Anthropic Claude 3 Opus
- claude-3-sonnet-20240229 - Anthropic Claude 3 Sonnet
- claude-3-haiku-20240307 - Anthropic Claude 3 Haiku
Available Tools
The MCP server provides three tools:
1. reviewCodeStructured
Provides a detailed, structured code review with the following sections:
- Overall summary
- Code quality (strengths and weaknesses)
- Bugs (with severity and suggested fixes)
- Improvement suggestions
- Security issues (if any)
2. reviewCodeFreeform
Provides a narrative code review in free-form text format, suitable for general impressions and conversational feedback.
3. listModels
Lists all available models based on provided API keys, including model IDs and human-readable names.
Integration with Claude Code
To add this MCP server to Claude Code:
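With the claude CLI, registration should look roughly like the following; the server name code-review and the key values are placeholders, and you only need the -e flags for the providers you actually use:

```bash
claude mcp add code-review \
  -e OPENAI_API_KEY=your-openai-key \
  -e GOOGLE_API_KEY=your-google-key \
  -- npx -y claude-code-review-mcp
```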
Claude Code also supports custom slash commands, which make it easy to call the MCP server from within a session. Create these commands as markdown files in the .claude/commands/ directory of your project; the sections below build up a set of review commands, starting with a basic review-with.md.
Basic Setup
First, create the commands directory if it doesn't exist:
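From the project root:

```bash
mkdir -p .claude/commands
```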
Model Listing Command
Create a command to list available models:
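A minimal .claude/commands/list-review-models.md can simply ask Claude to call the server's listModels tool (the prompt wording here is illustrative):

```markdown
List the code review models currently available from the
claude-code-review-mcp server, including each model's ID and which
API key it requires.
```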
Basic Code Review Command
Create a simple review command that accepts a model name:
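Claude Code substitutes $ARGUMENTS with whatever follows the command name, so .claude/commands/review-with.md might look like this (wording illustrative):

```markdown
Review the code we have been working on using the "$ARGUMENTS" model
from the claude-code-review-mcp server. Summarize the feedback and
point out anything that disagrees with your own suggestions.
```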
Structured Review Command
Create a command specifically for structured reviews:
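A possible .claude/commands/structured-review.md, sketched along the same lines:

```markdown
Use the reviewCodeStructured tool from claude-code-review-mcp with the
"$ARGUMENTS" model to review the current changes. Present the summary,
code quality notes, bugs, improvement suggestions, and security issues
as separate sections.
```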
Freeform Review Command
Create a command for narrative-style reviews:
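A possible .claude/commands/freeform-review.md:

```markdown
Use the reviewCodeFreeform tool from claude-code-review-mcp with the
"$ARGUMENTS" model to give an informal, narrative review of the current
changes.
```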
Review Specific File Command
Create a command to review a specific file:
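Here $ARGUMENTS carries the file path instead of a model ID; a possible .claude/commands/review-file.md (file name is illustrative):

```markdown
Read the file at "$ARGUMENTS" and send its contents to the
reviewCodeStructured tool from claude-code-review-mcp, including the
file name so the server can detect the language.
```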
Focus-Specific Review Commands
Create commands for specialized reviews:
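For example, a security-focused .claude/commands/security-review.md; you could add similar files for performance, readability, and so on (file names and wording are illustrative):

```markdown
Review the current changes with claude-code-review-mcp and focus
specifically on security: unsafe input handling, injection risks,
secrets committed to the code, and risky dependencies.
```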
Comprehensive Project Review Command
Create a command for reviewing code with full project context:
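A possible .claude/commands/project-review.md that leans on the server's context fields:

```markdown
Review the current changes with the reviewCodeStructured tool from
claude-code-review-mcp. Include the project structure, related files,
and the latest commit message as context so the feedback reflects how
this code fits into the wider project.
```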
Before and After Review Command
Create a command to compare code changes:
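A possible .claude/commands/diff-review.md:

```markdown
Compare the code before and after the recent edits, then ask
claude-code-review-mcp to review the changes, paying particular
attention to regressions or newly introduced issues.
```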
Using Custom Slash Commands
Once you've created these commands, you can use them in Claude Code by typing /project: followed by the command name. For example:
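```
/project:list-review-models
/project:review-with o4-mini
/project:structured-review gemini-2.5-pro-preview-05-06
```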
Tips for Custom Commands
- Command Discovery: Type /project: in Claude Code to see a list of available commands
- Default Models: If you don't specify a model, the command will use the default model (typically o4-mini if available)
- Multiple Reviews: You can get multiple perspectives by running reviews with different models
- Project Context: For the most relevant reviews, use commands that include project context
- Specialized Focus: Use the focus-specific commands when you have particular concerns about security, performance, etc.
Example Workflow
A typical workflow might look like:
- Work on code with Claude Code
- Run /project:list-review-models to see available options
- Run /project:structured-review gemini-2.5-pro-preview-05-06 to get a structured review from Google's model
- Compare with Claude's suggestions
- Make improvements based on both perspectives
- Run /project:diff-review to review the changes
These custom commands enable smooth integration between Claude Code and the claude-code-review-mcp server, providing valuable "second opinions" for your code.
Example Usage
Starting the MCP Server
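Assuming the binary name matches the package name (key values are placeholders):

```bash
# with a global install
OPENAI_API_KEY=your-openai-key claude-code-review-mcp

# or on demand via npx
OPENAI_API_KEY=your-openai-key npx claude-code-review-mcp
```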
Using with MCP Clients
Once the server is running, you can connect to it from any MCP client like Claude Code, Claude Desktop, Cursor, or Windsurf using the server's URL. The server will display the actual URL and port in its startup logs (using a dynamically assigned port to avoid conflicts).
Input Schema
All review tools accept the following input:
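The authoritative schema lives in the server itself; as a rough sketch, a request carries the code plus optional project context along these lines (all field names here are illustrative):

```json
{
  "code": "export function add(a, b) { return a + b; }",
  "filename": "math.js",
  "language": "javascript",
  "model": "o4-mini",
  "projectContext": {
    "projectStructure": "src/\n  math.js\n  index.js",
    "relatedFiles": [{ "name": "src/index.js", "content": "..." }],
    "commitMessage": "Add math helpers",
    "dependencies": { "typescript": "^5.0.0" }
  }
}
```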
Output Schema
Structured Review Output
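Based on the sections listed under reviewCodeStructured, the result is shaped roughly like this (field names are illustrative):

```json
{
  "summary": "Solid overall, with one edge-case bug.",
  "codeQuality": {
    "strengths": ["Clear naming", "Small, focused functions"],
    "weaknesses": ["No input validation"]
  },
  "bugs": [
    {
      "description": "add() concatenates when passed strings",
      "severity": "medium",
      "suggestedFix": "Validate or coerce the arguments to numbers"
    }
  ],
  "suggestions": ["Add unit tests covering edge cases"],
  "securityIssues": [],
  "model": { "id": "o4-mini", "name": "OpenAI O4 Mini" }
}
```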
Freeform Review Output
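A freeform review returns the narrative text plus the model that produced it, roughly:

```json
{
  "review": "Overall this is clean and readable. The main thing to tighten up is input validation around the exported helpers...",
  "model": { "id": "gemini-2.5-flash-preview-04-17", "name": "Google Gemini 2.5 Flash" }
}
```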
List Models Output
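listModels reports only the models unlocked by the API keys you supplied, for example (shape is illustrative):

```json
{
  "models": [
    { "id": "gpt-4.1", "name": "OpenAI GPT-4.1" },
    { "id": "o4-mini", "name": "OpenAI O4 Mini" },
    { "id": "o3-mini", "name": "OpenAI O3 Mini" }
  ]
}
```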
MCP Client Integration
Claude Code
- Add the MCP server (registration command sketched below)
- Use the review tools from inside Claude Code, either by asking directly or via the custom slash commands above
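Registration mirrors the command shown earlier (values are placeholders):

```bash
claude mcp add code-review \
  -e OPENAI_API_KEY=your-openai-key \
  -- npx -y claude-code-review-mcp
```

After that, a prompt such as "get a second opinion on this file from o4-mini using the code-review server", or any of the /project: commands above, will route through the server.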
Claude Desktop
In Claude Desktop settings, configure the MCP as follows:
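Claude Desktop reads MCP servers from its claude_desktop_config.json; an entry along these lines should work (the server name and key value are placeholders):

```json
{
  "mcpServers": {
    "code-review": {
      "command": "npx",
      "args": ["-y", "claude-code-review-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
```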
The server uses advanced JSON output sanitization for full compatibility with all MCP clients, including Claude Desktop.
Cursor and Windsurf
Follow the specific MCP configuration guidelines for your client, using the same command and environment variables.
Troubleshooting
API Key Issues
- "Model X is not available": Ensure you've provided the appropriate API key for the model.
- No API keys provided: You must provide at least one of OPENAI_API_KEY, GOOGLE_API_KEY, or ANTHROPIC_API_KEY.
- Suggested model: The server will suggest alternative models if your requested model is not available.
Rate Limiting and API Errors
- If you encounter rate limits or API errors, the error message will indicate the issue.
- Consider using a different model if one provider is experiencing issues.
Security Considerations
- API keys are never logged or exposed
- Code contents are minimally logged for privacy
- Dependencies are kept minimal to reduce security surface
- Request handling includes input validation and sanitization
- Error messages are designed to avoid leaking sensitive information
Compatibility
- Requires Node.js 18.0.0 or later
- Works on Linux, macOS, and Windows (via WSL if necessary)
- Compatible with all MCP clients (Claude Code, Claude Desktop, Cursor, Windsurf)
- Graceful handling of large code files and project contexts
- Automatic retry mechanism for transient API failures
Development
License
MIT
Contributors
- Praney Behl (@praneybehl)