MCP Code Refiner
A powerful second-layer LLM MCP server that refines and reviews code using AI. Perfect for improving AI-generated code or enhancing human-written code through natural language feedback.
What is This?
This is an MCP (Model Context Protocol) server that adds code refinement and review capabilities to any MCP client like Claude Desktop. It acts as a "second layer" AI that specializes in code improvement, working alongside your primary AI assistant.
Use it to:
Refine code generated by ChatGPT, Claude, or any AI with natural language feedback
Get comprehensive code reviews with security and performance analysis
Iteratively improve code until it meets your standards
Learn from AI-suggested improvements
Features
Code Refinement - Improve code with natural language feedback ("make it more logical", "add error handling")
Code Review - AI-powered analysis for bugs, security, performance, and best practices
Multi-Model Support - Choose between Gemini, Claude, or OpenAI models
Plug & Play - Works with Claude Desktop and any MCP client
Smart Prompts - Optimized prompts for high-quality, actionable results
Diff View - See exactly what changes before applying them
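The Diff View feature can be illustrated with Python's standard difflib module. This is a sketch of the general idea, not necessarily how this server produces its diffs internally:

```python
import difflib

def unified_diff_view(original: str, refined: str, path: str) -> str:
    """Build a unified diff between the original and refined code."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        refined.splitlines(keepends=True),
        fromfile=f"a/{path}",
        tofile=f"b/{path}",
    )
    return "".join(diff)

before = "def add(a, b):\n    return a + b\n"
after = "def add(a: int, b: int) -> int:\n    return a + b\n"
print(unified_diff_view(before, after, "calculator.py"))
```

Reviewing a diff like this before writing anything to disk is what makes the approval step safe.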
Quick Start
Prerequisites
Python 3.10 or higher
At least one AI provider API key (Gemini recommended for free tier)
1. Clone and Install
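The repository URL is not given in this README, so the commands below use a placeholder. A typical setup would look like:

```shell
# Clone the repository (replace the URL with the actual repo)
git clone https://github.com/<your-username>/mcp_code_review.git
cd mcp_code_review

# Create and activate a virtual environment (Python 3.10+)
python3 -m venv venv
source venv/bin/activate   # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```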
2. Configure API Keys
Create a .env file from the example:
Edit .env and add at least ONE API key:
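A minimal .env might look like the following. The exact variable names depend on the project's .env.example; the names below are the ones litellm conventionally reads:

```shell
# Copy the template first: cp .env.example .env
# Then set at least ONE of these keys:
GEMINI_API_KEY=your-gemini-key
ANTHROPIC_API_KEY=your-anthropic-key
OPENAI_API_KEY=your-openai-key
```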
Get API keys from:
Gemini: https://ai.google.dev/
Claude: https://console.anthropic.com/
OpenAI: https://platform.openai.com/api-keys
3. Connect to Claude Desktop
Edit your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
Add the server configuration:
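A plausible configuration is sketched below. The server entry name ("code-refiner") and the launch command are assumptions; check the project for the actual entry point:

```json
{
  "mcpServers": {
    "code-refiner": {
      "command": "/absolute/path/to/mcp_code_review/venv/bin/python",
      "args": ["/absolute/path/to/mcp_code_review/server.py"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-key"
      }
    }
  }
}
```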
Important: Replace /absolute/path/to/mcp_code_review/ with the actual path on your system.
Restart Claude Desktop to load the server.
Usage
Once configured, just talk to Claude naturally in Claude Desktop. The tools are automatically available!
Code Refinement
Improve existing code with natural language instructions:
You: "Refine ./my_script.py to make it more logical and add error handling"
Claude will:
Call refine_code_tool with your request
Show you a diff of proposed changes
Explain what was changed and why
Ask for your approval
Apply changes with apply_refinement_tool if you confirm
Code Review
Get comprehensive code analysis:
You: "Review ./server.py for security issues and performance problems"
Claude will:
Call review_code_tool on the file
Show issues found with severity levels (high/medium/low)
Highlight code strengths
Provide an overall quality score
Suggest specific improvements
Real-World Examples
Refinement:
"Make ./app.py more performant by optimizing loops"
"Simplify the logic in ./utils/helper.py"
"Add comprehensive error handling to ./api/routes.py"
"Refactor ./legacy_code.py to follow modern Python best practices"
"Add type hints and docstrings to ./calculator.py"
Review:
"Review ./authentication.py for security vulnerabilities"
"Check ./database.py for SQL injection risks"
"Analyze ./api_client.py for error handling issues"
"Review ./main.py and suggest improvements"
Available Models
Select a model via the ai_provider parameter; otherwise Claude uses the default (gemini).
Gemini (Google)
gemini - Gemini 2.0 Flash (fast, free tier)
gemini-pro - Gemini 1.5 Pro (more capable)
Claude (Anthropic)
claude or claude-sonnet - Claude 3.5 Sonnet (high quality)
claude-opus - Claude 3 Opus (most capable)
claude-haiku - Claude 3.5 Haiku (fastest)
OpenAI
openai or gpt-4o - GPT-4o (balanced)
gpt-4 - GPT-4 Turbo
gpt-3.5 - GPT-3.5 Turbo (fastest)
MCP Tools Reference
This server provides three MCP tools that Claude Desktop can call automatically:
1. refine_code_tool
Purpose: Improves existing code based on natural language feedback using a second-layer LLM.
Parameters:
user_request (string, required) - What you want to improve (e.g., "make it more logical", "add error handling")
file_path (string, required) - Path to the code file to refine
ai_provider (string, optional) - AI model to use (default: "gemini")
Returns:
2. review_code_tool
Purpose: Analyzes code for bugs, security vulnerabilities, performance issues, and quality.
Parameters:
file_path (string, required) - Path to the code file to review
ai_provider (string, optional) - AI model to use (default: "gemini")
Returns:
3. apply_refinement_tool
Purpose: Applies refined code to the file after user approval.
Parameters:
file_path (string, required) - Path to the file to update
refined_code (string, required) - The improved code from refine_code_tool
Returns:
Important: Only use this after the user has reviewed and approved the changes!
Testing
Test the server without Claude Desktop:
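The test client's filename is not specified in this README; assuming the repo ships a script such as test_client.py, the invocation would be:

```shell
# From the project root, with the venv activated
python test_client.py
```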
This runs a simple test client to verify the server works.
Project Structure
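The exact layout is not shown here. Based on the files mentioned elsewhere in this README (server.py, prompts/, requirements.txt, the .env example), it is likely close to:

```
mcp_code_review/
├── server.py          # MCP server entry point
├── prompts/           # Prompt templates for refinement and review
├── requirements.txt   # Python dependencies
├── .env.example       # API key template
└── test_client.py     # Simple test client (name assumed)
```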
How It Works
This server implements a "second-layer LLM" architecture:
You interact with Claude Desktop (first-layer AI) using natural language
Claude understands your intent and calls the appropriate MCP tool
MCP Server receives the request and invokes a second-layer LLM specialized for code tasks
Second-layer LLM analyzes or refines the code using optimized prompts
Results are returned to Claude with diffs, explanations, and suggestions
Claude presents the results to you for review
You approve or reject the changes
Changes are applied only after your confirmation
This two-layer approach combines Claude's conversational abilities with specialized code analysis/refinement models.
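The second-layer flow above can be sketched in plain Python. Here the LLM call is injected as a callable (the real server routes it through litellm), and refine_and_diff is a hypothetical name used only for illustration:

```python
import difflib
from typing import Callable

REFINE_PROMPT = (
    "You are a code refinement assistant.\n"
    "User request: {request}\n"
    "Code:\n{code}\n"
    "Return only the improved code."
)

def refine_and_diff(code: str, request: str,
                    call_llm: Callable[[str], str]) -> dict:
    """Ask the second-layer LLM for refined code and build a diff.

    Nothing is written to disk here; the caller (the first-layer AI)
    shows the diff and waits for user approval before applying it.
    """
    prompt = REFINE_PROMPT.format(request=request, code=code)
    refined = call_llm(prompt)
    diff = "".join(difflib.unified_diff(
        code.splitlines(keepends=True),
        refined.splitlines(keepends=True),
        fromfile="before", tofile="after",
    ))
    return {"refined_code": refined, "diff": diff}

# Stubbed LLM for demonstration: pretend it added a docstring.
fake_llm = lambda prompt: 'def f():\n    """Do f."""\n    return 1\n'
result = refine_and_diff("def f():\n    return 1\n", "add docstrings", fake_llm)
print(result["diff"])
```

Separating the LLM call from the diff/apply logic is what lets the approval step sit between refinement and file modification.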
Use Cases
1. Refining AI-Generated Code
First LLM generates code → Use this to improve it
2. Code Review Assistant
Get AI-powered feedback on your code
3. Iterative Improvement
Keep refining until perfect
4. Learning Tool
See how AI would improve your code and learn from it
Requirements
Python 3.10 or higher
At least one AI provider API key (Gemini recommended for free tier)
Dependencies listed in requirements.txt:
fastmcp - FastMCP framework
mcp - Model Context Protocol SDK
litellm - Multi-provider LLM wrapper
rich - Terminal formatting
python-dotenv - Environment variable management
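Based on the dependency list above, requirements.txt is approximately the following (versions omitted; pin as needed):

```
fastmcp
mcp
litellm
rich
python-dotenv
```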
Troubleshooting
Server Not Appearing in Claude Desktop
Check that the path in claude_desktop_config.json is absolute, not relative
Verify the Python path is correct (use which python in your activated venv)
Check Claude Desktop logs for errors:
macOS: ~/Library/Logs/Claude/
Windows: %APPDATA%\Claude\logs\
Restart Claude Desktop after config changes
API Key Errors
Verify your API key is correct in the .env file
Make sure the key is also in the claude_desktop_config.json env section
Check that you have API credits/quota remaining
Try using a different AI provider as a fallback
File Path Issues
Always use absolute paths or paths relative to where you run the command
On Windows, use forward slashes / or escaped backslashes \\
Verify the file exists: ls /path/to/file.py
Module Import Errors
Ensure virtual environment is activated
Reinstall dependencies: pip install -r requirements.txt --upgrade
Check Python version: python --version (must be 3.10+)
Testing the Server
Run the test client to verify the server works:
This bypasses Claude Desktop and tests the MCP server directly.
Contributing
Contributions are welcome! Here's how you can help:
Report bugs - Open an issue with details about the problem
Suggest features - Share ideas for new capabilities
Improve prompts - The prompt templates in prompts/ can always be refined
Add AI providers - Extend support for additional LLM providers
Submit PRs - Fix bugs, add features, improve documentation
License
MIT License - see LICENSE file for details
Acknowledgments
Built with:
FastMCP - Framework for building MCP servers
LiteLLM - Unified interface for multiple LLM providers
MCP Protocol - Model Context Protocol specification
Resources
Questions or issues? Open an issue on GitHub or check the troubleshooting section above.