# MCP Code Refiner
> A powerful **second-layer LLM** MCP server that refines and reviews code using AI. Perfect for improving AI-generated code or enhancing human-written code through natural language feedback.
## What is This?
This is an [MCP (Model Context Protocol)](https://modelcontextprotocol.io/) server that adds code refinement and review capabilities to any MCP client like Claude Desktop. It acts as a "second layer" AI that specializes in code improvement, working alongside your primary AI assistant.
**Use it to:**
- Refine code generated by ChatGPT, Claude, or any AI with natural language feedback
- Get comprehensive code reviews with security and performance analysis
- Iteratively improve code until it meets your standards
- Learn from AI-suggested improvements
## Features
- **Code Refinement** - Improve code with natural language feedback ("make it more logical", "add error handling")
- **Code Review** - AI-powered analysis for bugs, security, performance, and best practices
- **Multi-Model Support** - Choose between Gemini, Claude, or OpenAI models
- **Plug & Play** - Works with Claude Desktop and any MCP client
- **Smart Prompts** - Optimized prompts for high-quality, actionable results
- **Diff View** - See exactly what changes before applying them
## Quick Start
### Prerequisites
- Python 3.10 or higher
- At least one AI provider API key (Gemini recommended for free tier)
### 1. Clone and Install
```bash
git clone https://github.com/yourusername/mcp_code_review.git
cd mcp_code_review
python -m venv .venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
### 2. Configure API Keys
Create a `.env` file from the example:
```bash
cp .env.example .env
```
Edit `.env` and add **at least ONE** API key:
```bash
# Recommended: Google Gemini (free tier available)
GOOGLE_API_KEY=your-gemini-api-key-here
# Alternative: Anthropic Claude
ANTHROPIC_API_KEY=your-anthropic-api-key-here
# Alternative: OpenAI
OPENAI_API_KEY=your-openai-api-key-here
```
**Get API keys from:**
- Gemini: https://ai.google.dev/
- Claude: https://console.anthropic.com/
- OpenAI: https://platform.openai.com/api-keys
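Since only one key is required, the server presumably checks at startup that at least one provider is configured. A minimal sketch of such a check (the variable names match the `.env` template above; the function is illustrative, not the server's actual code):

```python
import os

# The three provider keys recognized in .env
PROVIDER_KEYS = ("GOOGLE_API_KEY", "ANTHROPIC_API_KEY", "OPENAI_API_KEY")

def configured_providers(env=os.environ):
    """Return the names of provider keys that are set and non-empty."""
    return [k for k in PROVIDER_KEYS if env.get(k)]

# At startup the server could fail fast:
# if not configured_providers():
#     raise SystemExit("Set at least one API key in .env")
```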
### 3. Connect to Claude Desktop
Edit your Claude Desktop config file:
**macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
Add the server configuration:
```json
{
  "mcpServers": {
    "code-refiner": {
      "command": "python",
      "args": ["/absolute/path/to/mcp_code_review/mcp_server.py"],
      "env": {
        "GOOGLE_API_KEY": "your-gemini-api-key"
      }
    }
  }
}
```
**Important:** Replace `/absolute/path/to/mcp_code_review/` with the actual path on your system.
Restart Claude Desktop to load the server.
## Usage
Once configured, just talk to Claude naturally in Claude Desktop. The tools are automatically available!
### Code Refinement
Improve existing code with natural language instructions:
**You:** "Refine ./my_script.py to make it more logical and add error handling"
**Claude will:**
1. Call `refine_code_tool` with your request
2. Show you a diff of proposed changes
3. Explain what was changed and why
4. Ask for your approval
5. Apply changes with `apply_refinement_tool` if you confirm
### Code Review
Get comprehensive code analysis:
**You:** "Review ./server.py for security issues and performance problems"
**Claude will:**
1. Call `review_code_tool` on the file
2. Show issues found with severity levels (high/medium/low)
3. Highlight code strengths
4. Provide an overall quality score
5. Suggest specific improvements
### Real-World Examples
**Refinement:**
- "Make ./app.py more performant by optimizing loops"
- "Simplify the logic in ./utils/helper.py"
- "Add comprehensive error handling to ./api/routes.py"
- "Refactor ./legacy_code.py to follow modern Python best practices"
- "Add type hints and docstrings to ./calculator.py"
**Review:**
- "Review ./authentication.py for security vulnerabilities"
- "Check ./database.py for SQL injection risks"
- "Analyze ./api_client.py for error handling issues"
- "Review ./main.py and suggest improvements"
## Available Models
Pass the `ai_provider` parameter to choose a model; if omitted, the default (`gemini`) is used.
### Gemini (Google)
- `gemini` - Gemini 2.0 Flash (fast, free tier)
- `gemini-pro` - Gemini 1.5 Pro (more capable)
### Claude (Anthropic)
- `claude` or `claude-sonnet` - Claude 3.5 Sonnet (high quality)
- `claude-opus` - Claude 3 Opus (most capable)
- `claude-haiku` - Claude 3.5 Haiku (fastest)
### OpenAI
- `openai` or `gpt-4o` - GPT-4o (balanced)
- `gpt-4` - GPT-4 Turbo
- `gpt-3.5` - GPT-3.5 Turbo (fastest)
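Internally, these friendly aliases are presumably mapped to LiteLLM model identifiers. The sketch below is hypothetical: the exact model ID strings are assumptions, not taken from the server's source.

```python
# Hypothetical alias table; the real model IDs in utils/llm_client.py may differ.
MODEL_ALIASES = {
    "gemini": "gemini/gemini-2.0-flash",
    "gemini-pro": "gemini/gemini-1.5-pro",
    "claude": "anthropic/claude-3-5-sonnet-20241022",
    "claude-sonnet": "anthropic/claude-3-5-sonnet-20241022",
    "claude-opus": "anthropic/claude-3-opus-20240229",
    "claude-haiku": "anthropic/claude-3-5-haiku-20241022",
    "openai": "gpt-4o",
    "gpt-4o": "gpt-4o",
    "gpt-4": "gpt-4-turbo",
    "gpt-3.5": "gpt-3.5-turbo",
}

def resolve_model(alias: str) -> str:
    """Resolve a friendly alias to a model string, falling back to the default."""
    return MODEL_ALIASES.get(alias, MODEL_ALIASES["gemini"])
```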
## MCP Tools Reference
This server provides three MCP tools that Claude Desktop can call automatically:
### 1. `refine_code_tool`
**Purpose:** Improves existing code based on natural language feedback using a second-layer LLM.
**Parameters:**
- `user_request` (string, required) - What you want to improve (e.g., "make it more logical", "add error handling")
- `file_path` (string, required) - Path to the code file to refine
- `ai_provider` (string, optional) - AI model to use (default: "gemini")
**Returns:**
```json
{
  "status": "success",
  "explanation": "Added error handling and simplified logic...",
  "diff": "--- original\n+++ refined\n...",
  "refined_code": "def improved_function():\n    ...",
  "file_path": "./app.py"
}
```
### 2. `review_code_tool`
**Purpose:** Analyzes code for bugs, security vulnerabilities, performance issues, and quality.
**Parameters:**
- `file_path` (string, required) - Path to the code file to review
- `ai_provider` (string, optional) - AI model to use (default: "gemini")
**Returns:**
```json
{
  "status": "success",
  "issues": [
    {
      "severity": "high",
      "category": "security",
      "issue": "SQL injection vulnerability",
      "line": 42,
      "suggestion": "Use parameterized queries..."
    }
  ],
  "strengths": ["Good error handling", "Clear naming"],
  "overall_assessment": "Code is functional but has security concerns...",
  "score": 7
}
```
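A client consuming this result might surface the high-severity findings first. A minimal sketch, assuming only the JSON shape shown above:

```python
import json

def high_severity_issues(review_json: str):
    """Extract high-severity issues from a review_code_tool result."""
    result = json.loads(review_json)
    return [i for i in result.get("issues", []) if i.get("severity") == "high"]

# Sample payload matching the documented return shape:
sample = """{"status": "success",
 "issues": [{"severity": "high", "category": "security",
             "issue": "SQL injection vulnerability", "line": 42,
             "suggestion": "Use parameterized queries..."}],
 "strengths": [], "overall_assessment": "", "score": 7}"""
```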
### 3. `apply_refinement_tool`
**Purpose:** Applies refined code to the file after user approval.
**Parameters:**
- `file_path` (string, required) - Path to the file to update
- `refined_code` (string, required) - The improved code from `refine_code_tool`
**Returns:**
```json
{
  "status": "success",
  "message": "Code successfully applied to ./app.py"
}
```
**Important:** Only use this after the user has reviewed and approved the changes!
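Applying refined code is ultimately a file write. A hedged sketch of how such a tool might write the file while keeping a backup of the original (this is an illustration, not the server's actual implementation):

```python
from pathlib import Path

def apply_refinement(file_path: str, refined_code: str) -> dict:
    """Write refined code to file_path, keeping a .bak copy of the original."""
    target = Path(file_path)
    if target.exists():
        # Preserve the pre-refinement version alongside the file
        Path(str(target) + ".bak").write_text(
            target.read_text(encoding="utf-8"), encoding="utf-8")
    target.write_text(refined_code, encoding="utf-8")
    return {"status": "success",
            "message": f"Code successfully applied to {file_path}"}
```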
## Testing
Test the server without Claude Desktop:
```bash
python client.py
```
This runs a simple test client to verify the server works.
## Project Structure
```
mcp_code_review/
├── mcp_server.py           # Main MCP server entry point
├── client.py               # Test client for local testing
├── requirements.txt        # Python dependencies
├── .env.example            # Environment variables template
├── .env                    # Your API keys (git-ignored)
│
├── tools/                  # MCP tool implementations
│   ├── __init__.py
│   ├── file_ops.py         # File read/write utilities
│   ├── code_refinement.py  # Code refinement logic
│   └── code_review.py      # Code review logic
│
├── prompts/                # AI prompt templates
│   ├── code_refinement.txt # Refinement prompt template
│   └── code_review.txt     # Review prompt template
│
└── utils/                  # Helper utilities
    ├── __init__.py
    ├── llm_client.py       # LiteLLM wrapper for multi-provider support
    └── diff_generator.py   # Unified diff generation
```
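The `diff_generator` utility presumably wraps Python's standard `difflib`; a minimal sketch of unified-diff generation (function name and labels are illustrative):

```python
import difflib

def make_diff(original: str, refined: str, path: str = "code") -> str:
    """Produce a unified diff between the original and refined code."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        refined.splitlines(keepends=True),
        fromfile=f"original/{path}", tofile=f"refined/{path}"))
```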
## How It Works
This server implements a "second-layer LLM" architecture:
1. **You** interact with Claude Desktop (first-layer AI) using natural language
2. **Claude** understands your intent and calls the appropriate MCP tool
3. **MCP Server** receives the request and invokes a second-layer LLM specialized for code tasks
4. **Second-layer LLM** analyzes or refines the code using optimized prompts
5. **Results** are returned to Claude with diffs, explanations, and suggestions
6. **Claude** presents the results to you for review
7. **You** approve or reject the changes
8. **Changes** are applied only after your confirmation
This two-layer approach combines Claude's conversational abilities with specialized code analysis/refinement models.
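The approval gate in steps 3 through 8 can be sketched with stand-ins for the LLM call and the user prompt (a sketch of the control flow only; the real server invokes LiteLLM with the templates in `prompts/`):

```python
def refine_and_apply(source: str, user_request: str, llm, approve):
    """One pass of the two-layer loop: refine, then apply only on approval.

    `llm` is any callable (prompt -> refined code) standing in for the
    second-layer LLM; `approve` stands in for the user's confirmation.
    Returns (code, applied_flag).
    """
    prompt = f"Refine this code. Request: {user_request}\n\n{source}"
    refined = llm(prompt)
    if approve(refined):           # changes applied only after confirmation
        return refined, True
    return source, False           # rejected: original kept unchanged
```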
## Use Cases
### 1. Refining AI-Generated Code
First LLM generates code → Use this to improve it
### 2. Code Review Assistant
Get AI-powered feedback on your code
### 3. Iterative Improvement
Keep refining until perfect
### 4. Learning Tool
See how AI would improve your code and learn from it
## Requirements
- Python 3.10 or higher
- At least one AI provider API key (Gemini recommended for free tier)
- Dependencies listed in `requirements.txt`:
- `fastmcp` - FastMCP framework
- `mcp` - Model Context Protocol SDK
- `litellm` - Multi-provider LLM wrapper
- `rich` - Terminal formatting
- `python-dotenv` - Environment variable management
## Troubleshooting
### Server Not Appearing in Claude Desktop
1. Check that the path in `claude_desktop_config.json` is absolute, not relative
2. Verify the Python path is correct (use `which python` in your activated venv)
3. Check Claude Desktop logs for errors:
- **macOS:** `~/Library/Logs/Claude/`
- **Windows:** `%APPDATA%\Claude\logs\`
4. Restart Claude Desktop after config changes
### API Key Errors
- Verify your API key is correct in the `.env` file
- Make sure the key is also in the `claude_desktop_config.json` env section
- Check that you have API credits/quota remaining
- Try using a different AI provider as a fallback
### File Path Issues
- Prefer absolute paths; relative paths resolve against the server's working directory, which may not be where you expect
- On Windows, use forward slashes `/` or escaped backslashes `\\`
- Verify the file exists: `ls /path/to/file.py`
### Module Import Errors
- Ensure virtual environment is activated
- Reinstall dependencies: `pip install -r requirements.txt --upgrade`
- Check Python version: `python --version` (must be 3.10+)
### Testing the Server
Run the test client to verify the server works:
```bash
python client.py
```
This bypasses Claude Desktop and tests the MCP server directly.
## Contributing
Contributions are welcome! Here's how you can help:
1. **Report bugs** - Open an issue with details about the problem
2. **Suggest features** - Share ideas for new capabilities
3. **Improve prompts** - The prompt templates in `prompts/` can always be refined
4. **Add AI providers** - Extend support for additional LLM providers
5. **Submit PRs** - Fix bugs, add features, improve documentation
## License
MIT License - see LICENSE file for details
## Acknowledgments
Built with:
- [FastMCP](https://github.com/jlowin/fastmcp) - FastMCP framework for building MCP servers
- [LiteLLM](https://docs.litellm.ai/) - Unified interface for multiple LLM providers
- [MCP Protocol](https://modelcontextprotocol.io/) - Model Context Protocol specification
## Resources
- [FastMCP Documentation](https://github.com/jlowin/fastmcp)
- [LiteLLM Documentation](https://docs.litellm.ai/)
- [MCP Protocol Specification](https://modelcontextprotocol.io/)
- [Claude Desktop Setup Guide](https://docs.anthropic.com/claude/docs/mcp)
---
**Questions or issues?** Open an issue on GitHub or check the troubleshooting section above.