MCP Code Reviewer

An MCP (Model Context Protocol) server that provides AI-powered code review with human-in-the-loop confirmation. Built with FastMCP and LiteLLM for multi-provider AI support.

Features

  • AI-Powered Code Review: Analyze code for bugs, security vulnerabilities, best practices, and performance issues

  • Requirements Validation: Compare code against requirements documents to ensure alignment

  • Automated Fix Proposals: Generate code changes to address identified issues

  • Human Confirmation: CLI prompts or token-based approval workflow before applying changes

  • Multi-Provider Support: Use Gemini (via the Gemini API, the default), Claude (via Google Vertex AI), or ChatGPT (via Azure OpenAI)

  • Safe File Operations: Automatic backups before modifications, file size limits, and path validation

Project Structure

```
.
├── mcp_server.py           # Main MCP server entry point
├── tools/                  # MCP tool implementations
│   ├── __init__.py
│   ├── file_ops.py         # File reading/writing utilities
│   ├── code_review.py      # Code review tool
│   ├── requirements.py     # Requirements validation tool
│   ├── change_proposal.py  # Change proposal and diffing
│   └── confirmation.py     # Human-in-the-loop confirmation
├── prompts/                # AI system prompts
│   ├── code_review.txt     # System prompt for code review
│   ├── requirements.txt    # System prompt for requirements validation
│   └── fix_proposal.txt    # System prompt for generating fixes
├── utils/                  # Utility modules
│   ├── __init__.py
│   ├── llm_client.py       # LiteLLM wrapper/client
│   └── diff_generator.py   # Diff generation utilities
├── .backups/               # Automatic backup files
├── requirements.txt        # Python dependencies
├── .env.example            # Example environment variables
└── .gitignore              # Git ignore rules
```

Installation

1. Clone or navigate to the project directory

```
cd clock_in_clock_out
```

2. Create virtual environment

```
python -m venv .venv
```

3. Activate virtual environment

Windows:

```
.venv\Scripts\activate
```

Mac/Linux:

```
source .venv/bin/activate
```

4. Install dependencies

```
pip install -r requirements.txt
```

5. Set up environment variables

Copy .env.example to .env and configure:

```
cp .env.example .env
```

Edit .env with your credentials:

```
# Google Vertex AI (for Claude)
VERTEX_PROJECT=your-gcp-project-id
VERTEX_LOCATION=us-central1

# Azure OpenAI (for ChatGPT)
AZURE_API_KEY=your-azure-api-key
AZURE_API_BASE=https://your-resource.openai.azure.com/
AZURE_API_VERSION=2024-02-15-preview

# Google Gemini API
GOOGLE_API_KEY=your-gemini-api-key

# Default Settings
DEFAULT_AI_PROVIDER=gemini
MAX_FILE_SIZE_MB=5
BACKUP_DIRECTORY=.backups/
```
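As a rough sketch, the server's settings loader might read these variables with sensible defaults. The `load_settings` helper below is hypothetical (the project's actual loader lives somewhere in `utils/` and may differ); the key names mirror the `.env` entries above.

```python
import os

# Hypothetical settings helper; key names mirror the .env entries above,
# but the project's actual loader may be structured differently.
def load_settings() -> dict:
    return {
        "provider": os.environ.get("DEFAULT_AI_PROVIDER", "gemini"),
        "max_file_size_mb": float(os.environ.get("MAX_FILE_SIZE_MB", "5")),
        "backup_dir": os.environ.get("BACKUP_DIRECTORY", ".backups/"),
    }
```

Unset variables fall back to the documented defaults (`gemini`, 5 MB, `.backups/`).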

Usage

Running the MCP Server

```
python mcp_server.py
```

Running the Interactive Client

The project includes a simple interactive client to test the MCP server:

```
python client.py
```

This will:

  1. Connect to the MCP server automatically

  2. Show an interactive menu with all available tools

  3. Guide you through using each tool with prompts

Available Client Options:

  • List Available Tools

  • Read File

  • Review Code

  • Validate Against Requirements

  • Propose Changes

  • Full Review Workflow

Available Tools

1. read_file_tool

Read content from a local file.

```json
{
  "tool": "read_file_tool",
  "arguments": {
    "file_path": "C:/path/to/file.py"
  }
}
```
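As an illustration, the core of a size-limited read tool might look like the sketch below. The `read_file` helper is an assumption (the actual `tools/file_ops.py` is not shown here); with FastMCP, a function like this could be registered via the `@mcp.tool()` decorator.

```python
from pathlib import Path

MAX_FILE_SIZE_MB = 5  # mirrors the MAX_FILE_SIZE_MB setting

# Hypothetical core of read_file_tool; the real tools/file_ops.py may differ.
def read_file(file_path: str) -> str:
    path = Path(file_path).resolve()
    if not path.is_file():
        raise FileNotFoundError(f"Not a file: {path}")
    if path.stat().st_size > MAX_FILE_SIZE_MB * 1024 * 1024:
        raise ValueError(f"File exceeds {MAX_FILE_SIZE_MB} MB limit")
    return path.read_text(encoding="utf-8")
```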

2. review_code_tool

Perform AI-powered code quality review.

```json
{
  "tool": "review_code_tool",
  "arguments": {
    "file_path": "C:/path/to/code.py",
    "ai_provider": "gemini",
    "review_aspects": ["bugs", "security", "best_practices", "performance"]
  }
}
```

Returns:

  • Review summary

  • List of issues with severity and line numbers

  • Overall quality score (0-10)

3. validate_requirements_tool

Validate code against a requirements document.

```json
{
  "tool": "validate_requirements_tool",
  "arguments": {
    "code_file_path": "C:/path/to/app.py",
    "requirements_file_path": "C:/path/to/requirements.md",
    "ai_provider": "claude"
  }
}
```

Returns:

  • Alignment score

  • Missing requirements

  • Extra functionality

  • Recommendations

4. propose_changes_tool

Generate proposed code changes to fix issues.

```json
{
  "tool": "propose_changes_tool",
  "arguments": {
    "file_path": "C:/path/to/code.py",
    "issues_to_fix": ["Fix SQL injection", "Add error handling"],
    "ai_provider": "gemini",
    "return_mode": "diff"
  }
}
```

Returns:

  • Original content

  • Proposed content

  • Unified diff

  • Change summary
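The unified diff in the response can be produced with Python's standard `difflib` module. The sketch below shows one plausible approach; `make_unified_diff` is a hypothetical name, and the project's actual `utils/diff_generator.py` may differ.

```python
import difflib

# Minimal unified-diff sketch, similar to what utils/diff_generator.py
# might do (the actual implementation is not shown).
def make_unified_diff(original: str, proposed: str, file_path: str) -> str:
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile=f"a/{file_path}",
        tofile=f"b/{file_path}",
    )
    return "".join(diff)
```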

5. confirm_and_apply_tool

Show proposed changes and apply them after confirmation.

```json
{
  "tool": "confirm_and_apply_tool",
  "arguments": {
    "file_path": "C:/path/to/code.py",
    "proposed_content": "...",
    "diff": "...",
    "confirmation_mode": "cli_prompt"
  }
}
```

Confirmation Modes:

  • cli_prompt: Interactive CLI prompt (requires user input)

  • return_for_approval: Returns approval token for later application

6. apply_approved_tool

Apply previously approved changes using an approval token.

```json
{
  "tool": "apply_approved_tool",
  "arguments": {
    "approval_token": "abc123xyz"
  }
}
```
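The token workflow behind `return_for_approval` and `apply_approved_tool` could be as simple as an in-memory map from random tokens to pending changes. The sketch below is an assumption about how `tools/confirmation.py` might work, not its actual code; `request_approval` and `apply_approved` are illustrative names.

```python
import secrets

# Hypothetical in-memory store of pending approvals; the real
# confirmation.py may persist or scope tokens differently.
_pending: dict[str, dict] = {}

def request_approval(file_path: str, proposed_content: str) -> str:
    token = secrets.token_hex(16)
    _pending[token] = {"file_path": file_path, "content": proposed_content}
    return token

def apply_approved(token: str) -> str:
    change = _pending.pop(token, None)  # a token is single-use
    if change is None:
        raise KeyError("Unknown or already-used approval token")
    # A real implementation would back up the file and write
    # change["content"] to change["file_path"] here.
    return change["file_path"]
```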

7. full_review_workflow

Complete workflow: review → validate → propose → confirm.

```json
{
  "tool": "full_review_workflow",
  "arguments": {
    "code_file_path": "C:/path/to/app.py",
    "requirements_file_path": "C:/path/to/requirements.md",
    "ai_provider": "gemini",
    "auto_fix": true,
    "confirmation_mode": "cli_prompt"
  }
}
```

Example Workflows

Quick Code Review

```python
# Review a file for issues
review_code_tool(
    file_path="app.py",
    ai_provider="gemini",
)
```

Review and Auto-Fix

```python
# Full workflow with automatic fix proposals
full_review_workflow(
    code_file_path="app.py",
    requirements_file_path="requirements.md",
    auto_fix=True,
    confirmation_mode="cli_prompt",
)
```

Manual Approval Flow

```python
# Step 1: Get proposed changes
result = propose_changes_tool(
    file_path="app.py",
    issues_to_fix=["Fix SQL injection"],
)

# Step 2: Request approval token
approval = confirm_and_apply_tool(
    file_path="app.py",
    proposed_content=result["proposed_content"],
    diff=result["diff"],
    confirmation_mode="return_for_approval",
)

# Step 3: Review the diff manually...

# Step 4: Apply when ready
apply_approved_tool(approval_token=approval["approval_token"])
```

AI Provider Configuration

Claude (via Google Vertex AI)

Set up Google Cloud credentials and enable the Vertex AI API.

ChatGPT (via Azure)

Create an Azure OpenAI resource and deploy a GPT-4 model.

Gemini (via API)

Get an API key from Google AI Studio.
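LiteLLM routes requests by model-string prefix (`gemini/`, `vertex_ai/`, `azure/`), so the server's `utils/llm_client.py` likely maps its provider names to such strings. The mapping below is illustrative: the specific model and deployment names are assumptions, not documented values from this project.

```python
# Illustrative provider-to-model mapping; the concrete model and
# deployment names are assumptions and may differ from llm_client.py.
MODEL_MAP = {
    "gemini": "gemini/gemini-1.5-pro",                  # Gemini API
    "claude": "vertex_ai/claude-3-5-sonnet@20240620",   # Vertex AI
    "chatgpt": "azure/my-gpt-4-deployment",             # Azure deployment name
}

def resolve_model(provider: str) -> str:
    try:
        return MODEL_MAP[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider}") from None
```

The resolved string would then be passed as the `model` argument to `litellm.completion()`.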

Safety Features

  • Automatic Backups: Files are backed up to .backups/ before modification

  • File Size Limits: Prevents memory issues with large files (default 5MB)

  • Path Validation: Prevents directory traversal attacks

  • Human Confirmation: Changes require explicit approval
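The backup and path-validation steps above can be sketched as follows. Both helpers are hypothetical (the project's actual safety code is not shown): `backup_file` copies the target into the backup directory with a timestamp before any write, and `is_within` rejects paths that resolve outside an allowed root, the usual defense against directory traversal.

```python
import shutil
from datetime import datetime
from pathlib import Path

BACKUP_DIR = Path(".backups")  # mirrors the BACKUP_DIRECTORY setting

# Hypothetical backup-before-write step; the actual implementation may differ.
def backup_file(file_path: str) -> Path:
    src = Path(file_path).resolve()
    BACKUP_DIR.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    dest = BACKUP_DIR / f"{src.name}.{stamp}.bak"
    shutil.copy2(src, dest)  # copy2 preserves metadata
    return dest

# Simple traversal check: the resolved target must stay inside the root.
def is_within(root: str, candidate: str) -> bool:
    root_p = Path(root).resolve()
    cand_p = Path(candidate).resolve()
    return cand_p == root_p or root_p in cand_p.parents
```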

Development

Testing Individual Tools

```python
from tools.code_review import review_code

result = review_code("test_file.py", "claude")
print(result)

Custom Prompts

Edit the files in the prompts/ directory to customize AI behavior:

  • code_review.txt: Code review criteria

  • requirements.txt: Requirements validation approach

  • fix_proposal.txt: Fix generation instructions

Troubleshooting

LiteLLM Connection Issues

Ensure environment variables are correctly set for your chosen provider.

File Size Errors

Adjust MAX_FILE_SIZE_MB in the .env file.

Backup Directory

Ensure .backups/ directory exists or set custom path in BACKUP_DIRECTORY.

License

MIT
