Context Engine MCP Server

by Kirachon

review_changes

Analyze code diffs to identify issues in correctness, security, performance, maintainability, style, and documentation with prioritized findings and actionable suggestions.

Instructions

Review code changes from a diff using AI-powered analysis.

This tool performs a structured code review on a unified diff, identifying issues across correctness, security, performance, maintainability, style, and documentation.

Key Features:

  • Structured output with findings, priority levels (P0-P3), and confidence scores

  • Changed lines filter: focuses on modified code (can be toggled)

  • Confidence scoring: each finding has a 0-1 confidence score

  • Actionable suggestions: includes fix suggestions where applicable
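The confidence threshold and findings cap interact in a simple way. As a minimal client-side sketch (the server's actual filtering logic is not published on this page, so this is an assumption about the documented behavior; defaults mirror the input schema):

```python
def select_findings(findings, confidence_threshold=0.7, max_findings=20):
    """Keep findings scored at or above the threshold, capped at max_findings.

    Sketch of the documented filtering; the server may implement it differently.
    """
    kept = [f for f in findings if f["confidence"] >= confidence_threshold]
    return kept[:max_findings]
```

Raising confidence_threshold trades recall for precision; max_findings then truncates whatever survives the threshold.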

Priority Levels:

  • P0 (Critical): Must fix before merge (bugs, security vulnerabilities)

  • P1 (High): Should fix before merge (likely bugs, significant issues)

  • P2 (Medium): Consider fixing (code smells, minor issues)

  • P3 (Low): Nice to have (style issues, minor improvements)

Categories:

  • correctness: Bugs, logic errors, edge cases

  • security: Vulnerabilities, injection risks, auth issues

  • performance: Inefficiencies, memory leaks, N+1 queries

  • maintainability: Code clarity, modularity, complexity

  • style: Formatting, naming conventions

  • documentation: Comments, docstrings, API docs

Output Schema: Returns JSON with: findings[], overall_correctness, overall_explanation, overall_confidence_score, changes_summary, and metadata.
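The exact field contents are not published here, so the following is a hypothetical sketch built only from the field names listed above; the finding fields and sample values are illustrative assumptions:

```python
# Hypothetical shape of a review_changes response. Top-level keys come from
# the documented output schema; per-finding fields and values are assumed.
sample_response = {
    "findings": [
        {
            "category": "security",   # one of the six documented categories
            "priority": "P0",         # P0 (critical) through P3 (low)
            "confidence": 0.92,       # 0-1 confidence score per finding
            "message": "SQL built by string concatenation.",
            "suggestion": "Use a parameterized query instead.",
        }
    ],
    "overall_correctness": "incorrect",
    "overall_explanation": "One critical injection risk in the changed lines.",
    "overall_confidence_score": 0.9,
    "changes_summary": "Adds a user lookup query to the service layer.",
    "metadata": {"findings_returned": 1},
}
```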

Usage Examples:

  1. Basic review: Provide diff content

  2. Focused review: Set categories="security,correctness"

  3. Strict review: Set confidence_threshold=0.8

  4. Include context lines: Set changed_lines_only=false
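The examples above can be combined into one arguments object. A hypothetical payload for a focused, strict review with context lines included (the diff content is a made-up sample; how you submit the arguments depends on your MCP client):

```python
# Sample unified diff, as produced by `git diff`.
diff = """\
--- a/app.py
+++ b/app.py
@@ -1,2 +1,3 @@
 def lookup(db, user_id):
-    return db.query("SELECT * FROM users")
+    return db.query("SELECT * FROM users WHERE id = " + user_id)
"""

# Arguments combining usage examples 2, 3, and 4 above.
arguments = {
    "diff": diff,                           # required
    "categories": "security,correctness",   # focused review
    "confidence_threshold": 0.8,            # strict review
    "changed_lines_only": False,            # include context lines
}
```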

Input Schema

  • diff (required): The unified diff content to review (from git diff, etc.)

  • file_contexts (optional): JSON object mapping file paths to file contents for additional context

  • base_ref (optional): Base branch or commit reference for context

  • confidence_threshold (optional): Minimum confidence score (0-1) to include findings. Default: 0.7

  • max_findings (optional): Maximum number of findings to return. Default: 20

  • categories (optional): Comma-separated categories to focus on. Options: correctness, security, performance, maintainability, style, documentation

  • changed_lines_only (optional): Only report issues on changed lines. Default: true

  • custom_instructions (optional): Custom instructions for the reviewer (e.g., "Focus on React best practices")

  • exclude_patterns (optional): Comma-separated glob patterns for files to exclude (e.g., "*.test.ts,*.spec.js")
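The exclude_patterns parameter takes comma-separated shell-style globs. A client-side sketch of that matching rule, assuming standard fnmatch semantics (the page does not confirm which glob flavor the server uses):

```python
from fnmatch import fnmatch


def is_excluded(path, exclude_patterns):
    """Return True if path matches any pattern in a comma-separated glob list.

    Assumes fnmatch-style globs, where '*' also matches path separators.
    """
    patterns = [p.strip() for p in exclude_patterns.split(",") if p.strip()]
    return any(fnmatch(path, pattern) for pattern in patterns)
```

For example, with exclude_patterns set to "*.test.ts,*.spec.js", a file like src/app.test.ts would be skipped while src/app.ts would still be reviewed.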
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does an excellent job of disclosing behavioral traits. It explains the structured output format, priority levels, confidence scoring, actionable suggestions, and filtering capabilities. It doesn't mention rate limits, authentication needs, or performance characteristics, but it provides substantial behavioral context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Key Features, Priority Levels, Categories, Output Schema, Usage Examples) and front-loads the core purpose. While comprehensive, some sections like the detailed category explanations could be more concise. Every sentence adds value, but the overall length is substantial for a tool description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (9 parameters, no annotations, no output schema), the description provides excellent context about what the tool does, how it behaves, and what it returns. It explains the output structure in detail despite the absence of a formal output schema. The main gap is the lack of explicit guidance on when to choose this tool over sibling review tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds little parameter-specific information beyond the schema: it mentions categories and confidence_threshold in usage examples but provides no additional semantic context. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'structured code review on a unified diff' with 'AI-powered analysis,' specifying both the action (review) and resource (code changes from diff). It distinguishes from siblings like 'review_diff' or 'review_git_diff' by emphasizing structured analysis across multiple categories rather than just diff processing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (reviewing code changes with AI analysis) and includes usage examples for different scenarios. However, it doesn't explicitly state when NOT to use it or mention alternatives among sibling tools like 'review_diff' or 'review_git_diff' that might serve similar purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
