
review-local-changes

Analyze uncommitted code changes with AI and static analysis to identify issues, improve quality, and ensure security before committing.

Instructions

Analyze and review uncommitted code changes using AI and static analysis. Performs comprehensive code review including linting, style checks, logic analysis, and security review. Use this when user asks to review, analyze, check, or examine code changes, diffs, or local modifications.

Input Schema

This tool takes no arguments.

Implementation Reference

  • The main execution logic of the tool: fetches git diff and linter results on local changes, generates AI prompt, calls Gemini CLI for synthesis, returns structured review JSON.
      async () => {
        logInfo('Code review tool invoked');
    
        try {
          // 1. Run the hybrid analysis directly in Node.js
          const analysisData = await runHybridAnalysis();
    
          // 2. Short-circuit if no relevant files have changed
          if (analysisData === '{}') {
            logInfo('No relevant files changed, returning empty result');
            return {
              content: [
                {
                  type: "text",
                  text: JSON.stringify({
                    summary: 'No relevant files changed.',
                    assessment: '',
                    findings: []
                  }, null, 2),
                },
              ],
            };
          }
    
          // 3. Create a simplified prompt for Gemini CLI with the analysis data
          const prompt = `You are an expert-level AI code reviewer, acting as a synthesis engine. Your primary function is to interpret results from a static analysis tool and combine them with your own deep code analysis to provide a single, comprehensive, and human-friendly review.
    
    Your entire response MUST be a single, valid JSON object. Do not include any text outside of the JSON structure.
    
    **CONTEXT GATHERING:**
    You will be provided with a JSON object containing two key pieces of information:
    1.  \`diff\`: The raw output of \`git diff HEAD\`.
    2.  \`linterReport\`: A JSON array of deterministic issues found by a static analysis tool (ESLint).
    
    The full context is as follows:
    --- CONTEXT START ---
    ${analysisData}
    --- CONTEXT END ---
    
    **REVIEW INSTRUCTIONS & OUTPUT SCHEMA:**
    Follow these steps meticulously to construct the final JSON object:
    
    1.  **Synthesize Linter Findings:** First, analyze the \`linterReport\`. For each issue found by the linter, translate its technical message into an empathetic, educational comment. Explain *why* the rule is important. These findings are your ground truth for style and common errors.
    
    2.  **Perform Deeper Analysis:** Next, analyze the \`diff\` to identify higher-level issues that static analysis tools typically miss. Focus on:
        - Logic errors, incorrect algorithms, or missed edge cases.
        - Architectural inconsistencies or violations of design patterns.
        - Performance bottlenecks or inefficient code.
        - Unclear naming or lack of necessary comments for complex logic.
    
    3.  **De-duplicate and Merge:** Combine the findings from both steps into a single, de-duplicated list of actionable issues. If an issue is flagged by both the linter and your own analysis, prioritize the linter's finding but enrich its \`comment\` with your deeper contextual explanation.
    
    4.  **Summarize and Assess:** Based on your complete analysis, write a concise \`summary\` of the changes and a high-level \`assessment\` of the code quality. This is the correct place for all positive feedback, praise for good architectural decisions, and other high-level, non-actionable observations.
    
    5.  **CRITICAL RULE FOR \`findings\`:** The \`findings\` array must ONLY contain actionable issues that require a developer to make a code change. Do NOT include positive feedback, praise for good architecture, or general observations in the \`findings\` array. Any finding with a suggestion of "No change needed" must be excluded from this array and its content moved to the \`assessment\` field.
    
    6.  **Format Output:** Assemble everything into the final JSON object according to the schema below.
    
    **JSON SCHEMA:**
    {
      "summary": "...",
      "assessment": "...",
      "findings": [
        {
          "filePath": "...",
          "lineNumber": "...",
          "severity": "...",
          "category": "...",
          "comment": "...",
          "suggestion": "..."
        }
      ]
    }`;
    
      // 4. Execute Gemini CLI with the prepared prompt (passed directly, no shell quoting)
          logInfo('Executing Gemini CLI for code review analysis');
          const { stdout } = await runGeminiPrompt(prompt);
    
      // 5. Return the successful JSON output from stdout
          logInfo('Code review analysis completed successfully');
          return {
            content: [
              {
                type: "text",
                text: stdout,
              },
            ],
          };
        } catch (error: unknown) {
          logError('Code review tool execution failed', error);
          console.error(`Execution failed: ${error instanceof Error ? error.message : String(error)}`);
          throw error;
        }
      }
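The JSON schema embedded in the prompt can be expressed as TypeScript types plus a minimal runtime guard for validating the model's stdout before returning it. This is a sketch; the field types are assumptions, since the prompt only shows string placeholders (note that `lineNumber` appears as a string in the prompt's example), and `isReviewResult` is a hypothetical helper, not part of the shown implementation.

```typescript
interface ReviewFinding {
  filePath: string;
  lineNumber: string; // the prompt's schema shows this as a string placeholder
  severity: string;
  category: string;
  comment: string;
  suggestion: string;
}

interface ReviewResult {
  summary: string;
  assessment: string;
  findings: ReviewFinding[];
}

// Minimal structural check on the parsed model output
function isReviewResult(value: unknown): value is ReviewResult {
  const v = value as ReviewResult;
  return (
    typeof v === 'object' && v !== null &&
    typeof v.summary === 'string' &&
    typeof v.assessment === 'string' &&
    Array.isArray(v.findings)
  );
}
```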
  • src/index.ts:362-460 (registration)
    Registers the 'review-local-changes' tool with the MCP server, including name, description, empty schema, and handler reference.
    server.tool(
      "review-local-changes", // The unique name of our tool
      "Analyze and review uncommitted code changes using AI and static analysis. Performs comprehensive code review including linting, style checks, logic analysis, and security review. Use this when user asks to review, analyze, check, or examine code changes, diffs, or local modifications.",
      {}, // No input parameters are needed
      async () => {
        // Handler body omitted here: it is identical, line for line, to the
        // main execution logic shown in full in the first snippet above.
      }
    );
  • Key helper function that identifies changed JS/TS files via git, runs linter, fetches full git diff, and returns JSON payload for AI analysis.
    async function runHybridAnalysis(): Promise<string> {
      logInfo('Starting hybrid analysis');
    
      try {
        // 1. Find all modified and added JS/TS/Vue files, excluding deleted files
        const { stdout: gitFiles } = await execAsync("git diff HEAD --name-only --diff-filter=AM");
        const changedFiles = gitFiles
          .split('\n')
          .filter(file => file.trim() !== '')
          .filter(file => /\.(js|ts|tsx|vue)$/.test(file));
    
        logInfo('Found changed files', { totalFiles: changedFiles.length, files: changedFiles });
    
        // 2. If no relevant files have changed, return empty JSON object
        if (changedFiles.length === 0) {
          logInfo('No relevant files changed, returning empty result');
          return '{}';
        }
    
        // 3. Run the project's preferred linter
        logInfo('Running linter analysis');
        const linterReport = await runProjectLinter(changedFiles);
        logInfo('Linter analysis completed', { issuesFound: linterReport.length });
    
        // 4. Get the raw git diff
        const { stdout: gitDiff } = await execAsync("git diff HEAD");
        logInfo('Git diff retrieved', { diffLength: gitDiff.length });
    
        // 5. Combine the git diff and linter report into JSON
        const result = {
          diff: gitDiff,
          linterReport: linterReport
        };
    
        logInfo('Hybrid analysis completed successfully');
        return JSON.stringify(result, null, 2);
      } catch (error) {
        logError('Error in hybrid analysis', error);
        throw error;
      }
    }
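The changed-file selection step in `runHybridAnalysis` is easy to factor out and exercise on its own. A minimal sketch, where the standalone function name is hypothetical and the split/filter logic mirrors the helper above:

```typescript
// Matches the extensions runHybridAnalysis considers relevant
const RELEVANT_FILE_PATTERN = /\.(js|ts|tsx|vue)$/;

// Turns `git diff HEAD --name-only --diff-filter=AM` output into a file list
function filterRelevantFiles(gitNameOnlyOutput: string): string[] {
  return gitNameOnlyOutput
    .split('\n')
    .filter(file => file.trim() !== '')
    .filter(file => RELEVANT_FILE_PATTERN.test(file));
}

// filterRelevantFiles('src/index.ts\nREADME.md\napp.vue\n')
// → ['src/index.ts', 'app.vue']
```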
  • Helper to invoke Gemini CLI safely, handling long prompts by switching to stdin mode.
    async function runGeminiPrompt(prompt: string, timeoutMs = 120_000): Promise<GeminiResult> {
      logInfo('Running Gemini CLI prompt', { promptLength: prompt.length, timeoutMs });
    
      try {
        return await spawnGemini([prompt], null, timeoutMs);
      } catch (error) {
        if (isArgumentTooLongError(error)) {
          logWarning('Gemini prompt exceeded argument length; falling back to stdin streaming');
          console.warn("Gemini prompt exceeded argument length; streaming via stdin instead.");
          return spawnGemini([], prompt, timeoutMs);
        }
    
        logError('Gemini CLI prompt failed', error);
        throw error;
      }
    }
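`isArgumentTooLongError` is referenced above but not shown. On POSIX systems, spawning a process with an oversized argument list fails with the `E2BIG` errno, which Node surfaces as a `code` property on the spawn error, so one plausible sketch of the check (an assumption, not the project's actual implementation) is:

```typescript
// Hypothetical sketch of the unshown isArgumentTooLongError helper.
// Node attaches the POSIX errno string (e.g. 'E2BIG') to spawn errors.
function isArgumentTooLongError(error: unknown): boolean {
  const code = (error as { code?: unknown } | null)?.code;
  return code === 'E2BIG';
}
```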
  • Detects project linter (prioritizes ESLint) and runs it on changed files, with fallbacks to JSHint and TypeScript checking, standardizing output.
    async function runProjectLinter(changedFiles: string[]): Promise<any[]> {
      try {
        // Check if project has ESLint configuration
        const hasEslintConfig = await checkFileExists(['eslint.config.js', '.eslintrc.js', '.eslintrc.json', '.eslintrc.yml', '.eslintrc.yaml', '.eslintrc']);
    
        if (hasEslintConfig) {
          try {
            const { stdout: linterJson } = await spawnEslint(changedFiles, 30000);
            return linterJson.trim() ? JSON.parse(linterJson) : [];
          } catch (eslintError) {
            console.warn('ESLint execution failed, trying fallback linter:', eslintError instanceof Error ? eslintError.message : String(eslintError));
          }
        }
    
    // Fallback 1: Try JSHint if available
    if (await checkCommandExists('jshint')) {
      try {
        const results = [];
        for (const file of changedFiles) {
          try {
            // JSHint exits non-zero when it finds issues, which makes
            // execAsync reject; its report is written to stdout.
            await execAsync(`jshint "${file}"`, { timeout: 10000 });
          } catch (fileError: any) {
            const report = typeof fileError?.stdout === 'string' ? fileError.stdout : '';
            if (report) {
              results.push(...parseJshintOutput(report, file));
            }
          }
        }
        return results;
      } catch (jshintError) {
        console.warn('JSHint execution failed:', jshintError instanceof Error ? jshintError.message : String(jshintError));
      }
    }
    
    // Fallback 2: Try basic TypeScript compiler checking
    const hasTsConfig = await checkFileExists(['tsconfig.json']);
    if (hasTsConfig && changedFiles.some(f => f.endsWith('.ts') || f.endsWith('.tsx'))) {
      try {
        // tsc exits with code 0 (and prints nothing) when the project is clean
        await execAsync('npx tsc --noEmit', { timeout: 30000 });
      } catch (tscError: any) {
        // tsc exits non-zero when it finds errors; its diagnostics go to
        // stdout, so read them from the rejected exec result
        const diagnostics = typeof tscError?.stdout === 'string' ? tscError.stdout : '';
        if (diagnostics.trim()) {
          return parseTypeScriptErrors(diagnostics);
        }
        console.warn('TypeScript compilation check failed:', tscError instanceof Error ? tscError.message : String(tscError));
      }
    }
    
        // If no linter is available, return empty array
        console.warn('No suitable linter found in project, proceeding without static analysis');
        return [];
      } catch (error) {
        console.warn('Linter detection failed:', error);
        return [];
      }
    }
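`parseJshintOutput` is referenced above but not shown. JSHint's default reporter prints one line per issue in the form `file.js: line 3, col 5, Missing semicolon.`, so a plausible sketch of the helper (an assumption; the `LintIssue` field names are illustrative, not the project's actual shape) might be:

```typescript
interface LintIssue {
  filePath: string;
  line: number;
  column: number;
  message: string;
  source: 'jshint';
}

// Hypothetical sketch of the unshown parseJshintOutput helper.
// Parses JSHint's default reporter format into a standardized issue list.
function parseJshintOutput(output: string, filePath: string): LintIssue[] {
  const issues: LintIssue[] = [];
  // Reporter lines look like: "src/a.js: line 3, col 5, Missing semicolon."
  const pattern = /^.*: line (\d+), col (\d+), (.+)$/;
  for (const line of output.split('\n')) {
    const match = pattern.exec(line.trim());
    if (match) {
      issues.push({
        filePath,
        line: Number(match[1]),
        column: Number(match[2]),
        message: match[3],
        source: 'jshint',
      });
    }
  }
  return issues;
}
```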
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'AI and static analysis' but omits behavioral details such as whether the tool modifies code, requires specific permissions, is rate-limited, or what format it outputs. For a tool with no annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently lists review components. It could be slightly more concise by combining some phrases, but every sentence adds value (e.g., usage guidelines).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and zero parameters, the description covers purpose and usage well but lacks behavioral details (e.g., what the review outputs, any side effects). It is adequate for a simple tool but falls short for one that advertises 'comprehensive' analysis.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's function and usage. Baseline is 4 for zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('analyze and review') and resources ('uncommitted code changes'), detailing the scope ('comprehensive code review including linting, style checks, logic analysis, and security review'). It distinguishes itself by focusing on AI and static analysis, though no siblings exist for comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool: 'when user asks to review, analyze, check, or examine code changes, diffs, or local modifications.' This provides clear context and usage triggers, though no alternatives are mentioned as there are no sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
