# review-local-changes
Analyzes uncommitted code changes with an AI-powered review that covers linting, style checks, logic analysis, and security assessment, surfacing issues before they are committed.
## Instructions
Analyzes and reviews uncommitted code changes using AI and static analysis. It performs a comprehensive code review covering linting, style checks, logic analysis, and security review. Use this tool when the user asks to review, analyze, check, or examine code changes, diffs, or local modifications.
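Because the tool takes no arguments, invoking it from an MCP client is a single call. Below is a minimal sketch using the official TypeScript SDK; the launch command (`node build/index.js`) and the client name are assumptions about how this server is typically started, not part of the source.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumption: the review server is built to build/index.js and launched over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"],
});

const client = new Client({ name: "review-client", version: "1.0.0" });
await client.connect(transport);

// The tool declares an empty input schema, so `arguments` is an empty object.
const result = await client.callTool({
  name: "review-local-changes",
  arguments: {},
});

// The result contains a single text block holding the review JSON produced by the tool.
console.log(JSON.stringify(result, null, 2));
await client.close();
```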
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
## Input Schema (JSON Schema)
```json
{
  "properties": {},
  "type": "object"
}
```
## Implementation Reference
- src/index.ts:362-460 (registration): Registers the `review-local-changes` MCP tool with `server.tool()`, providing the name, description, empty input schema, and the inline asynchronous handler that performs the tool logic.

```typescript
server.tool(
  "review-local-changes", // The unique name of our tool
  "Analyze and review uncommitted code changes using AI and static analysis. Performs comprehensive code review including linting, style checks, logic analysis, and security review. Use this when user asks to review, analyze, check, or examine code changes, diffs, or local modifications.",
  {}, // No input parameters are needed
  async () => {
    logInfo('Code review tool invoked');

    try {
      // 1. Run the hybrid analysis directly in Node.js
      const analysisData = await runHybridAnalysis();

      // 2. Short-circuit if no relevant files have changed
      if (analysisData === '{}') {
        logInfo('No relevant files changed, returning empty result');
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({ summary: 'No relevant files changed.', assessment: '', findings: [] }, null, 2),
            },
          ],
        };
      }

      // 3. Create a simplified prompt for Gemini CLI with the analysis data
      const prompt = `You are an expert-level AI code reviewer, acting as a synthesis engine. Your primary function is to interpret results from a static analysis tool and combine them with your own deep code analysis to provide a single, comprehensive, and human-friendly review.

Your entire response MUST be a single, valid JSON object. Do not include any text outside of the JSON structure.

**CONTEXT GATHERING:**
You will be provided with a JSON object containing two key pieces of information:
1. \`diff\`: The raw output of \`git diff HEAD\`.
2. \`linterReport\`: A JSON array of deterministic issues found by a static analysis tool (ESLint).

The full context is as follows:
--- CONTEXT START ---
${analysisData}
--- CONTEXT END ---

**REVIEW INSTRUCTIONS & OUTPUT SCHEMA:**
Follow these steps meticulously to construct the final JSON object:

1. **Synthesize Linter Findings:** First, analyze the \`linterReport\`. For each issue found by the linter, translate its technical message into an empathetic, educational comment. Explain *why* the rule is important. These findings are your ground truth for style and common errors.
2. **Perform Deeper Analysis:** Next, analyze the \`diff\` to identify higher-level issues that static analysis tools typically miss. Focus on:
   - Logic errors, incorrect algorithms, or missed edge cases.
   - Architectural inconsistencies or violations of design patterns.
   - Performance bottlenecks or inefficient code.
   - Unclear naming or lack of necessary comments for complex logic.
3. **De-duplicate and Merge:** Combine the findings from both steps into a single, de-duplicated list of actionable issues. If an issue is flagged by both the linter and your own analysis, prioritize the linter's finding but enrich its \`comment\` with your deeper contextual explanation.
4. **Summarize and Assess:** Based on your complete analysis, write a concise \`summary\` of the changes and a high-level \`assessment\` of the code quality. This is the correct place for all positive feedback, praise for good architectural decisions, and other high-level, non-actionable observations.
5. **CRITICAL RULE FOR \`findings\`:** The \`findings\` array must ONLY contain actionable issues that require a developer to make a code change. Do NOT include positive feedback, praise for good architecture, or general observations in the \`findings\` array. Any finding with a suggestion of "No change needed" must be excluded from this array and its content moved to the \`assessment\` field.
6. **Format Output:** Assemble everything into the final JSON object according to the schema below.

**JSON SCHEMA:**
{
  "summary": "...",
  "assessment": "...",
  "findings": [
    {
      "filePath": "...",
      "lineNumber": "...",
      "severity": "...",
      "category": "...",
      "comment": "...",
      "suggestion": "..."
    }
  ]
}`;

      // 4. Execute Gemini CLI with the prepared prompt, without shell quoting
      logInfo('Executing Gemini CLI for code review analysis');
      const { stdout } = await runGeminiPrompt(prompt);

      // 5. Return the successful JSON output from stdout
      logInfo('Code review analysis completed successfully');
      return {
        content: [
          {
            type: "text",
            text: stdout,
          },
        ],
      };
    } catch (error: unknown) {
      logError('Code review tool execution failed', error);
      console.error(`Execution failed: ${error instanceof Error ? error.message : String(error)}`);
      throw error;
    }
  }
);
```
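`runGeminiPrompt` is referenced by the handler but its implementation is not included in this reference. The comment about avoiding shell quoting suggests the prompt is handed to the CLI without being interpolated into a shell command; a hypothetical sketch of that approach, assuming the `gemini` CLI accepts its prompt on stdin in non-interactive mode, might look like this:

```typescript
import { spawn } from "child_process";

// Hypothetical sketch only: the real runGeminiPrompt lives in src/index.ts and may differ.
// Assumption: the `gemini` CLI reads the prompt from stdin when run non-interactively.
async function runGeminiPrompt(prompt: string): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    const child = spawn("gemini", [], { stdio: ["pipe", "pipe", "pipe"] });

    let stdout = "";
    let stderr = "";
    child.stdout?.setEncoding("utf8");
    child.stderr?.setEncoding("utf8");
    child.stdout?.on("data", (data) => { stdout += data; });
    child.stderr?.on("data", (data) => { stderr += data; });

    child.on("error", reject);
    child.on("close", (code) => {
      if (code === 0) {
        resolve({ stdout, stderr });
      } else {
        reject(new Error(`gemini exited with code ${code}: ${stderr}`));
      }
    });

    // Writing the prompt to stdin avoids any shell quoting of its contents.
    child.stdin?.write(prompt);
    child.stdin?.end();
  });
}
```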
- src/index.ts:366-459 (handler): The core handler function for the tool. It runs `git diff` and the linter on the local changes via `runHybridAnalysis()`, builds a prompt from the resulting data, calls the Gemini CLI to synthesize the review JSON, and returns the result as MCP text content. The handler is the inline `async () => { ... }` function shown in full in the registration entry above.
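For consumers of the tool's output, the JSON shape requested from the model can be described with a small set of types. These interfaces are not declared in src/index.ts; they are inferred from the prompt's JSON SCHEMA section above.

```typescript
// Inferred from the prompt's JSON SCHEMA section; not part of src/index.ts.
interface ReviewFinding {
  filePath: string;
  lineNumber: string;  // the prompt only shows "..." placeholders, so no numeric type is enforced
  severity: string;    // e.g. "warning" or "error" (assumed; the prompt does not enumerate values)
  category: string;
  comment: string;
  suggestion: string;
}

interface ReviewResult {
  summary: string;            // concise description of the changes
  assessment: string;         // high-level quality assessment and positive feedback
  findings: ReviewFinding[];  // actionable issues only
}

// The handler returns the model's stdout verbatim, so it is expected to be valid JSON.
function parseReview(resultText: string): ReviewResult {
  return JSON.parse(resultText) as ReviewResult;
}
```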
- src/index.ts:313-352 (helper): Key helper function called by the handler. It performs the "hybrid analysis": it collects the JS/TS/Vue files changed in git, runs the project's preferred linter (ESLint) on them, fetches the full git diff, and returns a JSON string combining the diff and the linter issues for use in the AI prompt.

```typescript
async function runHybridAnalysis(): Promise<string> {
  logInfo('Starting hybrid analysis');

  try {
    // 1. Find all modified and added JS/TS/Vue files, excluding deleted files
    const { stdout: gitFiles } = await execAsync("git diff HEAD --name-only --diff-filter=AM");
    const changedFiles = gitFiles
      .split('\n')
      .filter(file => file.trim() !== '')
      .filter(file => /\.(js|ts|tsx|vue)$/.test(file));

    logInfo('Found changed files', { totalFiles: changedFiles.length, files: changedFiles });

    // 2. If no relevant files have changed, return an empty JSON object
    if (changedFiles.length === 0) {
      logInfo('No relevant files changed, returning empty result');
      return '{}';
    }

    // 3. Run the project's preferred linter
    logInfo('Running linter analysis');
    const linterReport = await runProjectLinter(changedFiles);
    logInfo('Linter analysis completed', { issuesFound: linterReport.length });

    // 4. Get the raw git diff
    const { stdout: gitDiff } = await execAsync("git diff HEAD");
    logInfo('Git diff retrieved', { diffLength: gitDiff.length });

    // 5. Combine the git diff and linter report into JSON
    const result = {
      diff: gitDiff,
      linterReport: linterReport
    };

    logInfo('Hybrid analysis completed successfully');
    return JSON.stringify(result, null, 2);
  } catch (error) {
    logError('Error in hybrid analysis', error);
    throw error;
  }
}
```
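The string returned by `runHybridAnalysis` is either the literal `'{}'` or a pretty-printed object with two keys. A small sketch of that shape follows; the linter entries are typed loosely because `runProjectLinter`'s exact return fields are not shown in this reference.

```typescript
// Shape of the JSON string produced by runHybridAnalysis when relevant files have changed.
// The linter issue fields come from runProjectLinter and are not documented here (assumption).
interface HybridAnalysisPayload {
  diff: string;             // raw output of `git diff HEAD`
  linterReport: unknown[];  // deterministic issues reported by the linter
}

async function logAnalysisStats(): Promise<void> {
  const analysisData = await runHybridAnalysis();
  if (analysisData === '{}') {
    console.log('No relevant files changed');
    return;
  }
  const payload: HybridAnalysisPayload = JSON.parse(analysisData);
  console.log(`diff: ${payload.diff.length} chars, linter issues: ${payload.linterReport.length}`);
}
```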
- src/index.ts:127-187 (helper): Helper for securely running ESLint on the changed files via `spawn`, used by `runProjectLinter`.

```typescript
async function spawnEslint(files: string[], timeoutMs: number): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    let child;
    try {
      child = spawn("npx", ["eslint", "--format", "json", ...files], {
        stdio: ["pipe", "pipe", "pipe"],
        shell: true // Keep shell: true for npx to work properly
      });
    } catch (spawnError) {
      reject(spawnError);
      return;
    }

    let stdout = "";
    let stderr = "";
    let settled = false;

    const timer: NodeJS.Timeout = setTimeout(() => {
      if (settled) {
        return;
      }
      settled = true;
      child.kill("SIGTERM");
      reject(new Error(`ESLint timed out after ${timeoutMs} ms`));
    }, timeoutMs);

    child.stdout.setEncoding("utf8");
    child.stderr.setEncoding("utf8");

    child.stdout.on("data", (data) => {
      stdout += data;
    });

    child.stderr.on("data", (data) => {
      stderr += data;
    });

    child.on("error", (error) => {
      if (settled) {
        return;
      }
      settled = true;
      clearTimeout(timer);
      reject(error);
    });

    child.on("close", (_code) => {
      if (settled) {
        return;
      }
      settled = true;
      clearTimeout(timer);
      resolve({ stdout, stderr });
    });

    if (child.stdin) {
      child.stdin.end();
    }
  });
}
```
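`runProjectLinter` itself is not part of this reference. A plausible sketch of how it could turn `spawnEslint`'s raw output into a flat issue list is shown below, assuming ESLint's standard JSON formatter (an array of per-file result objects, each with a `messages` array); the timeout value is illustrative.

```typescript
// Hypothetical sketch of runProjectLinter; the real implementation in src/index.ts may differ.
interface LintIssue {
  filePath: string;
  line: number;
  ruleId: string | null;
  severity: number; // ESLint convention: 1 = warning, 2 = error
  message: string;
}

async function runProjectLinter(files: string[]): Promise<LintIssue[]> {
  const { stdout } = await spawnEslint(files, 60_000); // 60 s timeout (illustrative value)

  // ESLint exits non-zero when it finds errors but still prints the JSON report,
  // so the report is parsed from stdout rather than inferred from the exit code.
  const results: Array<{ filePath: string; messages: Array<Omit<LintIssue, "filePath">> }> =
    JSON.parse(stdout || "[]");

  return results.flatMap(result =>
    result.messages.map(message => ({ filePath: result.filePath, ...message }))
  );
}
```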
- src/index.ts:365 (schema): Empty input schema object, indicating the tool takes no parameters.

```typescript
{}, // No input parameters are needed
```