sage-review
Send code and receive expert review suggestions with SEARCH/REPLACE edits. Specify absolute paths for files or directories to include in the review context. Ideal for detailed code improvements.
Instructions
Send code to the sage model for expert review and get specific edit suggestions as SEARCH/REPLACE blocks.
Use this tool whenever the user asks for a "sage review", "code review", or "expert review".
This tool includes the full content of all files in the specified paths and instructs the model to return edit suggestions in a specific format with search and replace blocks.
IMPORTANT: All paths must be absolute paths (e.g., /home/user/project/src), not relative paths.
If the user hasn't provided specific paths, include every file or directory path you are aware of that is relevant to the prompt.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| instruction | Yes | The specific changes or improvements needed. | |
| paths | Yes | Paths to include as context. MUST be absolute paths (e.g., /home/user/project/src). Including directories will include all files contained within recursively. | |
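For reference, a call to sage-review might supply arguments shaped like the following. This is an illustrative sketch only; the paths are hypothetical, and the absolute-path check shown is a minimal POSIX-style approximation of the schema's requirement:

```typescript
// Illustrative sage-review arguments; the paths below are hypothetical.
const args = {
  instruction: "Replace the recursive factorial with math.factorial",
  paths: ["/home/user/project/src", "/home/user/project/app.py"],
};

// Every entry in `paths` must be absolute. For POSIX-style paths a quick
// approximation is a leading slash.
const allAbsolute = args.paths.every((p) => p.startsWith("/"));
console.log(allAbsolute); // true
```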
Implementation Reference
- `src/index.ts:290-546` (registration): Complete registration of the `sage-review` tool via `McpServer.tool()`, including the tool description, the Zod input schema (`instruction`, `paths`, optional `debate`), and the inline async handler. The handler packs the absolute-path files into XML context; if `debate` is true it runs a multi-model debate via `ReviewStrategy`, otherwise it selects an LLM (OpenAI or Gemini) based on token count, builds an expert review prompt that mandates the SEARCH/REPLACE format, sends it to the model, and returns the response with progress notifications.

```typescript
server.tool(
  "sage-review",
  `Send code to the sage model for expert review and get specific edit suggestions as SEARCH/REPLACE blocks.
Use this tool any time the user asks for a "sage review" or "code review" or "expert review".
This tool includes the full content of all files in the specified paths and instructs the model to return edit suggestions in a specific format with search and replace blocks.
IMPORTANT: All paths must be absolute paths (e.g., /home/user/project/src), not relative paths.
If the user hasn't provided specific paths, use as many paths to files or directories as you're aware of that are useful in the context of the prompt.`,
  {
    instruction: z
      .string()
      .describe("The specific changes or improvements needed."),
    paths: z
      .array(z.string())
      .describe(
        "Paths to include as context. MUST be absolute paths (e.g., /home/user/project/src). Including directories will include all files contained within recursively.",
      ),
    debate: z
      .boolean()
      .optional()
      .describe("Set to true when a multi-model debate should ensue"),
  },
  async ({ instruction, paths, debate }, { sendNotification }) => {
    try {
      // Check if debate is enabled
      if (debate) {
        await sendNotification({
          method: "notifications/message",
          params: {
            level: "info",
            data: `Using debate mode for sage-review`,
          },
        });

        const strategy = await getStrategy(ToolType.Review);
        if (!strategy) {
          throw new Error("Review strategy not found");
        }

        const result = await runDebate(
          {
            toolType: ToolType.Review,
            userPrompt: instruction,
            debateConfig: {
              enabled: true,
              rounds: 1,
              logLevel: "debug",
            },
          },
          async (notification) => {
            await sendNotification({
              method: "notifications/message",
              params: notification,
            });
          },
        );

        return {
          content: [
            {
              type: "text",
              text:
                "review" in result
                  ? result.review
                  : "Error: No review generated",
            },
          ],
          metadata: {
            meta: result.meta,
          },
        };
      }

      // Pack the files
      const packedFiles = await packFiles(paths);

      // Create the expert review prompt that requests SEARCH/REPLACE formatting
      const expertReviewPrompt = `
Act as an expert software developer.
Always use best practices when coding.
Respect and use existing conventions, libraries, etc that are already present in the code base.

The following instruction describes the changes needed:
${instruction}

Use the following to describe and format the change.
Describe each change with a *SEARCH/REPLACE block* per the examples below.
ALWAYS use the full path, use the files structure to find the right file path otherwise see if user request has it.
All changes to files must use this *SEARCH/REPLACE block* format.
ONLY EVER RETURN CODE IN A *SEARCH/REPLACE BLOCK*!
Some of the changes may not be relevant to some files - SKIP THOSE IN YOUR RESPONSE.
Provide rationale for each change above each SEARCH/REPLACE block.
Make sure search block exists in original file and is NOT empty.
Please make sure the block is formatted correctly with \`<<<<<<< SEARCH\`, \`=======\` and \`>>>>>>> REPLACE\` as shown below.

EXAMPLE:

\`\`\`\`\`\`
<<<<<<< SEARCH
from flask import Flask
=======
import math
from flask import Flask
>>>>>>> REPLACE
\`\`\`\`\`\`

\`\`\`\`\`\`
<<<<<<< SEARCH
def factorial(n):
    "compute factorial"
    if n == 0:
        return 1
    else:
        return n * factorial(n-1)
=======
>>>>>>> REPLACE
\`\`\`\`\`\`

\`\`\`\`\`\`
<<<<<<< SEARCH
    return str(factorial(n))
=======
    return str(math.factorial(n))
>>>>>>> REPLACE
\`\`\`\`\`\`
`;

      // Combine with the prompt
      const combined = combinePromptWithContext(
        packedFiles,
        expertReviewPrompt,
      );

      // Select model based on token count and get token information
      const modelSelection = selectModelBasedOnTokens(combined, 'review');
      const { modelName, modelType, tokenCount, withinLimit, tokenLimit } =
        modelSelection;

      // Log token usage via MCP logging notification
      await sendNotification({
        method: "notifications/message",
        params: {
          level: "debug",
          data: `Token usage: ${tokenCount.toLocaleString()} tokens. Selected model: ${modelName} (limit: ${tokenLimit.toLocaleString()} tokens)`,
        },
      });

      await sendNotification({
        method: "notifications/message",
        params: {
          level: "debug",
          data: `Files included: ${paths.length}, Document count: ${analyzeXmlTokens(combined).documentCount}`,
        },
      });

      if (!withinLimit) {
        // Handle different error cases
        let errorMsg = "";

        if (modelName === "none" && tokenLimit === 0) {
          // No API keys available
          // Get token limits from config for error message
          const gpt5Model = getModelById('gpt5');
          const geminiModel = getModelById('gemini25pro');
          const gpt5Limit = gpt5Model ? gpt5Model.tokenLimit : 400000;
          const geminiLimit = geminiModel ? geminiModel.tokenLimit : 1000000;
          errorMsg = `Error: No API keys available. Please set OPENAI_API_KEY for contexts up to ${gpt5Limit.toLocaleString()} tokens or GEMINI_API_KEY for contexts up to ${geminiLimit.toLocaleString()} tokens.`;
        } else if (modelType === "openai" && !process.env.OPENAI_API_KEY) {
          // Missing OpenAI API key
          errorMsg = `Error: OpenAI API key not set. This content (${tokenCount.toLocaleString()} tokens) could be processed by GPT-5, but OPENAI_API_KEY is missing. Please set the environment variable or use a smaller context.`;
        } else if (modelType === "gemini" && !process.env.GEMINI_API_KEY) {
          // Missing Gemini API key
          errorMsg = `Error: Gemini API key not set. This content (${tokenCount.toLocaleString()} tokens) requires Gemini's larger context window, but GEMINI_API_KEY is missing. Please set the environment variable.`;
        } else {
          // Content exceeds all available model limits
          // Get token limits from config for error message
          const gpt5Model = getModelById('gpt5');
          const geminiModel = getModelById('gemini25pro');
          const gpt5Limit = gpt5Model ? gpt5Model.tokenLimit : 400000;
          const geminiLimit = geminiModel ? geminiModel.tokenLimit : 1000000;
          errorMsg = `Error: The combined content (${tokenCount.toLocaleString()} tokens) exceeds the maximum token limit for all available models (GPT-5: ${gpt5Limit.toLocaleString()}, Gemini: ${geminiLimit.toLocaleString()} tokens). Please reduce the number of files or shorten the instruction.`;
        }

        await sendNotification({
          method: "notifications/message",
          params: {
            level: "error",
            data: `Request blocked: ${process.env.OPENAI_API_KEY ? "OpenAI API available. " : "OpenAI API unavailable. "}${process.env.GEMINI_API_KEY ? "Gemini available." : "Gemini unavailable."}`,
          },
        });

        return {
          content: [{ type: "text", text: errorMsg }],
          isError: true,
        };
      }

      // Send to appropriate model based on selection with fallback capability
      const startTime = Date.now();
      const response = await sendToModel(
        combined,
        { modelName, modelType, tokenCount },
        sendNotification,
      );
      const elapsedTime = Date.now() - startTime;

      await sendNotification({
        method: "notifications/message",
        params: {
          level: "info",
          data: `Received response from ${modelName} in ${elapsedTime}ms`,
        },
      });

      return {
        content: [
          {
            type: "text",
            text: response,
          },
        ],
      };
    } catch (error) {
      const errorMsg = error instanceof Error ? error.message : String(error);
      await sendNotification({
        method: "notifications/message",
        params: {
          level: "error",
          data: `Error in expert-review tool: ${errorMsg}`,
        },
      });
      return {
        content: [
          {
            type: "text",
            text: `Error: ${errorMsg}`,
          },
        ],
        isError: true,
      };
    }
  },
);
```
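The SEARCH/REPLACE format that the expert review prompt mandates can be applied mechanically. The sketch below is a simplified, hypothetical applier for a single block; the project has its own parser in `src/utils/searchReplaceParser`, which is not shown here:

```typescript
// Minimal sketch of applying one SEARCH/REPLACE block to a file's text.
// Simplified stand-in, not the project's actual parser. Note that
// String.prototype.replace interprets `$` patterns in the replacement, so a
// production applier should splice by index instead.
function applySearchReplace(source: string, block: string): string {
  const match = block.match(
    /<<<<<<< SEARCH\n([\s\S]*?)=======\n([\s\S]*?)>>>>>>> REPLACE/,
  );
  if (!match) throw new Error("Malformed SEARCH/REPLACE block");
  const search = match[1] ?? "";
  const replacement = match[2] ?? "";
  if (!source.includes(search)) {
    throw new Error("Search text not found in source");
  }
  return source.replace(search, replacement);
}

const original = "from flask import Flask\n";
const block = [
  "<<<<<<< SEARCH",
  "from flask import Flask",
  "=======",
  "import math",
  "from flask import Flask",
  ">>>>>>> REPLACE",
].join("\n");

const patched = applySearchReplace(original, block);
console.log(patched); // "import math\nfrom flask import Flask\n"
```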
- `src/index.ts:315-545` (handler): Inline handler logic for `sage-review`. Supports debate mode (using `ReviewStrategy` and the debate orchestrator) or single-model mode: packs files, builds a specialized prompt requesting code-review edits in SEARCH/REPLACE format, handles token limits and API-key checks with detailed errors, selects a model, sends the request, emits progress and error notifications, and returns a text response. The handler body appears verbatim inside the registration snippet above, so it is not repeated here.
- `src/index.ts:302-314` (schema): Zod schema for the `sage-review` tool inputs: `instruction` (string, the changes needed), `paths` (array of absolute paths for context), and optional `debate` (boolean). Used for input validation in the MCP tool.

```typescript
instruction: z
  .string()
  .describe("The specific changes or improvements needed."),
paths: z
  .array(z.string())
  .describe(
    "Paths to include as context. MUST be absolute paths (e.g., /home/user/project/src). Including directories will include all files contained within recursively.",
  ),
debate: z
  .boolean()
  .optional()
  .describe("Set to true when a multi-model debate should ensue"),
```
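The absolute-path requirement documented in the schema can also be checked up front at runtime with Node's `path.isAbsolute`. The helper below is illustrative only; the server relies on the Zod schema and its own runtime checks, not this exact function:

```typescript
import { isAbsolute } from "node:path";

// Reject any relative entry before packing files, mirroring the schema's
// "MUST be absolute paths" requirement. (Hypothetical helper for illustration.)
function assertAbsolutePaths(paths: string[]): void {
  const relative = paths.filter((p) => !isAbsolute(p));
  if (relative.length > 0) {
    throw new Error(`Paths must be absolute, got: ${relative.join(", ")}`);
  }
}

assertAbsolutePaths(["/home/user/project/src"]); // ok
// assertAbsolutePaths(["src/index.ts"]);        // would throw
```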
- `ReviewStrategy` (implements `DebateStrategy`), registered for `ToolType.Review`. Used exclusively in debate mode of `sage-review`. Provides tool-specific prompts for each debate phase (generate an initial review, critique the others, judge the winner) and custom judge parsing that favors responses with valid SEARCH/REPLACE blocks or explicit winner markers.

```typescript
/**
 * Code review debate strategy
 *
 * This strategy handles the debate process for code reviews with SEARCH/REPLACE blocks.
 */
import { ToolType } from "../types/public";
import { DebateContext, DebatePhase, DebateStrategy } from "./strategyTypes";
import { loadPrompt, escapeUserInput } from "../prompts/promptFactory";
import { parseSearchReplace } from "../utils/searchReplaceParser";
import { registerStrategy } from "./registry";

/**
 * Strategy for code review debates
 */
class ReviewStrategy implements DebateStrategy {
  readonly toolType = ToolType.Review;

  /**
   * Default configuration for review debates
   */
  readonly configDefaults = {
    rounds: 1,
    logLevel: "info" as const,
  };

  /**
   * Generate a prompt for the specified debate phase
   */
  getPrompt(phase: DebatePhase, ctx: DebateContext): string {
    const template = loadPrompt(this.toolType, phase);

    // Replace placeholders based on the phase
    switch (phase) {
      case "generate":
        return template
          .replace(/\${modelId}/g, String(ctx.round))
          .replace(/\${userPrompt}/g, escapeUserInput(ctx.userPrompt));
      case "critique":
        const reviewEntries = ctx.candidates
          .map((review, idx) => `## REVIEW ${idx + 1}\n${review.trim()}`)
          .join("\n\n");
        return template
          .replace(/\${modelId}/g, String(ctx.round))
          .replace(/\${planEntries}/g, reviewEntries);
      case "judge":
        const judgeReviewEntries = ctx.candidates
          .map((review, idx) => `## REVIEW ${idx + 1}\n${review.trim()}`)
          .join("\n\n");
        return template.replace(/\${planEntries}/g, judgeReviewEntries);
      default:
        throw new Error(`Unknown debate phase: ${phase}`);
    }
  }

  /**
   * Parse the judge's decision to determine the winning review
   */
  parseJudge(
    raw: string,
    candidates: string[],
  ): { success: true; winnerIdx: number } | { success: false; error: string } {
    // Try to find explicit winner marker (e.g., [[WINNER: #]])
    const winnerMatch = raw.match(/\[\[WINNER:\s*(\d+)\]\]/i);
    if (winnerMatch && winnerMatch[1]) {
      const winnerIdx = parseInt(winnerMatch[1], 10) - 1; // Convert to 0-based
      if (winnerIdx >= 0 && winnerIdx < candidates.length) {
        return { success: true, winnerIdx };
      }
    }

    // For reviews, we need to validate the format regardless of winner selection
    const parseResult = parseSearchReplace(raw);

    // If the judge provided valid SEARCH/REPLACE blocks, use that
    if (parseResult.valid && parseResult.blocks.length > 0) {
      return { success: true, winnerIdx: -1 }; // -1 indicates the judge's own synthesis
    }

    // If there was only one candidate
    if (candidates.length === 1) {
      return { success: true, winnerIdx: 0 };
    }

    // If all else fails, return an error
    return {
      success: false,
      error: "Could not determine winning review from judge response",
    };
  }
}

// Create and export the singleton instance
export const reviewStrategy = new ReviewStrategy();

// Register this strategy with the registry
registerStrategy(reviewStrategy);
```
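The winner-marker branch of `parseJudge` can be exercised in isolation. The helper below is a hypothetical reduction of just that regex step, not the class itself:

```typescript
// Minimal reproduction of the winner-marker step in ReviewStrategy.parseJudge:
// an explicit [[WINNER: N]] tag selects the 1-based candidate N.
function findWinnerIndex(raw: string, candidateCount: number): number | null {
  const match = raw.match(/\[\[WINNER:\s*(\d+)\]\]/i);
  if (!match || !match[1]) return null;
  const idx = parseInt(match[1], 10) - 1; // [[WINNER: N]] is 1-based
  return idx >= 0 && idx < candidateCount ? idx : null;
}

console.log(findWinnerIndex("Review 2 is stronger. [[WINNER: 2]]", 3)); // 1
console.log(findWinnerIndex("No explicit marker here.", 3));            // null
```

Out-of-range markers (e.g. `[[WINNER: 9]]` with three candidates) also yield `null`, matching the bounds check in the real implementation.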