sage-opinion

Send a prompt, together with absolute file paths, to a sage-like model for its opinion or a code review. The contents of the referenced files are embedded as context, which lets the tool handle large codebases effectively.

Instructions

Send a prompt to a sage-like model for its opinion on a matter.

Include the paths to all relevant files and/or directories that are pertinent to the matter. IMPORTANT: All paths must be absolute paths (e.g., /home/user/project/src), not relative paths. Do not worry about context limits; feel free to include as much as you think is relevant. If you include too much, it will error and tell you, and then you can include less. Err on the side of including more context.

Input Schema

  • paths (required): Paths to include as context. MUST be absolute paths (e.g., /home/user/project/src). Including directories will include all files contained within recursively.
  • prompt (required): The prompt to send to the external model.
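
For illustration, here is a minimal sketch of what a call to this tool could look like from a TypeScript MCP client. The launch command, prompt, and file paths are assumptions made for the example, not part of the tool itself.

    // Illustrative sketch only: invoking sage-opinion from an MCP client.
    // The launch command, prompt, and paths below are hypothetical.
    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    async function askSage() {
      const client = new Client({ name: "example-client", version: "1.0.0" });
      await client.connect(
        new StdioClientTransport({ command: "npx", args: ["mcp-sage"] }),
      );

      const result = await client.callTool({
        name: "sage-opinion",
        arguments: {
          prompt: "Review the error handling in the HTTP layer and suggest improvements.",
          paths: ["/home/user/project/src", "/home/user/project/package.json"],
        },
      });

      console.log(result.content);
    }

    askSage().catch(console.error);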

Implementation Reference

  • Primary handler logic for 'sage-opinion' tool: packs files, checks tokens, selects model, handles debate mode using runDebate or direct LLM call, with notifications and error handling.
    async ({ prompt, paths, debate }, { sendNotification }) => {
      try {
        // Pack the files up front - we'll need them in either case
        const packedFiles = await packFiles(paths);

        // Check if debate is enabled
        if (debate) {
          await sendNotification({
            method: "notifications/message",
            params: {
              level: "info",
              data: `Using debate mode for sage-opinion`,
            },
          });

          const strategy = await getStrategy(ToolType.Opinion);
          if (!strategy) {
            throw new Error("Opinion strategy not found");
          }

          const result = await runDebate(
            {
              toolType: ToolType.Opinion,
              userPrompt: prompt,
              codeContext: packedFiles, // Add packed files as context
              debateConfig: {
                enabled: true,
                rounds: 1,
                logLevel: "debug",
              },
            },
            async (notification) => {
              // Fix notification nesting by passing the notification directly
              await sendNotification({
                method: "notifications/message",
                params: {
                  level: notification.level,
                  data: notification.data,
                },
              });
            },
          );

          return {
            content: [
              {
                type: "text",
                text:
                  "opinion" in result
                    ? result.opinion
                    : "Error: No opinion generated",
              },
            ],
            metadata: {
              meta: result.meta,
            },
          };
        }

        // Combine with the prompt
        const combined = combinePromptWithContext(packedFiles, prompt);

        // Select model based on token count and get token information
        const modelSelection = selectModelBasedOnTokens(combined, 'opinion');
        const { modelName, modelType, tokenCount, withinLimit, tokenLimit } = modelSelection;

        // Log token usage via MCP logging notification
        await sendNotification({
          method: "notifications/message",
          params: {
            level: "debug",
            data: `Token usage: ${tokenCount.toLocaleString()} tokens. Selected model: ${modelName} (limit: ${tokenLimit.toLocaleString()} tokens)`,
          },
        });

        await sendNotification({
          method: "notifications/message",
          params: {
            level: "debug",
            data: `Files included: ${paths.length}, Document count: ${analyzeXmlTokens(combined).documentCount}`,
          },
        });

        if (!withinLimit) {
          // Handle different error cases
          let errorMsg = "";

          if (modelName === "none" && tokenLimit === 0) {
            // No API keys available
            // Get token limits from config for error message
            const gpt5Model = getModelById('gpt5');
            const geminiModel = getModelById('gemini25pro');
            const gpt5Limit = gpt5Model ? gpt5Model.tokenLimit : 400000;
            const geminiLimit = geminiModel ? geminiModel.tokenLimit : 1000000;
            errorMsg = `Error: No API keys available. Please set OPENAI_API_KEY for contexts up to ${gpt5Limit.toLocaleString()} tokens or GEMINI_API_KEY for contexts up to ${geminiLimit.toLocaleString()} tokens.`;
          } else if (modelType === "openai" && !process.env.OPENAI_API_KEY) {
            // Missing OpenAI API key
            errorMsg = `Error: OpenAI API key not set. This content (${tokenCount.toLocaleString()} tokens) could be processed by GPT-5, but OPENAI_API_KEY is missing. Please set the environment variable or use a smaller context.`;
          } else if (modelType === "gemini" && !process.env.GEMINI_API_KEY) {
            // Missing Gemini API key
            errorMsg = `Error: Gemini API key not set. This content (${tokenCount.toLocaleString()} tokens) requires Gemini's larger context window, but GEMINI_API_KEY is missing. Please set the environment variable.`;
          } else {
            // Content exceeds all available model limits
            // Get token limits from config for error message
            const gpt5Model = getModelById('gpt5');
            const geminiModel = getModelById('gemini25pro');
            const gpt5Limit = gpt5Model ? gpt5Model.tokenLimit : 400000;
            const geminiLimit = geminiModel ? geminiModel.tokenLimit : 1000000;
            errorMsg = `Error: The combined content (${tokenCount.toLocaleString()} tokens) exceeds the maximum token limit for all available models (GPT-5: ${gpt5Limit.toLocaleString()}, Gemini: ${geminiLimit.toLocaleString()} tokens). Please reduce the number of files or shorten the prompt.`;
          }

          await sendNotification({
            method: "notifications/message",
            params: {
              level: "error",
              data: `Request blocked: ${process.env.OPENAI_API_KEY ? "OpenAI API available. " : "OpenAI API unavailable. "}${process.env.GEMINI_API_KEY ? "Gemini available." : "Gemini unavailable."}`,
            },
          });

          return {
            content: [{ type: "text", text: errorMsg }],
            isError: true,
          };
        }

        // Send to appropriate model based on selection with fallback capability
        const startTime = Date.now();
        const response = await sendToModel(
          combined,
          { modelName, modelType, tokenCount },
          sendNotification,
        );
        const elapsedTime = Date.now() - startTime;

        await sendNotification({
          method: "notifications/message",
          params: {
            level: "info",
            data: `Received response from ${modelName} in ${elapsedTime}ms`,
          },
        });

        return {
          content: [
            {
              type: "text",
              text: response,
            },
          ],
        };
      } catch (error) {
        const errorMsg = error instanceof Error ? error.message : String(error);
        await sendNotification({
          method: "notifications/message",
          params: {
            level: "error",
            data: `Error in sage-opinion tool: ${errorMsg}`,
          },
        });
        return {
          content: [
            {
              type: "text",
              text: `Error: ${errorMsg}`,
            },
          ],
          isError: true,
        };
      }
    },
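  • For orientation, a simplified, hypothetical sketch of the kind of token-based selection the handler relies on. selectModelBasedOnTokens itself is not shown in this reference; the model names and limits below are inferred from the error-handling branches above, not taken from the repository.
    // Hypothetical sketch of token-based model selection, inferred from the
    // handler's error branches; not the actual mcp-sage implementation.
    interface ModelSelection {
      modelName: string;                        // e.g., "gpt5", "gemini25pro", or "none"
      modelType: "openai" | "gemini" | "none";
      tokenCount: number;
      tokenLimit: number;
      withinLimit: boolean;
    }

    function selectModelSketch(tokenCount: number): ModelSelection {
      const GPT5_LIMIT = 400_000;     // assumed default, matching the fallback above
      const GEMINI_LIMIT = 1_000_000; // assumed default, matching the fallback above

      if (process.env.OPENAI_API_KEY && tokenCount <= GPT5_LIMIT) {
        return { modelName: "gpt5", modelType: "openai", tokenCount, tokenLimit: GPT5_LIMIT, withinLimit: true };
      }
      if (process.env.GEMINI_API_KEY && tokenCount <= GEMINI_LIMIT) {
        return { modelName: "gemini25pro", modelType: "gemini", tokenCount, tokenLimit: GEMINI_LIMIT, withinLimit: true };
      }
      if (!process.env.OPENAI_API_KEY && !process.env.GEMINI_API_KEY) {
        // No API keys at all: reported as modelName "none" with tokenLimit 0.
        return { modelName: "none", modelType: "none", tokenCount, tokenLimit: 0, withinLimit: false };
      }
      // Keys exist, but the content exceeds every available model's window.
      return { modelName: "gemini25pro", modelType: "gemini", tokenCount, tokenLimit: GEMINI_LIMIT, withinLimit: false };
    }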
  • Zod input schema defining parameters: prompt (string), paths (array of absolute paths), debate (boolean).
    prompt: z.string().describe("The prompt to send to the external model."),
    paths: z
      .array(z.string())
      .describe(
        "Paths to include as context. MUST be absolute paths (e.g., /home/user/project/src). Including directories will include all files contained within recursively.",
      ),
    debate: z
      .boolean()
      .describe(
        "Set to true when a multi-model debate should ensue (e.g., when the user mentions 'sages' plural).",
      ),
    },
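  • Taken out of the registration call, the same parameter shape can be written as a standalone Zod object. This is an illustrative sketch for validating arguments, not code from the repository.
    // Illustrative only: the same parameter shape as a standalone Zod schema.
    import { z } from "zod";

    const sageOpinionArgs = z.object({
      prompt: z.string(),
      paths: z.array(z.string()),
      debate: z.boolean(),
    });

    // Validate a hypothetical set of arguments before calling the tool.
    const parsed = sageOpinionArgs.parse({
      prompt: "What do you think of this module structure?",
      paths: ["/home/user/project/src"],
      debate: false,
    });
    console.log(parsed.paths.length); // 1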
  • src/index.ts:91-102 (registration)
    MCP server.tool registration for 'sage-opinion', including tool name, multi-line description, input schema, and inline handler function.
    server.tool(
      "sage-opinion",
      `Send a prompt to a sage-like model for its opinion on a matter.

      Include the paths to all relevant files and/or directories that are pertinent to the matter.
      IMPORTANT: All paths must be absolute paths (e.g., /home/user/project/src), not relative paths.

      Do not worry about context limits; feel free to include as much as you think is relevant.
      If you include too much, it will error and tell you, and then you can include less.
      Err on the side of including more context.

      If the user mentions "sages" plural, or asks for a debate explicitly, set debate to true.`,
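  • An abridged, hypothetical sketch of how the registration pieces fit together using the MCP TypeScript SDK's server.tool(name, description, schema, handler) form. The actual src/index.ts wires in the full schema and handler shown above.
    // Abridged sketch only; the real registration uses the complete description,
    // schema, and handler quoted elsewhere in this reference.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { z } from "zod";

    const server = new McpServer({ name: "sage", version: "1.0.0" });

    server.tool(
      "sage-opinion",
      "Send a prompt to a sage-like model for its opinion on a matter.",
      {
        prompt: z.string(),
        paths: z.array(z.string()),
        debate: z.boolean(),
      },
      async ({ prompt, paths, debate }) => {
        // ...pack files, select a model, optionally run a debate (see the handler above)...
        return {
          content: [{ type: "text", text: `(opinion on ${paths.length} path(s) would go here)` }],
        };
      },
    );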
  • DebateStrategy implementation for ToolType.Opinion, used in debate mode: provides phase-specific prompts and judge parsing logic.
    class OpinionStrategy implements DebateStrategy {
      readonly toolType = ToolType.Opinion;

      /**
       * Default configuration for opinion debates
       */
      readonly configDefaults = {
        rounds: 1,
        logLevel: "info" as const,
      };

      /**
       * Generate a prompt for the specified debate phase
       */
      getPrompt(phase: DebatePhase, ctx: DebateContext): string {
        const template = loadPrompt(this.toolType, phase);

        // Replace placeholders based on the phase
        switch (phase) {
          case "generate":
            return template
              .replace(/\${modelId}/g, String(ctx.round))
              .replace(/\${userPrompt}/g, escapeUserInput(ctx.userPrompt));
          case "critique":
            const opinionEntries = ctx.candidates
              .map((opinion, idx) => `## OPINION ${idx + 1}\n${opinion.trim()}`)
              .join("\n\n");
            return template
              .replace(/\${modelId}/g, String(ctx.round))
              .replace(/\${planEntries}/g, opinionEntries);
          case "judge":
            const judgeOpinionEntries = ctx.candidates
              .map((opinion, idx) => `## OPINION ${idx + 1}\n${opinion.trim()}`)
              .join("\n\n");
            return template.replace(/\${planEntries}/g, judgeOpinionEntries);
          default:
            throw new Error(`Unknown debate phase: ${phase}`);
        }
      }

      /**
       * Parse the judge's decision to determine the winning opinion
       */
      parseJudge(
        raw: string,
        candidates: string[],
      ): { success: true; winnerIdx: number } | { success: false; error: string } {
        // check if there was only one candidate
        if (candidates.length === 1) {
          return { success: true, winnerIdx: 0 };
        }

        // Try to find explicit winner marker (e.g., [[WINNER: #]])
        const winnerMatch = raw.match(/\[\[WINNER:\s*(\d+)\]\]/i);
        if (winnerMatch && winnerMatch[1]) {
          const winnerIdx = parseInt(winnerMatch[1], 10) - 1; // Convert to 0-based
          if (winnerIdx >= 0 && winnerIdx < candidates.length) {
            return { success: true, winnerIdx };
          }
        }

        // For opinions, we don't allow synthesis - look for references to specific candidates
        for (let i = 0; i < candidates.length; i++) {
          const candidateNumber = i + 1;
          if (raw.toLowerCase().includes(`opinion ${candidateNumber}`)) {
            return { success: true, winnerIdx: i };
          }
        }

        // If we can't determine a winner, default to the first candidate
        return {
          success: true,
          winnerIdx: 0,
        };
      }
    }
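  • A quick illustration of parseJudge's precedence with hypothetical judge outputs: an explicit [[WINNER: n]] marker wins, then a textual reference to a numbered opinion, then a default of the first candidate.
    // Hypothetical inputs, not from the test suite; shows how the judge output
    // quoted above would be resolved to a winning candidate index.
    const strategy = new OpinionStrategy();
    const candidates = ["Opinion A text", "Opinion B text"];

    strategy.parseJudge("Both are solid, but [[WINNER: 2]] is more thorough.", candidates);
    // => { success: true, winnerIdx: 1 }

    strategy.parseJudge("I find opinion 1 more convincing overall.", candidates);
    // => { success: true, winnerIdx: 0 }

    strategy.parseJudge("No clear preference.", candidates);
    // => { success: true, winnerIdx: 0 }  (falls back to the first candidate)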

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jalehman/mcp-sage'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.