consult_llm

Consult advanced AI models for complex programming problems by providing code context and neutral questions to get unbiased analysis, code reviews, or implementation advice.

Instructions

Ask a more powerful AI for help with complex problems. Provide your question in the prompt field and always include relevant code files as context.

Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.

IMPORTANT: Ask neutral, open-ended questions. Avoid suggesting specific solutions or alternatives in your prompt as this can bias the analysis. Instead of "Should I use X or Y approach?", ask "What's the best approach for this problem?" Let the consultant LLM provide unbiased recommendations.
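
For orientation, here is a rough sketch of what a call to this tool could look like from an MCP client using the TypeScript SDK. The client name, the example question, the file paths, and the `npx consult-llm-mcp` launch command are illustrative assumptions rather than details taken from this page.

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'

// Assumption: the server can be launched via `npx consult-llm-mcp`;
// adjust the command to however you actually run this server.
const transport = new StdioClientTransport({
  command: 'npx',
  args: ['consult-llm-mcp'],
})

const client = new Client({ name: 'example-client', version: '1.0.0' })
await client.connect(transport)

// A neutral, open-ended question with the relevant files as context.
const result = await client.callTool({
  name: 'consult_llm',
  arguments: {
    prompt:
      'What is the best approach for handling concurrent writes to this cache?',
    files: ['src/cache.ts', 'src/cache.test.ts'],
  },
})

console.log(result.content)
```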

Input Schema

  • files (optional): Array of file paths to include as context. All files are added as context with file paths and code blocks.
  • prompt (required): Your question or request for the consultant LLM. Ask neutral, open-ended questions without suggesting specific solutions to avoid biasing the analysis.
  • model (optional, default: o3): LLM model to use. Prefer gpt-5.1-codex-max when user mentions Codex. This parameter is ignored when `web_mode` is `true`.
  • web_mode (optional, default: false): If true, copy the formatted prompt to the clipboard instead of querying an LLM. When true, the `model` parameter is ignored. Use this to paste the prompt into browser-based LLM services. IMPORTANT: Only use this when the user specifically requests it. When true, wait for the user to provide the external LLM's response before proceeding with any implementation.
  • git_diff (optional): Generate git diff output to include as context. Shows uncommitted changes by default. See the sketch after this list for the nested fields.
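
The git_diff parameter is itself an object whose nested fields (repo_path, files, base_ref) come from the schema shown under Implementation Reference below. A minimal sketch of an arguments payload that uses it; the concrete paths and question are made up for the example:

```typescript
// Illustrative consult_llm arguments; field names follow the schema,
// but the paths and question are invented for this example.
const args = {
  prompt: 'What risks do you see in the recent changes to session handling?',
  files: ['src/session.ts'],
  git_diff: {
    repo_path: '.',            // optional; defaults to the current working directory
    files: ['src/session.ts'], // required; at least one file
    base_ref: 'HEAD',          // optional; defaults to 'HEAD'
  },
}
```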

Implementation Reference

  • The main handler function for the 'consult_llm' tool. It parses the input arguments with the ConsultLlmArgs schema, resolves the model, builds a prompt from the provided files and git diff, handles web_mode by copying the prompt to the clipboard, and otherwise queries the LLM via queryLlm, logging the tool call, prompt, and response along the way.
```typescript
export async function handleConsultLlm(args: unknown) {
  const parseResult = ConsultLlmArgs.safeParse(args)
  if (!parseResult.success) {
    const errors = parseResult.error.issues
      .map((issue) => `${issue.path.join('.')}: ${issue.message}`)
      .join(', ')
    throw new Error(`Invalid request parameters: ${errors}`)
  }

  const {
    files,
    prompt: userPrompt,
    git_diff,
    web_mode,
    model: parsedModel,
  } = parseResult.data

  const providedModel =
    typeof args === 'object' &&
    args !== null &&
    Object.prototype.hasOwnProperty.call(
      args as Record<string, unknown>,
      'model',
    )
  const model: SupportedChatModel = providedModel
    ? parsedModel
    : (config.defaultModel ?? parsedModel)

  logToolCall('consult_llm', args)

  const isCliMode = isCliExecution(model)

  let prompt: string
  let filePaths: string[] | undefined

  if (web_mode || !isCliMode) {
    const contextFiles = files ? processFiles(files) : []
    const gitDiffOutput = git_diff
      ? generateGitDiff(git_diff.repo_path, git_diff.files, git_diff.base_ref)
      : undefined
    prompt = buildPrompt(userPrompt, contextFiles, gitDiffOutput)
  } else {
    filePaths = files ? files.map((f) => resolve(f)) : undefined
    const gitDiffOutput = git_diff
      ? generateGitDiff(git_diff.repo_path, git_diff.files, git_diff.base_ref)
      : undefined
    prompt = gitDiffOutput
      ? `## Git Diff\n\`\`\`diff\n${gitDiffOutput}\n\`\`\`\n\n${userPrompt}`
      : userPrompt
  }

  await logPrompt(model, prompt)

  if (web_mode) {
    const systemPrompt = getSystemPrompt(isCliMode)
    const fullPrompt = `# System Prompt

${systemPrompt}

# User Prompt

${prompt}`
    await copyToClipboard(fullPrompt)

    let responseMessage = '✓ Prompt copied to clipboard!\n\n'
    responseMessage +=
      'Please paste it into your browser-based LLM service and share the response here before I proceed with any implementation.'
    if (filePaths && filePaths.length > 0) {
      responseMessage += `\n\nNote: File paths were included:\n${filePaths.map((p) => ` - ${p}`).join('\n')}`
    }

    return {
      content: [{ type: 'text', text: responseMessage }],
    }
  }

  const { response, costInfo } = await queryLlm(prompt, model, filePaths)
  await logResponse(model, response, costInfo)

  return {
    content: [{ type: 'text', text: response }],
  }
}
```
  • Zod schema definition for the input arguments of the 'consult_llm' tool, including files, prompt, model, web_mode, and git_diff options.
```typescript
export const ConsultLlmArgs = z.object({
  files: z
    .array(z.string())
    .optional()
    .describe(
      'Array of file paths to include as context. All files are added as context with file paths and code blocks.',
    ),
  prompt: z
    .string()
    .describe(
      'Your question or request for the consultant LLM. Ask neutral, open-ended questions without suggesting specific solutions to avoid biasing the analysis.',
    ),
  model: SupportedChatModel.optional()
    .default(defaultModel)
    .describe(
      'LLM model to use. Prefer gpt-5.1-codex-max when user mentions Codex. This parameter is ignored when `web_mode` is `true`.',
    ),
  web_mode: z
    .boolean()
    .optional()
    .default(false)
    .describe(
      "If true, copy the formatted prompt to the clipboard instead of querying an LLM. When true, the `model` parameter is ignored. Use this to paste the prompt into browser-based LLM services. IMPORTANT: Only use this when the user specifically requests it. When true, wait for the user to provide the external LLM's response before proceeding with any implementation.",
    ),
  git_diff: z
    .object({
      repo_path: z
        .string()
        .optional()
        .describe(
          'Path to git repository (defaults to current working directory)',
        ),
      files: z
        .array(z.string())
        .min(1, 'At least one file is required for git diff')
        .describe('Specific files to include in diff'),
      base_ref: z
        .string()
        .optional()
        .default('HEAD')
        .describe(
          'Git reference to compare against (e.g., "HEAD", "main", commit hash)',
        ),
    })
    .optional()
    .describe(
      'Generate git diff output to include as context. Shows uncommitted changes by default.',
    ),
})
```
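To illustrate how the defaults in this schema behave, a small sketch of validating a minimal payload (assuming ConsultLlmArgs is imported from this module; the question text is made up):

```typescript
// A prompt alone satisfies the schema; model and web_mode pick up their
// schema defaults, and files/git_diff simply stay undefined.
const parsed = ConsultLlmArgs.safeParse({
  prompt: 'How would you structure retries for this API client?',
})

if (parsed.success) {
  console.log(parsed.data.model)    // the default model ('o3' per the Input Schema above)
  console.log(parsed.data.web_mode) // false
} else {
  console.error(parsed.error.issues)
}
```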
  • src/server.ts:49-53 (registration)
    Registers the tool schema for the ListTools request, returning the 'consult_llm' tool schema.
```typescript
server.setRequestHandler(ListToolsRequestSchema, () => {
  return {
    tools: [toolSchema],
  }
})
```
  • src/server.ts:159-171 (registration)
    Registers the CallTool request handler, dispatching to handleConsultLlm if the tool name is 'consult_llm'.
```typescript
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'consult_llm') {
    try {
      return await handleConsultLlm(request.params.arguments)
    } catch (error) {
      throw new Error(
        `LLM query failed: ${error instanceof Error ? error.message : String(error)}`,
      )
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`)
})
```
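Both registration snippets assume a server instance already exists. For context, a minimal sketch of how such a server is typically created and connected over stdio with the MCP TypeScript SDK; the name and version strings here are assumptions, and the actual setup in src/server.ts may differ:

```typescript
import { Server } from '@modelcontextprotocol/sdk/server/index.js'
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'

// Assumed metadata; the real values live in src/server.ts.
const server = new Server(
  { name: 'consult-llm-mcp', version: '0.0.0' },
  { capabilities: { tools: {} } },
)

// ...the setRequestHandler registrations shown above go here...

const transport = new StdioServerTransport()
await server.connect(transport)
```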
  • Defines the MCP tool schema for 'consult_llm', including name, description, and inputSchema derived from ConsultLlmArgs.
```typescript
export const toolSchema = {
  name: 'consult_llm',
  description: `Ask a more powerful AI for help with complex problems. Provide your question in the prompt field and always include relevant code files as context.

Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.

IMPORTANT: Ask neutral, open-ended questions. Avoid suggesting specific solutions or alternatives in your prompt as this can bias the analysis. Instead of "Should I use X or Y approach?", ask "What's the best approach for this problem?" Let the consultant LLM provide unbiased recommendations.`,
  inputSchema: consultLlmInputSchema,
} as const
```

MCP directory API

We provide all the information about MCP servers via our MCP API.

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/raine/consult-llm-mcp'
```
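
The same request from TypeScript, assuming the endpoint returns JSON (the exact response shape is not documented here):

```typescript
// Fetch this server's directory entry from the Glama MCP API.
const res = await fetch(
  'https://glama.ai/api/mcp/v1/servers/raine/consult-llm-mcp',
)
if (!res.ok) throw new Error(`Request failed: ${res.status}`)
const entry = await res.json()
console.log(entry)
```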

If you have feedback or need assistance with the MCP directory API, please join our Discord server.