Glama

consult_llm

Leverage advanced AI models for unbiased, in-depth analysis on complex coding challenges. Submit your question with optional code context for code review, bug fixes, architecture advice, or implementation guidance.

Instructions

Ask a more powerful AI for help with complex problems. Provide your question in the prompt field and optionally include relevant code files as context.

Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.

IMPORTANT: Ask neutral, open-ended questions. Avoid suggesting specific solutions or alternatives in your prompt as this can bias the analysis. Instead of "Should I use X or Y approach?", ask "What's the best approach for this problem?" Let the consultant LLM provide unbiased recommendations.
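For instance, a call that follows this guidance keeps the prompt open-ended and supplies code as context (the file paths are illustrative):

```json
{
  "prompt": "What is the best approach for invalidating this cache when the underlying data changes?",
  "files": ["src/cache.ts", "src/db.ts"]
}
```

A biased version of the same request would name candidates up front ("Should I use TTL expiry or event-based invalidation?"), steering the consultant toward those two options.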

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `files` | No | Array of file paths to include as context. All files are added as context with file paths and code blocks. | |
| `git_diff` | No | Generate git diff output to include as context. Shows uncommitted changes by default. | |
| `model` | No | LLM model to use. | `o3` |
| `prompt` | Yes | Your question or request for the consultant LLM. Ask neutral, open-ended questions without suggesting specific solutions to avoid biasing the analysis. | |
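The `files` description ("added as context with file paths and code blocks") suggests a prompt layout along the following lines. This is an illustrative sketch under that assumption, not the server's actual prompt builder; `ContextFile` and `buildContext` are hypothetical names:

```typescript
// Hypothetical sketch: render each context file as its path plus a fenced
// code block, then append the user's question at the end.
interface ContextFile {
  path: string
  content: string
}

function buildContext(userPrompt: string, files: ContextFile[]): string {
  const fileBlocks = files.map(
    (f) => `## ${f.path}\n\`\`\`\n${f.content}\n\`\`\``,
  )
  return [...fileBlocks, userPrompt].join('\n\n')
}

const prompt = buildContext(
  'What is the best approach to cache these results?',
  [{ path: 'src/cache.ts', content: 'export const cache = new Map()' }],
)
// `prompt` now contains the file heading, a fenced block, and the question.
```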

Implementation Reference

  • Main handler function for the 'consult_llm' tool. Parses arguments using the ConsultLlmArgs schema, determines the model, builds the prompt with file and git diff context, handles web_mode by copying to the clipboard, and otherwise queries the LLM via queryLlm and returns the response.

```typescript
export async function handleConsultLlm(args: unknown) {
  const parseResult = ConsultLlmArgs.safeParse(args)
  if (!parseResult.success) {
    const errors = parseResult.error.issues
      .map((issue) => `${issue.path.join('.')}: ${issue.message}`)
      .join(', ')
    throw new Error(`Invalid request parameters: ${errors}`)
  }
  const {
    files,
    prompt: userPrompt,
    git_diff,
    web_mode,
    model: parsedModel,
  } = parseResult.data
  const providedModel =
    typeof args === 'object' &&
    args !== null &&
    Object.prototype.hasOwnProperty.call(
      args as Record<string, unknown>,
      'model',
    )
  const model: SupportedChatModel = providedModel
    ? parsedModel
    : (config.defaultModel ?? parsedModel)
  logToolCall('consult_llm', args)
  const isCliMode = isCliExecution(model)
  let prompt: string
  let filePaths: string[] | undefined
  if (web_mode || !isCliMode) {
    const contextFiles = files ? processFiles(files) : []
    const gitDiffOutput = git_diff
      ? generateGitDiff(git_diff.repo_path, git_diff.files, git_diff.base_ref)
      : undefined
    prompt = buildPrompt(userPrompt, contextFiles, gitDiffOutput)
  } else {
    filePaths = files ? files.map((f) => resolve(f)) : undefined
    const gitDiffOutput = git_diff
      ? generateGitDiff(git_diff.repo_path, git_diff.files, git_diff.base_ref)
      : undefined
    prompt = gitDiffOutput
      ? `## Git Diff\n\`\`\`diff\n${gitDiffOutput}\n\`\`\`\n\n${userPrompt}`
      : userPrompt
  }
  await logPrompt(model, prompt)
  if (web_mode) {
    const systemPrompt = getSystemPrompt(isCliMode)
    const fullPrompt = `# System Prompt

${systemPrompt}

# User Prompt

${prompt}`
    await copyToClipboard(fullPrompt)
    let responseMessage = '✓ Prompt copied to clipboard!\n\n'
    responseMessage +=
      'Please paste it into your browser-based LLM service and share the response here before I proceed with any implementation.'
    if (filePaths && filePaths.length > 0) {
      responseMessage += `\n\nNote: File paths were included:\n${filePaths
        .map((p) => ` - ${p}`)
        .join('\n')}`
    }
    return {
      content: [{ type: 'text', text: responseMessage }],
    }
  }
  const { response, costInfo } = await queryLlm(prompt, model, filePaths)
  await logResponse(model, response, costInfo)
  return {
    content: [{ type: 'text', text: response }],
  }
}
```
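The model-selection branch is subtle: Zod fills the `o3` default during parsing, so the handler checks `hasOwnProperty` on the raw arguments to distinguish a caller-supplied `model` from the schema default, and lets a configured `config.defaultModel` win only when the caller omitted the field. A standalone sketch of that precedence (the function and argument names here are illustrative, not the server's API):

```typescript
// Sketch of the precedence rule: caller-supplied model > config default >
// schema default ('o3'). After Zod parsing, `model` is always populated,
// so the raw arguments must be inspected to tell "caller sent 'o3'" apart
// from "caller omitted the field".
function resolveModel(
  rawArgs: Record<string, unknown>,
  parsedModel: string,
  configDefault?: string,
): string {
  const providedModel = Object.prototype.hasOwnProperty.call(rawArgs, 'model')
  return providedModel ? parsedModel : (configDefault ?? parsedModel)
}

const explicit = resolveModel(
  { model: 'gpt-5.1-codex-max' },
  'gpt-5.1-codex-max',
  'o3',
) // caller's choice wins over the config default
const fromConfig = resolveModel({}, 'o3', 'gemini-2.5-pro') // config default wins
const fallback = resolveModel({}, 'o3') // schema default remains
```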
  • src/server.ts:159-171 (registration)
    Registers the CallToolRequest handler, which dispatches to handleConsultLlm when the tool name is 'consult_llm'.

```typescript
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === 'consult_llm') {
    try {
      return await handleConsultLlm(request.params.arguments)
    } catch (error) {
      throw new Error(
        `LLM query failed: ${error instanceof Error ? error.message : String(error)}`,
      )
    }
  }
  throw new Error(`Unknown tool: ${request.params.name}`)
})
```
  • src/server.ts:49-53 (registration)
    Registers the ListToolsRequest handler, which returns the schema for the 'consult_llm' tool.

```typescript
server.setRequestHandler(ListToolsRequestSchema, () => {
  return {
    tools: [toolSchema],
  }
})
```
  • Zod schema defining the input arguments for the 'consult_llm' tool, including the files, prompt, model, web_mode, and git_diff options.

```typescript
export const ConsultLlmArgs = z.object({
  files: z
    .array(z.string())
    .optional()
    .describe(
      'Array of file paths to include as context. All files are added as context with file paths and code blocks.',
    ),
  prompt: z
    .string()
    .describe(
      'Your question or request for the consultant LLM. Ask neutral, open-ended questions without suggesting specific solutions to avoid biasing the analysis.',
    ),
  model: SupportedChatModel.optional()
    .default('o3')
    .describe(
      'LLM model to use. Prefer gpt-5.1-codex-max when user mentions Codex. This parameter is ignored when `web_mode` is `true`.',
    ),
  web_mode: z
    .boolean()
    .optional()
    .default(false)
    .describe(
      "If true, copy the formatted prompt to the clipboard instead of querying an LLM. When true, the `model` parameter is ignored. Use this to paste the prompt into browser-based LLM services. IMPORTANT: Only use this when the user specifically requests it. When true, wait for the user to provide the external LLM's response before proceeding with any implementation.",
    ),
  git_diff: z
    .object({
      repo_path: z
        .string()
        .optional()
        .describe(
          'Path to git repository (defaults to current working directory)',
        ),
      files: z
        .array(z.string())
        .min(1, 'At least one file is required for git diff')
        .describe('Specific files to include in diff'),
      base_ref: z
        .string()
        .optional()
        .default('HEAD')
        .describe(
          'Git reference to compare against (e.g., "HEAD", "main", commit hash)',
        ),
    })
    .optional()
    .describe(
      'Generate git diff output to include as context. Shows uncommitted changes by default.',
    ),
})
```
  • Defines the JSON schema and metadata (name, description, inputSchema) for the 'consult_llm' tool, used in tool listing.

```typescript
const consultLlmInputSchema = z.toJSONSchema(ConsultLlmArgs, {
  target: 'openapi-3.0',
})

export const toolSchema = {
  name: 'consult_llm',
  description: `Ask a more powerful AI for help with complex problems. Provide your question in the prompt field and always include relevant code files as context.

Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.

IMPORTANT: Ask neutral, open-ended questions. Avoid suggesting specific solutions or alternatives in your prompt as this can bias the analysis. Instead of "Should I use X or Y approach?", ask "What's the best approach for this problem?" Let the consultant LLM provide unbiased recommendations.`,
  inputSchema: consultLlmInputSchema,
} as const
```
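For reference, `z.toJSONSchema` renders the Zod definition above into a plain JSON Schema object. Abridged, with `description` strings omitted, the resulting `inputSchema` should look roughly like this; the exact rendering of `model` depends on how `SupportedChatModel` is defined (it is shown here as a plain string for simplicity):

```json
{
  "type": "object",
  "properties": {
    "files": { "type": "array", "items": { "type": "string" } },
    "prompt": { "type": "string" },
    "model": { "type": "string", "default": "o3" },
    "web_mode": { "type": "boolean", "default": false },
    "git_diff": {
      "type": "object",
      "properties": {
        "repo_path": { "type": "string" },
        "files": {
          "type": "array",
          "items": { "type": "string" },
          "minItems": 1
        },
        "base_ref": { "type": "string", "default": "HEAD" }
      },
      "required": ["files"]
    }
  },
  "required": ["prompt"]
}
```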


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/raine/consult-llm-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.