Glama

Consult LLM MCP

by raine

consult_llm

Consult advanced AI models for complex programming problems by providing code context and neutral questions to get unbiased analysis, code reviews, or implementation advice.

Instructions

Ask a more powerful AI for help with complex problems. Provide your question in the prompt field and always include relevant code files as context.

Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.

IMPORTANT: Ask neutral, open-ended questions. Avoid suggesting specific solutions or alternatives in your prompt as this can bias the analysis. Instead of "Should I use X or Y approach?", ask "What's the best approach for this problem?" Let the consultant LLM provide unbiased recommendations.
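As an illustration of the neutral-question guidance, here is a hypothetical pair of tool-call argument objects (file paths and prompts are invented for the example):

```typescript
// Hypothetical consult_llm arguments; paths and prompts are illustrative.
// The biased version names specific alternatives, which can steer the answer.
const biased = {
  prompt: 'Should I use a mutex or a channel to coordinate these workers?',
  files: ['src/worker.ts'],
  model: 'o3',
}

// The neutral version describes the problem and leaves the solution open.
const neutral = {
  prompt:
    'What is the best way to coordinate access to shared state between these workers?',
  files: ['src/worker.ts'],
  model: 'o3',
}
```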

Input Schema

files (optional): Array of file paths to include as context. All files are added as context with file paths and code blocks.
prompt (required): Your question or request for the consultant LLM. Ask neutral, open-ended questions without suggesting specific solutions to avoid biasing the analysis.
model (optional, default: o3): LLM model to use. Prefer gpt-5.1-codex-max when the user mentions Codex. This parameter is ignored when `web_mode` is `true`.
web_mode (optional, default: false): If true, copy the formatted prompt to the clipboard instead of querying an LLM. When true, the `model` parameter is ignored. Use this to paste the prompt into browser-based LLM services. IMPORTANT: Only use this when the user specifically requests it. When true, wait for the user to provide the external LLM's response before proceeding with any implementation.
git_diff (optional): Generate git diff output to include as context. Shows uncommitted changes by default.
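Putting the parameters together, a hypothetical call that includes both file context and uncommitted changes might look like this (paths are illustrative):

```typescript
// Illustrative consult_llm arguments combining files and a git diff.
const args = {
  prompt: 'Review these changes for correctness and potential edge cases.',
  files: ['src/handler.ts', 'src/schema.ts'],
  model: 'o3',        // ignored when web_mode is true
  web_mode: false,
  git_diff: {
    files: ['src/handler.ts'], // at least one file is required for git_diff
    base_ref: 'HEAD',          // default: diff against uncommitted changes
  },
}
```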

Implementation Reference

  • The main handler function for the 'consult_llm' tool. Parses input arguments using ConsultLlmArgs schema, determines the model, builds a prompt incorporating files and git diff if provided, handles web_mode by copying to clipboard, otherwise queries the LLM via queryLlm, logs everything, and returns the response.
    export async function handleConsultLlm(args: unknown) {
      const parseResult = ConsultLlmArgs.safeParse(args)
      if (!parseResult.success) {
        const errors = parseResult.error.issues
          .map((issue) => `${issue.path.join('.')}: ${issue.message}`)
          .join(', ')
        throw new Error(`Invalid request parameters: ${errors}`)
      }
    
      const {
        files,
        prompt: userPrompt,
        git_diff,
        web_mode,
        model: parsedModel,
      } = parseResult.data
    
      const providedModel =
        typeof args === 'object' &&
        args !== null &&
        Object.prototype.hasOwnProperty.call(
          args as Record<string, unknown>,
          'model',
        )
    
      const model: SupportedChatModel = providedModel
        ? parsedModel
        : (config.defaultModel ?? parsedModel)
    
      logToolCall('consult_llm', args)
    
      const isCliMode = isCliExecution(model)
    
      let prompt: string
      let filePaths: string[] | undefined
    
      if (web_mode || !isCliMode) {
        const contextFiles = files ? processFiles(files) : []
    
        const gitDiffOutput = git_diff
          ? generateGitDiff(git_diff.repo_path, git_diff.files, git_diff.base_ref)
          : undefined
    
        prompt = buildPrompt(userPrompt, contextFiles, gitDiffOutput)
      } else {
        filePaths = files ? files.map((f) => resolve(f)) : undefined
    
        const gitDiffOutput = git_diff
          ? generateGitDiff(git_diff.repo_path, git_diff.files, git_diff.base_ref)
          : undefined
    
        prompt = gitDiffOutput
          ? `## Git Diff\n\`\`\`diff\n${gitDiffOutput}\n\`\`\`\n\n${userPrompt}`
          : userPrompt
      }
    
      await logPrompt(model, prompt)
    
      if (web_mode) {
        const systemPrompt = getSystemPrompt(isCliMode)
        const fullPrompt = `# System Prompt
    
    ${systemPrompt}
    
    # User Prompt
    
    ${prompt}`
    
        await copyToClipboard(fullPrompt)
    
        let responseMessage = '✓ Prompt copied to clipboard!\n\n'
        responseMessage +=
          'Please paste it into your browser-based LLM service and share the response here before I proceed with any implementation.'
    
        if (filePaths && filePaths.length > 0) {
          responseMessage += `\n\nNote: File paths were included:\n${filePaths.map((p) => `  - ${p}`).join('\n')}`
        }
    
        return {
          content: [{ type: 'text', text: responseMessage }],
        }
      }
    
      const { response, costInfo } = await queryLlm(prompt, model, filePaths)
      await logResponse(model, response, costInfo)
    
      return {
        content: [{ type: 'text', text: response }],
      }
    }
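The helpers `processFiles`, `generateGitDiff`, and `buildPrompt` are not shown on this page. A minimal sketch of how `buildPrompt` might assemble the context, assuming each context file carries a `path` and `content`, could look like this (not the actual implementation):

```typescript
// Hypothetical sketch of buildPrompt; the real implementation is not shown here.
interface ContextFile {
  path: string
  content: string
}

function buildPrompt(
  userPrompt: string,
  contextFiles: ContextFile[],
  gitDiffOutput?: string,
): string {
  const sections: string[] = []

  // Each file becomes a section labelled with its path, wrapped in a code block.
  for (const file of contextFiles) {
    sections.push(`## ${file.path}\n\`\`\`\n${file.content}\n\`\`\``)
  }

  // Optional diff context, mirroring the inline formatting used in CLI mode above.
  if (gitDiffOutput) {
    sections.push(`## Git Diff\n\`\`\`diff\n${gitDiffOutput}\n\`\`\``)
  }

  // The user's question goes last so it reads as the final instruction.
  sections.push(userPrompt)
  return sections.join('\n\n')
}
```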
  • Zod schema definition for the input arguments of the 'consult_llm' tool, including files, prompt, model, web_mode, and git_diff options.
    export const ConsultLlmArgs = z.object({
      files: z
        .array(z.string())
        .optional()
        .describe(
          'Array of file paths to include as context. All files are added as context with file paths and code blocks.',
        ),
      prompt: z
        .string()
        .describe(
          'Your question or request for the consultant LLM. Ask neutral, open-ended questions without suggesting specific solutions to avoid biasing the analysis.',
        ),
      model: SupportedChatModel.optional()
        .default(defaultModel)
        .describe(
          'LLM model to use. Prefer gpt-5.1-codex-max when user mentions Codex. This parameter is ignored when `web_mode` is `true`.',
        ),
      web_mode: z
        .boolean()
        .optional()
        .default(false)
        .describe(
          "If true, copy the formatted prompt to the clipboard instead of querying an LLM. When true, the `model` parameter is ignored. Use this to paste the prompt into browser-based LLM services. IMPORTANT: Only use this when the user specifically requests it. When true, wait for the user to provide the external LLM's response before proceeding with any implementation.",
        ),
      git_diff: z
        .object({
          repo_path: z
            .string()
            .optional()
            .describe(
              'Path to git repository (defaults to current working directory)',
            ),
          files: z
            .array(z.string())
            .min(1, 'At least one file is required for git diff')
            .describe('Specific files to include in diff'),
          base_ref: z
            .string()
            .optional()
            .default('HEAD')
            .describe(
              'Git reference to compare against (e.g., "HEAD", "main", commit hash)',
            ),
        })
        .optional()
        .describe(
          'Generate git diff output to include as context. Shows uncommitted changes by default.',
        ),
    })
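The defaulting behavior encoded in this schema can be sketched in plain TypeScript (without zod; the actual tool uses the zod schema above):

```typescript
// Plain-TypeScript sketch of the schema's defaulting rules (the real code uses zod).
interface RawArgs {
  prompt: string
  files?: string[]
  model?: string
  web_mode?: boolean
  git_diff?: { repo_path?: string; files: string[]; base_ref?: string }
}

function applyDefaults(raw: RawArgs) {
  return {
    ...raw,
    model: raw.model ?? 'o3',        // schema default for model
    web_mode: raw.web_mode ?? false, // schema default for web_mode
    git_diff: raw.git_diff
      ? { ...raw.git_diff, base_ref: raw.git_diff.base_ref ?? 'HEAD' }
      : undefined,
  }
}
```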
  • src/server.ts:49-53 (registration)
Registers the ListTools request handler, which returns the 'consult_llm' tool schema.
    server.setRequestHandler(ListToolsRequestSchema, () => {
      return {
        tools: [toolSchema],
      }
    })
  • src/server.ts:159-171 (registration)
    Registers the CallTool request handler, dispatching to handleConsultLlm if the tool name is 'consult_llm'.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      if (request.params.name === 'consult_llm') {
        try {
          return await handleConsultLlm(request.params.arguments)
        } catch (error) {
          throw new Error(
            `LLM query failed: ${error instanceof Error ? error.message : String(error)}`,
          )
        }
      }
    
      throw new Error(`Unknown tool: ${request.params.name}`)
    })
  • Defines the MCP tool schema for 'consult_llm', including name, description, and inputSchema derived from ConsultLlmArgs.
    export const toolSchema = {
      name: 'consult_llm',
      description: `Ask a more powerful AI for help with complex problems. Provide your question in the prompt field and always include relevant code files as context.
    
    Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.
    
    IMPORTANT: Ask neutral, open-ended questions. Avoid suggesting specific solutions or alternatives in your prompt as this can bias the analysis. Instead of "Should I use X or Y approach?", ask "What's the best approach for this problem?" Let the consultant LLM provide unbiased recommendations.`,
      inputSchema: consultLlmInputSchema,
    } as const
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool queries an external LLM, requires specific input formatting (neutral questions with context), and mentions the web_mode alternative for browser-based services. It doesn't cover rate limits, authentication needs, or response format details, but provides substantial operational guidance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose, input requirements, usage examples, and important guidelines. Each sentence adds value, though it could be slightly more concise by combining some guidance about question formulation. The information is front-loaded with the core purpose first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, nested objects) and lack of annotations/output schema, the description provides substantial context about how to use the tool effectively. It covers the tool's purpose, input requirements, and behavioral expectations well. The main gap is the absence of information about return values or response format, but this is partially compensated by the detailed input guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some context about the 'prompt' parameter (neutral, open-ended questions) and implies usage of 'files' for context, but doesn't provide additional semantic meaning beyond what's in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Ask a more powerful AI for help with complex problems.' It specifies the verb ('ask'), resource ('more powerful AI'), and scope ('complex problems'), with no sibling tools to differentiate from. The description goes beyond the name 'consult_llm' by explaining what kind of consultation is provided.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when and how to use this tool: 'Provide your question in the prompt field and always include relevant code files as context.' It gives specific examples of use cases (code implementation, code review, bug analysis, architecture advice) and detailed instructions on question formulation (neutral, open-ended questions). Since there are no sibling tools, no alternative comparison is needed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/raine/consult-llm-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server