Deep Reasoning

deep-reasoning

Solve complex problems using advanced AI reasoning models from multiple providers with integrated search capabilities.

Instructions

Use advanced AI models for deep reasoning and complex problem-solving. Supports GPT-5 for OpenAI/Azure and Gemini 2.5 Pro with Google Search.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| `provider` | No | AI provider to use (defaults to Azure if configured, otherwise OpenAI) | |
| `prompt` | Yes | The complex question or problem requiring deep reasoning | |
| `model` | No | Specific model to use (optional, will use provider default) | |
| `temperature` | No | Temperature for response generation | 0.7 |
| `maxOutputTokens` | No | Maximum tokens in response | |
| `systemPrompt` | No | System prompt to set context for reasoning | |
| `reasoningEffort` | No | Reasoning effort level (for certain reasoning models) | high |
| `enableSearch` | No | Enable Google Search for Gemini models | true |
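
To make the schema concrete, here is a sketch of an arguments object a client might pass to the tool, along with the minimal validity check the schema implies. The field names come from the table above; the prompt text and the `isValid` check are illustrative, not part of the codebase.

```typescript
// Example arguments for the deep-reasoning tool. Only `prompt` is required;
// everything else falls back to the schema defaults shown above.
const args = {
  provider: "gemini",
  prompt: "Compare the trade-offs of optimistic vs. pessimistic concurrency control.",
  temperature: 0.7,
  reasoningEffort: "high",
  enableSearch: true,
};

// Minimal check mirroring the schema constraints (prompt required,
// temperature between 0 and 2 when provided).
const isValid =
  typeof args.prompt === "string" &&
  args.prompt.length > 0 &&
  (args.temperature === undefined ||
    (args.temperature >= 0 && args.temperature <= 2));

console.log(isValid); // true
```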

Implementation Reference

  • The main handler function implementing the deep-reasoning tool. Selects AI provider, constructs system prompt for reasoning, calls provider.generateText with optimized parameters including reasoning effort and optional search grounding for Gemini.
    async handleDeepReasoning(params: z.infer<typeof DeepReasoningSchema>) {
      // Use provided provider or get the preferred one (Azure if configured)
      const providerName = params.provider || (await this.providerManager.getPreferredProvider(['openai', 'azure']));
      const provider = await this.providerManager.getProvider(providerName);
      
      // Build a comprehensive system prompt for deep reasoning
      const systemPrompt = params.systemPrompt || `You are an expert AI assistant specializing in deep reasoning and complex problem-solving. 
      Approach problems systematically, consider multiple perspectives, and provide thorough, well-reasoned responses.
      Break down complex problems into components, analyze each thoroughly, and synthesize insights.`;
    
      const response = await provider.generateText({
        prompt: params.prompt,
        model: params.model,
        temperature: params.temperature,
        maxOutputTokens: params.maxOutputTokens,
        systemPrompt,
        reasoningEffort: params.reasoningEffort,
        useSearchGrounding: providerName === "gemini" ? (params.enableSearch !== false) : false,
        toolName: 'deep-reasoning',
      });
    
      return {
        content: [
          {
            type: "text",
            text: response.text,
          },
        ],
        metadata: {
        provider: providerName, // report the resolved provider, not the (possibly undefined) param
          model: response.model,
          usage: response.usage,
          ...response.metadata,
        },
      };
    }
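The `useSearchGrounding` expression in the handler reduces to a small pure function. The sketch below extracts it for clarity; `shouldGroundSearch` is a hypothetical helper name, not something that exists in the codebase.

```typescript
// The search-grounding decision from handleDeepReasoning, in isolation.
// Grounding applies only to Gemini, and stays on unless explicitly disabled.
function shouldGroundSearch(providerName: string, enableSearch?: boolean): boolean {
  return providerName === "gemini" ? enableSearch !== false : false;
}

console.log(shouldGroundSearch("gemini", undefined)); // true (default is on)
console.log(shouldGroundSearch("gemini", false));     // false
console.log(shouldGroundSearch("openai", true));      // false
```

Note the `enableSearch !== false` test: an omitted flag behaves like `true`, matching the schema's `default(true)`, while non-Gemini providers never receive grounding regardless of the flag.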
  • src/server.ts:244-252 (registration)
    Tool registration in the MCP server, specifying title, description, input schema, and delegating execution to AIToolHandlers.handleDeepReasoning via getHandlers()
    // Register deep-reasoning tool
    server.registerTool("deep-reasoning", {
      title: "Deep Reasoning",
      description: "Use advanced AI models for deep reasoning and complex problem-solving. Supports GPT-5 for OpenAI/Azure and Gemini 2.5 Pro with Google Search.",
      inputSchema: DeepReasoningSchema.shape,
    }, async (args) => {
      const aiHandlers = await getHandlers();
      return await aiHandlers.handleDeepReasoning(args);
    });
  • Zod input schema for the deep-reasoning tool, defining parameters like provider, prompt, model, temperature, and Gemini-specific search enablement.
    const DeepReasoningSchema = z.object({
      provider: z.enum(["openai", "gemini", "azure", "grok"]).optional().describe("AI provider to use (defaults to Azure if configured, otherwise OpenAI)"),
      prompt: z.string().describe("The complex question or problem requiring deep reasoning"),
      model: z.string().optional().describe("Specific model to use (optional, will use provider default)"),
      temperature: z.number().min(0).max(2).optional().default(0.7).describe("Temperature for response generation"),
      maxOutputTokens: z.number().positive().optional().describe("Maximum tokens in response"),
      systemPrompt: z.string().optional().describe("System prompt to set context for reasoning"),
      reasoningEffort: z.enum(["low", "medium", "high"]).optional().default("high").describe("Reasoning effort level (for certain reasoning models)"),
      enableSearch: z.boolean().optional().default(true).describe("Enable Google Search for Gemini models"),
    });
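At parse time, Zod fills in the `.default()` values for any omitted optional fields. The sketch below mirrors that default-application behavior without pulling in zod itself; `applyDefaults` is an illustrative stand-in, and the default values (0.7, "high", true) are taken from `DeepReasoningSchema` above.

```typescript
// Mirrors how DeepReasoningSchema's defaults behave when fields are omitted.
function applyDefaults(input: {
  prompt: string;
  temperature?: number;
  reasoningEffort?: "low" | "medium" | "high";
  enableSearch?: boolean;
}) {
  return {
    ...input,
    temperature: input.temperature ?? 0.7,
    reasoningEffort: input.reasoningEffort ?? "high",
    enableSearch: input.enableSearch ?? true,
  };
}

const parsed = applyDefaults({ prompt: "Why is the sky blue?" });
console.log(parsed.temperature, parsed.reasoningEffort, parsed.enableSearch);
// 0.7 "high" true
```

Using `??` rather than `||` matters here: an explicit `temperature: 0` is a legal value and must not be replaced by the default.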
  • src/server.ts:513-530 (registration)
    Prompt registration for deep-reasoning tool, providing a default user message template for invocation via prompts.
    server.registerPrompt("deep-reasoning", {
      title: "Deep Reasoning", 
      description: "Use advanced AI reasoning to solve complex problems requiring deep analysis",
      argsSchema: {
        prompt: z.string().optional(),
        provider: z.string().optional(),
        model: z.string().optional(),
        systemPrompt: z.string().optional(),
      },
    }, (args) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Use advanced AI reasoning to solve this complex problem: ${args.prompt || 'Please provide a complex problem that requires deep reasoning and analysis.'}${args.provider ? ` (using ${args.provider} provider)` : ''}${args.systemPrompt ? `\n\nSystem context: ${args.systemPrompt}` : ''}`
        }
      }]
    }));
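The message template above can be read as a standalone function of the prompt arguments. The sketch below extracts it verbatim for illustration; `renderDeepReasoningPrompt` is a hypothetical name, but the template string is copied from the registration code.

```typescript
// The deep-reasoning prompt template, extracted as a pure function.
function renderDeepReasoningPrompt(args: {
  prompt?: string;
  provider?: string;
  systemPrompt?: string;
}): string {
  return `Use advanced AI reasoning to solve this complex problem: ${args.prompt || 'Please provide a complex problem that requires deep reasoning and analysis.'}${args.provider ? ` (using ${args.provider} provider)` : ''}${args.systemPrompt ? `\n\nSystem context: ${args.systemPrompt}` : ''}`;
}

console.log(renderDeepReasoningPrompt({ prompt: "Prove P != NP", provider: "openai" }));
// Use advanced AI reasoning to solve this complex problem: Prove P != NP (using openai provider)
```

With no arguments at all, the template degrades gracefully to a placeholder asking the user to supply a problem, since every field is optional.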
  • Helper function that lazily initializes and returns the AIToolHandlers instance with ProviderManager, used by all tool registrations to delegate to specific handlers.
    async function getHandlers() {
      if (!handlers) {
        const { ConfigManager } = require("./config/manager");
        const { ProviderManager } = require("./providers/manager");
        const { AIToolHandlers } = require("./handlers/ai-tools");
        
        const configManager = new ConfigManager();
        
        // Load config and set environment variables
        const config = await configManager.getConfig();
        if (config.openai?.apiKey) {
          process.env.OPENAI_API_KEY = config.openai.apiKey;
        }
        if (config.openai?.baseURL) {
          process.env.OPENAI_BASE_URL = config.openai.baseURL;
        }
        if (config.google?.apiKey) {
          process.env.GOOGLE_API_KEY = config.google.apiKey;
        }
        if (config.google?.baseURL) {
          process.env.GOOGLE_BASE_URL = config.google.baseURL;
        }
        if (config.azure?.apiKey) {
          process.env.AZURE_API_KEY = config.azure.apiKey;
        }
        if (config.azure?.baseURL) {
          process.env.AZURE_BASE_URL = config.azure.baseURL;
        }
        if (config.xai?.apiKey) {
          process.env.XAI_API_KEY = config.xai.apiKey;
        }
        if (config.xai?.baseURL) {
          process.env.XAI_BASE_URL = config.xai.baseURL;
        }
        
        providerManager = new ProviderManager(configManager);
        handlers = new AIToolHandlers(providerManager);
      }
      
      return handlers;
    }
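Stripped of the provider wiring, `getHandlers` is a lazy-initialization memoizer: the expensive setup runs once, and every later call returns the cached instance. A minimal sketch of that pattern, ignoring the async aspect for brevity (`makeLazy` is an illustrative helper, not part of the codebase):

```typescript
// Lazy initialization: run `init` on first call, reuse the result afterwards.
function makeLazy<T>(init: () => T): () => T {
  let cached: T | undefined;
  return () => {
    if (cached === undefined) cached = init();
    return cached;
  };
}

let initCount = 0;
const getThing = makeLazy(() => {
  initCount++; // counts how many times setup actually runs
  return { ready: true };
});

getThing();
getThing();
console.log(initCount); // 1
```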
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. While it mentions support for specific models and Google Search integration, it doesn't describe important behavioral aspects like rate limits, authentication requirements, cost implications, response formats, or error handling. For a complex AI tool with 8 parameters, this leaves significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise, with two sentences that efficiently convey the core functionality and model support. It's front-loaded with the main purpose and follows with specific implementation details. There's no wasted language, though it could benefit from slightly more structure to separate purpose from technical details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of an 8-parameter AI tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns, how to interpret results, error conditions, or practical constraints. While the schema covers parameter details, the description fails to provide the broader context needed to effectively use this tool for 'deep reasoning' tasks, especially compared to the many alternative tools available.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal parameter semantics beyond what's in the schema: it mentions GPT-5 and Gemini 2.5 Pro specifically (which relate to the 'model' parameter) and Google Search for Gemini (related to 'enableSearch'), but these are already implied in the schema descriptions. With high schema coverage, the baseline is 3 even without significant parameter information in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool uses advanced AI models for deep reasoning and complex problem-solving, which provides a general purpose. However, it's somewhat vague about what constitutes 'deep reasoning' versus other AI tasks, and it doesn't clearly distinguish this tool from sibling tools like 'analyze-code', 'research', or 'investigate' which might also involve AI reasoning. The mention of specific model support adds some specificity but doesn't fully clarify the unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools that might involve AI or reasoning (like 'research', 'analyze-code', 'investigate'), there's no indication of what types of problems are best suited for 'deep-reasoning' versus those other tools. The description mentions model support but doesn't explain when to choose this tool over other AI-related tools in the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
