
deep-reasoning

Solve complex problems using advanced AI reasoning models from multiple providers with integrated search capabilities.

Instructions

Use advanced AI models for deep reasoning and complex problem-solving. Supports GPT-5 for OpenAI/Azure and Gemini 2.5 Pro with Google Search.
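
For orientation, here is a minimal sketch of invoking this tool from a TypeScript MCP client using the official @modelcontextprotocol/sdk. The launch command ("npx -y ultra-mcp") and the argument values are illustrative assumptions, not something this page specifies.

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    async function main() {
      // Assumption: the ultra-mcp server can be launched over stdio with npx.
      const transport = new StdioClientTransport({
        command: "npx",
        args: ["-y", "ultra-mcp"],
      });
      const client = new Client({ name: "example-client", version: "1.0.0" }, { capabilities: {} });
      await client.connect(transport);

      // Call the deep-reasoning tool with arguments matching the input schema below.
      const result = await client.callTool({
        name: "deep-reasoning",
        arguments: {
          provider: "gemini",
          prompt: "Compare eventual and strong consistency for a multi-region key-value store.",
          reasoningEffort: "high",
          enableSearch: true,
        },
      });
      console.log(result.content);

      await client.close();
    }

    main().catch(console.error);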

Input Schema

| Name            | Required | Description                                                             | Default |
| --------------- | -------- | ----------------------------------------------------------------------- | ------- |
| provider        | No       | AI provider to use (defaults to Azure if configured, otherwise OpenAI)  |         |
| prompt          | Yes      | The complex question or problem requiring deep reasoning                |         |
| model           | No       | Specific model to use (optional, will use provider default)             |         |
| temperature     | No       | Temperature for response generation                                     |         |
| maxOutputTokens | No       | Maximum tokens in response                                              |         |
| systemPrompt    | No       | System prompt to set context for reasoning                              |         |
| reasoningEffort | No       | Reasoning effort level (for certain reasoning models)                   | high    |
| enableSearch    | No       | Enable Google Search for Gemini models                                  |         |
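
Only prompt is required; the other fields fall back to the defaults defined in the Zod schema listed under Implementation Reference. A minimal sketch of how those defaults apply at parse time, assuming DeepReasoningSchema from that listing is in scope:

    // Parse a minimal argument object with the tool's input schema.
    const args = DeepReasoningSchema.parse({
      prompt: "Why does TCP need a three-way handshake?",
    });

    // Defaults supplied by the schema:
    //   args.temperature     === 0.7
    //   args.reasoningEffort === "high"
    //   args.enableSearch    === true
    // provider, model, maxOutputTokens, and systemPrompt stay undefined, so the
    // handler falls back to the preferred provider and that provider's default model.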

Implementation Reference

  • The main handler function implementing the deep-reasoning tool. It selects the AI provider, constructs a system prompt for reasoning, and calls provider.generateText with optimized parameters, including reasoning effort and optional search grounding for Gemini. (An inferred sketch of the provider contract it relies on follows this list.)
    async handleDeepReasoning(params: z.infer<typeof DeepReasoningSchema>) {
      // Use provided provider or get the preferred one (Azure if configured)
      const providerName = params.provider ||
        (await this.providerManager.getPreferredProvider(['openai', 'azure']));
      const provider = await this.providerManager.getProvider(providerName);

      // Build a comprehensive system prompt for deep reasoning
      const systemPrompt = params.systemPrompt ||
        `You are an expert AI assistant specializing in deep reasoning and complex problem-solving. Approach problems systematically, consider multiple perspectives, and provide thorough, well-reasoned responses. Break down complex problems into components, analyze each thoroughly, and synthesize insights.`;

      const response = await provider.generateText({
        prompt: params.prompt,
        model: params.model,
        temperature: params.temperature,
        maxOutputTokens: params.maxOutputTokens,
        systemPrompt,
        reasoningEffort: params.reasoningEffort,
        useSearchGrounding: providerName === "gemini" ? (params.enableSearch !== false) : false,
        toolName: 'deep-reasoning',
      });

      return {
        content: [
          {
            type: "text",
            text: response.text,
          },
        ],
        metadata: {
          provider: params.provider,
          model: response.model,
          usage: response.usage,
          ...response.metadata,
        },
      };
    }
  • src/server.ts:244-252 (registration)
    Tool registration in the MCP server, specifying the title, description, and input schema, and delegating execution to AIToolHandlers.handleDeepReasoning via getHandlers().
    // Register deep-reasoning tool
    server.registerTool("deep-reasoning", {
      title: "Deep Reasoning",
      description: "Use advanced AI models for deep reasoning and complex problem-solving. Supports GPT-5 for OpenAI/Azure and Gemini 2.5 Pro with Google Search.",
      inputSchema: DeepReasoningSchema.shape,
    }, async (args) => {
      const aiHandlers = await getHandlers();
      return await aiHandlers.handleDeepReasoning(args);
    });
  • Zod input schema for the deep-reasoning tool, defining parameters like provider, prompt, model, temperature, and Gemini-specific search enablement.
    const DeepReasoningSchema = z.object({
      provider: z.enum(["openai", "gemini", "azure", "grok"]).optional()
        .describe("AI provider to use (defaults to Azure if configured, otherwise OpenAI)"),
      prompt: z.string().describe("The complex question or problem requiring deep reasoning"),
      model: z.string().optional().describe("Specific model to use (optional, will use provider default)"),
      temperature: z.number().min(0).max(2).optional().default(0.7).describe("Temperature for response generation"),
      maxOutputTokens: z.number().positive().optional().describe("Maximum tokens in response"),
      systemPrompt: z.string().optional().describe("System prompt to set context for reasoning"),
      reasoningEffort: z.enum(["low", "medium", "high"]).optional().default("high").describe("Reasoning effort level (for certain reasoning models)"),
      enableSearch: z.boolean().optional().default(true).describe("Enable Google Search for Gemini models"),
    });
  • src/server.ts:513-530 (registration)
    Prompt registration for the deep-reasoning tool, providing a default user message template for invocation via MCP prompts.
    server.registerPrompt("deep-reasoning", {
      title: "Deep Reasoning",
      description: "Use advanced AI reasoning to solve complex problems requiring deep analysis",
      argsSchema: {
        prompt: z.string().optional(),
        provider: z.string().optional(),
        model: z.string().optional(),
        systemPrompt: z.string().optional(),
      },
    }, (args) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Use advanced AI reasoning to solve this complex problem: ${args.prompt || 'Please provide a complex problem that requires deep reasoning and analysis.'}${args.provider ? ` (using ${args.provider} provider)` : ''}${args.systemPrompt ? `\n\nSystem context: ${args.systemPrompt}` : ''}`
        }
      }]
    }));
  • Helper function that lazily initializes and returns the AIToolHandlers instance with its ProviderManager; all tool registrations use it to delegate to the specific handlers.
    async function getHandlers() {
      if (!handlers) {
        const { ConfigManager } = require("./config/manager");
        const { ProviderManager } = require("./providers/manager");
        const { AIToolHandlers } = require("./handlers/ai-tools");

        const configManager = new ConfigManager();

        // Load config and set environment variables
        const config = await configManager.getConfig();
        if (config.openai?.apiKey) { process.env.OPENAI_API_KEY = config.openai.apiKey; }
        if (config.openai?.baseURL) { process.env.OPENAI_BASE_URL = config.openai.baseURL; }
        if (config.google?.apiKey) { process.env.GOOGLE_API_KEY = config.google.apiKey; }
        if (config.google?.baseURL) { process.env.GOOGLE_BASE_URL = config.google.baseURL; }
        if (config.azure?.apiKey) { process.env.AZURE_API_KEY = config.azure.apiKey; }
        if (config.azure?.baseURL) { process.env.AZURE_BASE_URL = config.azure.baseURL; }
        if (config.xai?.apiKey) { process.env.XAI_API_KEY = config.xai.apiKey; }
        if (config.xai?.baseURL) { process.env.XAI_BASE_URL = config.xai.baseURL; }

        providerManager = new ProviderManager(configManager);
        handlers = new AIToolHandlers(providerManager);
      }
      return handlers;
    }
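
The handler above drives everything through provider.generateText. Ultra-mcp's actual provider types are not shown on this page, but the call site and response handling imply roughly the following contract (the interface names here are assumptions, and only the fields the handler touches are listed):

    // Inferred sketch of the provider contract used by handleDeepReasoning.
    interface GenerateTextRequest {
      prompt: string;
      model?: string;
      temperature?: number;
      maxOutputTokens?: number;
      systemPrompt?: string;
      reasoningEffort?: "low" | "medium" | "high";
      useSearchGrounding?: boolean; // only enabled when the Gemini provider is selected
      toolName?: string;
    }

    interface GenerateTextResponse {
      text: string;
      model: string;
      usage?: unknown;                      // token accounting; shape is provider-specific
      metadata?: Record<string, unknown>;   // spread into the tool result's metadata
    }

    interface AIProvider {
      generateText(request: GenerateTextRequest): Promise<GenerateTextResponse>;
    }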

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RealMikeChong/ultra-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.