Zen Challenge

ultra-challenge

Analyze statements critically to identify assumptions and prevent reflexive agreement using AI-powered questioning.

Instructions

Challenges a statement or assumption with critical thinking to prevent reflexive agreement

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | The statement, assumption, or proposal to analyze critically | |
| provider | No | AI provider to use for critical analysis (optional) | best available |
| model | No | Specific model to use (optional) | |
| sessionId | No | Session ID for conversation context (optional) | |
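For illustration, a minimal arguments object for this tool might look like the following; the values are hypothetical, and only `prompt` is required:

```typescript
// Hypothetical example arguments for ultra-challenge; only `prompt` is
// required, the other fields fall back to their defaults.
const args = {
  prompt: "We should rewrite the service in Rust to fix our latency problems.",
  provider: "openai",      // optional: must be one of the supported providers
  sessionId: "review-123", // optional: threads this analysis into a session
};
```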

Implementation Reference

  • src/server.ts:968-975 (registration)
    Registers the 'ultra-challenge' tool, specifying its title, description, and input schema, plus an inline handler that delegates to the handleChallenge function.

    ```typescript
    server.registerTool("ultra-challenge", {
      title: "Zen Challenge",
      description: "Challenges a statement or assumption with critical thinking to prevent reflexive agreement",
      inputSchema: ZenChallengeSchema.shape,
    }, async (args) => {
      const provider = await getProviderManager();
      return await handleChallenge(args, provider) as any;
    });
    ```
  • Defines the input schema (ZenChallengeSchema) for the ultra-challenge tool using Zod validation.

    ```typescript
    const ZenChallengeSchema = z.object({
      prompt: z.string().describe("The statement, assumption, or proposal to analyze critically"),
      provider: z.enum(["openai", "gemini", "azure", "grok", "openai-compatible"]).optional().describe("AI provider to use for critical analysis (optional, defaults to best available)"),
      model: z.string().optional().describe("Specific model to use (optional)"),
      sessionId: z.string().optional().describe("Session ID for conversation context (optional)"),
    });
    ```
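As a rough illustration of what the Zod schema enforces, here is a hand-rolled check mirroring the same constraints; this is a sketch for explanation only, not the actual validation path (the server validates via Zod):

```typescript
// Sketch of the constraints ZenChallengeSchema encodes, without Zod.
const PROVIDERS = ["openai", "gemini", "azure", "grok", "openai-compatible"];

function validateChallengeArgs(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  if (typeof args.prompt !== "string") {
    errors.push("prompt is required and must be a string");
  }
  if (args.provider !== undefined && !PROVIDERS.includes(args.provider as string)) {
    errors.push(`provider must be one of: ${PROVIDERS.join(", ")}`);
  }
  if (args.model !== undefined && typeof args.model !== "string") {
    errors.push("model must be a string");
  }
  if (args.sessionId !== undefined && typeof args.sessionId !== "string") {
    errors.push("sessionId must be a string");
  }
  return errors; // an empty array means the input is valid
}
```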
  • The main handler function for the 'ultra-challenge' tool: it constructs a critical thinking prompt, optionally loads session context, generates an AI response via the selected provider, saves the exchange to the session if applicable, and returns formatted content.

    ```typescript
    export async function handleChallenge(args: any, providerManager: ProviderManager) {
      const { prompt, provider, model, sessionId } = args;

      // Wrap the prompt with critical thinking instructions
      const challengePrompt = `You are asked to critically analyze the following statement or assumption. Your goal is to provide honest, thoughtful analysis rather than automatic agreement.

    Instructions:
    - Challenge assumptions if they seem questionable
    - Point out potential flaws, limitations, or alternative perspectives
    - Ask clarifying questions if the statement is unclear
    - Provide evidence-based reasoning for your analysis
    - If you disagree, explain why clearly and constructively
    - If you agree, explain your reasoning and any caveats

    DO NOT simply agree to be agreeable. Your role is to provide honest, critical evaluation.

    Statement to analyze:
    ${prompt}

    Provide your critical analysis:`;

      // Get conversation context if sessionId provided
      let conversationContext = '';
      if (sessionId) {
        try {
          const context = await conversationMemory.getConversationContext(sessionId, 4000, true);
          if (context.messages.length > 0) {
            conversationContext = '\n\nPrevious conversation context:\n' +
              context.messages.slice(-5).map(m => `${m.role}: ${m.content}`).join('\n');
          }
        } catch (error) {
          console.warn('Failed to load conversation context:', error);
        }
      }

      const finalPrompt = challengePrompt + conversationContext;

      // Use provider manager to get response
      const aiProvider = await providerManager.getProvider(provider);
      const result = await aiProvider.generateText({
        prompt: finalPrompt,
        model: model || aiProvider.getDefaultModel(),
        temperature: 0.7,
        useSearchGrounding: false,
      });

      // Save to conversation if sessionId provided
      if (sessionId) {
        try {
          await conversationMemory.getOrCreateSession(sessionId);
          await conversationMemory.addMessage(sessionId, 'user', prompt, 'challenge');
          await conversationMemory.addMessage(
            sessionId,
            'assistant',
            result.text,
            'challenge',
            undefined,
            { provider, model: result.model }
          );
        } catch (error) {
          console.warn('Failed to save to conversation:', error);
        }
      }

      return {
        content: [
          {
            type: 'text',
            text: `## Critical Analysis\n\n${result.text}\n\n---\n*Analysis provided by ${result.model} via critical thinking prompt to prevent reflexive agreement.*`
          }
        ]
      };
    }
    ```
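Two steps in the handler are pure and easy to isolate. Assuming the message shape shown below (the real type lives in the conversation memory module), sketches of the context trimming and the final response formatting might look like:

```typescript
// Assumed message shape; the real type lives in the conversation memory module.
interface Message { role: string; content: string; }

// Mirrors the handler's context step: keep only the last five messages
// and render them as "role: content" lines under a header.
function formatRecentContext(messages: Message[]): string {
  if (messages.length === 0) return "";
  return "\n\nPrevious conversation context:\n" +
    messages.slice(-5).map(m => `${m.role}: ${m.content}`).join("\n");
}

// Mirrors the handler's return value: one MCP-style text block holding the
// analysis plus a footer naming the model that produced it.
function formatAnalysis(text: string, model: string) {
  return {
    content: [{
      type: "text",
      text: `## Critical Analysis\n\n${text}\n\n---\n*Analysis provided by ${model} via critical thinking prompt to prevent reflexive agreement.*`,
    }],
  };
}
```

Factoring these out would also make the truncation behavior (only the five most recent messages survive, within the 4000-token budget requested from `getConversationContext`) easier to test in isolation.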
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'critical thinking' and 'prevent reflexive agreement,' which imply analysis and questioning, but doesn't detail aspects like response format, error handling, rate limits, or authentication needs. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. It's appropriately sized and wastes no words, making it easy to understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a critical analysis tool with 4 parameters, no annotations, and no output schema), the description is incomplete. It lacks information on behavioral traits, output expectations, and differentiation from siblings, making it insufficient for an agent to fully understand how to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents every parameter well. The description adds nothing beyond the schema: no examples and no usage tips for parameters like 'prompt' or 'provider'. With full schema coverage, the baseline score of 3 is appropriate; the description neither compensates for gaps nor detracts.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Challenges a statement or assumption with critical thinking to prevent reflexive agreement.' It specifies the action (challenges with critical thinking) and the resource (statement/assumption), but doesn't explicitly differentiate it from sibling tools like 'challenge' or 'ultra-analyze' which might have similar functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'challenge' or 'ultra-analyze' among the siblings. It states what the tool does but offers no context about appropriate scenarios, exclusions, or comparisons with other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
