
vibe_check

Analyze assumptions, identify uncertainties, and interrupt overcomplicated reasoning paths to improve decision-making and prevent cascading errors in AI systems.

Instructions

Metacognitive questioning tool that identifies assumptions and breaks tunnel vision to prevent cascading errors

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| goal | Yes | The agent's current goal | |
| modelOverride | No | Optional LLM provider/model override | |
| plan | Yes | The agent's detailed plan | |
| progress | No | The agent's progress so far | |
| sessionId | No | Optional session ID for state management | |
| taskContext | No | The context of the current task | |
| uncertainties | No | The agent's uncertainties | |
| userPrompt | No | The original user prompt | |
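Only `goal` and `plan` are required; the optional fields refine the generated questions. A sketch of an arguments object (field values are illustrative, not from a real session):

```typescript
// Illustrative vibe_check arguments; only `goal` and `plan` are required.
const args: Record<string, unknown> = {
  goal: "Ship CPI v2.5 with zero regressions",
  plan: "1) Write tests 2) Refactor 3) Canary rollout",
  sessionId: "session-123", // optional: carries question history across calls
  uncertainties: ["uncertain about deployment"], // optional
};

// Mirror the server's required-field check: both must be strings.
const missing = ["goal", "plan"].filter((k) => typeof args[k] !== "string");
console.log(missing); // []
```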

Implementation Reference

  • Core handler function that generates metacognitive questions using LLM, manages session history, and provides fallback questions on error.
    export async function vibeCheckTool(input: VibeCheckInput): Promise<VibeCheckOutput> {
      console.log('[vibe_check] called', { hasSession: Boolean(input.sessionId) });
      try {
        // Get history summary
        const historySummary = getHistorySummary(input.sessionId);
    
        // Get metacognitive questions from Gemini with dynamic parameters
        const response = await getMetacognitiveQuestions({
          goal: input.goal,
          plan: input.plan,
          modelOverride: input.modelOverride,
          userPrompt: input.userPrompt,
          progress: input.progress,
          uncertainties: input.uncertainties,
          taskContext: input.taskContext,
          sessionId: input.sessionId,
          historySummary,
        });
    
        // Add to history
        addToHistory(input.sessionId, input, response.questions);
    
        return {
          questions: response.questions,
        };
      } catch (error) {
        console.error('Error in vibe_check tool:', error);
    
        // Fallback to basic questions if there's an error
        return {
          questions: generateFallbackQuestions(input.userPrompt || "", input.plan || ""),
        };
      }
    }
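The handler leans on `getHistorySummary` and `addToHistory`, which are referenced but not shown on this page. A minimal in-memory sketch of the assumed contract (keyed by an optional `sessionId`, summarised as plain text; the real signatures and storage may differ):

```typescript
// Hypothetical in-memory session store; the real getHistorySummary and
// addToHistory implementations are not shown here, so signatures are assumed.
type HistoryEntry = { plan: string; questions: string };
const sessions = new Map<string, HistoryEntry[]>();

function addToHistory(sessionId: string | undefined, plan: string, questions: string): void {
  if (!sessionId) return; // stateless call: nothing to record
  const entries = sessions.get(sessionId) ?? [];
  entries.push({ plan, questions });
  sessions.set(sessionId, entries);
}

function getHistorySummary(sessionId: string | undefined): string {
  if (!sessionId) return "";
  return (sessions.get(sessionId) ?? [])
    .map((e, i) => `Round ${i + 1}: ${e.questions}`)
    .join("\n");
}

addToHistory("session-123", "1) tests 2) refactor", "Is this the simplest path?");
console.log(getHistorySummary("session-123")); // "Round 1: Is this the simplest path?"
```

Calls without a `sessionId` fall through as stateless, which matches the schema marking it optional.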
  • TypeScript interfaces defining the input and output structures for the vibe_check tool.
    export interface VibeCheckInput {
      goal: string;
      plan: string;
      modelOverride?: {
        provider?: string;
        model?: string;
      };
      userPrompt?: string;
      progress?: string;
      uncertainties?: string[];
      taskContext?: string;
      sessionId?: string;
    }
    
    export interface VibeCheckOutput {
      questions: string;
    }
  • src/index.ts:74-129 (registration)
    MCP tool registration in listTools handler, defining name, description, and detailed input schema for vibe_check.
    {
      name: 'vibe_check',
      description: 'Metacognitive questioning tool that identifies assumptions and breaks tunnel vision to prevent cascading errors',
      inputSchema: {
        type: 'object',
        properties: {
          goal: {
            type: 'string',
            description: "The agent's current goal",
            examples: ['Ship CPI v2.5 with zero regressions']
          },
          plan: {
            type: 'string',
            description: "The agent's detailed plan",
            examples: ['1) Write tests 2) Refactor 3) Canary rollout']
          },
          modelOverride: {
            type: 'object',
            properties: {
              provider: { type: 'string', enum: [...SUPPORTED_LLM_PROVIDERS] },
              model: { type: 'string' }
            },
            required: [],
            examples: [{ provider: 'gemini', model: 'gemini-2.5-pro' }]
          },
          userPrompt: {
            type: 'string',
            description: 'The original user prompt',
            examples: ['Summarize the repo']
          },
          progress: {
            type: 'string',
            description: "The agent's progress so far",
            examples: ['Finished step 1']
          },
          uncertainties: {
            type: 'array',
            items: { type: 'string' },
            description: "The agent's uncertainties",
            examples: [['uncertain about deployment']]
          },
          taskContext: {
            type: 'string',
            description: 'The context of the current task',
            examples: ['repo: vibe-check-mcp @2.5.0']
          },
          sessionId: {
            type: 'string',
            description: 'Optional session ID for state management',
            examples: ['session-123']
          }
        },
        required: ['goal', 'plan'],
        additionalProperties: false
      }
    },
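Because the schema sets `additionalProperties: false`, a conforming `tools/call` request may use only the eight declared keys. A sketch of such a request body (values illustrative):

```typescript
// Sketch of a conforming tools/call request for vibe_check.
const request = {
  method: "tools/call",
  params: {
    name: "vibe_check",
    arguments: {
      goal: "Ship CPI v2.5 with zero regressions",
      plan: "1) Write tests 2) Refactor 3) Canary rollout",
      sessionId: "session-123",
    },
  },
};

// additionalProperties: false means any key outside this set is rejected.
const allowed = new Set([
  "goal", "plan", "modelOverride", "userPrompt",
  "progress", "uncertainties", "taskContext", "sessionId",
]);
const unknown = Object.keys(request.params.arguments).filter((k) => !allowed.has(k));
console.log(unknown.length); // 0
```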
  • src/index.ts:218-241 (registration)
    Dispatch handler in CallToolRequestSchema that validates inputs, constructs VibeCheckInput, calls vibeCheckTool, and formats MCP response.
    case 'vibe_check': {
      const missing: string[] = [];
      if (!args || typeof args.goal !== 'string') missing.push('goal');
      if (!args || typeof args.plan !== 'string') missing.push('plan');
      if (missing.length) {
        const example = '{"goal":"Ship CPI v2.5","plan":"1) tests 2) refactor 3) canary"}';
        const message = IS_DISCOVERY
          ? `discovery: missing [${missing.join(', ')}]; example: ${example}`
          : `Missing: ${missing.join(', ')}. Example: ${example}`;
        throw new McpError(ErrorCode.InvalidParams, message);
      }
      const input: VibeCheckInput = {
        goal: args.goal,
        plan: args.plan,
        modelOverride: typeof args.modelOverride === 'object' && args.modelOverride !== null ? args.modelOverride : undefined,
        userPrompt: typeof args.userPrompt === 'string' ? args.userPrompt : undefined,
        progress: typeof args.progress === 'string' ? args.progress : undefined,
        uncertainties: Array.isArray(args.uncertainties) ? args.uncertainties : undefined,
        taskContext: typeof args.taskContext === 'string' ? args.taskContext : undefined,
        sessionId: typeof args.sessionId === 'string' ? args.sessionId : undefined,
      };
      const result = await vibeCheckTool(input);
      return { content: [{ type: 'text', text: formatVibeCheckOutput(result) }] };
    }
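The dispatch ends by passing the result through `formatVibeCheckOutput`, which is not shown on this page. Since `VibeCheckOutput` is just `{ questions: string }`, a plausible minimal formatter (a hypothetical stand-in, not the project's actual code) simply returns the questions text for the MCP text content block:

```typescript
// Hypothetical formatter; formatVibeCheckOutput's real body is not shown here.
interface VibeCheckOutput {
  questions: string;
}

function formatVibeCheckOutput(result: VibeCheckOutput): string {
  // Trim stray whitespace from the LLM (or fallback template) output.
  return result.questions.trim();
}

const text = formatVibeCheckOutput({ questions: "\n1. Is there a simpler approach?\n" });
console.log(text); // "1. Is there a simpler approach?"
```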
  • Fallback question generator used when the primary LLM call to getMetacognitiveQuestions fails.
    function generateFallbackQuestions(userRequest: string, plan: string): string {
      // Note: userRequest and plan are accepted but not interpolated into the
      // static question set below.
      return `
    I can see you're thinking through your approach, which shows thoughtfulness:
    
    1. Does this plan directly address what the user requested, or might it be solving a different problem?
    2. Is there a simpler approach that would meet the user's needs?
    3. What unstated assumptions might be limiting the thinking here?
    4. How does this align with the user's original intent?
    `;
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool's cognitive effects (identifying assumptions, breaking tunnel vision, preventing errors) but lacks details on how it operates (e.g., does it generate questions, provide feedback, modify plans?), what it returns, or any constraints like rate limits or permissions. This leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key purpose ('metacognitive questioning tool') and elaborates with clear outcomes. Every word earns its place, avoiding redundancy or fluff, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain what the tool returns, how it uses the parameters (e.g., 'modelOverride' for AI model selection), or behavioral details like state management with 'sessionId.' For a metacognitive tool with rich inputs, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is high (88%), so the schema already documents most parameters well (e.g., 'goal,' 'plan,' 'uncertainties'). The description doesn't add specific meaning beyond the schema, such as explaining how parameters like 'modelOverride' or 'sessionId' relate to the tool's purpose. Baseline 3 is appropriate as the schema does the heavy lifting, but no extra value is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as a 'metacognitive questioning tool' that 'identifies assumptions and breaks tunnel vision to prevent cascading errors.' It uses specific verbs ('identifies,' 'breaks,' 'prevent') and describes the cognitive function, though it doesn't explicitly differentiate from its sibling 'vibe_learn' beyond the general domain of 'vibe' tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage in scenarios involving assumptions, tunnel vision, or error prevention, suggesting it's for reflective or corrective moments. However, it doesn't provide explicit guidance on when to use this tool versus 'vibe_learn' or other alternatives, nor does it specify prerequisites or exclusions, leaving the context somewhat open-ended.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/PV-Bhat/vibe-check-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server