# vibe_check
Analyze assumptions, identify uncertainties, and interrupt overcomplicated reasoning paths to improve decision-making and prevent cascading errors in AI systems.
## Instructions
Metacognitive questioning tool that identifies assumptions and breaks tunnel vision to prevent cascading errors
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| goal | Yes | The agent's current goal | |
| modelOverride | No | Optional override of the LLM provider/model (provider one of `gemini`, `openai`, `openrouter`) | |
| plan | Yes | The agent's detailed plan | |
| progress | No | The agent's progress so far | |
| sessionId | No | Optional session ID for state management | |
| taskContext | No | The context of the current task | |
| uncertainties | No | The agent's uncertainties | |
| userPrompt | No | The original user prompt | |
## Input Schema (JSON Schema)
```json
{
  "properties": {
    "goal": {
      "description": "The agent's current goal",
      "type": "string"
    },
    "modelOverride": {
      "properties": {
        "model": {
          "type": "string"
        },
        "provider": {
          "enum": ["gemini", "openai", "openrouter"],
          "type": "string"
        }
      },
      "required": [],
      "type": "object"
    },
    "plan": {
      "description": "The agent's detailed plan",
      "type": "string"
    },
    "progress": {
      "description": "The agent's progress so far",
      "type": "string"
    },
    "sessionId": {
      "description": "Optional session ID for state management",
      "type": "string"
    },
    "taskContext": {
      "description": "The context of the current task",
      "type": "string"
    },
    "uncertainties": {
      "description": "The agent's uncertainties",
      "items": {
        "type": "string"
      },
      "type": "array"
    },
    "userPrompt": {
      "description": "The original user prompt",
      "type": "string"
    }
  },
  "required": ["goal", "plan"],
  "type": "object"
}
```
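As a concrete illustration, here is a minimal payload that satisfies the schema, together with a check mirroring its two required fields. The field values are invented for the example, not taken from a real session.

```typescript
// Hypothetical example payload; only `goal` and `plan` are required,
// everything else may be omitted.
const input: Record<string, unknown> = {
  goal: "Ship CPI v2.5 with zero regressions",
  plan: "1) Write tests 2) Refactor 3) Canary rollout",
  uncertainties: ["uncertain about deployment"],
  sessionId: "session-123",
};

// Mirror the schema's `required` list: both fields must be present strings.
const missing = ["goal", "plan"].filter((k) => typeof input[k] !== "string");
console.log(missing.length === 0 ? "valid" : `missing: ${missing.join(", ")}`);
```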
## Implementation Reference
- **src/tools/vibeCheck.ts:28-61 (handler)**: Core handler that generates metacognitive questions via the LLM, manages session history, and falls back to static questions on error.

  ```typescript
  export async function vibeCheckTool(input: VibeCheckInput): Promise<VibeCheckOutput> {
    console.log('[vibe_check] called', { hasSession: Boolean(input.sessionId) });

    try {
      // Get history summary
      const historySummary = getHistorySummary(input.sessionId);

      // Get metacognitive questions from Gemini with dynamic parameters
      const response = await getMetacognitiveQuestions({
        goal: input.goal,
        plan: input.plan,
        modelOverride: input.modelOverride,
        userPrompt: input.userPrompt,
        progress: input.progress,
        uncertainties: input.uncertainties,
        taskContext: input.taskContext,
        sessionId: input.sessionId,
        historySummary,
      });

      // Add to history
      addToHistory(input.sessionId, input, response.questions);

      return {
        questions: response.questions,
      };
    } catch (error) {
      console.error('Error in vibe_check tool:', error);

      // Fallback to basic questions if there's an error
      return {
        questions: generateFallbackQuestions(input.userPrompt || "", input.plan || ""),
      };
    }
  }
  ```
- **src/tools/vibeCheck.ts:5-21 (schema)**: TypeScript interfaces defining the input and output structures for the vibe_check tool.

  ```typescript
  export interface VibeCheckInput {
    goal: string;
    plan: string;
    modelOverride?: {
      provider?: string;
      model?: string;
    };
    userPrompt?: string;
    progress?: string;
    uncertainties?: string[];
    taskContext?: string;
    sessionId?: string;
  }

  export interface VibeCheckOutput {
    questions: string;
  }
  ```
- **src/index.ts:74-129 (registration)**: MCP tool registration in the listTools handler, defining the name, description, and detailed input schema for vibe_check.

  ```typescript
  {
    name: 'vibe_check',
    description:
      'Metacognitive questioning tool that identifies assumptions and breaks tunnel vision to prevent cascading errors',
    inputSchema: {
      type: 'object',
      properties: {
        goal: {
          type: 'string',
          description: "The agent's current goal",
          examples: ['Ship CPI v2.5 with zero regressions'],
        },
        plan: {
          type: 'string',
          description: "The agent's detailed plan",
          examples: ['1) Write tests 2) Refactor 3) Canary rollout'],
        },
        modelOverride: {
          type: 'object',
          properties: {
            provider: { type: 'string', enum: [...SUPPORTED_LLM_PROVIDERS] },
            model: { type: 'string' },
          },
          required: [],
          examples: [{ provider: 'gemini', model: 'gemini-2.5-pro' }],
        },
        userPrompt: {
          type: 'string',
          description: 'The original user prompt',
          examples: ['Summarize the repo'],
        },
        progress: {
          type: 'string',
          description: "The agent's progress so far",
          examples: ['Finished step 1'],
        },
        uncertainties: {
          type: 'array',
          items: { type: 'string' },
          description: "The agent's uncertainties",
          examples: [['uncertain about deployment']],
        },
        taskContext: {
          type: 'string',
          description: 'The context of the current task',
          examples: ['repo: vibe-check-mcp @2.5.0'],
        },
        sessionId: {
          type: 'string',
          description: 'Optional session ID for state management',
          examples: ['session-123'],
        },
      },
      required: ['goal', 'plan'],
      additionalProperties: false,
    },
  },
  ```
- **src/index.ts:218-241 (dispatch)**: Dispatch handler in CallToolRequestSchema that validates inputs, constructs a VibeCheckInput, calls vibeCheckTool, and formats the MCP response.

  ```typescript
  case 'vibe_check': {
    const missing: string[] = [];
    if (!args || typeof args.goal !== 'string') missing.push('goal');
    if (!args || typeof args.plan !== 'string') missing.push('plan');
    if (missing.length) {
      const example = '{"goal":"Ship CPI v2.5","plan":"1) tests 2) refactor 3) canary"}';
      const message = IS_DISCOVERY
        ? `discovery: missing [${missing.join(', ')}]; example: ${example}`
        : `Missing: ${missing.join(', ')}. Example: ${example}`;
      throw new McpError(ErrorCode.InvalidParams, message);
    }
    const input: VibeCheckInput = {
      goal: args.goal,
      plan: args.plan,
      modelOverride:
        typeof args.modelOverride === 'object' && args.modelOverride !== null
          ? args.modelOverride
          : undefined,
      userPrompt: typeof args.userPrompt === 'string' ? args.userPrompt : undefined,
      progress: typeof args.progress === 'string' ? args.progress : undefined,
      uncertainties: Array.isArray(args.uncertainties) ? args.uncertainties : undefined,
      taskContext: typeof args.taskContext === 'string' ? args.taskContext : undefined,
      sessionId: typeof args.sessionId === 'string' ? args.sessionId : undefined,
    };
    const result = await vibeCheckTool(input);
    return { content: [{ type: 'text', text: formatVibeCheckOutput(result) }] };
  }
  ```
- **src/tools/vibeCheck.ts:66-75 (helper)**: Fallback question generator used when the primary LLM call to getMetacognitiveQuestions fails.

  ```typescript
  function generateFallbackQuestions(userRequest: string, plan: string): string {
    return `
  I can see you're thinking through your approach, which shows thoughtfulness:

  1. Does this plan directly address what the user requested, or might it be solving a different problem?
  2. Is there a simpler approach that would meet the user's needs?
  3. What unstated assumptions might be limiting the thinking here?
  4. How does this align with the user's original intent?
  `;
  }
  ```
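The handler keeps per-session state through two helpers, getHistorySummary and addToHistory, whose implementations are not shown here. The sketch below is a hypothetical in-memory version, written only to match how the handler calls them (summary lookup before the LLM call, append after it); the real helpers in the repo may store and summarize history differently.

```typescript
// Hypothetical in-memory session store; the real getHistorySummary/addToHistory
// live elsewhere in the repo and may differ.
type HistoryEntry = { input: unknown; questions: string };
const sessions = new Map<string, HistoryEntry[]>();

function addToHistory(sessionId: string | undefined, input: unknown, questions: string): void {
  if (!sessionId) return; // stateless call: nothing to record
  const entries = sessions.get(sessionId) ?? [];
  entries.push({ input, questions });
  sessions.set(sessionId, entries);
}

function getHistorySummary(sessionId: string | undefined): string {
  if (!sessionId) return "";
  const entries = sessions.get(sessionId) ?? [];
  // One line per prior turn, truncated, so the summary stays small.
  return entries.map((e, i) => `Turn ${i + 1}: ${e.questions.slice(0, 40)}`).join("\n");
}

addToHistory("session-123", { goal: "g" }, "Is there a simpler approach?");
console.log(getHistorySummary("session-123").startsWith("Turn 1"));
```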