
consensus

Analyze proposals from multiple AI perspectives to identify consensus and diverse viewpoints. Input a decision or idea with configured model stances to receive aggregated insights.

Instructions

Get consensus from multiple AI models on a proposal

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| proposal | Yes | The proposal, idea, or decision to analyze from multiple perspectives | |
| models | Yes | List of models to consult with their stances | |
| files | No | Relevant file paths for context (optional) | |
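For illustration, a call to the tool might pass arguments shaped like the sketch below (the model names, proposal, and file paths are examples only). The `normalize` helper is hypothetical; it simply mirrors the schema's `min(1)` constraint on `models` and the `"neutral"` stance default.

```typescript
// Illustrative shapes for the consensus tool's input (inferred from the schema).
type Stance = "for" | "against" | "neutral";

interface ModelConfig {
  model: string;
  stance?: Stance; // defaults to "neutral"
  provider?: "openai" | "gemini" | "azure" | "grok";
}

interface ConsensusArgs {
  proposal: string;
  models: ModelConfig[]; // must contain at least one entry
  files?: string[];
}

// Example arguments (all values are illustrative, not guaranteed model names).
const args: ConsensusArgs = {
  proposal: "Migrate the build system from webpack to esbuild",
  models: [
    { model: "gpt-4", stance: "for", provider: "openai" },
    { model: "gemini-pro", stance: "against", provider: "gemini" },
    { model: "gpt-5" }, // stance omitted; falls back to "neutral"
  ],
  files: ["package.json", "webpack.config.js"],
};

// Hypothetical helper mirroring the schema's .min(1) check and stance default.
function normalize(a: ConsensusArgs): ConsensusArgs {
  if (a.models.length < 1) throw new Error("at least one model is required");
  return {
    proposal: a.proposal,
    models: a.models.map((m) => ({ ...m, stance: m.stance ?? "neutral" })),
    files: a.files ?? [],
  };
}
```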

Implementation Reference

  • src/server.ts:344-352 (registration)
    Registers the 'consensus' tool with the MCP server, providing the title, description, and input schema, and linking to the handler execution.

```typescript
// Register consensus tool
server.registerTool("consensus", {
  title: "Consensus",
  description: "Get consensus from multiple AI models on a proposal",
  inputSchema: ConsensusSchema.shape,
}, async (args) => {
  const aiHandlers = await getHandlers();
  return await aiHandlers.handleConsensus(args);
});
```
  • Zod schema defining the input structure for the consensus tool, including the proposal, list of models with stances, and optional files.

```typescript
const ConsensusSchema = z.object({
  proposal: z.string().describe("The proposal, idea, or decision to analyze from multiple perspectives"),
  models: z.array(z.object({
    model: z.string().describe("Model name to consult (e.g., 'gemini-pro', 'gpt-4', 'gpt-5')"),
    stance: z.enum(["for", "against", "neutral"]).default("neutral").describe("Perspective stance for this model"),
    provider: z.enum(["openai", "gemini", "azure", "grok"]).optional().describe("AI provider for this model"),
  })).min(1).describe("List of models to consult with their stances"),
  files: z.array(z.string()).optional().describe("Relevant file paths for context (optional)"),
});
```
  • Core handler implementation that executes the consensus tool. Consults multiple AI models with configurable stances (for/against/neutral), collects individual analyses on the proposal, synthesizes a consensus summary, and returns structured results including all perspectives and recommendations.
```typescript
async handleConsensus(params: z.infer<typeof ConsensusSchema>) {
  const responses: any[] = [];

  // Consult each model with their specified stance
  for (const modelConfig of params.models) {
    const providerName = modelConfig.provider ||
      (await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok']));
    const provider = await this.providerManager.getProvider(providerName);

    // Build stance-specific system prompt
    let stancePrompt = "";
    switch (modelConfig.stance) {
      case "for":
        stancePrompt = "You are analyzing this proposal from a supportive perspective. Focus on benefits, opportunities, and positive aspects while being realistic about implementation.";
        break;
      case "against":
        stancePrompt = "You are analyzing this proposal from a critical perspective. Focus on risks, challenges, drawbacks, and potential issues while being fair and constructive.";
        break;
      case "neutral":
      default:
        stancePrompt = "You are analyzing this proposal from a balanced, neutral perspective. Consider both benefits and risks, opportunities and challenges equally.";
        break;
    }

    const systemPrompt = `${stancePrompt}

Provide a thorough analysis of the proposal considering:
- Technical feasibility and implementation complexity
- Benefits and value proposition
- Risks and potential challenges
- Resource requirements and timeline considerations
- Alternative approaches or modifications

Be specific and actionable in your analysis.`;

    let prompt = `Analyze this proposal: ${params.proposal}`;
    if (params.files) {
      prompt += `\n\nRelevant files for context: ${params.files.join(", ")}`;
    }

    try {
      const response = await provider.generateText({
        prompt,
        model: modelConfig.model,
        systemPrompt,
        temperature: 0.3, // Lower temperature for more consistent analysis
        useSearchGrounding: providerName === "gemini",
        toolName: 'consensus',
      });

      responses.push({
        model: modelConfig.model,
        provider: providerName,
        stance: modelConfig.stance,
        analysis: response.text,
        usage: response.usage,
      });
    } catch (error) {
      responses.push({
        model: modelConfig.model,
        provider: providerName,
        stance: modelConfig.stance,
        error: error instanceof Error ? error.message : "Unknown error",
      });
    }
  }

  // Generate synthesis
  const synthesisPrompt = `Based on the following analyses from different perspectives, provide a comprehensive consensus summary:

${responses.map((r, i) =>
  r.error
    ? `${i + 1}. ${r.model} (${r.stance}, ERROR): ${r.error}`
    : `${i + 1}. ${r.model} (${r.stance}): ${r.analysis}`
).join('\n\n')}

Please synthesize these perspectives into:
1. **Key Points of Agreement**: What do most analyses agree on?
2. **Major Concerns and Disagreements**: Where do the analyses differ?
3. **Balanced Recommendation**: Based on all perspectives, what would you recommend?
4. **Next Steps**: What additional considerations or actions might be needed?

Be objective and highlight both the strongest arguments for and against the proposal.`;

  const synthesisProvider = await this.providerManager.getProvider(
    await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok'])
  );
  const synthesis = await synthesisProvider.generateText({
    prompt: synthesisPrompt,
    systemPrompt: "You are an expert facilitator synthesizing multiple expert opinions. Provide balanced, objective analysis that captures the full spectrum of perspectives.",
    temperature: 0.4,
    useSearchGrounding: false,
    toolName: 'consensus',
  });

  const result = {
    proposal: params.proposal,
    individual_analyses: responses,
    synthesis: synthesis.text,
    total_models_consulted: responses.length,
    successful_consultations: responses.filter(r => !r.error).length,
  };

  return {
    content: [
      {
        type: "text",
        text: JSON.stringify(result, null, 2),
      },
    ],
    metadata: {
      toolName: "consensus",
      modelsConsulted: responses.length,
      synthesisModel: synthesis.model,
      totalUsage: responses.reduce((acc, r) => {
        if (r.usage) {
          return {
            inputTokens: (acc.inputTokens || 0) + (r.usage.inputTokens || 0),
            outputTokens: (acc.outputTokens || 0) + (r.usage.outputTokens || 0),
            totalTokens: (acc.totalTokens || 0) + (r.usage.totalTokens || 0),
          };
        }
        return acc;
      }, {}),
    },
  };
}
```
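The `totalUsage` metadata in the handler is built with a plain reducer over the per-model responses, skipping entries that errored and therefore carry no `usage`. A self-contained sketch of that aggregation (the `Usage` and `ModelResponse` types are illustrative, inferred from the handler's shapes):

```typescript
// Illustrative shapes inferred from the handler's response objects.
interface Usage {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
}

interface ModelResponse {
  model: string;
  usage?: Usage;  // present on successful consultations
  error?: string; // present on failed consultations
}

// Sums token usage across responses; failed consultations contribute nothing.
function totalUsage(responses: ModelResponse[]): Usage {
  return responses.reduce<Usage>((acc, r) => {
    if (!r.usage) return acc;
    return {
      inputTokens: (acc.inputTokens ?? 0) + (r.usage.inputTokens ?? 0),
      outputTokens: (acc.outputTokens ?? 0) + (r.usage.outputTokens ?? 0),
      totalTokens: (acc.totalTokens ?? 0) + (r.usage.totalTokens ?? 0),
    };
  }, {});
}
```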
  • src/server.ts:701-719 (registration)
    Registers a prompt for the consensus tool, enabling natural-language invocation with a simplified args schema.

```typescript
server.registerPrompt("consensus", {
  title: "Multi-Model Consensus",
  description: "Get consensus from multiple AI models on a proposal or decision",
  // Use string-only schema for MCP compatibility
  argsSchema: {
    proposal: z.string().optional(),
    models: z.string().optional(), // e.g., "gpt-4:for, gemini:neutral"
    files: z.string().optional(),
    provider: z.string().optional(),
  },
}, (args) => ({
  messages: [{
    role: "user",
    content: {
      type: "text",
      text: `Get multi-model consensus on this proposal: ${args.proposal || 'Please provide a proposal or decision to analyze.'}\n\nConsult these models: ${args.models || 'gpt-4:neutral, gemini:neutral'}${args.files ? `\n\nRelevant files for context: ${args.files}` : ''}`
    }
  }]
}));
```
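The prompt accepts `models` as a single comma-separated string such as `"gpt-4:for, gemini:neutral"`. The server itself leaves that string to the model to interpret, but a hypothetical parser illustrating how it could be mapped back to the structured form the tool schema expects might look like this (entries without a stance fall back to `"neutral"`, matching the schema default):

```typescript
type Stance = "for" | "against" | "neutral";

// Hypothetical helper: parses "gpt-4:for, gemini:neutral" into structured
// model configs. Unknown or missing stances fall back to "neutral".
function parseModels(spec: string): { model: string; stance: Stance }[] {
  const valid: Stance[] = ["for", "against", "neutral"];
  return spec.split(",").map((entry) => {
    const [model, stance = "neutral"] = entry.trim().split(":");
    return {
      model,
      stance: valid.includes(stance as Stance) ? (stance as Stance) : "neutral",
    };
  });
}
```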

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RealMikeChong/ultra-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.