
Consensus (tool name: consensus)

Analyze proposals from multiple AI perspectives to identify consensus and diverse viewpoints. Input a decision or idea with configured model stances to receive aggregated insights.

Instructions

Get consensus from multiple AI models on a proposal

Input Schema

Name | Required | Description
proposal | Yes | The proposal, idea, or decision to analyze from multiple perspectives
models | Yes | List of models to consult with their stances
files | No | Relevant file paths for context (optional)
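
The parameters above can be put together as follows — a minimal invocation sketch; the proposal text, model names, and file path are illustrative, not taken from the source:

```typescript
// Hypothetical arguments for a consensus call; values are illustrative.
const args = {
  proposal: "Migrate the API layer from REST to GraphQL",
  models: [
    { model: "gpt-4", stance: "against", provider: "openai" },
    { model: "gemini-pro", stance: "for", provider: "gemini" },
    { model: "gpt-5", stance: "neutral" }, // provider is optional
  ],
  files: ["src/api/routes.ts"], // optional context paths
};
```

Note that `models` must contain at least one entry, and `stance` falls back to "neutral" when omitted.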

Implementation Reference

  • src/server.ts:344-352 (registration)
    Registers the 'consensus' tool with the MCP server, providing the title, description, and input schema, and delegating execution to the handler.
    // Register consensus tool
    server.registerTool("consensus", {
      title: "Consensus",
      description: "Get consensus from multiple AI models on a proposal",
      inputSchema: ConsensusSchema.shape,
    }, async (args) => {
      const aiHandlers = await getHandlers();
      return await aiHandlers.handleConsensus(args);
    });
  • Zod schema defining the input structure for the consensus tool, including proposal, list of models with stances, and optional files.
    const ConsensusSchema = z.object({
      proposal: z.string().describe("The proposal, idea, or decision to analyze from multiple perspectives"),
      models: z.array(z.object({
        model: z.string().describe("Model name to consult (e.g., 'gemini-pro', 'gpt-4', 'gpt-5')"),
        stance: z.enum(["for", "against", "neutral"]).default("neutral").describe("Perspective stance for this model"),
        provider: z.enum(["openai", "gemini", "azure", "grok"]).optional().describe("AI provider for this model")
      })).min(1).describe("List of models to consult with their stances"),
      files: z.array(z.string()).optional().describe("Relevant file paths for context (optional)"),
    });
  • Core handler implementation that executes the consensus tool. Consults multiple AI models with configurable stances (for/against/neutral), collects individual analyses on the proposal, synthesizes a consensus summary, and returns structured results including all perspectives and recommendations.
      async handleConsensus(params: z.infer<typeof ConsensusSchema>) {
        const responses: any[] = [];
        
        // Consult each model with their specified stance
        for (const modelConfig of params.models) {
          const providerName = modelConfig.provider || (await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok']));
          const provider = await this.providerManager.getProvider(providerName);
          
          // Build stance-specific system prompt
          let stancePrompt = "";
          switch (modelConfig.stance) {
            case "for":
              stancePrompt = "You are analyzing this proposal from a supportive perspective. Focus on benefits, opportunities, and positive aspects while being realistic about implementation.";
              break;
            case "against":
              stancePrompt = "You are analyzing this proposal from a critical perspective. Focus on risks, challenges, drawbacks, and potential issues while being fair and constructive.";
              break;
            case "neutral":
            default:
              stancePrompt = "You are analyzing this proposal from a balanced, neutral perspective. Consider both benefits and risks, opportunities and challenges equally.";
              break;
          }
    
          const systemPrompt = `${stancePrompt}
          
          Provide a thorough analysis of the proposal considering:
          - Technical feasibility and implementation complexity
          - Benefits and value proposition
          - Risks and potential challenges
          - Resource requirements and timeline considerations
          - Alternative approaches or modifications
          
          Be specific and actionable in your analysis.`;
    
          let prompt = `Analyze this proposal: ${params.proposal}`;
          if (params.files) {
            prompt += `\n\nRelevant files for context: ${params.files.join(", ")}`;
          }
    
          try {
            const response = await provider.generateText({
              prompt,
              model: modelConfig.model,
              systemPrompt,
              temperature: 0.3, // Lower temperature for more consistent analysis
              useSearchGrounding: providerName === "gemini",
              toolName: 'consensus',
            });
    
            responses.push({
              model: modelConfig.model,
              provider: providerName,
              stance: modelConfig.stance,
              analysis: response.text,
              usage: response.usage,
            });
          } catch (error) {
            responses.push({
              model: modelConfig.model,
              provider: providerName,
              stance: modelConfig.stance,
              error: error instanceof Error ? error.message : "Unknown error",
            });
          }
        }
    
        // Generate synthesis
        const synthesisPrompt = `Based on the following analyses from different perspectives, provide a comprehensive consensus summary:
    
    ${responses.map((r, i) => 
      r.error 
        ? `${i + 1}. ${r.model} (${r.stance}, ERROR): ${r.error}`
        : `${i + 1}. ${r.model} (${r.stance}): ${r.analysis}`
    ).join('\n\n')}
    
    Please synthesize these perspectives into:
    1. **Key Points of Agreement**: What do most analyses agree on?
    2. **Major Concerns and Disagreements**: Where do the analyses differ?
    3. **Balanced Recommendation**: Based on all perspectives, what would you recommend?
    4. **Next Steps**: What additional considerations or actions might be needed?
    
    Be objective and highlight both the strongest arguments for and against the proposal.`;
    
        const synthesisProvider = await this.providerManager.getProvider(await this.providerManager.getPreferredProvider(['openai', 'gemini', 'azure', 'grok']));
        const synthesis = await synthesisProvider.generateText({
          prompt: synthesisPrompt,
          systemPrompt: "You are an expert facilitator synthesizing multiple expert opinions. Provide balanced, objective analysis that captures the full spectrum of perspectives.",
          temperature: 0.4,
          useSearchGrounding: false,
          toolName: 'consensus',
        });
    
        const result = {
          proposal: params.proposal,
          individual_analyses: responses,
          synthesis: synthesis.text,
          total_models_consulted: responses.length,
          successful_consultations: responses.filter(r => !r.error).length,
        };
    
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify(result, null, 2),
            },
          ],
          metadata: {
            toolName: "consensus",
            modelsConsulted: responses.length,
            synthesisModel: synthesis.model,
            totalUsage: responses.reduce((acc, r) => {
              if (r.usage) {
                return {
                  inputTokens: (acc.inputTokens || 0) + (r.usage.inputTokens || 0),
                  outputTokens: (acc.outputTokens || 0) + (r.usage.outputTokens || 0),
                  totalTokens: (acc.totalTokens || 0) + (r.usage.totalTokens || 0),
                };
              }
              return acc;
            }, {}),
          },
        };
      }
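  • The `totalUsage` reduction in the handler's metadata can be isolated as a small pure function. The sketch below mirrors the reduce above (the `Usage` field names follow the handler code; the standalone function is an illustration, not part of the source):

    ```typescript
    interface Usage { inputTokens?: number; outputTokens?: number; totalTokens?: number; }

    // Sum token usage across model responses, skipping entries that errored
    // (errored consultations carry no `usage` field).
    function aggregateUsage(responses: { usage?: Usage }[]): Usage {
      return responses.reduce<Usage>((acc, r) => {
        if (!r.usage) return acc;
        return {
          inputTokens: (acc.inputTokens || 0) + (r.usage.inputTokens || 0),
          outputTokens: (acc.outputTokens || 0) + (r.usage.outputTokens || 0),
          totalTokens: (acc.totalTokens || 0) + (r.usage.totalTokens || 0),
        };
      }, {});
    }

    // Example: one successful and one failed consultation.
    const total = aggregateUsage([
      { usage: { inputTokens: 120, outputTokens: 80, totalTokens: 200 } },
      {}, // errored consultation, no usage recorded
    ]);
    console.log(total.totalTokens); // → 200
    ```

    Because failed consultations still appear in `individual_analyses` (with an `error` field) but contribute nothing here, `total_models_consulted` and `successful_consultations` can legitimately differ while usage totals stay accurate.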
  • src/server.ts:701-719 (registration)
    Registers a prompt for the consensus tool, enabling natural-language invocation with a simplified, string-only args schema.
    server.registerPrompt("consensus", {
      title: "Multi-Model Consensus",
      description: "Get consensus from multiple AI models on a proposal or decision",
      // Use string-only schema for MCP compatibility
      argsSchema: {
        proposal: z.string().optional(),
        models: z.string().optional(), // e.g., "gpt-4:for, gemini:neutral"
        files: z.string().optional(),
        provider: z.string().optional(),
      },
    }, (args) => ({
      messages: [{
        role: "user",
        content: {
          type: "text",
          text: `Get multi-model consensus on this proposal: ${args.proposal || 'Please provide a proposal or decision to analyze.'}\n\nConsult these models: ${args.models || 'gpt-4:neutral, gemini:neutral'}${args.files ? `\n\nRelevant files for context: ${args.files}` : ''}`
        }
      }]
    }));
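
The prompt accepts models as a flat string such as "gpt-4:for, gemini:neutral". A hypothetical helper (not part of the source) that expands this shorthand into the tool's structured models array might look like:

```typescript
type Stance = "for" | "against" | "neutral";

// Hypothetical parser for the "model:stance" shorthand used by the prompt's
// string-only args schema; entries with a missing or unknown stance
// default to "neutral", matching the Zod schema's default.
function parseModels(spec: string): { model: string; stance: Stance }[] {
  const valid: Stance[] = ["for", "against", "neutral"];
  return spec
    .split(",")
    .map((entry) => entry.trim())
    .filter((entry) => entry.length > 0)
    .map((entry) => {
      const [model, stance] = entry.split(":").map((s) => s.trim());
      return {
        model,
        stance: valid.includes(stance as Stance) ? (stance as Stance) : "neutral",
      };
    });
}

console.log(parseModels("gpt-4:for, gemini:neutral"));
// → [ { model: 'gpt-4', stance: 'for' }, { model: 'gemini', stance: 'neutral' } ]
```

Keeping the prompt's args schema string-only sidesteps MCP clients that cannot pass nested objects, at the cost of needing a parsing step like this one.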
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Get consensus' implies a read-only operation, the description doesn't specify whether the tool makes API calls to external services, what the output format looks like, whether rate limits apply, or what happens with the 'files' parameter. For a tool that interacts with multiple AI models, this is insufficient behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without unnecessary words. It's appropriately sized and front-loaded with the essential information. Every word earns its place in this concise formulation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters (including a complex array of model objects), no annotations, and no output schema, the description is incomplete. It doesn't explain what 'consensus' means in practice, what the output looks like, or how the tool behaves operationally. The agent would need to guess about the tool's behavior and results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description adds no parameter semantics beyond what's in the schema: it doesn't explain how 'models' interact, what 'consensus' means operationally, or how 'files' are used. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get consensus from multiple AI models on a proposal' — a specific verb ('Get consensus') with a resource ('multiple AI models') and target ('on a proposal'). However, it doesn't differentiate the tool from siblings like 'analyze-code' or 'research', which might also involve AI analysis, so it doesn't fully distinguish it from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of when this tool is appropriate versus tools like 'analyze-code', 'research', or 'ultra-analyze', nor any context about prerequisites or limitations. The agent must infer usage from the name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/RealMikeChong/ultra-mcp'