conclave_full

Synthesizes collective AI insight for complex questions by gathering opinions and peer rankings, then generating a consensus answer through a chairman model.

Instructions

Run the full conclave with synthesis (all 3 stages).

Most comprehensive - collects opinions, peer rankings, then has a Chairman model synthesize the best possible answer from the collective wisdom.

If a custom conclave is active (via conclave_select), it will be used instead of the tier-based config. The custom chairman overrides the chairman and chairman_preset parameters.

Args:

- question: The question to ask the conclave.
- tier: Model tier - "premium" (complex), "standard" (default), "budget" (simple). Ignored if a custom conclave is active.
- chairman: Override chairman model (e.g., 'anthropic/claude-sonnet-4'). Ignored if a custom conclave is active.
- chairman_preset: Use a context-based preset - "code", "creative", "reasoning", "concise", "balanced". Ignored if a custom conclave is active.

Returns: Chairman's synthesis, consensus level, rankings, and individual responses

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| question | Yes | | |
| tier | No | | standard |
| chairman | No | | |
| chairman_preset | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The main MCP tool handler for 'conclave_full' that validates inputs, checks for custom conclave selection, and calls the core implementation. This is the entry point exposed to MCP clients.
    @mcp.tool()
    async def conclave_full(
        question: str,
        tier: str = "standard",
        chairman: Optional[str] = None,
        chairman_preset: Optional[str] = None,
    ) -> str:
        """Run the full conclave with synthesis (all 3 stages).
    
        Most comprehensive - collects opinions, peer rankings, then has a Chairman
        model synthesize the best possible answer from the collective wisdom.
    
        If a custom conclave is active (via conclave_select), it will be used
        instead of the tier-based config. The custom chairman overrides the
        chairman and chairman_preset parameters.
    
        Args:
            question: The question to ask the conclave
            tier: Model tier - "premium" (complex), "standard" (default), "budget" (simple)
                  Ignored if custom conclave is active.
            chairman: Override chairman model (e.g., 'anthropic/claude-sonnet-4')
                      Ignored if custom conclave is active.
            chairman_preset: Use a context-based preset - "code", "creative", "reasoning", "concise", "balanced"
                             Ignored if custom conclave is active.
    
        Returns:
            Chairman's synthesis, consensus level, rankings, and individual responses
        """
        if not question:
            return "Error: 'question' is required"
    
        if not OPENROUTER_API_KEY:
            return "Error: OPENROUTER_API_KEY not configured"
    
        # Check for custom conclave
        custom_models, custom_chairman, source = get_active_models()
    
        if source == "custom":
            models = custom_models
            tier_label = "custom"
            # Custom chairman overrides parameters
            chairman = custom_chairman
            chairman_preset = None
        else:
            if tier not in ("premium", "standard", "budget"):
                tier = "standard"
    
            if chairman_preset and chairman_preset not in CHAIRMAN_PRESETS:
                return f"Error: Invalid chairman_preset. Valid options: {list(CHAIRMAN_PRESETS.keys())}"
    
            models = get_council_by_tier(tier)
            tier_label = tier
    
        try:
            result = await run_council_full(
                question,
                models=models,
                chairman=chairman,
                chairman_preset=chairman_preset,
            )
            result["tier"] = tier_label
            return format_full_result(result)
        except Exception as e:
            return f"Error: {str(e)}"
  • Core implementation of run_council_full that orchestrates all 3 stages: collecting opinions (stage1), peer rankings (stage2), and chairman synthesis (stage3). Includes consensus detection and tiebreaker logic.
    async def run_council_full(
        user_query: str,
        models: list[str] | None = None,
        chairman: str | None = None,
        chairman_preset: str | None = None,
    ) -> dict:
        """
        Full council: All 3 stages with final synthesis.
    
        Most expensive but provides synthesized best answer.
        Includes consensus detection and chairman tiebreaker if needed.
        """
        models = models or COUNCIL_MODELS
        chairman_model = get_current_chairman(override=chairman, preset=chairman_preset)
    
        # Validate council size (warn if even)
        size_validation = validate_council_size(models, chairman_model)
    
        # Stage 1: Collect opinions
        stage1 = await stage1_collect_responses(user_query, models)
    
        # Stage 2: Peer rankings
        stage2 = await stage2_collect_rankings(user_query, stage1, models)
    
        # Detect consensus
        consensus = detect_consensus(stage1, stage2)
    
        # Handle tiebreaker if needed
        tiebreaker = None
        if consensus["needs_tiebreaker"] and CHAIRMAN_TIEBREAKER_ENABLED:
            tied_models = consensus["split_details"]["tied_models"]
            tiebreaker = await chairman_tiebreaker(
                user_query, stage1, stage2, tied_models,
                chairman=chairman,
                chairman_preset=chairman_preset,
            )
    
        # Stage 3: Chairman synthesis (with consensus context)
        stage3 = await stage3_synthesize_final(
            user_query, stage1, stage2,
            chairman=chairman,
            chairman_preset=chairman_preset,
            consensus=consensus,
            tiebreaker=tiebreaker,
        )
    
        return {
            "tier": "full",
            "query": user_query,
            "stage1": stage1,
            "stage2": stage2,
            "consensus": consensus,
            "tiebreaker": tiebreaker,
            "stage3": stage3,
            "council_size": size_validation,
        }
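`detect_consensus` is referenced above but not shown. A plausible sketch, counting first-place votes across evaluators and flagging a split when the top spot is tied — the thresholds here are illustrative, not the project's actual values:

```python
def detect_consensus(stage1: list[dict], stage2: dict) -> dict:
    """Classify council agreement from peer rankings (illustrative thresholds).

    Counts first-place votes across evaluators, then labels the result
    strong / moderate / weak, or split when the top spot is tied.
    """
    rankings = [r["ranking"] for r in stage2["rankings"]]
    first_place_votes: dict[str, int] = {}
    for ranking in rankings:
        top = ranking[0]
        first_place_votes[top] = first_place_votes.get(top, 0) + 1

    best = max(first_place_votes.values())
    leaders = [m for m, v in first_place_votes.items() if v == best]
    agreement = best / len(rankings)

    if len(leaders) > 1:
        level = "split"
    elif agreement >= 0.75:
        level = "strong"
    elif agreement >= 0.5:
        level = "moderate"
    else:
        level = "weak"

    return {
        "level": level,
        "ranking_agreement": agreement,
        "top_ranked": leaders[0],
        "first_place_votes": first_place_votes,
        "needs_tiebreaker": level == "split",
        "split_details": {"tied_models": leaders} if level == "split" else {},
    }
```

The returned keys mirror what `run_council_full`, `stage3_synthesize_final`, and `format_full_result` actually read (`level`, `ranking_agreement`, `top_ranked`, `first_place_votes`, `needs_tiebreaker`, `split_details.tied_models`).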
  • Stage 3 helper function that synthesizes the final answer. Takes individual responses, peer rankings, consensus data, and tiebreaker results to generate the chairman's final synthesis.
    async def stage3_synthesize_final(
        user_query: str,
        stage1_results: list[dict],
        stage2_results: dict,
        chairman: str = None,
        chairman_preset: str = None,
        consensus: dict = None,
        tiebreaker: dict = None,
    ) -> dict:
        """
        Stage 3: Chairman synthesizes final answer from all inputs.
    
        Args:
            user_query: Original question
            stage1_results: Individual responses from Stage 1
            stage2_results: Rankings and feedback from Stage 2
            chairman: Explicit chairman model override
            chairman_preset: Preset name ("code", "creative", etc.)
            consensus: Consensus detection results
            tiebreaker: Tiebreaker vote results (if any)
    
        Returns:
            {"chairman": "model_id", "synthesis": "final answer text", "usage": {...}}
        """
        chairman_model = get_current_chairman(override=chairman, preset=chairman_preset)
    
        # Format Stage 1 responses (with model names)
        stage1_text = "\n\n---\n\n".join([
            f"Response from {resp['model']}:\n{resp['content']}"
            for resp in stage1_results
        ])
    
        # Format Stage 2 rankings
        stage2_text = "\n\n".join([
            f"Evaluation by {r['evaluator']}:\nRanking: {' > '.join(r['ranking'])}\n{r['feedback'][:500]}..."
            for r in stage2_results["rankings"]
        ])
    
        # Aggregate scores
        aggregate_text = "\n".join([
            f"  {model}: {score:.2f} avg rank"
            for model, score in sorted(stage2_results["aggregate"].items(), key=lambda x: x[1])
        ])
    
        # Build consensus context
        consensus_text = ""
        if consensus:
            consensus_text = f"""
    === CONSENSUS STATUS ===
    
    Level: {consensus.get('level', 'unknown').upper()}
    Ranking Agreement: {consensus.get('ranking_agreement', 0):.0%}
    Top Ranked: {consensus.get('top_ranked', 'unknown')}
    """
            if consensus.get('level') == 'split':
                consensus_text += f"SPLIT DETECTED: {', '.join(consensus.get('split_details', {}).get('tied_models', []))}\n"
    
        # Build tiebreaker context
        tiebreaker_text = ""
        if tiebreaker and tiebreaker.get('valid_vote'):
            tiebreaker_text = f"""
    === CHAIRMAN TIEBREAKER VOTE ===
    
    Your tiebreaker vote selected: {tiebreaker['vote']} (Response {tiebreaker['vote_label']})
    This response should be weighted more heavily in your synthesis.
    """
    
        # Adjust system prompt based on consensus level
        if consensus and consensus.get('level') == 'split' and tiebreaker:
            system_prompt = STAGE3_SYSTEM_PROMPT + """
    
    IMPORTANT: The council was SPLIT on this question. You cast a tiebreaker vote.
    Your synthesis should favor the response you voted for while acknowledging
    the valid points from other responses. Make the reasoning clear."""
        elif consensus and consensus.get('level') == 'weak':
            system_prompt = STAGE3_SYSTEM_PROMPT + """
    
    NOTE: The council showed WEAK consensus on this question - there was significant
    disagreement. Your synthesis should acknowledge this uncertainty and present
    multiple valid perspectives where appropriate."""
        else:
            system_prompt = STAGE3_SYSTEM_PROMPT
    
        synthesis_prompt = f"""Original question: {user_query}
    
    === INDIVIDUAL RESPONSES ===
    
    {stage1_text}
    
    === PEER EVALUATIONS ===
    
    {stage2_text}
    
    === AGGREGATE RANKINGS (lower is better) ===
    
    {aggregate_text}
    {consensus_text}
    {tiebreaker_text}
    === YOUR TASK ===
    
    As Chairman, synthesize the best possible answer to the original question,
    drawing on the council's collective wisdom and the peer evaluations."""
    
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": synthesis_prompt},
        ]
    
        result = await query_model(chairman_model, messages)
    
        return {
            "chairman": chairman_model,
            "synthesis": result["content"],
            "usage": result.get("usage", {}),
            "consensus_level": consensus.get("level") if consensus else None,
            "tiebreaker_used": tiebreaker is not None and tiebreaker.get("valid_vote", False),
        }
  • Result formatter for conclave_full that formats the output with consensus status, rankings, synthesis, and individual responses into a readable markdown format.
    def format_full_result(result: dict) -> str:
        """Format full conclave result for display."""
        output = "## Conclave Full Result\n\n"
    
        # Consensus status badge
        consensus = result.get("consensus", {})
        consensus_level = consensus.get("level", "unknown")
        consensus_emoji = {
            "strong": "✅",
            "moderate": "🟡",
            "weak": "🟠",
            "split": "⚖️",
        }.get(consensus_level, "❓")
    
        output += f"**Consensus: {consensus_emoji} {consensus_level.upper()}**"
        if consensus.get("ranking_agreement"):
            output += f" ({consensus['ranking_agreement']:.0%} agreement)"
        output += "\n\n"
    
        # Tiebreaker info if used
        tiebreaker = result.get("tiebreaker")
        if tiebreaker and tiebreaker.get("valid_vote"):
            output += f"⚖️ **Tiebreaker Vote**: Chairman selected **{tiebreaker['vote'].split('/')[-1]}** (Response {tiebreaker['vote_label']})\n\n"
    
        # Council size warning if even
        size_info = result.get("council_size", {})
        if size_info and not size_info.get("valid", True):
            output += f"⚠️ {size_info.get('message', 'Council size is even')}\n\n"
    
        output += "---\n\n"
    
        # Final synthesis (most important)
        output += "### Chairman's Synthesis\n\n"
        output += f"_Chairman: {result['stage3']['chairman']}_\n"
        if result['stage3'].get('tiebreaker_used'):
            output += "_Tiebreaker vote was cast_\n"
        output += f"\n{result['stage3']['synthesis']}\n\n"
        output += "---\n\n"
    
        # Aggregate rankings
        output += "### Model Rankings (lower is better)\n\n"
        sorted_rankings = sorted(
            result["stage2"]["aggregate"].items(),
            key=lambda x: x[1]
        )
        for i, (model, score) in enumerate(sorted_rankings, 1):
            model_name = model.split("/")[-1]
            # Mark tied models
            is_tied = consensus_level == "split" and model in consensus.get("split_details", {}).get("tied_models", [])
            tie_marker = " ⚖️" if is_tied else ""
            output += f"{i}. **{model_name}**: {score:.2f}{tie_marker}\n"
    
        # First place vote distribution
        if consensus.get("first_place_votes"):
            output += "\n_First-place votes:_ "
            votes = [f"{m.split('/')[-1]}={v}" for m, v in consensus["first_place_votes"].items()]
            output += ", ".join(votes)
            output += "\n"
    
        output += "\n---\n\n"
    
        # Individual responses (collapsed by default in most renderers)
        output += "<details>\n<summary>Individual Responses</summary>\n\n"
        for resp in result["stage1"]:
            model_name = resp["model"].split("/")[-1]
            output += f"#### {model_name}\n\n{resp['content']}\n\n---\n\n"
        output += "</details>\n"
    
        # Tiebreaker reasoning if available
        if tiebreaker and tiebreaker.get("reasoning"):
            output += "\n<details>\n<summary>Tiebreaker Reasoning</summary>\n\n"
            output += f"{tiebreaker['reasoning']}\n"
            output += "</details>\n"
    
        return output
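`validate_council_size`, whose result drives the even-council warning in the formatter above, is also not shown. A plausible sketch matching the fields `format_full_result` reads (`valid`, `message`):

```python
def validate_council_size(models: list[str], chairman: str) -> dict:
    """Warn when the council has an even number of voters, since peer
    rankings can then tie and force a chairman tiebreaker."""
    size = len(models)
    if size % 2 == 0:
        return {
            "valid": False,
            "size": size,
            "message": (
                f"Council size is even ({size} models); rankings may tie "
                "and require a chairman tiebreaker."
            ),
        }
    return {"valid": True, "size": size, "message": ""}
```
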
  • server.py:41-45 (registration)
    Import statement that brings the core conclave functions (including run_council_full) from the conclave module, making them available for use in the MCP tool handlers.
    from conclave import (
        run_council_quick,
        run_council_ranked,
        run_council_full,
    )
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by explaining the multi-stage process ('collects opinions, peer rankings, then has a Chairman model synthesize'), the override behavior with custom conclaves, and what the tool returns. It doesn't mention rate limits, auth needs, or error conditions, but provides substantial behavioral context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose first. The Args and Returns sections are well-structured. A few sentences could be slightly tighter, but overall there is very little wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-stage synthesis with overrides), no annotations, and 0% schema coverage, the description provides complete context. It explains the process, parameter semantics, conditional behavior, and return values. The output schema exists, so the description appropriately doesn't need to detail return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all 4 parameters in detail: what 'question' is for, the meaning of 'tier' values, what 'chairman' overrides, and the purpose of 'chairman_preset' options. It also clarifies conditional behavior ('Ignored if custom conclave is active') that isn't in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Run the full conclave with synthesis') and resources ('all 3 stages'), and distinguishes it from siblings by emphasizing it's the 'most comprehensive' option that includes synthesis. It explicitly mentions what makes it different from other conclave tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Most comprehensive') and when parameters are ignored ('Ignored if custom conclave is active'). It also implies alternatives through sibling tool names like conclave_quick and conclave_ranked, giving clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
