councly_hearing

Debates a topic across multiple AI models (Claude, GPT, Gemini, Grok) and synthesizes a verdict, bringing diverse perspectives to code review, technical decisions, and problem solving.

Instructions

Create a council hearing where multiple LLMs (Claude, GPT, Gemini, Grok) debate a topic and a moderator synthesizes the verdict.

Use cases:

  • Code review: Get diverse perspectives on code quality, architecture, security

  • Technical decisions: Compare approaches, weigh trade-offs

  • Problem solving: Generate and evaluate multiple solutions

  • Brainstorming: Explore ideas from different angles

The hearing runs asynchronously. By default, this tool waits for completion and returns the verdict. Set wait=false to get the hearing ID immediately and check status later with councly_status.

Cost: Varies by preset (6-17 credits). Check councly.ai for current pricing.
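As a concrete illustration, here is a minimal arguments object for a code-review hearing. The subject text is invented for the example; any MCP client would pass an object of this shape as the tool call's arguments.

```typescript
// Illustrative arguments for a councly_hearing call; the subject text is
// invented for the example.
const hearingArgs = {
  subject: 'Review this Express route handler for SQL injection risks: ...',
  preset: 'coding',          // 14 credits, code-focused models
  workflow: 'review',        // code/document review format
  wait: true,                // block until the verdict is returned
  timeout_seconds: 300,      // give up after 5 minutes
};
```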

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| subject | Yes | The topic or question to discuss. Be specific and include relevant context. | (none) |
| preset | No | Model preset: balanced (9 credits), fast (6 credits), coding (14 credits), coding_plus (17 credits, 4 counsels) | balanced |
| workflow | No | Workflow type: auto, discussion, review, or brainstorming | auto |
| wait | No | Wait for completion (true) or return immediately (false) | true |
| timeout_seconds | No | Max wait time in seconds, 30-600 (only used if wait=true) | 300 |

Implementation Reference

  • Main execution handler for the councly_hearing tool within the MCP CallToolRequestSchema handler. Parses input with schema, creates hearing via API client, optionally polls for completion, formats result, and returns MCP response.
    case 'councly_hearing': {
      const parsed = counclyHearingSchema.parse(args);
    
      // Create the hearing
      const hearing = await client.createHearing({
        subject: parsed.subject,
        preset: parsed.preset,
        workflow: parsed.workflow,
      });
    
      // If not waiting, return immediately
      if (!parsed.wait) {
        return {
          content: [
            {
              type: 'text',
              text: `Hearing created: ${hearing.hearingId}\nStatus: ${hearing.status}\nPreset: ${hearing.preset}\nCost: ${hearing.cost.credits} credits\n\nUse councly_status to check progress.`,
            },
          ],
        };
      }
    
      // Wait for completion
      const status = await client.waitForCompletion(hearing.hearingId, {
        timeoutMs: parsed.timeout_seconds * 1000,
        onProgress: (s) => {
          // Stream progress to stderr so it doesn't pollute the MCP stdout channel
          if (s.progress !== undefined) {
            process.stderr.write(`\rProgress: ${s.progress}%`);
          }
        },
      });
    
      return {
        content: [
          {
            type: 'text',
            text: formatHearingResult(status),
          },
        ],
      };
    }
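The handler above calls formatHearingResult, which is not shown in the excerpt. A hedged sketch of what such a formatter might look like, assuming the status object carries a hearingId, a status string, and an optional verdict (the interface below is a hypothetical shape, not the real HearingStatusResponse):

```typescript
// Hypothetical shape; the real HearingStatusResponse is not shown above.
interface HearingStatus {
  hearingId: string;
  status: string;
  verdict?: string;
}

// Illustrative stand-in for formatHearingResult: a one-line summary plus the
// verdict when the hearing produced one.
function formatHearingResultSketch(s: HearingStatus): string {
  const lines = [`Hearing ${s.hearingId}: ${s.status}`];
  if (s.verdict) {
    lines.push('', 'Verdict:', s.verdict);
  }
  return lines.join('\n');
}
```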
  • Zod input schema definition for councly_hearing tool parameters.
    export const counclyHearingSchema = z.object({
      subject: z.string().min(10).max(10000).describe(
        'The topic or question to discuss. Should be clear and specific. For code review, include the code. For decisions, include context and constraints.'
      ),
      preset: z.enum(['balanced', 'fast', 'coding', 'coding_plus']).optional().default('balanced').describe(
        'Model preset to use. balanced (9 credits) - general purpose, fast (6 credits) - quick responses, coding (14 credits) - code-focused, coding_plus (17 credits) - enhanced with 4 counsels'
      ),
      workflow: z.enum(['auto', 'discussion', 'review', 'brainstorming']).optional().default('auto').describe(
        'Workflow type. auto - system chooses best fit, discussion - debate format, review - code/document review, brainstorming - idea generation'
      ),
      wait: z.boolean().optional().default(true).describe(
        'Whether to wait for completion. If true, polls until hearing finishes. If false, returns immediately with hearing ID.'
      ),
      timeout_seconds: z.number().min(30).max(600).optional().default(300).describe(
        'Maximum seconds to wait for completion (only used if wait=true). Default 300 (5 minutes).'
      ),
    });
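The schema above bounds the subject length, clamps the timeout, and fills defaults for every optional field. The same behavior can be sketched in plain TypeScript without zod; applyDefaults below is an illustrative helper, not part of the server code:

```typescript
// Plain-TypeScript sketch of the defaults and bounds the zod schema enforces.
// applyDefaults is an illustrative helper, not part of the server code.
interface HearingInput {
  subject: string;
  preset?: 'balanced' | 'fast' | 'coding' | 'coding_plus';
  workflow?: 'auto' | 'discussion' | 'review' | 'brainstorming';
  wait?: boolean;
  timeout_seconds?: number;
}

function applyDefaults(input: HearingInput): Required<HearingInput> {
  if (input.subject.length < 10 || input.subject.length > 10000) {
    throw new Error('subject must be between 10 and 10000 characters');
  }
  const timeout = input.timeout_seconds ?? 300;
  if (timeout < 30 || timeout > 600) {
    throw new Error('timeout_seconds must be between 30 and 600');
  }
  return {
    subject: input.subject,
    preset: input.preset ?? 'balanced',
    workflow: input.workflow ?? 'auto',
    wait: input.wait ?? true,
    timeout_seconds: timeout,
  };
}
```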
  • src/tools.ts:41-90 (registration)
    MCP tool definition object for councly_hearing, including name, description, and JSON schema, part of TOOL_DEFINITIONS array used for listTools response.
      {
        name: 'councly_hearing',
        description: `Create a council hearing where multiple LLMs (Claude, GPT, Gemini, Grok) debate a topic and a moderator synthesizes the verdict.
    
    Use cases:
    - Code review: Get diverse perspectives on code quality, architecture, security
    - Technical decisions: Compare approaches, weigh trade-offs
    - Problem solving: Generate and evaluate multiple solutions
    - Brainstorming: Explore ideas from different angles
    
    The hearing runs asynchronously. By default, this tool waits for completion and returns the verdict. Set wait=false to get the hearing ID immediately and check status later with councly_status.
    
    Cost: Varies by preset (6-17 credits). Check councly.ai for current pricing.`,
        inputSchema: {
          type: 'object',
          properties: {
            subject: {
              type: 'string',
              description: 'The topic or question to discuss. Be specific and include relevant context.',
              minLength: 10,
              maxLength: 10000,
            },
            preset: {
              type: 'string',
              enum: ['balanced', 'fast', 'coding', 'coding_plus'],
              default: 'balanced',
              description: 'Model preset: balanced (9 credits), fast (6 credits), coding (14 credits), coding_plus (17 credits, 4 counsels)',
            },
            workflow: {
              type: 'string',
              enum: ['auto', 'discussion', 'review', 'brainstorming'],
              default: 'auto',
              description: 'Workflow type for the hearing',
            },
            wait: {
              type: 'boolean',
              default: true,
              description: 'Wait for completion (true) or return immediately (false)',
            },
            timeout_seconds: {
              type: 'number',
              default: 300,
              minimum: 30,
              maximum: 600,
              description: 'Max wait time in seconds (if wait=true)',
            },
          },
          required: ['subject'],
        },
      },
  • CounclyClient method to create a hearing by making POST request to /api/v1/mcp/council endpoint.
    async createHearing(request: CreateHearingRequest): Promise<CreateHearingResponse> {
      const headers: Record<string, string> = {};
      if (request.idempotencyKey) {
        headers['Idempotency-Key'] = request.idempotencyKey;
      }
    
      return this.request<CreateHearingResponse>('POST', '/council', request, headers);
    }
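createHearing delegates to a private request helper that is not shown in the excerpt. A hedged sketch of how such a helper might assemble its fetch options; the Bearer auth scheme, header names, and placeholder API key are assumptions, not confirmed by the code above:

```typescript
// Hedged sketch of the private request helper createHearing relies on.
// The Bearer auth scheme and header names are assumptions.
interface SketchRequestInit {
  method: string;
  headers: Record<string, string>;
  body?: string;
}

function buildRequestInit(
  method: string,
  body?: unknown,
  extraHeaders: Record<string, string> = {},
  apiKey = 'YOUR_API_KEY'
): SketchRequestInit {
  return {
    method,
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
      ...extraHeaders, // e.g. the Idempotency-Key header set by createHearing
    },
    body: body === undefined ? undefined : JSON.stringify(body),
  };
}
```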
  • CounclyClient helper to poll getHearingStatus until completion or timeout, with optional progress callback.
    async waitForCompletion(
      hearingId: string,
      options: {
        pollIntervalMs?: number;
        timeoutMs?: number;
        onProgress?: (status: HearingStatusResponse) => void;
      } = {}
    ): Promise<HearingStatusResponse> {
      const {
        pollIntervalMs = 5000,
        timeoutMs = 300000, // 5 minutes default
        onProgress,
      } = options;
    
      const startTime = Date.now();
    
      while (true) {
        const status = await this.getHearingStatus(hearingId);
    
        if (onProgress) {
          onProgress(status);
        }
    
        // Terminal states
        if (['completed', 'failed', 'early_stopped'].includes(status.status)) {
          return status;
        }
    
        // Timeout check
        if (Date.now() - startTime > timeoutMs) {
          throw new CounclyApiError(
            'TIMEOUT',
            `Hearing did not complete within ${timeoutMs / 1000} seconds`,
            408
          );
        }
    
        // Wait before next poll
        await new Promise((resolve) => setTimeout(resolve, pollIntervalMs));
      }
    }
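The polling loop's decision logic can be sketched synchronously: given the sequence of statuses that successive polls would return, find the first terminal one, or give up after a poll budget (mirroring the timeout branch). The non-terminal status names 'queued' and 'running' are assumptions; only the terminal set appears in the code above:

```typescript
// Non-terminal names are assumptions; the terminal set matches the real loop.
type Status = 'queued' | 'running' | 'completed' | 'failed' | 'early_stopped';

const TERMINAL: Status[] = ['completed', 'failed', 'early_stopped'];

// Walk a fake sequence of polled statuses; return the first terminal status,
// or null when the poll budget is exhausted (the timeout branch).
function resolveFinalStatus(polls: Status[], maxPolls: number): Status | null {
  for (let i = 0; i < Math.min(polls.length, maxPolls); i++) {
    const s = polls[i];
    if (TERMINAL.includes(s)) return s;
  }
  return null;
}
```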

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/slmnsrf/councly-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.