Server Configuration

Describes the environment variables required to run the server.

Name              Required  Description                     Default
COUNCLY_API_KEY   Yes       Your MCP API key from Councly   (none)
COUNCLY_BASE_URL  No        API base URL                    https://councly.ai
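As a sketch of how these variables might be wired into an MCP client, here is a hypothetical configuration fragment in the common `mcpServers` style; the launch command and package name are assumptions, so check the server's own install instructions:

```json
{
  "mcpServers": {
    "councly": {
      "command": "npx",
      "args": ["-y", "councly-mcp"],
      "env": {
        "COUNCLY_API_KEY": "your-api-key",
        "COUNCLY_BASE_URL": "https://councly.ai"
      }
    }
  }
}
```

`COUNCLY_BASE_URL` can be omitted to use the default shown in the table above.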

Tools

Functions exposed to the LLM to take actions

councly_hearing

Create a council hearing where multiple LLMs (Claude, GPT, Gemini, Grok) debate a topic and a moderator synthesizes the verdict.

Use cases:

  • Code review: Get diverse perspectives on code quality, architecture, security

  • Technical decisions: Compare approaches, weigh trade-offs

  • Problem solving: Generate and evaluate multiple solutions

  • Brainstorming: Explore ideas from different angles

The hearing runs asynchronously. By default, this tool waits for completion and returns the verdict. Set wait=false to get the hearing ID immediately and check status later with councly_status.

Cost: Varies by preset (6-17 credits). Check councly.ai for current pricing.
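The fire-and-poll pattern described above can be sketched as follows. The stub functions below are stand-ins for real MCP tool invocations made through your client, and the response field names (`hearing_id`, `status`, `verdict`, `trust_score`) are assumptions based on the tool descriptions, not a documented schema:

```python
import time

def councly_hearing(topic, wait=True):
    # Stand-in for the real tool call. With wait=False the server is
    # described as returning a hearing ID immediately instead of blocking.
    return {"hearing_id": "h_123", "status": "in_progress"}

def councly_status(hearing_id):
    # Stand-in returning a completed hearing; real calls may report
    # in_progress (with phase/progress) or failed (with an error).
    return {"status": "completed",
            "verdict": "Approve with changes",
            "trust_score": 0.87}

# Start the hearing without blocking on the full debate...
hearing = councly_hearing("Review this PR for security issues", wait=False)

# ...then poll councly_status until it leaves the in-progress phase.
result = councly_status(hearing["hearing_id"])
while result["status"] == "in_progress":
    time.sleep(5)
    result = councly_status(hearing["hearing_id"])

print(result["verdict"])
```

In practice the default `wait=True` behavior is simpler; polling is mainly useful for long-running hearings.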

councly_status

Check the status of a council hearing.

Returns:

  • For in-progress hearings: current phase and progress percentage

  • For completed hearings: verdict, trust score, and counsel summaries

  • For failed hearings: error message

Use this to check on hearings created with wait=false, or to retrieve past hearing results.
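The three outcomes listed above suggest a simple branch on the status field when handling results. This is an illustrative sketch only; the field names (`status`, `phase`, `progress`, `verdict`, `trust_score`, `error`) are assumptions, not a published response schema:

```python
def summarize_status(payload):
    # Render a one-line summary of a hypothetical councly_status payload,
    # branching on the three outcomes the tool description lists.
    status = payload.get("status")
    if status == "in_progress":
        return f"In progress: {payload['phase']} phase, {payload['progress']}% complete"
    if status == "completed":
        return f"Verdict: {payload['verdict']} (trust score {payload['trust_score']})"
    return f"Failed: {payload.get('error', 'unknown error')}"

print(summarize_status({"status": "in_progress", "phase": "debate", "progress": 60}))
```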

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/slmnsrf/councly-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.