# @councly/mcp
MCP (Model Context Protocol) server for [Councly](https://councly.ai) - Multi-LLM Council Hearings.
Enables Claude Code, Codex, and other MCP-compatible AI assistants to convene council hearings in which multiple LLMs (Claude, GPT, Gemini, Grok) debate a topic and synthesize a verdict.
## Installation
```bash
npm install -g @councly/mcp
```
Or use directly with npx:
```bash
npx @councly/mcp
```
## Setup
### 1. Get an API Key
1. Sign in to [Councly](https://councly.ai)
2. Go to Settings > MCP Integration
3. Create a new API key
4. Copy the key (shown only once)
### 2. Configure Claude Code
Add to your Claude Code settings (`~/.claude/settings.json`):
```json
{
  "mcpServers": {
    "councly": {
      "command": "npx",
      "args": ["@councly/mcp"],
      "env": {
        "COUNCLY_API_KEY": "cnc_your_key_here"
      }
    }
  }
}
```
### 3. Configure Codex CLI
Set the API key in your environment:
```bash
export COUNCLY_API_KEY=cnc_your_key_here
```
Or add the export to your shell profile so it persists across sessions.
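If your Codex CLI version supports registering MCP servers in `~/.codex/config.toml`, the equivalent entry might look like the sketch below. The `mcp_servers` table name and key layout are assumptions; check your Codex CLI documentation for the exact format.
```toml
[mcp_servers.councly]
command = "npx"
args = ["@councly/mcp"]
env = { COUNCLY_API_KEY = "cnc_your_key_here" }
```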
## Tools
### councly_hearing
Create a council hearing where multiple LLMs debate a topic.
**Parameters:**
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| subject | string | Yes | - | The topic to discuss (10-10000 chars) |
| preset | string | No | balanced | Model preset: `balanced`, `fast`, `coding`, `coding_plus` |
| workflow | string | No | auto | Workflow: `auto`, `discussion`, `review`, `brainstorming` |
| wait | boolean | No | true | Wait for completion |
| timeout_seconds | number | No | 300 | Max wait time in seconds (30-600) |
**Presets:**
| Preset | Credits | Counsels | Best For |
|--------|---------|----------|----------|
| balanced | 9 | 3 | General purpose discussions |
| fast | 6 | 3 | Quick responses, simple topics |
| coding | 14 | 3 | Code review, technical decisions |
| coding_plus | 17 | 4 | Complex code problems |
**Example:**
```
Use councly_hearing with subject="Review this Python function for security issues:
def authenticate(username, password):
    query = f"SELECT * FROM users WHERE username = '{username}' AND password = '{password}'"
    return db.execute(query)
" and preset="coding"
```
### councly_status
Check the status of a hearing.
**Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| hearing_id | string (uuid) | Yes | The hearing ID |
**Example:**
```
Use councly_status with hearing_id="550e8400-e29b-41d4-a716-446655440000"
```
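For long-running hearings, the two tools can be combined: pass `wait=false` to `councly_hearing` and poll with `councly_status`. This pattern assumes the non-waiting call returns the hearing ID immediately, which seems implied by the `wait` parameter but is not stated explicitly above.
```
Use councly_hearing with subject="<your topic>" and wait=false,
then poll councly_status with the returned hearing_id until the
status is completed, failed, or early_stopped.
```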
## Response Format
Completed hearings return:
- **Status**: completed, failed, or early_stopped
- **Verdict**: Synthesized conclusion from the moderator
- **Trust Score**: 0-100 confidence rating
- **Counsel Perspectives**: Summary from each counsel
- **Cost**: Credits used
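As a rough mental model, the fields above could be typed as follows. This is a hypothetical shape inferred from the bullet list, not the actual wire format; field names and types are assumptions, so consult the API reference for the real schema.

```typescript
// Hypothetical result shape inferred from the README's field list.
interface HearingResult {
  status: "completed" | "failed" | "early_stopped";
  verdict: string;               // synthesized conclusion from the moderator
  trustScore: number;            // 0-100 confidence rating
  counselPerspectives: string[]; // one summary per counsel
  creditsUsed: number;           // credits deducted for the hearing
}

// Purely illustrative example value:
const example: HearingResult = {
  status: "completed",
  verdict: "The function is vulnerable to SQL injection.",
  trustScore: 92,
  counselPerspectives: ["Counsel A flagged string interpolation in the query."],
  creditsUsed: 14,
};
```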
## Error Handling
Common errors:
| Code | Description |
|------|-------------|
| INSUFFICIENT_BALANCE | Not enough credits |
| ACTIVE_HEARING_EXISTS | One hearing already in progress |
| RATE_LIMIT_EXCEEDED | Too many requests |
| CONTENT_BLOCKED | Subject contains prohibited content |
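A client wrapping this server might map the documented error codes to user guidance, e.g. like the sketch below. The error-object shape (a bare `code` string) is an assumption for illustration, not the actual API contract.

```typescript
// Illustrative handler: maps the documented error codes to next-step advice.
function adviceFor(code: string): string {
  switch (code) {
    case "INSUFFICIENT_BALANCE":
      return "Purchase more credits at councly.ai/billing before retrying.";
    case "ACTIVE_HEARING_EXISTS":
      return "Wait for the current hearing to finish (check with councly_status).";
    case "RATE_LIMIT_EXCEEDED":
      return "Back off and retry after a short delay.";
    case "CONTENT_BLOCKED":
      return "Rephrase the subject; it triggered the content filter.";
    default:
      return `Unhandled error code: ${code}`;
  }
}
```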
## Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| COUNCLY_API_KEY | Yes | Your MCP API key |
| COUNCLY_BASE_URL | No | API base URL (default: https://councly.ai) |
## Pricing
Councly uses a credit-based pricing model:
- 1 credit = $0.01 USD
- Credits are deducted at hearing creation
- Failed hearings are refunded
Purchase credits at [councly.ai/billing](https://councly.ai/billing).
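Putting the credit rate together with the preset table above, the per-hearing dollar cost works out as follows (numbers taken directly from this README):

```typescript
// Converts the documented preset credit costs to USD at 1 credit = $0.01.
const CREDIT_USD = 0.01;

const PRESET_CREDITS: Record<string, number> = {
  balanced: 9,
  fast: 6,
  coding: 14,
  coding_plus: 17,
};

// Format a preset's cost as a dollar string, e.g. "$0.09" for `balanced`.
function usd(preset: string): string {
  return `$${(PRESET_CREDITS[preset] * CREDIT_USD).toFixed(2)}`;
}

for (const name of Object.keys(PRESET_CREDITS)) {
  console.log(`${name}: ${PRESET_CREDITS[name]} credits (${usd(name)})`);
}
```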
## Links
- [Councly Website](https://councly.ai)
- [Documentation](https://councly.ai/docs)
- [API Reference](https://councly.ai/docs/api)
- [GitHub Issues](https://github.com/slmnsrf/councly-mcp/issues)
## License
Apache 2.0 - See [LICENSE](LICENSE)