
Server Configuration

Describes the environment variables required to run the server.

No arguments: this server requires no environment variables.

Capabilities

Features and capabilities supported by this server

Capability    Details
tools         {}
logging       {}
prompts       {}

Tools

Functions exposed to the LLM to take actions

ask-ai

Ask an AI provider a question. Supports provider selection (--provider), model selection (-m), sandbox mode (-s), and a boolean changeMode flag for returning structured edit suggestions. Supports Gemini, Codex, and Claude Code.

ping

Echo a test message with a structured response.

Help

Display help information for the configured AI provider CLI

brainstorm

Generate novel ideas with dynamic context gathering. Includes creative frameworks (SCAMPER, Design Thinking, etc.), domain context integration, idea clustering, feasibility analysis, and iterative refinement. Supports Gemini, Codex, and Claude.

fetch-chunk

Retrieves cached chunks from a changeMode response. Use this to get subsequent chunks after receiving a partial changeMode response.

timeout-test

Test timeout prevention by running for a specified duration

mitigate-mistakes

Apply a single research-grounded skill gate to detect common AI coding mistakes. Choose the gate relevant to your current task stage: requirements-grounding, context-scope-discipline, dependency-verification, design-doc-and-architecture-gate, test-and-error-path-gate, secure-coding-and-validation-gate, code-review-and-change-gate, code-quality-enforcer, or deterministic-validation-gate. Based on academic research and professional engineering standards.

coordinate-review

Coordinated multi-gate review. Auto-selects the minimum set of research-grounded skill gates for a given task type (feature/bugfix/refactor/dependency-update) and runs them in a single structured pass. Produces gate-by-gate findings, blocking issues, and a final merge recommendation. Uses the AI Coding Agent Mitigator coordinator pattern.

deploy-agents

Deploy multiple AI agents to work on tasks collaboratively or independently. Strategies: 'parallel' (N agents on 1 task), 'sequential' (chained, each sees prior output), 'fan-out' (1 agent per task). Agents communicate via shared context to avoid conflicts and redundancy. Current agent mode: read-only.

agent-status

Check status of multi-agent orchestration sessions. Provide a sessionId to get details, or omit to list recent sessions. Current agent mode: read-only.
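Tools like the ones above are invoked through the standard MCP tools/call request. The sketch below builds such a request for ask-ai in Python; the argument names (prompt, provider) are illustrative assumptions, not confirmed parameter names for this server.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 tools/call request, as used by MCP clients."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(payload)

# Hypothetical arguments -- the actual parameter names may differ.
request = build_tool_call(1, "ask-ai", {
    "prompt": "Summarize this repository",
    "provider": "gemini",
})
print(request)
```

In practice an MCP client library sends this payload over stdio or HTTP; the snippet only shows the request shape.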

Prompts

Interactive templates invoked by user choice

ask-ai

Execute AI analysis using Gemini, Codex, or Claude. Supports enhanced change mode for structured edit suggestions (Gemini only).

ping

Echo a test message with a structured response.

Help

Display help information for the configured AI provider CLI.

brainstorm

Generate a structured brainstorming prompt with methodology-driven ideation, domain context integration, and an analytical evaluation framework.

fetch-chunk

Fetch the next chunk of a response.

timeout-test

Test the timeout prevention system by running a long operation.

mitigate-mistakes

Apply a single research-grounded gate to analyze code or a task for common AI agent failure modes.

coordinate-review

Run the minimum relevant skill gates for a task type in a single coordinated review pass.

deploy-agents

Deploy multiple AI agents with coordination. Supports parallel collaborative analysis, sequential refinement, and fan-out task distribution.

Resources

Contextual data attached and managed by the client

No resources.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/diaz3618/ccg-mcp-tool'
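The same endpoint can be queried from a script. A minimal sketch in Python, assuming only the URL scheme shown above (https://glama.ai/api/mcp/v1/servers/&lt;owner&gt;/&lt;server&gt;); the response schema is not documented here, so the fetch is left commented out and would print the raw JSON as-is.

```python
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, server: str) -> str:
    """Build the MCP directory API URL for a given server."""
    return f"{BASE}/{owner}/{server}"

url = server_url("diaz3618", "ccg-mcp-tool")
print(url)

# Uncomment to perform the request; prints the raw JSON response.
# with urllib.request.urlopen(url) as resp:
#     print(json.dumps(json.load(resp), indent=2))
```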

If you have feedback or need assistance with the MCP directory API, please join our Discord server.