## Server Configuration
Describes the environment variables required to run the server.
| Name | Required | Description | Default |
|---|---|---|---|
| _None_ | n/a | This server requires no environment variables or arguments. | n/a |
## Capabilities
Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | {} |
| logging | {} |
| prompts | {} |
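During the MCP `initialize` handshake, a server advertises these capabilities in its result object. A minimal sketch of that shape, with the three capabilities from the table above; the `serverInfo` values and protocol version are placeholders, not this server's actual metadata:

```python
# Sketch of the capabilities object a server with this feature set would
# return from "initialize", per the Model Context Protocol handshake.
# serverInfo and protocolVersion below are illustrative placeholders.
initialize_result = {
    "protocolVersion": "2024-11-05",
    "capabilities": {
        "tools": {},     # server exposes tools (tools/list, tools/call)
        "logging": {},   # server can emit log messages to the client
        "prompts": {},   # server exposes prompts (prompts/list, prompts/get)
    },
    "serverInfo": {"name": "example-server", "version": "0.0.0"},
}

assert set(initialize_result["capabilities"]) == {"tools", "logging", "prompts"}
```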
## Tools
Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| ask-ai | Query an AI provider. Supports provider selection (`--provider`), model selection (`-m`), sandbox mode (`-s`), and a boolean `changeMode` for returning structured edit suggestions. Supports Gemini, Codex, and Claude Code. |
| ping | Echo a test message. |
| Help | Display help information for the configured AI provider CLI |
| brainstorm | Generate novel ideas with dynamic context gathering, using creative frameworks (SCAMPER, Design Thinking, etc.), domain context integration, idea clustering, feasibility analysis, and iterative refinement. Supports Gemini, Codex, and Claude. |
| fetch-chunk | Retrieves cached chunks from a changeMode response. Use this to get subsequent chunks after receiving a partial changeMode response. |
| timeout-test | Test timeout prevention by running for a specified duration |
| mitigate-mistakes | Apply a single research-grounded skill gate to detect common AI coding mistakes. Choose the gate relevant to your current task stage: requirements-grounding, context-scope-discipline, dependency-verification, design-doc-and-architecture-gate, test-and-error-path-gate, secure-coding-and-validation-gate, code-review-and-change-gate, code-quality-enforcer, or deterministic-validation-gate. Based on academic research and professional engineering standards. |
| coordinate-review | Coordinated multi-gate review. Auto-selects the minimum set of research-grounded skill gates for a given task type (feature/bugfix/refactor/dependency-update) and runs them in a single structured pass. Produces gate-by-gate findings, blocking issues, and a final merge recommendation. Uses the AI Coding Agent Mitigator coordinator pattern. |
| deploy-agents | Deploy multiple AI agents to work on tasks collaboratively or independently. Strategies: 'parallel' (N agents on 1 task), 'sequential' (chained, each sees prior output), 'fan-out' (1 agent per task). Agents communicate via shared context to avoid conflicts and redundancy. Current agent mode: read-only. |
| agent-status | Check status of multi-agent orchestration sessions. Provide a sessionId to get details, or omit to list recent sessions. Current agent mode: read-only. |
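As an illustration, invoking the `ask-ai` tool over MCP uses a standard JSON-RPC `tools/call` request. The envelope below follows the MCP specification, but the argument names (`prompt`, `provider`, `changeMode`) are assumptions inferred from the description above, not confirmed parameter names:

```python
import json

# Hypothetical tools/call payload for the ask-ai tool. The JSON-RPC
# envelope and "tools/call" method come from the MCP specification;
# the argument names inside "arguments" are illustrative guesses.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask-ai",
        "arguments": {
            "prompt": "Summarize the failing test output",
            "provider": "gemini",   # or "codex" / "claude"
            "changeMode": True,     # request structured edit suggestions
        },
    },
}

print(json.dumps(request, indent=2))
```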
## Prompts
Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| ask-ai | Execute AI analysis using Gemini, Codex, or Claude. Supports enhanced change mode for structured edit suggestions (Gemini only). |
| ping | Echo test message with structured response. |
| Help | Display help information for the configured AI provider CLI |
| brainstorm | Generate structured brainstorming prompt with methodology-driven ideation, domain context integration, and analytical evaluation framework |
| fetch-chunk | Fetch the next chunk of a response |
| timeout-test | Test the timeout prevention system by running a long operation |
| mitigate-mistakes | Apply a single research-grounded gate to analyze code or a task for common AI agent failure modes. |
| coordinate-review | Run the minimum relevant skill gates for a task type in a single coordinated review pass. |
| deploy-agents | Deploy multiple AI agents with coordination. Supports parallel collaborative analysis, sequential refinement, and fan-out task distribution. |
## Resources
Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| _None_ | This server exposes no resources. |