Smart-AI-Bridge
Server Configuration
Describes the environment variables used to configure the server. All are optional.
| Name | Required | Description | Default |
|---|---|---|---|
| TDD_MODE | No | Enable TDD mode for testing | false |
| CLOUD_API_KEY_1 | No | API key for Cloud Backend 1 (Coding Specialist) | |
| CLOUD_API_KEY_2 | No | API key for Cloud Backend 2 (Analysis Specialist) | |
| CLOUD_API_KEY_3 | No | API key for Cloud Backend 3 (General Purpose) | |
| LOCAL_MODEL_ENDPOINT | No | The URL endpoint for your local model server (LM Studio, Ollama, vLLM, etc.) | http://localhost:1234/v1 |
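A minimal sketch of reading this configuration at startup. The variable names and the `LOCAL_MODEL_ENDPOINT` default come from the table above; the `config` dictionary shape itself is illustrative, not the server's actual internals:

```python
import os

# Read Smart-AI-Bridge settings from the environment.
# Variable names and the endpoint default are taken from the table above.
config = {
    "tdd_mode": os.environ.get("TDD_MODE", "false").lower() == "true",
    "cloud_api_keys": [
        os.environ.get("CLOUD_API_KEY_1"),  # Cloud Backend 1 (Coding Specialist)
        os.environ.get("CLOUD_API_KEY_2"),  # Cloud Backend 2 (Analysis Specialist)
        os.environ.get("CLOUD_API_KEY_3"),  # Cloud Backend 3 (General Purpose)
    ],
    "local_model_endpoint": os.environ.get(
        "LOCAL_MODEL_ENDPOINT", "http://localhost:1234/v1"
    ),
}
```

Unset cloud keys come back as `None`, so the server can fall back to the local model endpoint alone.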
Capabilities
Features and capabilities supported by this server
| Capability | Details |
|---|---|
| tools | {} |
| logging | {} |
Tools
Functions exposed to the LLM to take actions
| Name | Description |
|---|---|
| review | Comprehensive code review - Security audit, performance analysis, best-practices validation. Multi-file correlation analysis. Automated quality scoring and improvement suggestions. |
| write_files_atomic | Write multiple files atomically with backup - Enterprise-grade file modification with safety mechanisms. |
| validate_changes | Pre-flight validation for code changes - AI-powered syntax checking and impact analysis. Validates proposed modifications before implementation. |
| backup_restore | Enhanced backup management - Timestamped backup tracking with metadata, restore capability, and intelligent cleanup. Extends existing backup patterns with enterprise-grade management. |
| ask | Multi-AI direct query - Ask any backend with smart fallback chains. Features automatic Unity detection, dynamic token scaling, and response headers with backend tracking. |
| manage_conversation | Manage conversation threading across sessions. Start new conversations, continue existing ones, search conversation history, or get analytics. |
| get_analytics | Get usage analytics, performance metrics, cost analysis, and optimization recommendations. View current session stats, historical data, backend performance, and detailed reports. |
| check_backend_health | Manual backend health check - On-demand health diagnostics for a specific backend with 5-minute result caching. Only runs when explicitly requested. |
| spawn_subagent | Spawn specialized AI subagent - Create subagents with predefined roles (code-reviewer, security-auditor, planner, refactor-specialist, test-generator, documentation-writer). Each role has customized prompts, tools, and behavior for specific tasks. |
| parallel_agents | Execute multiple TDD agents in parallel with quality-gate iteration. Decomposes high-level tasks into atomic subtasks, executes them in parallel groups (RED before GREEN), and iterates based on quality review. |
| council | Multi-AI council - Get consensus from multiple AI backends on complex questions. Claude explicitly selects topic and confidence level, backends provide diverse perspectives, and Claude synthesizes the final answer. Use for architectural decisions, controversial topics, or when you need validation from multiple viewpoints. |
| analyze_file | Local LLM file analysis - Reads and analyzes files using the local LLM. Claude never sees full file content, only structured findings. Token savings: 2000+ down to ~150 tokens per file. |
| explore | Codebase exploration - Answer questions about the codebase using intelligent search. Returns only a summary, never raw file contents. Token savings: 90%+ for exploration tasks. |
| generate_file | Local LLM code generation - Generates code from a natural-language spec using the local LLM. Claude reviews or auto-approves. Token savings: 500+ down to ~50 tokens. |
| modify_file | Local LLM file modification - Applies edits using natural-language instructions. The local LLM understands the code and applies the changes. Token savings: 1500+ down to ~100 tokens. |
| batch_analyze | Batch file analysis - Analyze multiple files using glob patterns. Aggregates findings across files. Massive token savings for multi-file analysis. |
| batch_modify | Batch file modification - Apply the same instructions to multiple files with atomic rollback. Supports transaction mode (all or nothing). |
| refactor | Cross-file refactoring - Apply refactoring across files with intelligent scope detection. Supports function-, class-, module-, and project-level refactoring. |
| dual_iterate | Dual iterative code generation - Internal generate->review->fix loop using dual backends. The generator creates code, the reviewer validates, the generator fixes. Runs entirely within Smart AI Bridge, returning only the final approved code to Claude. Massive token savings for complex generation. |
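MCP tools are invoked over JSON-RPC with the `tools/call` method. A sketch of what a call to the direct-query tool might look like; the argument name (`prompt`) is an assumption, since the tools' parameter schemas are not documented above:

```python
import json

# JSON-RPC 2.0 envelope used by MCP for tool invocation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask",  # direct-query tool from the table above
        # Hypothetical argument shape; consult the tool's input schema.
        "arguments": {"prompt": "Explain this stack trace"},
    },
}
payload = json.dumps(request)
```

The server's response would arrive as a matching JSON-RPC result carrying the tool output.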
Prompts
Interactive templates invoked by user choice
| Name | Description |
|---|---|
| No prompts | |
Resources
Contextual data attached and managed by the client
| Name | Description |
|---|---|
| No resources | |
MCP directory API
We provide all the information about MCP servers via our MCP API:

`curl -X GET 'https://glama.ai/api/mcp/v1/servers/Platano78/Smart-AI-Bridge'`
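The same request can be built from Python with the standard library; this sketch only constructs the request object rather than sending it:

```python
from urllib.request import Request

# Same endpoint as the curl example above.
url = "https://glama.ai/api/mcp/v1/servers/Platano78/Smart-AI-Bridge"
req = Request(url, method="GET")
# urllib.request.urlopen(req) would return the server's JSON metadata.
```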
If you have feedback or need assistance with the MCP directory API, please join our Discord server.