# Smart AI Bridge v1.3.0 - Complete Release Documentation

**Release Date**: December 9, 2025
**Status**: ✅ Production Ready
**Upgrade Path**: Direct upgrade from v1.2.0

---

## 🎉 What's New in v1.3.0

Smart AI Bridge v1.3.0 introduces three major enhancements:

### 1. Backend Adapter Architecture (Phase 1)

Clean abstraction layer for AI backend management with enterprise-grade reliability:

- Circuit breaker protection
- Automatic fallback chains
- Per-backend metrics tracking
- Health monitoring with status checks

### 2. Compound Learning Engine (Phase 2)

Self-improving routing system that learns from outcomes:

- EMA-based confidence scoring
- Task pattern recognition
- Adaptive backend selection
- 4-tier routing priority system

### 3. Specialized Subagent System (Phase 3)

Six AI roles with tailored prompts and structured outputs:

- code-reviewer - Quality review and best practices
- security-auditor - Vulnerability detection
- planner - Task breakdown and strategy
- refactor-specialist - Code improvement suggestions
- test-generator - Test suite generation
- documentation-writer - Documentation creation

---

## 📦 Architecture Overview

```
smart-ai-bridge/
├── smart-ai-bridge-v1.1.0.js      # Main MCP server (v1.3.0)
│
├── backends/                      # Backend Adapter System (Phase 1)
│   ├── backend-adapter.js         # Abstract base class
│   ├── backend-registry.js        # Registry with fallback chains
│   ├── local-adapter.js           # Local Qwen adapter
│   ├── gemini-adapter.js          # Gemini MCP adapter
│   ├── deepseek-adapter.js        # NVIDIA DeepSeek adapter
│   └── qwen-adapter.js            # NVIDIA Qwen3 adapter
│
├── intelligence/                  # Learning Engine (Phase 2)
│   └── compound-learning.js       # EMA-based learning system
│
├── handlers/                      # Subagent System (Phase 3)
│   └── subagent-handler.js        # Subagent orchestration
│
├── config/                        # Role Configurations
│   └── role-templates.js          # 6 specialized AI roles
│
├── utils/                         # Utilities
│   ├── verdict-parser.js          # Structured output parsing
│   └── role-validator.js          # Request validation
│
└── data/learning/                 # Learning State (auto-created)
    └── learning-state.json        # Persistent learning data
```

---

## 🔧 Phase 1: Backend Adapter Architecture

### Overview

Enterprise-grade abstraction layer for managing multiple AI backends with automatic failover and health monitoring.

### Key Features

**Circuit Breaker Protection**
- 5 consecutive failures → 30-second cooldown
- Automatic recovery with exponential backoff
- Per-backend failure tracking

**Fallback Chains**
- Default: `local → gemini → deepseek3.1 → qwen3`
- Automatic failover on backend unavailability
- Configurable priority ordering

**Health Monitoring**
- Real-time status checks (healthy/degraded/circuit_open)
- Per-backend metrics (success rate, latency, calls)
- Circuit breaker state tracking
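The circuit breaker described above is straightforward to reason about: each adapter counts consecutive failures and, once the threshold is hit, rejects requests until a cooldown expires. A minimal sketch of that behaviour, using the documented defaults (the names and structure here are illustrative, not the actual `backend-adapter.js` internals):

```javascript
// Illustrative circuit breaker sketch - thresholds match the documented defaults
class CircuitBreaker {
  constructor({ failureThreshold = 5, cooldownMs = 30_000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.cooldownMs = cooldownMs;
    this.consecutiveFailures = 0;
    this.openedAt = null; // timestamp when the breaker tripped
  }

  isOpen() {
    if (this.openedAt === null) return false;
    if (Date.now() - this.openedAt >= this.cooldownMs) {
      // Cooldown elapsed: allow a trial request through again
      this.openedAt = null;
      this.consecutiveFailures = 0;
      return false;
    }
    return true;
  }

  recordSuccess() {
    this.consecutiveFailures = 0;
    this.openedAt = null;
  }

  recordFailure() {
    this.consecutiveFailures += 1;
    if (this.consecutiveFailures >= this.failureThreshold) {
      this.openedAt = Date.now(); // 5 consecutive failures → 30-second cooldown
    }
  }
}
```

An adapter would check `isOpen()` before each call, letting the registry fall back to the next backend in the chain while the breaker is open.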
### Files Created

```javascript
// backends/backend-adapter.js (8.6 KB)
// Abstract base class with:
// - Circuit breaker logic
// - Health status tracking
// - Metrics collection
// - Automatic retry with backoff

// backends/backend-registry.js (8.5 KB)
// Registry pattern with:
// - Config-driven backend registration
// - Fallback chain management
// - Health aggregation
// - Request routing with fallback

// backends/local-adapter.js (2.6 KB)
// Local Qwen2.5-Coder-7B wrapper

// backends/gemini-adapter.js (2.3 KB)
// Gemini Enhanced MCP wrapper

// backends/deepseek-adapter.js (2.7 KB)
// NVIDIA DeepSeek V3.1 wrapper

// backends/qwen-adapter.js (2.6 KB)
// NVIDIA Qwen3-Coder-480B wrapper
```

### Usage Example

```javascript
import { BackendRegistry } from './backends/backend-registry.js';
import { LocalAdapter } from './backends/local-adapter.js';

const registry = new BackendRegistry();
const adapter = new LocalAdapter();
registry.setAdapter('local', adapter);

const result = await registry.makeRequestWithFallback(
  'Analyze this code',
  'local',              // Preferred backend
  { max_tokens: 2048 }
);

console.log(result.backend);        // Actual backend used
console.log(result.fallbackChain);  // Backends tried
```

### Testing

```bash
node test-backend-adapters.js
# ✅ All 12 tests passing
# - Circuit breaker activation
# - Automatic fallback
# - Health status tracking
# - Metrics collection
```

---

## 🧠 Phase 2: Compound Learning Engine

### Overview

Self-improving routing system that learns optimal backend selection patterns from request outcomes.

### Key Features

**Outcome Tracking**
- Records success/failure for every request
- Tracks latency per backend
- Learns task patterns (complexity + taskType)

**Confidence Scoring**
- EMA (Exponential Moving Average) with alpha=0.2
- Per-backend confidence (0.0-1.0)
- Tracked by complexity and task type

**Pattern Recognition**
- Creates patterns from `complexity:taskType` combinations
- Tracks success rates per pattern
- Recommends backends for known patterns

**4-Tier Routing Priority**
1. **Forced backend** - Explicit `options.backend` parameter
2. **Learning recommendation** - Confidence > 0.7 from learned patterns
3. **Rule-based routing** - Heuristics (complex→qwen3, code→deepseek)
4. **Health-based fallback** - First healthy backend in chain
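The confidence values that drive Tier 2 are plain exponential moving averages over request outcomes. A minimal sketch of the update rule described under Confidence Scoring above, assuming a success counts as 1.0 and a failure as 0.0 (the actual `compound-learning.js` also tracks latency and per-pattern breakdowns):

```javascript
// Illustrative EMA confidence update (alpha matches the documented 0.2)
const ALPHA = 0.2;

function updateConfidence(previousConfidence, success) {
  const outcome = success ? 1.0 : 0.0;
  // New observations get 20% weight; accumulated history keeps 80%
  return ALPHA * outcome + (1 - ALPHA) * previousConfidence;
}

// Example: a backend starting at 0.5 confidence after a few outcomes
let confidence = 0.5;
for (const success of [true, true, false, true]) {
  confidence = updateConfidence(confidence, success);
}
console.log(confidence.toFixed(2)); // drifts toward the backend's true success rate
```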
### Implementation

**Modified Methods in BackendRouter:**

```javascript
// smart-ai-bridge-v1.1.0.js
class BackendRouter {
  async routeRequest(prompt, options = {}) {
    // Tier 1: Forced backend
    if (options.backend) return options.backend;

    // Extract context for learning
    const context = this._extractContext(prompt, options);

    // Tier 2: Learning recommendation
    const rec = this.learningEngine.getRecommendation(context);
    if (rec && rec.confidence > 0.7) {
      return rec.backend; // If healthy
    }

    // Tier 3: Rule-based routing
    if (context.complexity === 'complex') return 'qwen3';
    if (context.taskType === 'code') return 'deepseek3.1';

    // Tier 4: Health fallback
    return this.registry.getHealthyBackend() || 'local';
  }

  async makeRequest(prompt, backend, options) {
    const context = this._extractContext(prompt, options);
    const startTime = Date.now();

    try {
      const result = await this.registry.makeRequestWithFallback(...);

      // Record successful outcome
      this.learningEngine.recordOutcome({
        backend: result.backend,
        context,
        success: true,
        latency: Date.now() - startTime,
        source: options.backend ? 'forced' : 'routed'
      });

      return result;
    } catch (error) {
      // Record failed outcome
      this.learningEngine.recordOutcome({
        backend,
        context,
        success: false,
        latency: Date.now() - startTime,
        source: options.backend ? 'forced' : 'routed'
      });

      throw error;
    }
  }
}
```

### Learning State Persistence

```json
// data/learning/learning-state.json (auto-created)
{
  "backendMetrics": {
    "deepseek3.1": {
      "confidence": 0.85,
      "totalCalls": 42,
      "successfulCalls": 38,
      "avgLatency": 1200,
      "byComplexity": { "simple": {...}, "moderate": {...} },
      "byTaskType": { "code": {...}, "analysis": {...} },
      "trend": "improving"
    }
  },
  "taskPatterns": {
    "simple:code": {
      "deepseek3.1": { "successRate": 0.92, "avgLatency": 800 },
      "local": { "successRate": 0.88, "avgLatency": 600 }
    }
  },
  "routingHistory": [...]
}
```

### Testing

```bash
node test-learning-integration.js
# ✅ All 12 tests passing
# - Context extraction
# - Rule-based routing
# - Outcome recording
# - Learning recommendations
# - Full routing flow
```

---

## 🤖 Phase 3: Specialized Subagent System

### Overview

Six specialized AI roles with tailored system prompts, temperature settings, and structured verdict outputs.

### Available Roles

| Role | Purpose | Backend | Temperature | Verdict Format |
|------|---------|---------|-------------|----------------|
| **code-reviewer** | Quality review, best practices | qwen3 | 0.3 | quality_score, issues, strengths |
| **security-auditor** | Vulnerability detection, OWASP | deepseek3.1 | 0.2 | security_score, vulnerabilities, risk |
| **planner** | Task breakdown, dependencies | qwen3 | 0.5 | complexity, subtasks, order |
| **refactor-specialist** | Code improvement suggestions | deepseek3.1 | 0.4 | refactorings, patterns, priority |
| **test-generator** | Test suite generation | deepseek3.1 | 0.6 | test_suites, coverage, strategy |
| **documentation-writer** | Documentation creation | gemini | 0.7 | sections, quality, missing_docs |
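Each row in the table corresponds to an entry in `config/role-templates.js`. A hedged sketch of what one such entry might look like, using the code-reviewer values from the table above (the field names are illustrative, not the exact template schema):

```javascript
// Illustrative role template entry - values taken from the Available Roles table
export const roleTemplates = {
  'code-reviewer': {
    name: 'Code Reviewer',
    backend: 'qwen3',          // Preferred backend for this role
    temperature: 0.3,          // Low temperature for consistent review output
    systemPrompt:
      'You are a senior code reviewer. Assess quality and best practices, ' +
      'then finish with a JSON verdict containing quality_score, issues, and strengths.',
    verdictFields: ['quality_score', 'issues', 'strengths', 'overall_assessment']
  }
  // ...the other five roles follow the same shape
};
```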
### Tool API

**Tool Definition:**

```javascript
{
  name: 'spawn_subagent',
  description: 'Spawn specialized AI subagent with predefined roles',
  schema: {
    role: { enum: ['code-reviewer', 'security-auditor', ...] },
    task: { type: 'string' },                                       // Required
    file_patterns: { type: 'array', items: { type: 'string' } },    // Optional
    context: { type: 'object' },                                    // Optional
    verdict_mode: { enum: ['summary', 'full'], default: 'summary' }
  }
}
```

**Response Format:**

```javascript
{
  success: true,
  role: 'code-reviewer',
  role_name: 'Code Reviewer',
  backend_used: 'qwen3',
  has_verdict: true,
  verdict: {
    quality_score: 7,
    issues: [...],
    strengths: [...],
    overall_assessment: '...'
  },
  text_content: '...',   // Analysis text without verdict
  raw_response: '...',   // Full AI response
  metadata: {
    task_quality: 'good',
    files_analyzed: 3,
    verdict_valid: true,
    verdict_warnings: [],
    latency: 2400,
    tokens: 1850
  }
}
```
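The `verdict`, `text_content`, and `has_verdict` fields above are produced by `utils/verdict-parser.js`. A minimal sketch of that parsing strategy, assuming the verdict arrives as a fenced JSON block inside the model's response (the helper name and extraction heuristic are assumptions; the real parser also validates against the role's expected fields):

```javascript
// Illustrative verdict extraction - plain JSON.parse, no eval, text-only fallback
function parseVerdict(rawResponse) {
  // Assume the structured verdict is emitted as a fenced ```json block
  const match = rawResponse.match(/```json\s*([\s\S]*?)```/);
  if (!match) {
    return { has_verdict: false, verdict: null, text_content: rawResponse.trim() };
  }

  try {
    const verdict = JSON.parse(match[1]);
    // Everything outside the verdict block becomes the analysis text
    const text_content = rawResponse.replace(match[0], '').trim();
    return { has_verdict: true, verdict, text_content };
  } catch {
    // Malformed JSON (e.g. escaped quotes, see Known Issues): fall back to text-only
    return { has_verdict: false, verdict: null, text_content: rawResponse.trim() };
  }
}
```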
### Usage Examples

**1. Code Review**

```javascript
const result = await server.handleSpawnSubagent({
  role: 'code-reviewer',
  task: 'Review the authentication module for quality issues',
  file_patterns: ['src/auth/**/*.js'],
  verdict_mode: 'full'
});

console.log(result.verdict.quality_score);  // 7
console.log(result.verdict.issues.length);  // 3
```

**2. Security Audit**

```javascript
const result = await server.handleSpawnSubagent({
  role: 'security-auditor',
  task: 'Audit the API endpoints for OWASP vulnerabilities',
  file_patterns: ['api/**/*.js', 'middleware/**/*.js']
});

console.log(result.verdict.security_score);   // 6
console.log(result.verdict.vulnerabilities);  // [{ severity: 'high', ... }]
```

**3. Implementation Planning**

```javascript
const result = await server.handleSpawnSubagent({
  role: 'planner',
  task: 'Break down the task of implementing OAuth2 authentication',
  context: { current_auth: 'JWT', target: 'OAuth2' }
});

console.log(result.verdict.subtasks);              // [{ id: 't1', description: '...' }]
console.log(result.verdict.implementation_order);  // ['t1', 't2', 't3']
```

### File Pattern Resolution

```javascript
// Glob patterns supported
file_patterns: [
  'src/**/*.js',            // All JS files in src
  '**/*.test.ts',           // All test files
  '!node_modules/**',       // Exclusions (automatic)
  'api/{auth,users}/*.js'   // Multiple directories
]

// Automatic exclusions:
// - node_modules
// - dist/build
// - .git

// File limits:
// - Max 50 files per request
// - Max 100KB per file
```

### Testing

```bash
node test-spawn-subagent.js
# ✅ All 14 tests passing
# - Role availability
# - Request validation
# - Task quality assessment
# - Verdict parsing
# - All 6 roles execution
```

---

## 🚀 Migration Guide

### From v1.2.0 to v1.3.0

**1. Update Main File**

```bash
cp smart-ai-bridge-v1.1.0.js smart-ai-bridge.js
```

**2. Install Dependencies**

```bash
npm install glob minimatch
```

**3. Directory Structure**

No action needed - directories auto-created:
- `backends/` - Backend adapters
- `intelligence/` - Learning engine
- `handlers/` - Subagent handler
- `config/` - Role templates
- `utils/` - Utilities
- `data/learning/` - Learning state (auto-created)

**4. Backward Compatibility**

✅ All existing tools work unchanged
✅ Existing MCP clients require no changes
✅ Environment variables remain the same

**5. New Features**

- Backend adapters work transparently
- Learning engine starts with zero data, improves over time
- New `spawn_subagent` tool available immediately

---

## 🧪 Testing

### Full Test Suite

```bash
# Phase 1: Backend Adapters
node test-backend-adapters.js
# ✅ 12/12 tests passing
# - Circuit breakers
# - Fallback chains
# - Health monitoring
# - Metrics tracking

# Phase 2: Learning Integration
node test-learning-integration.js
# ✅ 12/12 tests passing
# - Context extraction
# - Rule-based routing
# - Outcome recording
# - Learning recommendations

# Phase 3: Subagent System
node test-spawn-subagent.js
# ✅ 14/14 tests passing
# - Role validation
# - Task quality assessment
# - Verdict parsing
# - All 6 roles execution

# Combined: 38/38 tests passing (100%)
```

---

## 📊 Performance Characteristics

### Backend Adapters
- Circuit breaker overhead: <5ms
- Fallback decision: <10ms
- Health check: <50ms

### Learning Engine
- Context extraction: <1ms
- Recommendation lookup: <2ms
- Outcome recording: <5ms
- State persistence: <50ms (async)

### Subagent System
- Request validation: <2ms
- File pattern resolution: 50-200ms (depends on file count)
- Verdict parsing: <5ms

---

## 🔒 Security Considerations

### File Access
- Glob patterns respect `.gitignore`
- Automatic exclusions: `node_modules`, `dist`, `.git`
- File size limits: 100KB per file
- Count limits: 50 files per request

### Verdict Parsing
- Sandboxed JSON parsing (no eval)
- Validation against expected schemas
- Fallback to text-only on parse errors

### Circuit Breakers
- Prevents request flooding on backend failures
- Automatic recovery with exponential backoff
- Per-backend isolation

---

## 🎯 Future Enhancements

### Planned for v1.4.0
- [ ] Multi-agent collaboration (subagents calling subagents)
- [ ] Learning engine export/import
- [ ] Custom role template registration
- [ ] Verdict schema validation with JSON Schema
- [ ] File diff analysis for subagents

### Experimental Features
- [ ] Reinforcement learning for routing
- [ ] A/B testing between backends
- [ ] Automatic prompt optimization
- [ ] Verdict confidence scoring

---

## 📝 API Reference

### New Tools

**spawn_subagent**

```typescript
spawn_subagent({
  role: 'code-reviewer' | 'security-auditor' | 'planner' |
        'refactor-specialist' | 'test-generator' | 'documentation-writer',
  task: string,
  file_patterns?: string[],
  context?: object,
  verdict_mode?: 'summary' | 'full'
}): Promise<SubagentResult>
```

### Modified Tools

**ask** (enhanced with learning)
- Now records outcomes for learning
- Routing uses 4-tier priority system
- Backend selection learns over time

---

## 🐛 Known Issues

### Verdict Parsing
- JSON with escaped quotes in strings may fail parsing
- Workaround: Use text content if verdict unavailable
- Fix planned for v1.3.1

### Learning Engine
- Requires ~10 requests per pattern for confident recommendations
- Cold start uses rule-based routing
- Learning state grows unbounded (cleanup planned for v1.3.1)

---

## 📞 Support

- **Issues**: [GitHub Issues](https://github.com/your-repo/smart-ai-bridge/issues)
- **Discussions**: [GitHub Discussions](https://github.com/your-repo/smart-ai-bridge/discussions)
- **Documentation**: See README.md and inline comments

---

## 📄 License

MIT License - See LICENSE file

---

## 🙏 Acknowledgments

- Backend adapter pattern inspired by MKG V2
- Compound learning engine ported from MKG V2
- Role templates influenced by LangChain agents
- Circuit breaker pattern from Hystrix

---

**Smart AI Bridge v1.3.0** - Production Ready
**December 9, 2025**
