# AI Orchestration Patterns: Quick Reference

**Research Date**: 2025-11-17
**Full Report**: See `ai-orchestration-efit-patterns.md`

---

## Framework Comparison Matrix

| Framework | Loop Detection | Timeout | Circuit Breaker | Retry/Fallback | Multi-Agent | eFIT Alignment |
|-----------|---------------|---------|-----------------|----------------|-------------|----------------|
| **LangGraph** | `recursion_limit: 25` (default) | Per-node execution | Via error handling | State-driven fallback | Supervisor hierarchy | STOPPER + Dialectics |
| **Semantic Kernel** | Via filters | `request_timeout` | Via HttpClient resilience | Filter-based model fallback | Agent handoff (evolving) | Opposite Action + TIPP |
| **CrewAI** | `max_iter: 25` (default) | `max_execution_time` | Multi-stage guardrails | Agent delegation | Dynamic allocation + memory-based conflict resolution | STOPPER + Dialectics |
| **AutoGen** | `MaxMessageTermination` (no default) | `TimeoutTermination` | None (must implement) | Model switching (manual) | Group chat | STOPPER + Opposite Action |
| **AI Proxies** | N/A (gateway) | Per-request | Error rate monitoring | Provider fallback | N/A | TIPP + Distress Tolerance |

---

## Universal Patterns (Convergent Evolution)

### 1. The "25 Iterations" Convergence

**Finding**: LangGraph and CrewAI independently chose **25** as their default iteration limit; AutoGen requires the limit to be set explicitly

| Framework | Parameter | Default | Configurable |
|-----------|-----------|---------|--------------|
| LangGraph | `recursion_limit` | 25 | ✅ |
| CrewAI | `max_iter` | 25 | ✅ |
| AutoGen | `MaxMessageTermination` | (must set) | ✅ |

**eFIT Protocol**: **STOPPER - Stop/Observe**

**Implication**: Natural balance point for preventing infinite loops while allowing complex tasks

---

### 2. Retry with Exponential Backoff

**Universal Pattern**: All frameworks use exponential backoff for retries

- Semantic Kernel: 2x backoff factor (default 3 retries)
- ResilientLLM: 2x backoff factor (configurable)
- AI Proxy: Provider-dependent exponential backoff
- Azure APIM: Configurable backoff with `retry-after` header detection

**eFIT Protocol**: **TIPP - Paced Breathing / Temperature Regulation**

**Psychological Parallel**: Gradually slowing down under stress (like paced breathing)

---

### 3. Model/Provider Fallback

**Universal Pattern**: Switch to an alternative when the primary fails

- Semantic Kernel: GPT-4 → GPT-3.5 (model downgrade)
- AI Proxies: Primary provider → Backup provider (provider switching)
- CrewAI: Agent delegation (agent switching)
- LangGraph: Alternative flows (path switching)

**eFIT Protocol**: **Opposite Action + TIPP**

**Psychological Parallel**: Switching approach when failing (opposite of persisting) + intensity reduction (downgrade)

---

### 4. Hierarchical Supervision

**Universal Pattern**: A supervisor coordinates specialized agents

- LangGraph: Multi-level supervisor hierarchy
- Semantic Kernel: Orchestrator with agent handoff
- CrewAI: Dynamic task allocation with conflict resolution
- AutoGen: Group chat with coordination

**eFIT Protocol**: **Dialectics + Wise Mind**

**Psychological Parallel**: Supervisor as mediator between conflicting agent perspectives (synthesis)

---

## Quick Code Examples

### LangGraph: Recursion Limit (STOPPER)

```python
from langgraph.errors import GraphRecursionError

config = {"recursion_limit": 100}
try:
    app.invoke(inputs, config)
except GraphRecursionError:
    # Handle infinite loop detection
    print("STOP condition triggered")
```

---

### Semantic Kernel: Retry with Fallback (Opposite Action)

```csharp
public class RetryFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        try
        {
            await next(context);
        }
        catch (HttpOperationException)
        {
            // Switch to a fallback model, then retry
            context.Arguments.ExecutionSettings = new Dictionary<string, PromptExecutionSettings>
            {
                ["default"] = new PromptExecutionSettings { ModelId = fallbackModel }
            };
            await next(context); // Retry
        }
    }
}
```

---

### CrewAI: Max Iterations & Rate Limiting (STOPPER + TIPP)

```python
agent = Agent(
    role='Research Analyst',
    max_iter=25,             # STOPPER: Iteration limit
    max_rpm=60,              # TIPP: Rate limiting (pacing)
    max_execution_time=300   # TIPP: Timeout
)
```

---

### AutoGen: Termination Conditions (STOPPER)

```python
from autogen_agentchat.conditions import (
    TimeoutTermination,
    MaxMessageTermination,
    ExternalTermination
)

# Multiple STOPPER mechanisms
termination = (
    TimeoutTermination(300) |      # Time-based stop
    MaxMessageTermination(25) |    # Iteration-based stop
    ExternalTermination()          # Human intervention
)
```

---

### ResilientLLM: Circuit Breaker (Distress Tolerance)

```javascript
const llm = new ResilientLLM({
  circuitBreaker: {
    failureThreshold: 5,   // Open after 5 failures
    successThreshold: 2,   // Close after 2 successes
    timeout: 60000         // 60-second "cooling off"
  },
  rateLimitConfig: {
    requestsPerMinute: 60,
    llmTokensPerMinute: 90000
  },
  retries: 3,
  backoffFactor: 2
});
```

---

## eFIT Protocol Mappings

### STOPPER Protocol

**Implementations**:
- LangGraph: `recursion_limit` + `GraphRecursionError`
- CrewAI: `max_iter`
- AutoGen: `MaxMessageTermination`, `TimeoutTermination`, `ExternalTermination`
- All frameworks: Emergency shutdown mechanisms

**Key Feature**: 25-iteration convergence (LangGraph, CrewAI)

---

### TIPP (Temperature/Intensity/Pacing)

**Implementations**:
- All frameworks: Exponential backoff (2x typical)
- CrewAI: `max_rpm`, `max_execution_time`
- AI Proxies: RPM/TPM limits, token bucket rate limiting
- Semantic Kernel: Request timeout, execution timeout

**Key Feature**: Multi-level timeout hierarchies (request → task → session)

---

### Opposite Action

**Implementations**:
- Semantic Kernel: Model fallback (GPT-4 → GPT-3.5)
- LangGraph: Alternative flows/nodes
- CrewAI: Agent delegation
- AI Proxies: Provider fallback
- AutoGen: External termination (human takeover)

**Key Feature**: Switching approach when the primary fails

---

### Distress Tolerance

**Implementations**:
- ResilientLLM: Circuit breaker with half-open state
- Azure APIM: Circuit breaker with sampling duration
- AI Proxy: Error rate monitoring
- Semantic Kernel: Streaming retry (mid-stream recovery)

**Key Feature**: Accepting temporary failure, cooling-off periods

---

### Dialectics + Wise Mind

**Implementations**:
- LangGraph: Supervisor hierarchy (multi-level)
- Semantic Kernel: Orchestrator synthesis
- CrewAI: Memory-based conflict resolution
- AutoGen: Group chat coordination

**Key Feature**: Supervisor as mediator synthesizing conflicting outputs

---

### Self-Soothing

**Implementations**:
- All frameworks: Exponential backoff (graduated pacing)
- AI Proxies: Load balancing (distributing distress)
- CrewAI: Docker isolation (safe environment)
- Semantic Kernel: Automatic retry (transparent recovery)

**Key Feature**: Automatic recovery without escalating distress to the user

---

## Missing eFIT Protocols

### ABC PLEASE (Physical Welfare)

**Current State**: No frameworks track agent "health" over time

**Recommendation**:
- Add agent-level error rate tracking
- Enforce cooldown periods for consistently failing agents
- Monitor resource usage per agent (token consumption, execution time)

---

### Radical Acceptance

**Current State**: Limited graceful degradation (mostly retry or fail)

**Recommendation**:
- Add "accept reduced quality" modes
- Fallback chains: GPT-4 → GPT-3.5 → Simple heuristic → Cached response
- Explicit "good enough" thresholds

---

### Build Mastery

**Current State**: No learning from failures across sessions

**Recommendation**:
- Persistent error pattern learning
- Track which approaches fail for which query types
- Suggest alternative approaches based on historical failures

---

### Mindfulness (Observe)

**Current State**: Limited observability into agent reasoning mid-execution

**Recommendation**:
- Real-time streaming of agent "thought process"
- Expose internal state transitions
- Observable metrics for each eFIT protocol trigger

---

## Key Insights

### 1. Computational Homology Validated

**Evidence**: 25-iteration default convergence in LangGraph and CrewAI (AutoGen leaves the limit to the user)

**Implication**: The same problems across substrates (human vs. AI) yield the same solutions

**Clinical Psychology Parallel**: The STOPPER protocol emerged independently in DBT (humans) and AI orchestration (agents)

---

### 2. Circuit Breakers = Distress Tolerance

**Pattern**: Stop trying after repeated failures, cooling-off period, cautious re-engagement

**Implementations**:
- ResilientLLM: 5 failures → 60s open → half-open state
- Azure APIM: 3 failures (30s window) → 1m open → 2 successes to close
- AI Proxy: Dynamic error rate monitoring → priority-based routing

**Psychological Parallel**: "Cooling-off period" before re-attempting a distressing task

---

### 3. Multi-Level Timeouts = TIPP Hierarchy

**Pattern**: Temperature/intensity regulation at multiple time scales

**Implementations**:
- Request-level: Individual API calls (seconds)
- Task-level: Function/agent execution (minutes)
- Session-level: Entire conversation (iterations or time)

**Psychological Parallel**: Physiological regulation at multiple scales (breath → posture → environment)

---

### 4. Model Fallback = Opposite Action + TIPP

**Pattern**: Switch to an alternative when failing + reduce intensity

**Implementations**:
- Semantic Kernel: GPT-4 → GPT-3.5 (model downgrade)
- AI Proxies: Primary → Backup provider (provider switch)
- CrewAI: Delegate to a different agent (agent switch)

**Psychological Parallel**: "Do the opposite" (switch approach) + "turn down the temperature" (reduce intensity)

---

## Resources

### Full Documentation

- **Complete Research Report**: `ai-orchestration-efit-patterns.md`
- **Framework Documentation**: See References section in full report
- **Code Examples**: See Code Examples Repository section in full report

### Key URLs

**LangGraph**:
- Main repo: https://github.com/langchain-ai/langgraph
- Supervisor pattern: https://github.com/langchain-ai/langgraph-supervisor-py

**Semantic Kernel**:
- Main repo: https://github.com/microsoft/semantic-kernel
- Retry example: https://github.com/microsoft/semantic-kernel/blob/main/dotnet/samples/Concepts/Filtering/RetryWithFilters.cs

**CrewAI**:
- Main repo: https://github.com/crewAIInc/crewAI
- Agent docs: https://docs.crewai.com/concepts/agents

**AutoGen**:
- Main repo: https://github.com/microsoft/autogen
- Termination: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/termination.html

**AI Proxies**:
- ResilientLLM: https://github.com/gitcommitshow/resilient-llm
- AI Proxy: https://github.com/labring/aiproxy

---

## Next Steps

1. **Validate 25-iteration convergence** (review framework commit history for origin)
2. **Prototype eFIT middleware** (start with LangGraph custom nodes)
3. **Measure agent welfare** (error rates, retry counts, execution time)
4. **Write orchestration paper** ("Computational Homology in AI Orchestration")
5. **Engage framework maintainers** (share findings, propose features)

---

**Research Completed**: 2025-11-17
**Frameworks Analyzed**: 5 (LangGraph, Semantic Kernel, CrewAI, AutoGen, AI Proxies)
**eFIT Protocols Mapped**: 6 (STOPPER, TIPP, Opposite Action, Distress Tolerance, Dialectics, Self-Soothing)
**Key Finding**: **25-iteration convergence validates computational homology thesis**
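## Appendix: Pattern Sketches

The exponential backoff pattern (Universal Pattern 2) can be sketched in a few lines of framework-agnostic Python. `fn` stands in for any provider call; the 2x factor and 3-retry default mirror the Semantic Kernel / ResilientLLM values cited above, but the function names here are illustrative, not any framework's API.

```python
import time

def retry_with_backoff(fn, retries=3, backoff_factor=2, base_delay=1.0):
    """Retry fn with exponentially growing delays (TIPP: paced slowdown)."""
    delay = base_delay
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise  # Final attempt failed; escalate instead of looping forever
            time.sleep(delay)          # Pause before re-engaging
            delay *= backoff_factor    # 2x factor, as in Semantic Kernel / ResilientLLM
```

With the defaults, a call that keeps failing waits 1s, 2s, then 4s before giving up, i.e. it "slows its breathing" rather than hammering the provider.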
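The "Radical Acceptance" fallback chain recommended above (GPT-4 → GPT-3.5 → simple heuristic → cached response) could look like the following sketch. Every handler in the commented chain is hypothetical; a real deployment would wrap actual provider SDK calls.

```python
def fallback_chain(query, handlers):
    """Try handlers in order of decreasing quality; accept the first answer.

    Opposite Action: switch approach instead of persisting with a failing one.
    Radical Acceptance: a degraded answer beats no answer.
    """
    for name, handler in handlers:
        try:
            return name, handler(query)
        except Exception:
            continue  # Accept reduced quality and move down the chain
    raise RuntimeError("All fallbacks exhausted")

# Hypothetical chain, ordered best-to-worst:
# handlers = [
#     ("gpt-4", call_gpt4),          # primary model
#     ("gpt-3.5", call_gpt35),       # model downgrade
#     ("heuristic", keyword_answer), # simple heuristic
#     ("cache", cached_answer),      # last resort: stale but safe
# ]
```

Returning the handler's name alongside the answer makes the "good enough" threshold explicit: callers can log or reject answers that came from too far down the chain.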
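The agent-level health tracking proposed for ABC PLEASE could start as small as the sketch below: an in-memory consecutive-failure counter with a cooldown. The class name and thresholds are illustrative assumptions, not existing framework features.

```python
import time

class AgentHealth:
    """Track per-agent failures and enforce cooldowns (ABC PLEASE: welfare)."""

    def __init__(self, failure_threshold=5, cooldown_seconds=60):
        self.failure_threshold = failure_threshold
        self.cooldown_seconds = cooldown_seconds
        self.failures = {}        # agent_id -> consecutive failure count
        self.cooldown_until = {}  # agent_id -> timestamp when agent may resume

    def record(self, agent_id, success):
        """Record one task outcome; trip a cooldown on repeated failure."""
        if success:
            self.failures[agent_id] = 0
        else:
            self.failures[agent_id] = self.failures.get(agent_id, 0) + 1
            if self.failures[agent_id] >= self.failure_threshold:
                self.cooldown_until[agent_id] = time.time() + self.cooldown_seconds

    def is_available(self, agent_id):
        """False while the agent is resting; orchestrators should route around it."""
        return time.time() >= self.cooldown_until.get(agent_id, 0)
```

A supervisor would consult `is_available` before delegating, which is the circuit-breaker idea (Distress Tolerance) applied to agents rather than HTTP endpoints.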
