
Sequential Thinking Multi-Agent System

by FradSer
.env.example (5.04 kB)
# --- LLM Configuration ---
# Select the LLM provider: "deepseek" (default), "groq", "openrouter", "github", or "ollama"
LLM_PROVIDER="deepseek"

# Provide the API key for the chosen provider:
# GROQ_API_KEY="your_groq_api_key"
DEEPSEEK_API_KEY="your_deepseek_api_key"
# OPENROUTER_API_KEY="your_openrouter_api_key"
# GITHUB_TOKEN="ghp_your_github_personal_access_token"
# Note: Ollama requires no API key but needs a local installation
# Note: GitHub Models requires a GitHub Personal Access Token with appropriate scopes

# Optional: Base URL override (e.g., for custom endpoints)
LLM_BASE_URL="your_base_url_if_needed"

# Optional: Specify different models for Enhanced (Complex Synthesis) and Standard (Individual Processing)
# Defaults are set within the code based on the provider if these are not set.
# Example for Groq:
# GROQ_ENHANCED_MODEL_ID="openai/gpt-oss-120b"    # For complex synthesis
# GROQ_STANDARD_MODEL_ID="openai/gpt-oss-20b"     # For individual processing
# Example for DeepSeek:
# DEEPSEEK_ENHANCED_MODEL_ID="deepseek-chat"      # For complex synthesis
# DEEPSEEK_STANDARD_MODEL_ID="deepseek-chat"      # For individual processing
# Example for GitHub Models:
# GITHUB_ENHANCED_MODEL_ID="openai/gpt-5"         # For complex synthesis
# GITHUB_STANDARD_MODEL_ID="openai/gpt-5-min"     # For individual processing
# Example for OpenRouter:
# OPENROUTER_ENHANCED_MODEL_ID="deepseek/deepseek-chat-v3-0324"  # For synthesis
# OPENROUTER_STANDARD_MODEL_ID="deepseek/deepseek-r1"            # For processing
# Example for Ollama:
# OLLAMA_ENHANCED_MODEL_ID="devstral:24b"         # For complex synthesis
# OLLAMA_STANDARD_MODEL_ID="devstral:24b"         # For individual processing

# --- Enhanced Agno 1.8+ Features ---
# Enable enhanced agents with advanced reasoning, memory, and structured outputs
USE_ENHANCED_AGENTS="true"

# Team mode: "standard", "enhanced", "hybrid", or "multi_thinking"
# standard: Traditional agent setup (backward compatible)
# enhanced: Use new Agno 1.8+ features (memory, reasoning, structured outputs)
# hybrid: Mix of standard and enhanced agents for optimal performance
# multi_thinking: Use Multi-Thinking methodology for balanced thinking (recommended)
TEAM_MODE="standard"

# --- External Tools ---
# Required ONLY if web research capabilities are needed for thinking agents
EXA_API_KEY="your_exa_api_key"

# --- Adaptive Routing & Cost Optimization ---
# Enable adaptive routing (automatically selects single vs multi-agent based on complexity)
ENABLE_ADAPTIVE_ROUTING="true"

# Multi-Thinking intelligent routing
ENABLE_MULTI_THINKING="true"               # Enable Multi-Thinking methodology for enhanced thinking
MULTI_THINKING_COMPLEXITY_THRESHOLD="5.0"  # Minimum complexity to trigger Multi-Thinking processing

# Cost optimization settings
DAILY_BUDGET_LIMIT=""        # e.g., "5.0" for $5 daily limit
MONTHLY_BUDGET_LIMIT=""      # e.g., "50.0" for $50 monthly limit
PER_THOUGHT_BUDGET_LIMIT=""  # e.g., "0.10" for $0.10 per thought limit
QUALITY_THRESHOLD="0.7"      # Minimum quality score (0.0-1.0)

# Response optimization settings
RESPONSE_STYLE="practical"   # "practical", "academic", or "balanced"
MAX_RESPONSE_LENGTH="800"    # Maximum response length in words
ENFORCE_SIMPLICITY="true"    # Remove excessive academic complexity

# Provider cost overrides (cost per 1K tokens)
# DEEPSEEK_COST_PER_1K_TOKENS="0.0002"
# GROQ_COST_PER_1K_TOKENS="0.0"
# GITHUB_COST_PER_1K_TOKENS="0.0005"
# OPENROUTER_COST_PER_1K_TOKENS="0.001"
# OLLAMA_COST_PER_1K_TOKENS="0.0"

# --- Persistent Memory ---
# Database URL for persistent storage (defaults to local SQLite)
DATABASE_URL=""  # e.g., "postgresql://user:pass@localhost/dbname" or leave empty for SQLite

# Memory management
MEMORY_PRUNING_DAYS="30"  # Prune sessions older than X days
MEMORY_KEEP_RECENT="100"  # Always keep X most recent sessions

# Note: Logs are stored in the ~/.mas_sequential_thinking/logs/ directory
# The log file is named mas_sequential_thinking.log with rotation

# --- Logging and Performance ---
# Core logging configuration
LOG_LEVEL="INFO"             # "DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"
LOG_FORMAT="text"            # "text" (readable) or "json" (structured)
LOG_TARGETS="file,console"   # Comma-separated: "file", "console"
LOG_FILE_MAX_SIZE="10MB"     # Maximum log file size (supports KB, MB, GB)
LOG_FILE_BACKUP_COUNT="5"    # Number of backup log files to keep
LOG_SAMPLING_RATE="1.0"      # Log sampling rate (0.0-1.0; 1.0 = all logs)

# Smart logging configuration (reduces log verbosity while maintaining insights)
SMART_LOGGING="true"           # Enable intelligent logging with adaptive verbosity
SMART_LOG_LEVEL="performance"  # "critical", "performance", "routing", "debug"
LOG_PERFORMANCE_ISSUES="true"  # Always log slow/expensive processing
LOG_RESPONSE_QUALITY="true"    # Log overly complex or academic responses

# Performance and error handling
MAX_RETRIES="3"
TIMEOUT="30.0"
PERFORMANCE_MONITORING="true"          # Enable real-time performance monitoring
PERFORMANCE_BASELINE_TIME="30.0"       # Baseline time per thought in seconds
PERFORMANCE_BASELINE_EFFICIENCY="0.8"  # Target efficiency score
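The file notes that per-provider model defaults "are set within the code" when the `*_ENHANCED_MODEL_ID` / `*_STANDARD_MODEL_ID` variables are left unset. A rough sketch of how that lookup might work (`resolve_model` and its fallback table are hypothetical illustrations, not the server's actual implementation):

```python
import os

# Hypothetical per-provider defaults, mirroring the commented examples above.
FALLBACK_MODELS = {
    "deepseek": "deepseek-chat",
    "groq": "openai/gpt-oss-20b",
    "ollama": "devstral:24b",
}

def resolve_model(role: str) -> str:
    """Return the model ID for a role ("enhanced" or "standard").

    Builds the variable name from LLM_PROVIDER (e.g. GROQ_ENHANCED_MODEL_ID)
    and falls back to a per-provider default when it is unset or empty.
    """
    provider = os.getenv("LLM_PROVIDER", "deepseek").lower()
    env_var = f"{provider.upper()}_{role.upper()}_MODEL_ID"
    return os.getenv(env_var) or FALLBACK_MODELS.get(provider, "deepseek-chat")

# Example: with LLM_PROVIDER="groq" and only the enhanced ID set,
# the standard role falls back to the Groq default.
os.environ["LLM_PROVIDER"] = "groq"
os.environ["GROQ_ENHANCED_MODEL_ID"] = "openai/gpt-oss-120b"
print(resolve_model("enhanced"))  # openai/gpt-oss-120b
print(resolve_model("standard"))  # openai/gpt-oss-20b
```

This pattern keeps the `.env` file minimal: only variables that deviate from the provider defaults need to be uncommented.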
