Enhanced AutoGen MCP Server
A comprehensive MCP server that provides deep integration with Microsoft's AutoGen framework v0.9+, featuring the latest capabilities including prompts, resources, advanced workflows, and enhanced agent types. This server enables sophisticated multi-agent conversations through a standardized Model Context Protocol interface.
Latest Features (v0.2.0)
Enhanced MCP Support
Prompts: Pre-built templates for common workflows (code review, research, creative writing)
Resources: Real-time access to agent status, chat history, and configurations
Dynamic Content: Template-based prompts with arguments and embedded resources
Latest MCP SDK: Version 1.12.3 with full feature support
Advanced Agent Types
Assistant Agents: Enhanced with latest LLM capabilities
Conversable Agents: Flexible conversation patterns
Teachable Agents: Learning and memory persistence
Retrievable Agents: Knowledge base integration
Multimodal Agents: Image and document processing (when available)
Sophisticated Workflows
Code Generation: Architect → Developer → Reviewer → Executor pipeline
Research Analysis: Researcher → Analyst → Critic → Synthesizer workflow
Creative Writing: Multi-stage creative collaboration
Problem Solving: Structured approach to complex problems
Code Review: Security → Performance → Style review teams
Custom Workflows: Build your own agent collaboration patterns
Enhanced Chat Capabilities
Smart Speaker Selection: Auto, manual, random, round-robin modes
Nested Conversations: Hierarchical agent interactions
Swarm Intelligence: Coordinated multi-agent problem solving
Memory Management: Persistent agent knowledge and preferences
Quality Checks: Built-in validation and improvement loops
Available Tools
Core Agent Management
create_agent - Create agents with advanced configurations
create_workflow - Build complete multi-agent workflows
get_agent_status - Detailed agent metrics and health monitoring
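For example, creating an agent through an MCP tools/call request might look like the sketch below; the argument names (name, type, system_message) are illustrative, and the server's published tool schema is authoritative.

```json
{
  "method": "tools/call",
  "params": {
    "name": "create_agent",
    "arguments": {
      "name": "reviewer",
      "type": "assistant",
      "system_message": "You review Python code for correctness and style."
    }
  }
}
```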
Conversation Execution
execute_chat - Enhanced two-agent conversations
execute_group_chat - Multi-agent group discussions
execute_nested_chat - Hierarchical conversation structures
execute_swarm - Swarm-based collaborative problem solving
Workflow Orchestration
execute_workflow - Run predefined workflow templates
manage_agent_memory - Handle agent learning and persistence
configure_teachability - Enable/configure agent learning capabilities
Available Prompts
autogen-workflow
Create sophisticated multi-agent workflows with customizable parameters:
Arguments: task_description, agent_count, workflow_type
Use case: Rapid workflow prototyping and deployment
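A prompts/get request for this template might look like the following sketch; the argument values are illustrative:

```json
{
  "method": "prompts/get",
  "params": {
    "name": "autogen-workflow",
    "arguments": {
      "task_description": "Design a REST API for a todo application",
      "agent_count": "3",
      "workflow_type": "code_generation"
    }
  }
}
```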
code-review
Set up collaborative code review with specialized agents:
Arguments: code, language, focus_areas
Use case: Comprehensive code quality assessment
research-analysis
Deploy research teams for in-depth topic analysis:
Arguments: topic, depth
Use case: Academic research, market analysis, technical investigation
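For example (values illustrative):

```json
{
  "method": "prompts/get",
  "params": {
    "name": "research-analysis",
    "arguments": {
      "topic": "Vector databases for retrieval-augmented generation",
      "depth": "detailed"
    }
  }
}
```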
Available Resources
autogen://agents/list
Live list of active agents with status and capabilities
autogen://workflows/templates
Available workflow templates and configurations
autogen://chat/history
Recent conversation history and interaction logs
autogen://config/current
Current server configuration and settings
Installation
Installing via Smithery
To install AutoGen Server for Claude Desktop automatically via Smithery:
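The Smithery CLI normally handles this with a single command of the following form; the package name below is a placeholder, since the published name is not shown here:

```bash
npx -y @smithery/cli install <autogen-mcp-package> --client claude
```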
Manual Installation
Clone the repository:
Install Node.js dependencies:
Install Python dependencies:
Build the TypeScript project:
Set up configuration:
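Taken together, the manual steps typically look like the sketch below; the repository URL is a placeholder, and the build script and .env.example file names are assumptions based on commands referenced elsewhere in this README:

```bash
# 1. Clone the repository (URL placeholder)
git clone <repository-url>
cd <repository-directory>

# 2. Node.js dependencies
npm install

# 3. Python dependencies for the AutoGen runtime
pip install -r requirements.txt --user

# 4. Build the TypeScript project (script name assumed)
npm run build

# 5. Set up configuration from the template (file name assumed)
cp .env.example .env
```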
Configuration
Environment Variables
Create a .env file from the template:
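At minimum the file sets the OpenAI key (optional here if you set it in config.json instead):

```env
# OpenAI API Key (optional, can also be set in config.json)
OPENAI_API_KEY=your-openai-api-key
```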
Configuration File
Update config.json with your preferences:
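A minimal sketch of the LLM-related settings; the key names here are assumptions, so follow the template shipped in the repository:

```json
{
  "llm_config": {
    "model": "gpt-4o-mini",
    "temperature": 0.7,
    "api_key": "your-openai-api-key"
  }
}
```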
Usage Examples
Using with Claude Desktop
Add to your claude_desktop_config.json:
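A typical entry looks like the following, assuming the built entry point is build/index.js (adjust the path to your clone):

```json
{
  "mcpServers": {
    "autogen": {
      "command": "node",
      "args": ["/absolute/path/to/autogen-mcp/build/index.js"],
      "env": {
        "OPENAI_API_KEY": "your-openai-api-key"
      }
    }
  }
}
```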
Command Line Testing
Test the server functionality:
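The repository's test_server.py script (referenced in the Support section) exercises the main tools and serves as a quick smoke test:

```bash
python test_server.py
```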
Using Prompts
The server provides several built-in prompts:
autogen-workflow - Create multi-agent workflows
code-review - Set up collaborative code review
research-analysis - Deploy research teams
Accessing Resources
Available resources provide real-time data:
autogen://agents/list - Current active agents
autogen://workflows/templates - Available workflow templates
autogen://chat/history - Recent conversation history
autogen://config/current - Server configuration
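From an MCP client, each resource is fetched with a standard resources/read request, for example:

```json
{
  "method": "resources/read",
  "params": {
    "uri": "autogen://agents/list"
  }
}
```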
Workflow Examples
Code Generation Workflow
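A sketch of launching the Architect → Developer → Reviewer → Executor pipeline with the execute_workflow tool; the argument names and the workflow identifier are illustrative:

```json
{
  "method": "tools/call",
  "params": {
    "name": "execute_workflow",
    "arguments": {
      "workflow_type": "code_generation",
      "task": "Implement a rate limiter with unit tests"
    }
  }
}
```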
Research Workflow
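And similarly for the Researcher → Analyst → Critic → Synthesizer pipeline (again with illustrative arguments):

```json
{
  "method": "tools/call",
  "params": {
    "name": "execute_workflow",
    "arguments": {
      "workflow_type": "research",
      "task": "Compare open-source vector databases for production use"
    }
  }
}
```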
Advanced Features
Agent Types
Assistant Agents: LLM-powered conversational agents
User Proxy Agents: Code execution and human interaction
Conversable Agents: Flexible conversation patterns
Teachable Agents: Learning and memory persistence (when available)
Retrievable Agents: Knowledge base integration (when available)
Chat Modes
Two-Agent Chat: Direct conversation between agents
Group Chat: Multi-agent discussions with smart speaker selection
Nested Chat: Hierarchical conversation structures
Swarm Intelligence: Coordinated problem solving (experimental)
Memory Management
Persistent agent memory across sessions
Conversation history tracking
Learning from interactions (teachable agents)
Memory cleanup and optimization
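These capabilities are surfaced through the manage_agent_memory tool; a hypothetical call to persist an agent's learned context might look like this (the action and argument names are assumptions):

```json
{
  "method": "tools/call",
  "params": {
    "name": "manage_agent_memory",
    "arguments": {
      "agent_name": "researcher",
      "action": "save"
    }
  }
}
```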
Troubleshooting
Common Issues
API Key Errors: Ensure your OpenAI API key is valid and has sufficient credits
Import Errors: Install all dependencies with pip install -r requirements.txt --user
Build Failures: Check Node.js version (>= 18) and run npm install
Chat Failures: Verify agent creation succeeded before attempting conversations
Debug Mode
Enable detailed logging:
Performance Tips
Use gpt-4o-mini for faster, cost-effective operations
Enable caching for repeated operations
Set appropriate timeout values for long-running workflows
Use quality checks only when needed (increases execution time)
Development
Running Tests
Building
Contributing
Fork the repository
Create a feature branch
Make your changes
Add tests for new functionality
Submit a pull request
Version History
v0.2.0 (Latest)
Enhanced MCP support with prompts and resources
Advanced agent types (teachable, retrievable)
Sophisticated workflows with quality checks
Smart speaker selection and nested conversations
Real-time resource monitoring
Memory management and persistence
v0.1.0
Basic AutoGen integration
Simple agent creation and chat execution
MCP tool interface
Support
For issues and questions:
Check the troubleshooting section above
Review the test examples in test_server.py
Open an issue on GitHub with detailed reproduction steps
License
MIT License - see LICENSE file for details.