Agent-MCP
🚀 Advanced Tool Notice: This framework is designed for experienced AI developers who need sophisticated multi-agent orchestration capabilities. Agent-MCP requires familiarity with AI coding workflows, MCP protocols, and distributed systems concepts. We're actively working to improve documentation and ease of use. If you're new to AI-assisted development, consider starting with simpler tools and returning when you need advanced multi-agent capabilities.
💬 Join the Community: Connect with us on Discord to get help, share experiences, and collaborate with other developers building multi-agent systems.
Multi-Agent Collaboration Protocol for coordinated AI software development.
Think Obsidian for your AI agents - a living knowledge graph where multiple AI agents collaborate through shared context, intelligent task management, and real-time visualization. Watch your codebase evolve as specialized agents work in parallel, never losing context or stepping on each other's work.
Why Multiple Agents?
Beyond the philosophical issues, traditional AI coding assistants hit practical limitations:
- Context windows overflow on large codebases
- Knowledge gets lost between conversations
- Single-threaded execution creates bottlenecks
- No specialization - one agent tries to do everything
- Constant rework from lost context and confusion
The Multi-Agent Solution
Agent-MCP transforms AI development from a single assistant to a coordinated team:
Real-time visualization shows your AI team at work - purple nodes represent context entries, blue nodes are agents, and connections show active collaborations. It's like having a mission control center for your development team.
Core Capabilities
Parallel Execution
Multiple specialized agents work simultaneously on different parts of your codebase. Backend agents handle APIs while frontend agents build UI components, all coordinated through shared memory.
Persistent Knowledge Graph
Your project's entire context lives in a searchable, persistent memory bank. Agents query this shared knowledge to understand requirements, architectural decisions, and implementation details. Nothing gets lost between sessions.
Intelligent Task Management
Monitor every agent's status, assigned tasks, and recent activity. The system automatically manages task dependencies, prevents conflicts, and ensures work flows smoothly from planning to implementation.
Quick Start
Python Implementation (Recommended)
Node.js/TypeScript Implementation (Alternative)
MCP Integration Guide
What is MCP?
The Model Context Protocol (MCP) is an open standard that enables AI assistants to securely connect to external data sources and tools. Agent-MCP leverages MCP to provide seamless integration with various development tools and services.
Running Agent-MCP as an MCP Server
Agent-MCP can function as an MCP server, exposing its multi-agent capabilities to MCP-compatible clients like Claude Desktop, Cline, and other AI coding assistants.
Quick MCP Setup
MCP Server Configuration
Create an MCP configuration file (mcp_config.json):
Using Agent-MCP with Claude Desktop
- Add to Claude Desktop Config: Open ~/Library/Application Support/Claude/claude_desktop_config.json (macOS) or the equivalent path on your platform (see the example configuration below)
- Restart Claude Desktop to load the MCP server
- Verify Connection: Claude should show "🔌 agent-mcp" in the conversation
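The exact entry depends on how you launch the Agent-MCP server, so treat the command, args, and key values below as placeholders rather than the project's official configuration; only the surrounding mcpServers structure is the standard Claude Desktop format:

```json
{
  "mcpServers": {
    "agent-mcp": {
      "command": "<command that launches the Agent-MCP server, e.g. uv>",
      "args": ["<arguments for the Agent-MCP server, e.g. --project-dir /path/to/project>"],
      "env": {
        "OPENAI_API_KEY": "<your OpenAI API key>"
      }
    }
  }
}
```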
MCP Tools Available
Once connected, you can use these MCP tools directly in Claude:
Agent Management
- create_agent - Spawn specialized agents (backend, frontend, testing, etc.)
- list_agents - View all active agents and their status
- terminate_agent - Safely shut down agents
Task Orchestration
- assign_task - Delegate work to specific agents
- view_tasks - Monitor task progress and dependencies
- update_task_status - Track completion and blockers
Knowledge Management
- ask_project_rag - Query the persistent knowledge graph
- update_project_context - Add architectural decisions and patterns
- view_project_context - Access stored project information
Communication
- send_agent_message - Direct messaging between agents
- broadcast_message - Send updates to all agents
- request_assistance - Escalate complex issues
Advanced MCP Configuration
Custom Transport Options:
Environment Variables:
MCP Client Examples
Python Client
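The original client snippet is not reproduced here; as a rough sketch, a client built on the official MCP Python SDK could connect to the server and call the tools listed above. The server URL, SSE endpoint path, and tool argument names are assumptions - adjust them to match your setup.

```python
# Sketch of an MCP client using the official `mcp` Python SDK.
# The server URL/port, SSE endpoint path, and tool argument names are
# assumptions -- adjust them to match how you launched Agent-MCP.
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client


async def main() -> None:
    async with sse_client("http://localhost:8080/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes (create_agent, list_agents, ...).
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

            # Query the shared knowledge graph.
            # (the "query" parameter name is an assumption)
            result = await session.call_tool(
                "ask_project_rag",
                arguments={"query": "What authentication scheme does this project use?"},
            )
            print(result)


asyncio.run(main())
```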
JavaScript Client
Troubleshooting MCP Connection
Connection Issues:
Common Issues:
- Port conflicts: Change the port with the --port flag
- Permission errors: Ensure OpenAI API key is set
- Client timeout: Increase timeout in client configuration
- Agent limit reached: Check active agent count with list_agents
Integration Examples
VS Code with MCP: Use the MCP extension to integrate Agent-MCP directly into your editor workflow.
Terminal Usage:
CI/CD Integration:
How It Works: Breaking Complexity into Simple Steps
Every task can be broken down into linear steps. This is the core insight that makes Agent-MCP powerful.
The Problem with Complex Tasks
The Agent-MCP Solution
Each agent focuses on their linear chain. No confusion. No context pollution. Just clear, deterministic progress.
The 5-Step Workflow
1. Initialize Admin Agent
2. Load Your Project Blueprint (MCD)
The MCD (Main Context Document) is your project's comprehensive blueprint - think of it as writing the book of your application before building it. It includes:
- Technical architecture and design decisions
- Database schemas and API specifications
- UI component hierarchies and workflows
- Task breakdowns with clear dependencies
See our MCD Guide for detailed examples and templates.
3. Deploy Your Agent Team
Each agent specializes in their domain, leading to higher quality implementations and faster development.
4. Initialize and Deploy Workers
Important: Setting Agent Modes
Agent modes (like --worker, --memory, --playwright) are not just flags - they activate specific behavioral patterns. In Claude Code, you can make these persistent by:
- Copying the mode instructions to your clipboard
- Typing # to open Claude's memory feature
- Pasting the instructions for persistent behavior
Example for Claude Code memory:
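The exact memory text is not reproduced here; based on the worker behaviors described under "Standard Worker Mode" below, the note you save might look something like this (illustrative wording, not the project's official prompt):

```
You are an Agent-MCP worker agent.
- Check file status before making any edits.
- Work on exactly one assigned task at a time, in order.
- Query the project RAG for context before implementing.
- Document your changes and update task status after each completion.
```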
This ensures consistent behavior across your entire session without repeating instructions.
5. Monitor and Coordinate
The dashboard provides real-time visibility into your AI development team:
Network Visualization - Watch agents collaborate and share information
Task Progress - Track completion across all parallel work streams
Memory Health - Ensure context remains fresh and accessible
Activity Timeline - See exactly what each agent is doing
Access at http://localhost:3847 after launching the dashboard.
Advanced Features
Specialized Agent Modes
Agent modes fundamentally change how agents behave. They're not just configuration - they're behavioral contracts that ensure agents follow specific patterns optimized for their role.
Standard Worker Mode
Optimized for implementation tasks:
- Granular file status checking before any edits
- Sequential task completion (one at a time)
- Automatic documentation of changes
- Integration with project RAG for context
- Task status updates after each completion
Frontend Specialist Mode
Enhanced with visual validation capabilities:
- All standard worker features
- Browser automation for component testing
- Screenshot capabilities for visual regression
- DOM interaction for end-to-end testing
- Component-by-component implementation with visual verification
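As a rough illustration of that visual-verification loop (not the mode's actual implementation), a frontend agent's check might use Playwright along these lines; the dev-server URL and selector are placeholders:

```python
# Illustrative visual check a frontend-mode agent might run after implementing
# a component. The dev-server URL and selector below are placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000/login")                 # placeholder dev-server URL
    page.wait_for_selector("[data-testid='login-form']")     # placeholder selector
    page.screenshot(path="login-form.png", full_page=True)   # capture for visual regression
    browser.close()
```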
Research Mode
Read-only access for analysis and planning:
- No file modifications allowed
- Deep context exploration via RAG
- Pattern identification across codebase
- Documentation generation
- Architecture analysis and recommendations
Memory Management Mode
For context curation and optimization:
- Memory health monitoring
- Stale context identification
- Knowledge graph optimization
- Context summarization for new agents
- Cross-agent knowledge transfer
Each mode enforces specific behaviors that prevent common mistakes and ensure consistent, high-quality output.
Project Memory Management
The system maintains several types of memory:
Project Context - Architectural decisions, design patterns, conventions
Task Memory - Current status, blockers, implementation notes
Agent Memory - Individual agent learnings and specializations
Integration Points - How different components connect
All memory is:
- Searchable via semantic queries
- Version controlled for rollback
- Tagged for easy categorization
- Automatically garbage collected when stale
Conflict Resolution
File-level locking prevents agents from overwriting each other's work:
- Agent requests file access
- System checks if file is locked
- If locked, agent works on other tasks or waits
- After completion, lock is released
- Other agents can now modify the file
This happens automatically - no manual coordination needed.
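A minimal sketch of that request/check/release flow is shown below; it is illustrative only, and Agent-MCP's actual lock storage and API are not shown here.

```python
# Illustrative file-lock flow; not Agent-MCP's actual implementation.
import threading

_locks: dict[str, str] = {}   # file path -> id of the agent holding the lock
_guard = threading.Lock()


def request_file(path: str, agent_id: str) -> bool:
    """An agent requests exclusive access to a file before editing."""
    with _guard:
        holder = _locks.get(path)
        if holder is None or holder == agent_id:
            _locks[path] = agent_id
            return True    # lock granted -- safe to edit
        return False       # locked by another agent -- work on other tasks or wait


def release_file(path: str, agent_id: str) -> None:
    """After completion, the lock is released so other agents can modify the file."""
    with _guard:
        if _locks.get(path) == agent_id:
            del _locks[path]
```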
Short-Lived vs. Long-Lived Agents: The Critical Difference
Traditional Long-Lived Agents
Most AI coding assistants maintain conversations across entire projects:
- Accumulated context grows unbounded - mixing unrelated code, decisions, and conversations
- Confused priorities - yesterday's bug fix mingles with today's feature request
- Hallucination risks increase - agents invent connections between unrelated parts
- Performance degrades over time - every response processes irrelevant history
- Security vulnerability - one carefully crafted prompt could expose your entire project
Agent-MCP's Ephemeral Agents
Each agent is purpose-built for a single task:
- Minimal, focused context - only what's needed for the specific task
- Crystal clear objectives - one task, one goal, no ambiguity
- Deterministic behavior - limited context means predictable outputs
- Consistently fast responses - no context bloat to slow things down
- Secure by design - agents literally cannot access what they don't need
A Practical Example
Traditional Approach: "Update the user authentication system"
Agent-MCP Approach: Same request, broken into focused tasks
The Theory Behind Linear Decomposition
The Philosophy: Short-Lived Agents, Granular Tasks
Most AI development approaches suffer from a fundamental flaw: they try to maintain massive context windows with a single, long-running agent. This leads to:
- Context pollution - Irrelevant information drowns out what matters
- Hallucination risks - Agents invent connections between unrelated parts
- Security vulnerabilities - Agents with full context can be manipulated
- Performance degradation - Large contexts slow down reasoning
- Unpredictable behavior - Too much context creates chaos
Our Solution: Ephemeral Agents with Shared Memory
Agent-MCP implements a radically different approach:
Short-Lived, Focused Agents
Each agent lives only as long as its specific task (see the sketch after this list). Agents:
- Start with minimal context (just what they need)
- Execute granular, linear tasks with clear boundaries
- Document their work in shared memory
- Terminate upon completion
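Conceptually, the lifecycle looks something like the sketch below. Everything here is a stand-in: the storage and task execution are stubbed out, and none of it is Agent-MCP's real API.

```python
# Conceptual sketch of an ephemeral worker's lifecycle. The storage and task
# execution are stubbed out -- this is not Agent-MCP's real implementation.
shared_memory: dict[str, str] = {}   # stand-in for the persistent knowledge graph


def run_worker(task_id: str, context_key: str) -> None:
    # 1. Start with minimal context: pull only what this task needs.
    context = shared_memory.get(context_key, "")

    # 2. Execute one granular, linear task (stubbed).
    result = f"completed {task_id} using context: {context!r}"

    # 3. Document the work in shared memory so later agents can build on it.
    shared_memory[f"result:{task_id}"] = result

    # 4. Terminate: the function returns and the worker's context disappears.


shared_memory["auth:schema"] = "users table with email + password_hash"
run_worker("task-42", "auth:schema")
print(shared_memory["result:task-42"])
```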
Shared Knowledge Graph (RAG)
Instead of cramming everything into context windows:
- Persistent memory stores all project knowledge
- Agents query only what's relevant to their task
- Knowledge accumulates without overwhelming any single agent
- Clear separation between working memory and reference material
Result: Agents that are fast, focused, and safe. They can't be manipulated to reveal full project details because they never have access to it all at once.
Why This Matters for Safety
Traditional long-context agents are like giving someone your entire codebase, documentation, and secrets in one conversation. Our approach is like having specialized contractors who only see the blueprint for their specific room.
- Reduced attack surface - Agents can't leak what they don't know
- Deterministic behavior - Limited context means predictable outputs
- Audit trails - Every agent action is logged and traceable
- Rollback capability - Mistakes are isolated to specific tasks
The Cleanup Protocol: Keeping Your System Lean
Agent-MCP enforces strict lifecycle management:
Maximum 10 Active Agents
- Hard limit prevents resource exhaustion
- Forces thoughtful task allocation
- Maintains system performance
Automatic Cleanup Rules
- Agent finishes task → Immediately terminated
- Agent idle 60+ seconds → Killed and task reassigned
- Need more than 10 agents → Least productive agents removed
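A simplified sketch of those rules appears below; the thresholds come from the list above, while the data structures are illustrative rather than Agent-MCP's internals.

```python
# Simplified cleanup pass implementing the rules above; the data structures are
# illustrative, not Agent-MCP's internals.
import time
from dataclasses import dataclass

MAX_ACTIVE_AGENTS = 10
IDLE_LIMIT_SECONDS = 60


@dataclass
class Agent:
    agent_id: str
    finished: bool
    last_active: float     # unix timestamp of last activity
    tasks_completed: int   # crude "productivity" score


def cleanup(agents: list[Agent], now: float | None = None) -> list[Agent]:
    now = time.time() if now is None else now

    # Finished agents are terminated immediately; agents idle 60+ seconds are
    # killed so their tasks can be reassigned.
    alive = [a for a in agents
             if not a.finished and now - a.last_active < IDLE_LIMIT_SECONDS]

    # If still over the hard limit, drop the least productive agents first.
    alive.sort(key=lambda a: a.tasks_completed, reverse=True)
    return alive[:MAX_ACTIVE_AGENTS]
```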
Why This Matters
- No zombie processes eating resources
- Fresh context for every task
- Predictable resource usage
- Clean system state always
This isn't just housekeeping - it's fundamental to the security and performance benefits of the short-lived agent model.
The Fundamental Principle
Any task that cannot be expressed as Step 1 → Step 2 → Step N is not atomic enough.
This principle drives everything in Agent-MCP:
- Complex goals must decompose into linear sequences
- Linear sequences can execute in parallel when independent
- Each step must have clear prerequisites and deterministic outputs
- Integration points are explicit and well-defined
Why Linear Decomposition Works
Traditional Approach: "Build a user authentication system"
- Vague requirements lead to varied implementations
- Agents make different assumptions
- Integration becomes a nightmare
Agent-MCP Approach:
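As one illustration (not taken from the project's own docs), the same authentication request might decompose into atomic, dependency-ordered steps like this:

```python
# Hypothetical decomposition of "build a user authentication system" into
# atomic steps with explicit dependencies -- illustrative, not a real task file.
auth_tasks = [
    {"id": "T1", "agent": "backend",  "step": "Define the users table schema and migrations", "depends_on": []},
    {"id": "T2", "agent": "backend",  "step": "Implement password hashing and verification",  "depends_on": ["T1"]},
    {"id": "T3", "agent": "backend",  "step": "Expose /login and /logout API endpoints",      "depends_on": ["T2"]},
    {"id": "T4", "agent": "frontend", "step": "Build the login form component",               "depends_on": []},
    {"id": "T5", "agent": "frontend", "step": "Wire the form to the /login endpoint",         "depends_on": ["T3", "T4"]},
    {"id": "T6", "agent": "testing",  "step": "Add end-to-end tests for the login flow",      "depends_on": ["T5"]},
]

# Independent chains (T1 -> T2 -> T3 and T4) can run in parallel; integration
# points (T5, T6) wait for their explicit prerequisites.
```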
Each step is atomic, testable, and has zero ambiguity. Multiple agents can work these chains in parallel without conflict.
Why Developers Choose Agent-MCP
The Power of Parallel Development
Instead of waiting for one agent to finish the backend before starting the frontend, deploy specialized agents to work simultaneously. Your development speed is limited only by how well you decompose tasks.
No More Lost Context
Every decision, implementation detail, and architectural choice is stored in the shared knowledge graph. New agents instantly understand the project state without reading through lengthy conversation histories.
Predictable, Reliable Outputs
Focused agents with limited context produce consistent results. The same task produces the same quality output every time, making development predictable and testable.
Built-in Conflict Prevention
File-level locking and task assignment prevent agents from stepping on each other's work. No more merge conflicts from simultaneous edits.
Complete Development Transparency
Watch your AI team work in real-time through the dashboard. Every action is logged, every decision traceable. It's like having a live view into your development pipeline.
For Different Team Sizes
Solo Developers: Transform one AI assistant into a coordinated team. Work on multiple features simultaneously without losing track.
Small Teams: Augment human developers with AI specialists that maintain perfect context across sessions.
Large Projects: Handle complex systems where no single agent could hold all the context. The shared memory scales infinitely.
Learning & Teaching: Perfect for understanding software architecture. Watch how tasks decompose and integrate in real-time.
System Requirements
- Python: 3.10+ with pip or uv
- Node.js: 18.0.0+ (recommended: 22.16.0)
- npm: 9.0.0+ (recommended: 10.9.2)
- OpenAI API key (for embeddings and RAG)
- RAM: 4GB minimum
- AI coding assistant: Claude Code or Cursor
For a consistent development environment:
Troubleshooting
"Admin token not found"
Check the server startup logs - the token is displayed when the MCP server starts.
"Worker can't access tasks"
Ensure you're using the worker token (not admin token) when initializing workers.
"Agents overwriting each other"
Verify all workers are initialized with the --worker flag for proper coordination.
"Dashboard connection failed"
- Ensure MCP server is running first
- Check Node.js version (18+ required)
- Reinstall dashboard dependencies
"Memory queries returning stale data"
Run memory garbage collection through the dashboard or restart with --refresh-memory.
Documentation
- Getting Started Guide - Complete walkthrough with examples
- MCD Creation Guide - Write effective project blueprints
- Theoretical Foundation - Understanding AI cognition
- Architecture Overview - System design and components
- API Reference - Complete technical documentation
Community and Support
Get Help
- Discord Community - Active developer discussions
- GitHub Issues - Bug reports and features
- Discussions - Share your experiences
Contributing
We welcome contributions! See our Contributing Guide for:
- Code style and standards
- Testing requirements
- Pull request process
- Development setup
License
This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).
What this means:
- ✅ You can use, modify, and distribute this software
- ✅ You can use it for commercial purposes
- ⚠️ Important: If you run a modified version on a server that users interact with over a network, you must provide the source code to those users
- ⚠️ Any derivative works must also be licensed under AGPL-3.0
- ⚠️ You must include copyright notices and license information
See the LICENSE file for complete terms and conditions.
Why AGPL? We chose AGPL to ensure that improvements to Agent-MCP benefit the entire community, even when used in server/SaaS deployments. This prevents proprietary forks that don't contribute back to the ecosystem.
Built by developers who believe AI collaboration should be as sophisticated as human collaboration.