MCP Crew AI Server
MCP Crew AI Server is a lightweight Python-based server designed to run, manage and create CrewAI workflows. This project leverages the Model Context Protocol (MCP) to communicate with Large Language Models (LLMs) and tools such as Claude Desktop or Cursor IDE, allowing you to orchestrate multi-agent workflows with ease.
Features
- **Automatic Configuration**: Automatically loads agent and task configurations from two YAML files (`agents.yml` and `tasks.yml`), so you don't need to write custom code for basic setups.
- **Command Line Flexibility**: Pass custom paths to your configuration files via command line arguments (`--agents` and `--tasks`).
- **Seamless Workflow Execution**: Easily run pre-configured workflows through the MCP `run_workflow` tool.
- **Local Development**: Run the server locally in STDIO mode, making it ideal for development and testing.
Installation
There are several ways to install the MCP Crew AI server:
Option 1: Install from PyPI (Recommended)
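A sketch of the PyPI install, assuming the package is published under the name `mcp-crew-ai` (the exact distribution name is an assumption):

```shell
# Install the server from PyPI (package name assumed to be mcp-crew-ai)
pip install mcp-crew-ai
```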
Option 2: Install from GitHub
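pip can install directly from a Git repository URL; the owner/repository path below is a placeholder, not the project's actual URL:

```shell
# Install straight from the GitHub repository (URL path is a placeholder)
pip install git+https://github.com/<owner>/mcp-crew-ai.git
```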
Option 3: Clone and Install
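For local development, a clone plus editable install is the usual pattern; again, the repository URL is a placeholder:

```shell
# Clone the repository (URL is a placeholder), then install in editable mode
git clone https://github.com/<owner>/mcp-crew-ai.git
cd mcp-crew-ai
pip install -e .
```

An editable install means changes to the cloned source take effect without reinstalling.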
Requirements
- Python 3.11+
- MCP SDK
- CrewAI
- PyYAML
Configuration
- agents.yml: Define your agents with roles, goals, and backstories.
- tasks.yml: Define tasks with descriptions, expected outputs, and assign them to agents.
Example `agents.yml`:
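A minimal sketch of an agents file. The agent name and field values are hypothetical; the `role`/`goal`/`backstory` keys follow the fields listed above, and `{topic}` is a template placeholder filled in at run time:

```yaml
# agents.yml — one top-level key per agent (name and values are illustrative)
researcher:
  role: "Senior Research Analyst"
  goal: "Uncover the latest developments in {topic}"
  backstory: "A veteran analyst with a knack for spotting emerging trends."
```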
Example `tasks.yml`:
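A matching tasks file, again hypothetical in its names and values; the `description`/`expected_output`/`agent` keys follow the fields listed above, and the `agent` value refers back to an agent defined in `agents.yml`:

```yaml
# tasks.yml — one top-level key per task (name and values are illustrative)
research_task:
  description: "Research the most significant trends in {topic}."
  expected_output: "A bullet-point summary of the key findings."
  agent: researcher   # must match an agent defined in agents.yml
```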
Usage
Once installed, you can run the MCP CrewAI server using either of these methods:
Standard Python Command
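A sketch of a typical invocation, assuming the package installs an `mcp-crew-ai` console script (the entry-point name is an assumption); the flags are those documented under Command Line Options:

```shell
# Run the server over STDIO with explicit config paths
mcp-crew-ai --agents config/agents.yml --tasks config/tasks.yml
```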
Using UV Execution (uvx)
For a more streamlined experience, you can use the UV execution command:
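With `uvx`, the package is fetched and run in one step with no prior install; the package name is the same assumption as above:

```shell
# uvx resolves, caches, and runs the package in a single command
uvx mcp-crew-ai --agents config/agents.yml --tasks config/tasks.yml
```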
Or run just the server directly:
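A sketch of a bare invocation with no flags, relying on environment-variable defaults; the separate server entry-point name shown here is hypothetical:

```shell
# Start the server with no arguments; configuration is read from
# environment variables (entry-point name is an assumption)
uvx mcp-crew-ai-server
```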
This will start the server using default configuration from environment variables.
Command Line Options
- `--agents`: Path to the agents YAML file (required)
- `--tasks`: Path to the tasks YAML file (required)
- `--topic`: The main topic for the crew to work on (default: "Artificial Intelligence")
- `--process`: Process type to use (choices: "sequential" or "hierarchical", default: "sequential")
- `--verbose`: Enable verbose output
- `--variables`: JSON string or path to JSON file with additional variables to replace in YAML files
- `--version`: Show version information and exit
Advanced Usage
You can also provide additional variables to be used in your YAML templates:
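For instance, extra template variables can be passed as an inline JSON string via the documented `--variables` flag (the `mcp-crew-ai` entry-point name and config paths are assumptions):

```shell
# Pass extra template variables as an inline JSON string
mcp-crew-ai --agents config/agents.yml --tasks config/tasks.yml \
  --variables '{"topic": "Machine Learning", "year": "2025"}'
```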
These variables will replace placeholders in your YAML files. For example, `{topic}` will be replaced with "Machine Learning" and `{year}` with "2025".
Contributing
Contributions are welcome! Please open issues or submit pull requests with improvements, bug fixes, or new features.
Licence
This project is licensed under the MIT Licence. See the LICENSE file for details.
Happy workflow orchestration!