# One-Click Deploy n8n MCP Server
A Model Context Protocol (MCP) server that allows AI agents to interact with n8n workflows through natural language.
## Deployment

### One-Click Deploy to Railway
Note: After clicking the button, Railway will prompt you to configure the necessary environment variables (see below).
### Docker Deployment (Manual / Local Testing)
The `Dockerfile` in this repository is configured to build the `n8n-mcp-server` and run it with Supergateway.
- Build the Docker image.
- Run the Docker container. Replace placeholder values with your actual n8n credentials. The server will be accessible via SSE on `http://localhost:8080`. Supergateway serves the event stream at the default path `/sse` and accepts posted messages at `/message`.
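The two steps above might look like the following; the image tag and all credential values are placeholders:

```shell
# Build the image from the repository root
docker build -t n8n-mcp-server .

# Run the container, passing your n8n credentials as environment variables
docker run -p 8080:8080 \
  -e N8N_API_URL="https://n8n.example.com/api/v1" \
  -e N8N_API_KEY="your-api-key" \
  -e N8N_WEBHOOK_USERNAME="anyname" \
  -e N8N_WEBHOOK_PASSWORD="somepassword" \
  n8n-mcp-server
```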
## Configuration

The server requires the following environment variables. When deploying to Railway using the button, you will be prompted for these. For local Docker runs, pass them using the `-e` flag as shown above.
- `N8N_API_URL`: Your n8n instance API URL (e.g., `https://n8n.example.com/api/v1`). Required.
- `N8N_API_KEY`: Your n8n API key. Required and treated as a secret.
- `N8N_WEBHOOK_USERNAME`: A username for basic authentication on n8n webhook nodes (if your workflows use webhook triggers secured with basic auth). Default: `anyname`.
- `N8N_WEBHOOK_PASSWORD`: A password for basic authentication on n8n webhook nodes. Default: `somepassword`.
- `DEBUG`: Set to `true` for verbose logging from the `n8n-mcp-server` and Supergateway, or `false` for production. Default: `false`.
- `PORT`: The port the application will listen on. Railway sets this automatically, and Supergateway uses this variable. The `Dockerfile` default is `8080`.
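Collected in an env file, the full configuration might look like this (all values are placeholders) and can be passed to a local Docker run with `--env-file`:

```
N8N_API_URL=https://n8n.example.com/api/v1
N8N_API_KEY=your-api-key
N8N_WEBHOOK_USERNAME=anyname
N8N_WEBHOOK_PASSWORD=somepassword
DEBUG=false
PORT=8080
```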
### Generating an n8n API Key
- Open your n8n instance in a browser.
- Go to Settings > API (or a similar path depending on your n8n version).
- Create a new API key with appropriate permissions.
- Copy the key.
## Connecting to the Server (Client Integration)
Once the `n8n-mcp-server` is running (e.g., deployed on Railway or locally in Docker), it exposes an MCP interface over Server-Sent Events (SSE). The Supergateway instance within the Docker container (as defined in the `Dockerfile`) typically makes the MCP server available at:
- SSE stream: `http://<server_address>:<port>/sse`
- Message endpoint: `http://<server_address>:<port>/message`

(If deployed on Railway, `<server_address>:<port>` will be your public Railway URL, e.g., `https://my-n8n-mcp.up.railway.app`.)
There are a couple of ways AI agents or MCP clients can connect:

- Direct SSE connection: If your MCP client (e.g., your AI agent's framework) natively supports connecting to an MCP server via an SSE URL and a message endpoint, configure it with the URLs mentioned above. When deploying on Railway, add your environment variables and make sure port 8080 is exposed.
- Using Supergateway on the client side (SSE-to-stdio bridge): If your MCP client expects to launch a local command that communicates via stdio (standard input/output), you can run another Supergateway instance locally on the client's machine to bridge the remote SSE connection back to stdio. In this setup:
  - Your AI agent's MCP client runs `npx -y supergateway --sse ...` as its command.
  - This local Supergateway connects to your remote `n8n-mcp-server`'s SSE endpoint.
  - It then presents an MCP interface over stdio to your AI agent.
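An `mcp.json` fragment covering both connection styles might look like the sketch below; the exact schema varies by client, and the Railway URL is a placeholder:

```json
{
  "mcpServers": {
    "n8n-direct-sse": {
      "url": "https://my-n8n-mcp.up.railway.app/sse"
    },
    "n8n-stdio-bridge": {
      "command": "npx",
      "args": ["-y", "supergateway", "--sse", "https://my-n8n-mcp.up.railway.app/sse"]
    }
  }
}
```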
## Available Tools
The server provides the following tools (accessed via the MCP connection established above):
### Using Webhooks
This MCP server supports executing workflows through n8n webhooks. To use this functionality:
- Create a webhook-triggered workflow in n8n.
- Set up Basic Authentication on your webhook node (optional, but recommended).
- Use the `run_webhook` tool to trigger the workflow, passing just the workflow name.
Example (conceptual client-side code):
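The sketch below assumes a generic MCP client with a hypothetical `callTool` method; the interface name, method, and workflow name are illustrative, not part of this repository:

```typescript
// Conceptual sketch only: `McpClient` and `callTool` are hypothetical
// stand-ins for whatever MCP client library your agent framework uses.
interface McpClient {
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
}

// Trigger the webhook-backed workflow named "hello-world" with custom data.
async function triggerWorkflow(client: McpClient): Promise<unknown> {
  return client.callTool("run_webhook", {
    workflowName: "hello-world",
    data: { prompt: "Hello from an AI agent!" },
  });
}
```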
Webhook authentication (if used) is handled via the `N8N_WEBHOOK_USERNAME` and `N8N_WEBHOOK_PASSWORD` environment variables configured for the server.
### Workflow Management

- `workflow_list`: List all workflows
- `workflow_get`: Get details of a specific workflow
- `workflow_create`: Create a new workflow
- `workflow_update`: Update an existing workflow
- `workflow_delete`: Delete a workflow
- `workflow_activate`: Activate a workflow
- `workflow_deactivate`: Deactivate a workflow
### Execution Management

- `execution_run`: Execute a workflow via the API (note: `run_webhook`, described above, is often the preferred way to trigger workflows)
- `execution_get`: Get details of a specific execution
- `execution_list`: List executions for a workflow

Note: `execution_stop` might not be implemented in all n8n versions or in the base server.
## Self-Hosting (Advanced)
For users who prefer to run the server outside of Docker or a platform like Railway, you can run the Node.js application directly. This gives you more control but requires manual setup of the execution environment and potentially Supergateway if SSE is desired.
1. Clone the repository.
2. Install dependencies.
3. Build the server. This compiles the TypeScript to JavaScript in the `build` directory.
4. Configure environment variables. Create a `.env` file in the project root (you can copy `.env.example`) and fill in your n8n API details (`N8N_API_URL`, `N8N_API_KEY`, etc.) and any other required variables such as `PORT` (if not 8080) or `DEBUG`.
5. Run the stdio MCP server. This starts the `n8n-mcp-server` communicating over standard input/output (stdio).
6. Expose it via SSE (optional, manual Supergateway setup). If you need to access this self-hosted server via SSE, run your own instance of Supergateway to wrap the stdio command above. Ensure that the environment variables configured in step 4 are accessible to the `node build/index.js` process when launched by Supergateway.
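End to end, the steps above might look like this; the repository URL and the `npm run build` script name are assumptions, and the Supergateway invocation uses its standard `--stdio`/`--port` flags:

```shell
git clone <repository-url> n8n-mcp-server   # placeholder: use your fork or clone source
cd n8n-mcp-server
npm install                                 # install dependencies
npm run build                               # compile TypeScript into build/
cp .env.example .env                        # then edit .env with your n8n credentials
node build/index.js                         # run the stdio MCP server

# Optional: expose the stdio server over SSE with a local Supergateway wrapper
npx -y supergateway --stdio "node build/index.js" --port 8080
```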
This method is more involved than the Docker or Railway deployments, which handle the Supergateway integration automatically within the container.
## Credits

Based on this repo: https://github.com/leonardsellem/n8n-mcp-server/

## License

MIT