Ableton MCP Extended
Control Ableton Live using natural language via AI assistants like Claude or Cursor. This project provides a robust Model Context Protocol (MCP) server that translates natural language commands into precise actions within your Ableton Live session.
Video demonstration: https://www.youtube.com/watch?v=7ZKPIrJuuKk
This tool is designed for producers, developers, and AI enthusiasts who want to streamline their music production workflow, experiment with generative music, and build custom integrations with Ableton Live.
You can transform this conversation:
Into this music production session:
https://github.com/user-attachments/assets/d6ef2de5-bdeb-4097-acc0-67d70f7f85b3
Key Features
This project provides comprehensive, programmatic control over the Ableton Live environment.
Session and Transport Control:
Start and stop playback.
Get session info, including tempo, time signature, and track count.
Manage scenes: create, delete, rename, and fire.
Track Management:
Create, rename, and get detailed information for MIDI and audio tracks.
Control track properties: volume, panning, mute, solo, and arm.
Manage track grouping and folding states.
MIDI Clip and Note Manipulation:
Create and name MIDI clips with specified lengths.
Add, delete, transpose, and quantize notes within clips.
Perform batch edits on multiple notes in a single operation.
Adjust clip loop parameters and follow actions.
Device and Parameter Control:
Load instruments and effects from Ableton's browser by URI.
Get a full list of parameters for any device on a track.
Set and batch-set device parameters using normalized values (0.0 to 1.0).
Automation and Envelopes:
Add and clear automation points for any device parameter within a clip (note: this feature is not yet fully reliable).
Get information about existing clip envelopes.
Browser Integration:
Navigate and list items from Ableton's browser.
Load instruments, effects, and samples directly from a browser path or URI.
Import audio files directly into audio tracks or clip slots.
Voice & Audio Generation
Text-to-Speech Integration: Generate narration, vocal samples, or spoken elements through the included ElevenLabs MCP server.
Custom Voice Creation: Clone voices for unique character in your tracks
Sound Effects: Create custom SFX with AI
Direct Import: Generated audio appears instantly in your Ableton session
Extensible Framework for Custom Tools
XY Mouse Controller Example: Demonstrates how to build custom Ableton controllers with the MCP framework
Ultra-Low Latency: High-performance UDP protocol enables responsive real-time control
Unlimited Possibilities: Build your own custom tools and controllers for Ableton Live
Quick Start (5 Minutes)
Prerequisites
Ableton Live 11+ (any edition)
Python 3.10 or higher
Claude Desktop or Cursor IDE
1. Get the Code
Clone or download this repository; it contains both the MCP server (server.py) and the Ableton Remote Script used in the next step.
2. Install Ableton Script
Find your Ableton Remote Scripts folder:
Windows:
C:\Users\[You]\Documents\Ableton\User Library\Remote Scripts\
Mac:
~/Library/Preferences/Ableton/Live [Version]/User Remote Scripts/
Create a folder named AbletonMCP inside it.
Copy AbletonMCP_Remote_Script/__init__.py into this folder.
3. Configure Ableton
Open Ableton Live
Go to Preferences → Link, Tempo & MIDI
Set Control Surface to "AbletonMCP"
Set Input/Output to "None"
4. Connect AI Assistant
For Claude Desktop:
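A minimal sketch of the entry to add to claude_desktop_config.json, assuming the server is launched with Python; the server name and file path below are placeholders, so point them at wherever you cloned the repository:

```json
{
  "mcpServers": {
    "AbletonMCP": {
      "command": "python",
      "args": ["/absolute/path/to/ableton-mcp-extended/server.py"]
    }
  }
}
```

Restart Claude Desktop after editing the config so the new server is picked up.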
For Cursor: Add MCP server in Settings → MCP with the same path.
5. Start Creating!
Open your AI assistant and try:
"Create a new MIDI track with a piano"
"Add a simple drum beat"
"What tracks do I currently have?"
How It Works
You issue a command in plain English to your AI assistant (e.g., "Create a new MIDI track and name it 'Bass'").
The AI Assistant understands the intent and calls the appropriate tool from the MCP server.
The MCP Server (server.py) receives the tool call and constructs a specific JSON command.
The Ableton Remote Script (__init__.py), running inside Live, receives the JSON command via a socket connection.
The Remote Script executes the command using the official Ableton Live API, making the change in your session instantly.
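For orientation, here is a rough sketch of what steps 3 and 4 look like on the wire. The port number and JSON field names are assumptions for illustration only; the authoritative definitions live in server.py and the Remote Script.

```python
# Conceptual sketch of the server -> Remote Script exchange (illustrative only;
# the real port and message schema are defined in server.py and the Remote Script).
import json
import socket

HOST, PORT = "127.0.0.1", 9877  # assumed local port the Remote Script listens on

def send_command(command_type: str, params: dict) -> dict:
    """Send one JSON command to the Remote Script and return its JSON reply."""
    with socket.create_connection((HOST, PORT), timeout=5.0) as sock:
        sock.sendall(json.dumps({"type": command_type, "params": params}).encode("utf-8"))
        return json.loads(sock.recv(65536).decode("utf-8"))

# Roughly what "Create a new MIDI track and name it 'Bass'" could translate to
# (hypothetical command names):
# send_command("create_midi_track", {"index": -1})
# send_command("set_track_name", {"track_index": 0, "name": "Bass"})
```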
Advanced Features
For real-time parameter control with ultra-low latency, run the hybrid TCP/UDP server (a minimal UDP sender sketch follows the list below).
This demonstrates how to build:
Custom real-time controllers for Ableton
Expressive performance tools
Interactive music applications
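To make the idea concrete, the sketch below streams a normalized parameter value over UDP in a tight loop. The port and datagram fields are hypothetical, not the repository's actual protocol; see the XY mouse controller example for the real implementation.

```python
# Illustrative UDP sender for real-time parameter control. Port number and
# message fields are assumptions; consult the hybrid server and the XY mouse
# controller example in this repository for the actual protocol.
import json
import math
import socket
import time

UDP_ADDR = ("127.0.0.1", 9878)  # hypothetical UDP port of the hybrid server

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.time()
while time.time() - start < 2.0:                              # sweep for ~2 seconds
    value = 0.5 + 0.5 * math.sin(4 * (time.time() - start))   # normalized 0.0-1.0
    msg = {"track_index": 0, "device_index": 0, "parameter_index": 1, "value": value}
    sock.sendto(json.dumps(msg).encode("utf-8"), UDP_ADDR)
    time.sleep(0.01)                                           # ~100 updates per second
```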
This repository can be integrated with other MCP servers, such as one for ElevenLabs, to generate and import audio directly into your project.
Set up the ElevenLabs MCP server according to its instructions.
Update your AI assistant's config to include both servers.
Example mcp-config.json:
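A combined configuration could look like the following; the server names, file path, and the ElevenLabs launch command are assumptions, so follow each server's own install instructions for the exact values (the ElevenLabs MCP server also needs an API key):

```json
{
  "mcpServers": {
    "AbletonMCP": {
      "command": "python",
      "args": ["/absolute/path/to/ableton-mcp-extended/server.py"]
    },
    "ElevenLabs": {
      "command": "uvx",
      "args": ["elevenlabs-mcp"],
      "env": {
        "ELEVENLABS_API_KEY": "your-api-key"
      }
    }
  }
}
```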
Components Overview
This project includes several specialized components:
Core MCP Server
Standard TCP communication for reliable AI control
Extensive Ableton Live API integration
Compatible with Claude Desktop, Cursor, and Gemini CLI.
Hybrid TCP/UDP Server
High-performance real-time parameter control
Ultra-low latency for live performance
Perfect for controllers and interactive tools
ElevenLabs Integration
Professional text-to-speech generation
Custom voice creation and cloning
Direct import into Ableton sessions
Real-time SFX generation
Experimental Tools & Examples
XY Mouse Controller: Example demonstrating how to build custom Ableton controllers
Extensible Framework: Foundation for creating your own control interfaces
Proof of Concept: Shows the power and flexibility of the MCP approach
Documentation
Installation Guide - Detailed setup instructions
User Guide - What the server can do and how to use it
Community & Support
GitHub Issues: Bug reports and feature requests
Discussions: Share your creations and get help
Share Your Creations
Tag me with your AI-generated experiments! I love seeing what the community creates:
YouTube | Instagram | Patreon | Website
What's Next
Automation Fixes - Resolve automation point placement bugs
VST Plugin Support - Control third-party plugins (already partly achievable through Ableton's "Configure" parameter mapping)
Arrangement View - Full timeline control
Hardware Integration - Bridge MIDI controllers through AI
Advanced AI - Deeper music understanding and generation
License & Credits
This project is licensed under the MIT License - see LICENSE for details.
Built with:
Model Context Protocol - AI integration framework
ElevenLabs API - Professional voice generation
Ableton Live - Digital audio workstation
Inspired by: The original ableton-mcp project
Made with ❤️ for the music production community
If this project helps your creativity, consider giving it a ⭐ star!