# mcp-video

Provides a high-level API for video editing by wrapping FFmpeg, enabling tools for trimming, merging, adding text or audio overlays, format conversion, and metadata extraction.

## What is mcp-video?

mcp-video is an open-source video editing server built on the Model Context Protocol (MCP). It gives AI agents and any MCP-compatible client the ability to programmatically edit video files.

Think of it as FFmpeg with an API that AI agents can actually use. Instead of memorizing cryptic command-line flags, an agent calls structured tools with clear parameters and gets structured results back.
## The Problem It Solves

AI agents can write code, analyze documents, and browse the web — but they can't edit video. Existing video editing tools are either:

- **GUI-only** (Premiere, DaVinci, CapCut) — agents can't use them
- **Raw FFmpeg wrappers** — require memorizing hundreds of flags
- **Cloud APIs** (Render, Bannerbear) — expensive, slow, vendor lock-in

mcp-video bridges this gap. It's a local, fast, free video editing layer that any AI agent can use through a standard protocol.
Three Ways to Use It
Interface | Best For | Example |
MCP Server | AI agents (Claude Code, Cursor) | "Trim this video and add a title" |
Python Client | Scripts, automation, pipelines |
|
CLI | Shell scripts, quick ops |
|
## Install

### Prerequisites

FFmpeg must be installed on your system:

```bash
# macOS
brew install ffmpeg

# For full text overlay support (drawtext filter):
# brew install freetype harfbuzz
# brew reinstall --build-from-source ffmpeg
# Verify: ffmpeg -filters | grep drawtext

# Ubuntu/Debian
sudo apt install ffmpeg

# Windows
# Download from https://ffmpeg.org/download.html
```

### Installation

```bash
pip install mcp-video
```

Or with UVX (no install needed):

```bash
uvx mcp_video
```

## Quick Start
### 1. As an MCP Server (for AI agents)

Add to your Claude Code or Claude Desktop MCP settings:

```json
{
  "mcpServers": {
    "mcp_video": {
      "command": "uvx",
      "args": ["mcp_video"]
    }
  }
}
```

Or if installed via pip:

```json
{
  "mcpServers": {
    "mcp_video": {
      "command": "mcp_video"
    }
  }
}
```

Then just ask your agent: "Trim this video from 0:30 to 1:00, add a title card, and resize for TikTok."
### 2. As a Python Library

```python
from mcp_video import Client

editor = Client()

# Get video info
info = editor.info("interview.mp4")
print(f"Duration: {info.duration}s, Resolution: {info.resolution}")

# Trim a clip
clip = editor.trim("interview.mp4", start="00:02:15", duration="00:00:30")

# Merge clips with transitions
video = editor.merge(
    clips=["intro.mp4", clip.output_path, "outro.mp4"],
    transitions=["fade", "dissolve", "fade"],
)

# Add text overlay
video = editor.add_text(
    video=video.output_path,
    text="EPISODE 42: The Future of AI",
    position="top-center",
    size=48,
)

# Add background music
video = editor.add_audio(
    video=video.output_path,
    audio="music.mp3",
    volume=0.7,
    fade_in=2.0,
    fade_out=3.0,
)

# Resize for TikTok (9:16)
video = editor.resize(video=video.output_path, aspect_ratio="9:16")

# Export final video
result = editor.export(video.output_path, quality="high")
print(result)
# EditResult(output_path='interview_9:16.mp4', duration=45.0, resolution='1080x1920', ...)
```

### 3. As a CLI Tool
```bash
# Get video metadata
mcp_video info video.mp4

# Generate a fast low-res preview
mcp_video preview video.mp4

# Extract storyboard frames for review
mcp_video storyboard video.mp4 -n 12

# Trim a clip
mcp_video trim video.mp4 -s 00:02:15 -d 30 -o trimmed.mp4

# Convert to a different format
mcp_video convert video.mp4 -f webm -q high
```

## MCP Tools
mcp-video exposes 19 tools for AI agents. All tools return structured JSON with success, output_path, and operation metadata. On failure, they return {"success": false, "error": {...}} with auto-fix suggestions.
New in v0.2.0: Progress callbacks provide real-time feedback on long-running operations (merge, convert, export), and visual verification returns thumbnails so agents can confirm results without opening files.
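As a sketch of what consuming these results can look like on the caller's side, the helper below (hypothetical, not part of mcp-video) unwraps the documented success/failure envelope:

```python
# Hypothetical helper (not part of mcp-video) that unwraps the documented
# result envelope: {"success": true, "output_path": ...} on success, or
# {"success": false, "error": {...}} with a suggested fix on failure.

def handle_result(result: dict) -> str:
    """Return the output path on success; raise with the tool's hint otherwise."""
    if result.get("success"):
        return result["output_path"]
    err = result.get("error", {})
    hint = err.get("suggested_action", {}).get("description", "no suggestion")
    raise RuntimeError(f"{err.get('code', 'unknown')}: {hint}")


ok = {"success": True, "output_path": "clip.mp4", "operation": "trim"}
print(handle_result(ok))  # clip.mp4
```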
### Video Operations

| Tool | Description | Key Parameters |
|---|---|---|
| `info` | Get video metadata | `path` |
| `trim` | Trim clip by timestamp | `start`, `duration`, `end` |
| `merge` | Concatenate multiple clips | `clips`, `transitions` |
| `speed` | Change playback speed | speed factor |
### Effects & Overlays

| Tool | Description | Key Parameters |
|---|---|---|
| `add_text` | Overlay text/title | `text`, `position`, `size` |
| `add_audio` | Add or replace audio | `audio`, `volume`, `fade_in`, `fade_out` |
| `subtitles` | Burn SRT/VTT subtitles | subtitle file path |
| `watermark` | Add image watermark | image path |
| `crop` | Crop to rectangular region | crop region |
| `rotate` | Rotate and/or flip video | rotation/flip |
| `fade` | Video fade in/out | `fade_in`, `fade_out` |
### Format & Quality

| Tool | Description | Key Parameters |
|---|---|---|
| `resize` | Change resolution/aspect ratio | `aspect_ratio` |
| `convert` | Convert format | `format`, `quality` |
| `export` | Render with quality settings | `quality` |
### Analysis & Extraction

| Tool | Description | Key Parameters |
|---|---|---|
| `thumbnail` | Extract a single frame | `timestamp` |
| `preview` | Generate fast low-res preview | downscale factor |
| `storyboard` | Extract key frames as grid | frame count |
| `extract_audio` | Extract audio track | output format |
### Advanced

| Tool | Description | Key Parameters |
|---|---|---|
| `edit` | Full timeline-based edit | timeline JSON |
## MCP Resources

The server also exposes four read-only resources:

- Video metadata as JSON
- Key frame timestamps
- Audio track info
- Available templates, presets, and formats
## Python Client API

Full reference for the Client class:

```python
from mcp_video import Client

editor = Client()
```

### Methods
| Method | Returns | Description |
|---|---|---|
| `info()` | `VideoInfo` | Video metadata (duration, resolution, codec, fps, size) |
| `trim()` | `EditResult` | Trim by start time + duration or end time |
| `merge()` | `EditResult` | Concatenate clips with per-pair transitions |
| `add_text()` | `EditResult` | Overlay text on video |
| `add_audio()` | `EditResult` | Add or replace audio track |
| `resize()` | `EditResult` | Resize or change aspect ratio |
| `convert()` | `EditResult` | Convert format (mp4/webm/gif/mov) |
| `speed()` | `EditResult` | Change playback speed |
| `thumbnail()` | `ThumbnailResult` | Extract single frame |
| `preview()` | `EditResult` | Fast low-res preview |
| `storyboard()` | `StoryboardResult` | Key frames + grid |
| `subtitles()` | `EditResult` | Burn subtitles into video |
| `watermark()` | `EditResult` | Add image watermark |
| `crop()` | `EditResult` | Crop to rectangular region |
| `rotate()` | `EditResult` | Rotate and/or flip video |
| `fade()` | `EditResult` | Video fade in/out effect |
| `export()` | `EditResult` | Render with quality settings |
| `edit()` | `EditResult` | Execute full timeline edit from JSON |
| `extract_audio()` | `str` | Extract audio as file path |
### Return Models

```python
VideoInfo(path, duration, width, height, fps, codec, audio_codec, ...)
# .resolution -> "1920x1080"
# .aspect_ratio -> "16:9"
# .size_mb -> 5.42

EditResult(success=True, output_path, duration, resolution, size_mb, format, operation)
ThumbnailResult(success=True, frame_path, timestamp)
StoryboardResult(success=True, frames=["f1.jpg", ...], grid="grid.jpg", count=8)
```

## CLI Reference
```text
mcp_video [command] [options]

Commands:
  info           Get video metadata
  trim           Trim a video
  merge          Merge multiple clips
  add-text       Overlay text on a video
  add-audio      Add or replace audio track
  resize         Resize or change aspect ratio
  convert        Convert video format
  speed          Change playback speed
  thumbnail      Extract a single frame
  preview        Generate fast low-res preview
  storyboard     Extract key frames as storyboard
  subtitles      Burn subtitles into video
  watermark      Add image watermark
  crop           Crop to rectangular region
  rotate         Rotate and/or flip video
  fade           Add video fade in/out
  export         Export with quality settings
  extract-audio  Extract audio track
  edit           Execute timeline-based edit from JSON

Options:
  --mcp          Run as MCP server (default when no command given)
  -h, --help     Show help
```

### Examples
```bash
# Get metadata as JSON
mcp_video info video.mp4

# Preview with custom downscale
mcp_video preview video.mp4 -s 2

# Storyboard with 12 frames
mcp_video storyboard video.mp4 -n 12 -o ./frames

# Trim from 2:15 for 30 seconds
mcp_video trim video.mp4 -s 00:02:15 -d 30 -o clip.mp4

# Convert to GIF at medium quality
mcp_video convert video.mp4 -f gif -q medium

# Default: run MCP server
mcp_video --mcp
```

## Timeline DSL
For complex multi-track edits, describe everything in a single JSON object:

```python
editor.edit({
    "width": 1080,
    "height": 1920,
    "tracks": [
        {
            "type": "video",
            "clips": [
                {"source": "intro.mp4", "start": 0, "duration": 5},
                {"source": "main.mp4", "start": 5, "trim_start": 10, "duration": 30},
                {"source": "outro.mp4", "start": 35, "duration": 10},
            ],
            "transitions": [
                {"after_clip": 0, "type": "fade", "duration": 1.0},
                {"after_clip": 1, "type": "dissolve", "duration": 1.0},
            ],
        },
        {
            "type": "audio",
            "clips": [
                {"source": "music.mp3", "start": 0, "volume": 0.7, "fade_in": 2},
            ],
        },
        {
            "type": "text",
            "elements": [
                {"text": "EPISODE 42", "start": 0, "duration": 3, "position": "top-center",
                 "style": {"size": 48, "color": "white", "shadow": True}},
            ],
        },
    ],
    "export": {"format": "mp4", "quality": "high"},
})
```

### Timeline Schema
| Field | Type | Default | Description |
|---|---|---|---|
| `width` | int | 1920 | Output width |
| `height` | int | 1080 | Output height |
| `tracks` | Track[] | [] | Video, audio, and text tracks |
| `export.format` | str | "mp4" | mp4, webm, gif, mov |
| `export.quality` | str | "high" | low, medium, high, ultra |

- **Track types:** `video`, `audio`, `text`
- **Video clip fields:** `source`, `start`, `duration`, `trim_start`, `trim_end`, `volume`, `fade_in`, `fade_out`
- **Transition fields:** `after_clip` (index), `type` (`fade`, `dissolve`, `wipe-*`), `duration`
- **Text element fields:** `text`, `start`, `duration`, `position`, `style` (font/size/color/shadow)
- **Positions:** `top-left`, `top-center`, `top-right`, `center-left`, `center`, `center-right`, `bottom-left`, `bottom-center`, `bottom-right`
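As a worked example of the clip fields, a timeline's rendered length is the latest `start + duration` across its video clips. The small helper below is hypothetical (not a mcp-video API) and is applied to the three-clip video track from the example above:

```python
# Hypothetical helper: compute a timeline's output length as the latest
# (start + duration) across its video clips, using only fields listed above.

def timeline_duration(timeline: dict) -> float:
    ends = [
        clip["start"] + clip["duration"]
        for track in timeline.get("tracks", [])
        if track["type"] == "video"
        for clip in track["clips"]
    ]
    return max(ends, default=0.0)


# The three-clip video track from the Timeline DSL example:
timeline = {
    "tracks": [
        {"type": "video", "clips": [
            {"source": "intro.mp4", "start": 0, "duration": 5},
            {"source": "main.mp4", "start": 5, "trim_start": 10, "duration": 30},
            {"source": "outro.mp4", "start": 35, "duration": 10},
        ]},
    ],
}
print(timeline_duration(timeline))  # 45
```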
## Templates

Pre-built templates for common social media formats:

```python
from mcp_video.templates import (
    tiktok_template,
    youtube_shorts_template,
    instagram_reel_template,
    youtube_video_template,
    instagram_post_template,
)

# TikTok (9:16, 1080x1920)
timeline = tiktok_template(
    video_path="clip.mp4",
    caption="Check this out!",
    music_path="bgm.mp3",
)

# YouTube Shorts (9:16, title at top)
timeline = youtube_shorts_template("clip.mp4", title="My Short")

# Instagram Reel (9:16)
timeline = instagram_reel_template("clip.mp4", caption="Reel caption")

# YouTube Video (16:9, 1920x1080)
timeline = youtube_video_template(
    video_path="video.mp4",
    title="My Amazing Video",
    outro_path="subscribe.mp4",
    music_path="bgm.mp3",
)

# Instagram Post (1:1, 1080x1080)
timeline = instagram_post_template("clip.mp4", caption="Post caption")

# Execute any template
result = editor.edit(timeline)
```

### Template Registry
```python
from mcp_video.templates import TEMPLATES

print(list(TEMPLATES.keys()))
# ['tiktok', 'youtube-shorts', 'instagram-reel', 'youtube', 'instagram-post']

# Call any template by name
timeline = TEMPLATES["tiktok"](video_path="clip.mp4", caption="Hello!")
```

## Quality Presets
| Quality | CRF | Encoder Preset | Max Height | Use Case |
|---|---|---|---|---|
| `low` | 35 | fast | 480p | Drafts, previews |
| `medium` | 28 | medium | 720p | Social media |
| `high` | 23 | slow | 1080p | Production |
| `ultra` | 18 | veryslow | 1080p | Final output |

Lower CRF = better quality, larger file. The preset controls encoding speed (slower = better compression).
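One way to apply the table: encode it as data and pick the fastest preset whose resolution cap still covers the target output height. This helper is illustrative only, not part of mcp-video:

```python
# The documented presets as plain data (values copied from the table above).
QUALITY_PRESETS = {
    "low":    {"crf": 35, "preset": "fast",     "max_height": 480},
    "medium": {"crf": 28, "preset": "medium",   "max_height": 720},
    "high":   {"crf": 23, "preset": "slow",     "max_height": 1080},
    "ultra":  {"crf": 18, "preset": "veryslow", "max_height": 1080},
}


def pick_quality(min_height: int) -> str:
    """Fastest preset whose max height covers the target; fall back to ultra."""
    for name in ("low", "medium", "high"):  # ordered fastest to slowest
        if QUALITY_PRESETS[name]["max_height"] >= min_height:
            return name
    return "ultra"


print(pick_quality(480))   # low
print(pick_quality(1080))  # high
```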
## Aspect Ratios

| Ratio | Resolution | Platforms |
|---|---|---|
| 16:9 | 1920x1080 | YouTube |
| 9:16 | 1080x1920 | TikTok, Reels, Shorts |
| 1:1 | 1080x1080 | Instagram Post |
| 4:5 | 1080x1350 | Instagram Feed |
| 4:3 | 1440x1080 | Classic video |
| 21:9 | 2560x1080 | Ultrawide |

```python
editor.resize("video.mp4", aspect_ratio="9:16")  # TikTok
editor.resize("video.mp4", aspect_ratio="16:9")  # YouTube
editor.resize("video.mp4", aspect_ratio="1:1")   # Instagram
```

## Error Handling
mcp-video parses FFmpeg errors and returns structured, actionable error responses:

```json
{
  "success": false,
  "error": {
    "type": "encoding_error",
    "code": "unsupported_codec",
    "message": "Codec error: vp9 — Auto-convert input from vp9 to H.264/AAC before editing",
    "suggested_action": {
      "auto_fix": true,
      "description": "Auto-convert input from vp9 to H.264/AAC before editing"
    },
    "documentation_url": "https://github.com/pastorsimon1798/mcp-video#codec-compatibility"
  }
}
```

### Error Types
| Type | Auto-Fix | Description |
|---|---|---|
| dependency_error | No | FFmpeg not installed |
| dependency_error | No | FFprobe not installed |
| input_error | No | File doesn't exist or invalid |
| encoding_error | Yes | Unsupported codec |
| encoding_error | Yes | Clips have different resolutions |
| processing_error | No | FFmpeg processing failed |
| export_error | No | Export/rendering failed |
| resource_error | No | Insufficient disk space or memory |
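Because auto-fixable failures carry a `suggested_action`, a caller can apply the fix and retry once. The sketch below shows that pattern; `tool` and `convert` are hypothetical stand-ins, not mcp-video APIs:

```python
# Hypothetical retry pattern (tool/convert are stand-ins, not mcp-video APIs):
# when a failure is marked auto-fixable, apply the suggested fix once and retry.

def run_with_autofix(tool, convert, path):
    result = tool(path)
    error = result.get("error") or {}
    if not result.get("success") and error.get("suggested_action", {}).get("auto_fix"):
        result = tool(convert(path))  # e.g. re-encode to H.264/AAC first
    return result


# Stub demonstration: the first call fails with an auto-fixable codec error.
def fake_convert(path):
    return path.replace(".webm", ".mp4")


def fake_tool(path):
    if path.endswith(".webm"):
        return {"success": False, "error": {
            "type": "encoding_error",
            "code": "unsupported_codec",
            "suggested_action": {"auto_fix": True,
                                 "description": "convert to H.264/AAC"},
        }}
    return {"success": True, "output_path": path}


print(run_with_autofix(fake_tool, fake_convert, "in.webm"))
# {'success': True, 'output_path': 'in.mp4'}
```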
## Testing

mcp-video has 262 tests across the full testing pyramid:

```text
tests/
├── conftest.py              # Shared fixtures (sample video, audio, SRT, VTT, watermark PNG, WebM)
├── test_models.py           # 48 tests — Pydantic model validation (no FFmpeg needed)
├── test_errors.py           # 42 tests — Error classes and FFmpeg error parsing (no FFmpeg)
├── test_templates.py        # 21 tests — Template functions and registry (no FFmpeg)
├── test_client.py           # 31 tests — Python Client API wrapper
├── test_server.py           # 36 tests — MCP tool layer
├── test_engine.py           # 26 tests — Core FFmpeg engine operations
├── test_engine_advanced.py  # 44 tests — Edge cases, new operations, per-transition merge
├── test_cli.py              # 6 tests — CLI commands via subprocess
└── test_e2e.py              # 8 tests — Full end-to-end workflows
```

### Running Tests
```bash
# Install dev dependencies
pip install -e ".[dev]"

# Run all tests
pytest tests/ -v

# Run only unit tests (no FFmpeg needed)
pytest tests/test_models.py tests/test_errors.py tests/test_templates.py -v

# Run with coverage
pytest tests/ --cov=mcp_video --cov-report=term-missing
```

### Test Pyramid
| Layer | Tests | What It Tests |
|---|---|---|
| Unit | 111 | Models, errors, templates — pure Python, no FFmpeg |
| Integration | 143 | Client, server, engine, CLI — real FFmpeg operations |
| E2E | 8 | Multi-step workflows (TikTok, YouTube, GIF, speed) |
## Architecture

```text
mcp_video/
├── __init__.py    # Exports Client
├── __main__.py    # CLI entry point (argparse)
├── client.py      # Python Client class (wraps engine)
├── engine.py      # FFmpeg engine (all video operations)
├── errors.py      # Error types + FFmpeg stderr parser
├── models.py      # Pydantic models (VideoInfo, EditResult, Timeline DSL)
├── templates.py   # Platform templates (TikTok, YouTube, Instagram)
└── server.py      # MCP server (19 tools + 4 resources)
```

Dependencies:

- `mcp>=1.0.0` — Model Context Protocol SDK
- `pydantic>=2.0` — Data validation
- `ffmpeg` — Video processing (external, required)
## Supported Formats

### Video

| Format | Container | Video Codec | Audio Codec |
|---|---|---|---|
| MP4 | mp4 | H.264 (libx264) | AAC |
| WebM | webm | VP9 (libvpx-vp9) | Opus |
| MOV | mov | H.264 (libx264) | PCM |
| GIF | gif | Palette-based | None |

### Audio (extraction)

MP3, AAC, WAV, OGG, FLAC

### Subtitles

SRT, WebVTT (burned into video)
## Development

```bash
# Clone
git clone https://github.com/pastorsimon1798/mcp-video.git
cd mcp-video

# Setup
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

# Run tests
pytest tests/ -v

# Run a single test file
pytest tests/test_client.py -v

# Run with verbose output
pytest tests/ -v --tb=long
```

## Roadmap
- [x] Progress callbacks for long-running operations (v0.2.0)
- [x] Visual verification with thumbnail output (v0.2.0)
- [ ] Batch processing mode (edit 100 videos at once)
- [ ] Streaming upload/download (S3, GCS integration)
- [ ] Web UI for non-agent users
- [ ] FFmpeg filter auto-detection and graceful fallback
- [ ] Video effects (blur, color grading)
- [ ] Audio normalization and noise reduction
- [ ] Thumbnail selection via AI scene detection
- [ ] Plugin system for custom filters
## License

Apache 2.0 — see LICENSE. Use it however you want.

## Acknowledgments

Built on FFmpeg and the Model Context Protocol.