Short Video Maker MCP

An open source automated video creation tool for generating short-form video content. Short Video Maker combines text-to-speech, automatic captions, background videos, and music to create engaging short videos from simple text inputs.

This repository was open-sourced by the AI Agents A-Z YouTube channel. We encourage you to check out the channel for more AI-related content and tutorials.

Hardware requirements

  • CPU: at least 2 cores are recommended
  • GPU: optional; significantly speeds up caption generation (whisper.cpp) and moderately speeds up video rendering

Watch the official video on how to generate videos with n8n

Running the Project

The easiest way to run the project with GPU support out of the box:

LOG_LEVEL=debug PEXELS_API_KEY= npx short-video-maker

Using Docker

CPU image

docker run -it --rm --name short-video-maker -p 3123:3123 \
  -e PEXELS_API_KEY= \
  gyoridavid/short-video-maker:latest

NVIDIA GPUs

docker run -it --rm --name shorts-video-maker -p 3123:3123 \
  -e PEXELS_API_KEY= --gpus=all \
  gyoridavid/short-video-maker:latest-cuda

Find help

Join our Discord community for support and discussions.

Environment Variables

Variable          Description
PEXELS_API_KEY    Your Pexels API key for background video sourcing
PORT              Port for the API/MCP server (default: 3123)
LOG_LEVEL         Log level for the server (default: info; options: trace, debug, info, warn, error)
WHISPER_VERBOSE   Verbose mode for Whisper (default: false)
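
For example, a minimal shell sketch for setting these variables before launching the server with npx (the API key value is a placeholder; substitute your own):

# placeholder values for illustration only
export PEXELS_API_KEY=your_pexels_api_key
export PORT=3123
export LOG_LEVEL=debug
export WHISPER_VERBOSE=false
npx short-video-maker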

Example

{ "scenes": [ { "text": "Hello world! Enjoy using this tool to create awesome AI workflows", "searchTerms": ["rainbow"] } ], "config": { "paddingBack": 1500, "music": "happy" } }

Features

  • Generate complete short videos from text prompts
  • Text-to-speech conversion
  • Automatic caption generation and styling
  • Background video search and selection via Pexels
  • Background music with genre/mood selection
  • Serve as both a REST API and a Model Context Protocol (MCP) server

How It Works

Shorts Creator takes simple text inputs and search terms, then:

  1. Converts text to speech using Kokoro TTS
  2. Generates accurate captions via Whisper
  3. Finds relevant background videos from Pexels
  4. Composes all elements with Remotion
  5. Renders a professional-looking short video with perfectly timed captions

Dependencies for video generation

Dependency    Version    License           Purpose
Remotion      ^4.0.286   Remotion License  Video composition and rendering
Whisper CPP   v1.5.5     MIT               Speech-to-text for captions
FFmpeg        ^2.1.3     LGPL/GPL          Audio/video manipulation
Kokoro.js     ^1.2.0     MIT               Text-to-speech generation
Pexels API    N/A        Pexels Terms      Background videos

How to contribute?

PRs are welcome. See the CONTRIBUTING.md file for instructions on setting up a local development environment.

API Usage

REST API

The following REST endpoints are available:

  1. GET /api/short-video/:id - Get a video by ID; the rendered video can also be downloaded, for example:

curl -o output.mp4 http://localhost:3123/api/short-video/<videoId>

  2. POST /api/short-video - Create a new video (the sketch after this list shows how the returned video ID is used with the other endpoints)
    {
      "scenes": [
        {
          "text": "This is the text to be spoken in the video",
          "searchTerms": ["nature sunset"]
        }
      ],
      "config": {
        "paddingBack": 3000,
        "music": "chill"
      }
    }
  3. DELETE /api/short-video/:id - Delete a video by ID
  4. GET /api/music-tags - Get available music tags
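
As a rough sketch of how these endpoints fit together, assuming the server runs locally on the default port 3123 and that VIDEO_ID holds the identifier returned by the create call (the exact response field carrying the ID is not shown here):

# assumes VIDEO_ID was captured from the response of POST /api/short-video
VIDEO_ID=abc123   # hypothetical placeholder value

# download the rendered video once it is ready
curl -o output.mp4 http://localhost:3123/api/short-video/$VIDEO_ID

# list the music tags that can be used in the "music" config field
curl http://localhost:3123/api/music-tags

# delete the video when it is no longer needed
curl -X DELETE http://localhost:3123/api/short-video/$VIDEO_ID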

Model Context Protocol (MCP)

The service also implements the Model Context Protocol:

  1. GET /mcp/sse - Server-sent events for MCP
  2. POST /mcp/messages - Send messages to MCP server

Available MCP tools:

  • create-short-video - Create a video from a list of scenes
  • get-video-status - Check video creation status
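
As a quick connectivity check, the SSE endpoint can be opened directly with curl (assumes a local server on the default port 3123; in practice an MCP client such as n8n would connect to this endpoint instead):

# keep the connection open and stream server-sent events from the MCP endpoint
curl -N http://localhost:3123/mcp/sse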

License

This project is licensed under the MIT License.

Acknowledgments

  • ❤️ Remotion for programmatic video generation
  • ❤️ Whisper for speech-to-text
  • ❤️ Pexels for video content
  • ❤️ FFmpeg for audio/video processing
  • ❤️ Kokoro for TTS