Enables downloading and transcribing videos from YouTube using yt-dlp and OpenAI Whisper, with support for multiple output formats (TXT, JSON, Markdown) and configurable transcription models.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Video Transcriber MCP Server transcribe this YouTube tutorial video in Spanish".
That's it! The server will respond to your query, and you can continue using it as needed.
Video Transcriber MCP Server
A Model Context Protocol (MCP) server that transcribes videos from 1000+ platforms using OpenAI's Whisper model. Built with TypeScript for type safety and available via npx for easy installation.
🚀 Looking for better performance? Check out the Rust version, which uses whisper.cpp for significantly faster transcription with lower memory usage. Available on crates.io via cargo install video-transcriber-mcp.
✨ What's New
🌐 Multi-Platform Support: Now supports 1000+ video platforms (YouTube, Vimeo, TikTok, Twitter/X, Facebook, Instagram, Twitch, educational sites, and more) via yt-dlp
💻 Cross-Platform: Works on macOS, Linux, and Windows
🎙️ Configurable Whisper Models: Choose from tiny, base, small, medium, or large models
🌍 Language Support: Transcribe in 90+ languages or use auto-detection
🔄 Automatic Retries: Network failures are handled automatically with exponential backoff
🎯 Platform Detection: Automatically detects the video platform
📋 List Supported Sites: New tool to see all 1000+ supported platforms
⚡ Improved Error Handling: More specific and helpful error messages
📝 Better Filename Handling: Improved sanitization preserving more characters
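The filename sanitization described above could look roughly like this. This is a hypothetical sketch of the general technique, not the package's actual implementation:

```typescript
// Hypothetical filename sanitization sketch (NOT the package's actual code):
// keep letters, digits, and a few safe punctuation characters; turn
// everything else into hyphens and collapse the result.
function sanitizeFilename(title: string, maxLength = 100): string {
  return title
    .replace(/[^\p{L}\p{N} ._-]/gu, "-") // replace unsafe characters
    .replace(/\s+/g, "-")                // spaces to hyphens
    .replace(/-+/g, "-")                 // collapse runs of hyphens
    .replace(/^-|-$/g, "")               // trim leading/trailing hyphens
    .slice(0, maxLength);                // keep filenames a sane length
}
```

Keeping Unicode letters (`\p{L}`) rather than only ASCII is what "preserving more characters" means in practice: non-English titles survive sanitization.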
⚠️ Legal Notice
This tool is intended for educational, accessibility, and research purposes only.
Before using this tool, please understand:
Most platforms' Terms of Service generally prohibit downloading content
You are responsible for ensuring your use complies with applicable laws
This tool should primarily be used for:
✅ Your own content
✅ Creating accessibility features (captions for deaf/hard of hearing)
✅ Educational and research purposes (where permitted)
✅ Content you have explicit permission to download
Please read the Terms of Service of any platform you use.
We do not encourage or endorse violation of any platform's Terms of Service or copyright infringement. Use responsibly and ethically.
Features
🎥 Download audio from 1000+ video platforms (powered by yt-dlp)
📁 Transcribe local video files (mp4, avi, mov, mkv, and more)
🤖 Transcribe using OpenAI Whisper (local, no API key needed)
🎙️ Configurable Whisper models (tiny, base, small, medium, large)
🌍 Support for 90+ languages with auto-detection
📄 Generate transcripts in multiple formats (TXT, JSON, Markdown)
📋 List and read previous transcripts as MCP resources
🔌 Integrate seamlessly with Claude Code or any MCP client
⚡ TypeScript + npx for easy installation
🔒 Full type safety with TypeScript
🔍 Automatic dependency checking
🔄 Automatic retry logic for network failures
🎯 Platform detection (shows which platform you're transcribing from)
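The automatic retry behavior for network failures amounts to a standard exponential-backoff loop. The sketch below illustrates the pattern; it is not the package's actual code:

```typescript
// Illustrative exponential-backoff retry loop (not the package's actual code).
// Retries a failing async operation, doubling the wait after each attempt:
// baseDelayMs, 2x, 4x, ... until maxAttempts is exhausted.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 3,
  baseDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < maxAttempts - 1) {
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastError; // all attempts failed; surface the last error
}
```

Wrapping the yt-dlp download step in something like `withRetry(() => download(url))` is what makes transient network hiccups invisible to the user.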
Supported Platforms
Thanks to yt-dlp, this tool supports 1000+ video platforms including:
Social Media: YouTube, TikTok, Twitter/X, Facebook, Instagram, Reddit, LinkedIn
Video Hosting: Vimeo, Dailymotion, Twitch
Educational: Coursera, Udemy, Khan Academy, LinkedIn Learning, edX
News: BBC, CNN, NBC, PBS
Conference/Tech: YouTube (tech talks), Vimeo (conferences)
And many, many more!
Run the list_supported_sites tool to see the complete list of 1000+ supported platforms.
Prerequisites
macOS
brew install yt-dlp # Video downloader (supports 1000+ sites)
brew install openai-whisper # Whisper transcription
brew install ffmpeg # Audio processing
Linux
# Ubuntu/Debian
sudo apt update
sudo apt install ffmpeg
pip install yt-dlp openai-whisper
# Fedora/RHEL
sudo dnf install ffmpeg
pip install yt-dlp openai-whisper
# Arch Linux
sudo pacman -S ffmpeg
pip install yt-dlp openai-whisper
Windows
Option 1: Using pip (recommended)
# Install Python from python.org first
pip install yt-dlp openai-whisper
# Install ffmpeg using Chocolatey
choco install ffmpeg
# Or download ffmpeg from: https://ffmpeg.org/download.html
Option 2: Using winget
winget install yt-dlp.yt-dlp
winget install Gyan.FFmpeg
pip install openai-whisper
Verify installations (all platforms)
yt-dlp --version
whisper --version
ffmpeg -version
Quick Start
For End Users (Using npx)
Add to your Claude Code config (~/.claude/settings.json):
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "video-transcriber-mcp"]
}
}
}
Or use directly from GitHub:
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": [
"-y",
"github:nhatvu148/video-transcriber-mcp"
]
}
}
}
That's it! No installation needed. npx will automatically download and run the package.
For Local Development
# Clone the repository
git clone https://github.com/nhatvu148/video-transcriber-mcp.git
cd video-transcriber-mcp
# Install dependencies
npm install
# or
bun install
# Build the project
npm run build
# Use in Claude Code with local path
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "/path/to/video-transcriber-mcp"]
}
}
}
Usage
From Claude Code
Once configured, you can use these tools in Claude Code:
Transcribe a video from any platform
Please transcribe this YouTube video: https://www.youtube.com/watch?v=VIDEO_ID
Transcribe this TikTok video: https://www.tiktok.com/@user/video/123456789
Get the transcript from this Vimeo video with high accuracy: https://vimeo.com/123456789 (use model: large)
Transcribe this Spanish tutorial video: https://youtube.com/watch?v=VIDEO_ID (language: es)
Transcribe a local video file
Transcribe this local video file: /Users/myname/Videos/meeting.mp4
Transcribe ~/Downloads/lecture.mov with high accuracy (use model: medium)
Claude will use the transcribe_video tool automatically with optional parameters for model and language.
List all supported platforms
What platforms can you transcribe videos from?
List available transcripts
List all my video transcripts
Check dependencies
Check if my video transcriber dependencies are installed
Read a transcript
Show me the transcript for [video name]
Programmatic Usage
If you install the package:
npm install video-transcriber-mcp
You can import and use it programmatically:
import { transcribeVideo, checkDependencies, WhisperModel } from 'video-transcriber-mcp';
// Check dependencies
checkDependencies();
// Transcribe a video from URL with custom options
const result = await transcribeVideo({
url: 'https://www.youtube.com/watch?v=VIDEO_ID',
outputDir: '/path/to/output',
model: 'medium', // tiny, base, small, medium, large
language: 'en', // or 'auto' for auto-detection
onProgress: (progress) => console.log(progress)
});
// Or transcribe a local video file
const localResult = await transcribeVideo({
url: '/path/to/video.mp4', // Local file path instead of URL
outputDir: '/path/to/output',
model: 'base',
language: 'auto',
onProgress: (progress) => console.log(progress)
});
console.log('Title:', result.metadata.title);
console.log('Platform:', result.metadata.platform);
console.log('Files:', result.files);
Output
Transcripts are saved to ~/Downloads/video-transcripts/ by default.
For each video, three files are generated:
.txt - Plain text transcript
.json - JSON with timestamps and metadata
.md - Markdown with video metadata and formatted transcript
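A plausible shape for the .json output is sketched below. The field names are assumptions based on this README's description (timestamps plus metadata), not the package's documented schema; inspect a generated file for the exact structure:

```typescript
// Assumed shape of the .json transcript output. Field names are a guess
// based on the README, not the package's documented schema.
interface TranscriptSegment {
  start: number; // seconds from the beginning of the video
  end: number;   // seconds
  text: string;
}

interface TranscriptJson {
  title: string;
  platform: string;   // e.g. "youtube"
  language: string;   // detected or requested language code
  segments: TranscriptSegment[];
  text: string;       // full transcript as one string
}

// Hypothetical example instance
const example: TranscriptJson = {
  title: "From AI skeptic to UNFAIR advantage",
  platform: "youtube",
  language: "en",
  segments: [{ start: 0, end: 4.2, text: "Welcome to the talk." }],
  text: "Welcome to the talk.",
};
```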
Example
~/Downloads/video-transcripts/
├── 7JBuA1GHAjQ-From-AI-skeptic-to-UNFAIR-advantage.txt
├── 7JBuA1GHAjQ-From-AI-skeptic-to-UNFAIR-advantage.json
└── 7JBuA1GHAjQ-From-AI-skeptic-to-UNFAIR-advantage.md
MCP Tools
transcribe_video
Transcribe videos from 1000+ platforms or local video files to text.
Parameters:
url (required): Video URL from any supported platform OR path to a local video file (mp4, avi, mov, mkv, etc.)
output_dir (optional): Output directory path
model (optional): Whisper model: "tiny", "base" (default), "small", "medium", "large"
language (optional): Language code (ISO 639-1: "en", "es", "fr", etc.) or "auto" (default)
Model Comparison:
Model | Speed | Accuracy | Use Case |
tiny | ⚡⚡⚡⚡⚡ | ⭐⭐ | Quick drafts, testing |
base | ⚡⚡⚡⚡ | ⭐⭐⭐ | General use (default) |
small | ⚡⚡⚡ | ⭐⭐⭐⭐ | Better accuracy |
medium | ⚡⚡ | ⭐⭐⭐⭐⭐ | High accuracy |
large | ⚡ | ⭐⭐⭐⭐⭐⭐ | Best accuracy, slow |
list_transcripts
List all available transcripts with metadata.
Parameters:
output_dir (optional): Directory to list
check_dependencies
Verify that all required dependencies are installed.
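Conceptually, this check amounts to probing each required CLI with its version command. The sketch below illustrates the idea (it is not the server's actual implementation):

```typescript
// Illustrative sketch of a dependency check like check_dependencies:
// probe each required CLI by running its version command.
// Not the server's actual implementation.
import { spawnSync } from "node:child_process";

function isInstalled(command: string, versionFlag: string): boolean {
  const result = spawnSync(command, [versionFlag], { stdio: "ignore" });
  // spawnSync sets `error` (e.g. ENOENT) when the executable is not found
  return !result.error && result.status === 0;
}

function missingDependencies(): string[] {
  const checks: Array<[string, string]> = [
    ["yt-dlp", "--version"],
    ["whisper", "--version"],
    ["ffmpeg", "-version"], // note: ffmpeg uses a single-dash flag
  ];
  return checks
    .filter(([cmd, flag]) => !isInstalled(cmd, flag))
    .map(([cmd]) => cmd);
}
```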
list_supported_sites
List all 1000+ supported video platforms.
Configuration Examples
Claude Code (Recommended)
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "video-transcriber-mcp"]
}
}
}
From GitHub (Latest)
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "github:nhatvu148/video-transcriber-mcp"]
}
}
}
Local Development
{
"mcpServers": {
"video-transcriber": {
"command": "npx",
"args": ["-y", "/absolute/path/to/video-transcriber-mcp"]
}
}
}
Development
Setup
# Install dependencies
npm install
# Build the project
npm run build
# Type check
npm run check
# Development mode (requires Bun)
bun run dev
# Clean build artifacts
npm run clean
Project Structure
video-transcriber-mcp/
├── src/
│   ├── index.ts # MCP server implementation
│   └── transcriber.ts # Core transcription logic
├── dist/ # Built JavaScript (generated)
├── package.json # Package configuration
├── tsconfig.json # TypeScript configuration
├── LICENSE # MIT License
└── README.md # This file
Scripts
Command | Description |
npm run build | Compile TypeScript to JavaScript |
bun run dev | Development mode with hot reload (Bun) |
npm run check | TypeScript type checking |
npm run clean | Remove dist/ directory |
prepublishOnly | Pre-publish build (automatic) |
Publishing
# Build the project
npm run build
# Test locally first
npx . --help
# Publish to npm (bump version first)
npm version patch # or minor, major
npm publish
# Or publish from GitHub
# Push to GitHub and users can use:
# npx github:username/video-transcriber-mcp
Troubleshooting
Dependencies not installed
See the Prerequisites section above for platform-specific installation instructions.
npx can't find the package
Make sure the package is:
Published to npm, OR
Available on GitHub with proper package.json
TypeScript errors
npm run check
Permission denied
The build process automatically makes dist/index.js executable via the fix-shebang script.
"Unsupported URL" error
The platform might not be supported by yt-dlp. Run list_supported_sites to see all supported platforms.
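A quick local sanity check is possible with hostname matching before handing the URL to the server. This is a rough illustrative sketch; the authoritative check is always yt-dlp's extractor list, and the host table here is a tiny assumed sample:

```typescript
// Rough hostname-based platform detection (illustrative only; the real
// server delegates the authoritative check to yt-dlp). The host table
// is a small assumed sample, not the 1000+ supported sites.
const KNOWN_HOSTS: Record<string, string> = {
  "youtube.com": "YouTube",
  "youtu.be": "YouTube",
  "vimeo.com": "Vimeo",
  "tiktok.com": "TikTok",
};

function detectPlatform(rawUrl: string): string {
  try {
    const host = new URL(rawUrl).hostname.replace(/^www\./, "");
    return KNOWN_HOSTS[host] ?? "unknown";
  } catch {
    return "invalid URL"; // URL constructor throws on malformed input
  }
}
```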
Performance
Video Length | Processing Time (base model) | Output Size |
5 minutes | ~1-2 minutes | ~5-10 KB |
10 minutes | ~2-4 minutes | ~10-20 KB |
30 minutes | ~5-10 minutes | ~30-50 KB |
1 hour | ~10-20 minutes | ~60-100 KB |
Times are approximate and depend on CPU speed and model choice
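From the table above, the base model runs at very roughly 0.2x to 0.4x of the video's length. A back-of-envelope estimator might look like this; the per-model multipliers are assumptions for illustration, not measurements:

```typescript
// Back-of-envelope processing-time estimate. The base-model factor (~0.3x)
// is read off the table above; the other multipliers are ASSUMPTIONS for
// illustration, not measured values.
const MODEL_SPEED_FACTOR: Record<string, number> = {
  tiny: 0.15,
  base: 0.3,
  small: 0.6,
  medium: 1.2,
  large: 2.5,
};

function estimateMinutes(videoMinutes: number, model = "base"): number {
  return videoMinutes * (MODEL_SPEED_FACTOR[model] ?? 0.3);
}
```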
Advanced Configuration
Custom Whisper Model
Specify in the tool call parameters:
{
"url": "https://youtube.com/watch?v=...",
"model": "large"
}
Custom Language
Specify the language code:
{
"url": "https://youtube.com/watch?v=...",
"language": "es"
}
Custom Output Directory
Specify in the tool call:
{
"url": "https://youtube.com/watch?v=...",
"output_dir": "/custom/path"
}
Contributing
Contributions welcome! Please:
Fork the repository
Create a feature branch
Make your changes
Add tests if applicable
Submit a pull request
License
MIT License - see LICENSE file for details
TypeScript vs Rust Version
This is the TypeScript version - great for:
✅ Quick setup with npx (no installation)
✅ Node.js ecosystem familiarity
✅ Easy to modify and extend
✅ Good for learning and prototyping
Consider the Rust version if you need:
🚀 Faster transcription (uses whisper.cpp)
💾 Lower memory usage
⚡ Native performance
📦 Standalone binary (no Node.js required)
Both versions support the same MCP protocol and work identically with Claude Code!
Links
📦 Rust Version → For better performance
Acknowledgments
OpenAI Whisper for transcription
yt-dlp for multi-platform video downloading (1000+ sites)
Model Context Protocol SDK
Claude by Anthropic
Made with ❤️ for the MCP community