Whisper Speech Recognition MCP Server
Chinese Documentation
A high-performance speech recognition MCP server based on Faster Whisper, providing efficient audio transcription capabilities.
Features
Integrated with Faster Whisper for efficient speech recognition
Batch processing acceleration for improved transcription speed
Automatic CUDA acceleration (if available)
Support for multiple model sizes (tiny to large-v3)
Output formats include VTT subtitles, SRT, and JSON
Support for batch transcription of audio files in a folder
Model instance caching to avoid repeated loading (sketched after this list)
Dynamic batch size adjustment based on GPU memory
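As an illustration of the model-caching feature, here is a minimal sketch of the idea; the real logic lives in model_manager.py, and the helper name below is hypothetical:

from faster_whisper import WhisperModel

# Cache loaded models keyed by (size, device, compute type) so repeated
# requests reuse the same instance instead of reloading weights from disk.
_model_cache = {}

def get_cached_model(model_size="base", device="cuda", compute_type="float16"):
    key = (model_size, device, compute_type)
    if key not in _model_cache:
        _model_cache[key] = WhisperModel(model_size, device=device, compute_type=compute_type)
    return _model_cache[key]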
Installation
Dependencies
Python 3.10+
faster-whisper>=0.9.0
torch==2.6.0+cu126
torchaudio==2.6.0+cu126
mcp[cli]>=1.2.0
Installation Steps
Clone or download this repository
Create and activate a virtual environment (recommended)
Install dependencies:
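For example, assuming the repository ships a requirements.txt (adjust if the file is named differently):

python -m venv venv
venv\Scripts\activate          # Windows; use source venv/bin/activate on macOS/Linux
pip install -r requirements.txt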
PyTorch Installation Guide
Install the appropriate version of PyTorch based on your CUDA version:
CUDA 12.6:
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
CUDA 12.1:
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
CPU version:
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu
You can check your CUDA version with nvcc --version or nvidia-smi.
Usage
Starting the Server
On Windows, simply run start_server.bat.
On other platforms, run:
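The exact command is not spelled out here, but since whisper_server.py is the main server module (see Project Structure below), launching it directly is presumably:

python whisper_server.py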
Configuring Claude Desktop
Open the Claude Desktop configuration file:
Windows:
%APPDATA%\Claude\claude_desktop_config.json
macOS:
~/Library/Application Support/Claude/claude_desktop_config.json
Add the Whisper server configuration:
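A minimal example entry; the server name "whisper" and the script path are placeholders, so point args at your local clone:

{
  "mcpServers": {
    "whisper": {
      "command": "python",
      "args": ["/path/to/whisper_server.py"]
    }
  }
}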
Restart Claude Desktop
Available Tools
The server provides the following tools (a client-side usage sketch follows the list):
get_model_info - Get information about available Whisper models
transcribe - Transcribe a single audio file
batch_transcribe - Batch transcribe audio files in a folder
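As a sketch of calling the transcribe tool programmatically with the official mcp Python SDK over stdio; the argument name audio_path is an assumption, so consult the tool's schema for the real parameters:

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and talk to it over stdio
    server = StdioServerParameters(command="python", args=["whisper_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "audio_path" is an assumed parameter name, not confirmed by this README
            result = await session.call_tool("transcribe", {"audio_path": "speech.mp3"})
            print(result.content)

asyncio.run(main())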
Performance Optimization Tips
Using CUDA acceleration significantly improves transcription speed
Batch processing mode is more efficient for large numbers of short audio files
Batch size is automatically adjusted based on GPU memory size
Using VAD (Voice Activity Detection) filtering improves accuracy on long audio
Specifying the correct language improves transcription quality (both options are sketched after this list)
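For reference, VAD filtering and explicit language selection correspond directly to faster-whisper transcription options; a rough standalone sketch (model size and file name are placeholders):

from faster_whisper import WhisperModel

model = WhisperModel("large-v3", device="cuda", compute_type="float16")
# vad_filter skips non-speech stretches in long recordings;
# a fixed language avoids a possibly wrong auto-detection pass
segments, info = model.transcribe("long_audio.mp3", vad_filter=True, language="en")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")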
Local Testing Methods
Use MCP Inspector for quick testing (see the commands after this list)
Use Claude Desktop for integration testing
Use command line direct invocation (requires mcp[cli]):
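Assuming the mcp CLI from mcp[cli] is on your PATH, the Inspector and direct-run invocations are typically:

mcp dev whisper_server.py     # open MCP Inspector against the server
mcp run whisper_server.py     # run the server directly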
Error Handling
The server implements the following error handling mechanisms (sketched after this list):
Audio file existence check
Model loading failure handling
Transcription process exception catching
GPU memory management
Batch processing parameter adaptive adjustment
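A condensed sketch of the first three mechanisms; function names and messages are illustrative, not the actual code (get_cached_model is the hypothetical helper sketched under Features):

import os

def transcribe_safely(audio_path: str) -> dict:
    # Audio file existence check
    if not os.path.isfile(audio_path):
        return {"error": f"Audio file not found: {audio_path}"}
    try:
        model = get_cached_model()  # model loading failures surface here
        segments, _info = model.transcribe(audio_path)
        return {"text": " ".join(seg.text for seg in segments)}
    except Exception as exc:
        # Transcription process exception catching
        return {"error": f"Transcription failed: {exc}"}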
Project Structure
whisper_server.py: Main server code
model_manager.py: Whisper model loading and caching
audio_processor.py: Audio file validation and preprocessing
formatters.py: Output formatting (VTT, SRT, JSON)
transcriber.py: Core transcription logic
start_server.bat: Windows startup script
License
MIT
Acknowledgements
This project was developed with the assistance of these amazing AI tools and models:
GitHub Copilot - AI pair programmer
Trae - Agentic AI coding assistant
Cline - AI coding agent
DeepSeek - Advanced AI model
Claude-3.7-Sonnet - Anthropic's powerful AI assistant
Gemini-2.0-Flash - Google's multimodal AI model
VS Code - Powerful code editor
Whisper - OpenAI's speech recognition model
Faster Whisper - Optimized Whisper implementation
Special thanks to these incredible tools and the teams behind them.