Utilizes macOS system features for local audio processing and displays native notification banners for monitoring voice activity status.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Voice MCP listen to me".
That's it! The server will respond to your query, and you can continue using it as needed.
voice-mcp
Bidirectional voice MCP server for Claude Code. Adds listen() and speak() tools so Claude can hear you and talk back — all running locally on Apple Silicon via mlx-audio.
User speaks → listen() → mic + VAD → Voxtral Realtime STT → text to Claude
Claude responds → speak(text) → Kokoro TTS → AudioPlayer → speakers
Requirements
macOS with Apple Silicon (M1+)
Python 3.11+
A working microphone and speakers
Setup
git clone <this-repo> && cd voice-mcp
uv sync
First run downloads the models from HuggingFace (~2.5 GB total):
STT: mlx-community/Voxtral-Mini-4B-Realtime-2602-int4 (Voxtral Realtime, 4-bit quantized)
TTS: mlx-community/Kokoro-82M-bf16 (Kokoro, 82M params)
Configure Claude Code
The repo includes .mcp.json so Claude Code auto-discovers the server. Just open Claude Code from the project directory:
cd voice-mcp
claude
On first use, Claude Code will prompt you to approve the MCP server.
To use from a different project, add to that project's .mcp.json:
{
"mcpServers": {
"voice": {
"type": "stdio",
"command": "uv",
"args": ["--directory", "/path/to/voice-mcp", "run", "server.py"]
}
}
}
Usage
Voice input
Tell Claude "listen to me" or use the /listen slash command. You'll hear a rising chime when the mic is active — speak naturally, and recording stops automatically after 1.5s of silence (falling chime).
Voice output
Claude will speak conversational responses automatically. Code, data, and technical details stay in the terminal.
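As a rough illustration of that split, a filter like the following could keep fenced code out of the spoken channel (a hypothetical sketch; the server's actual heuristic isn't shown here):

```python
import re

def speakable(text: str) -> str:
    """Strip fenced code blocks so only conversational prose is spoken.
    Hypothetical filter; the real server's heuristic is not documented."""
    prose = re.sub(r"```.*?```", "", text, flags=re.DOTALL)
    return re.sub(r"\n{2,}", "\n", prose).strip()
```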
Change voice
Use /voice to browse all 54 available voices across 9 languages, or /voice am_echo to switch directly.
Tools
listen(duration?)
Records audio from the microphone and returns a transcription.
Default: VAD-based — waits for speech, stops after silence
With duration: fixed-length recording (seconds)
STT languages: ar, de, en, es, fr, hi, it, ja, ko, nl, pt, ru, zh
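The two modes can be sketched as a small control loop (hypothetical helper, not the server's actual code; the frame size and 1.5 s silence cutoff match the behavior described above):

```python
def collect_frames(frames, duration=None, frame_ms=30, silence_stop_s=1.5):
    """Gather audio frames for listen()'s two modes.

    frames: iterable of (chunk, is_speech) pairs from mic + VAD.
    With duration (seconds): fixed-length recording.
    Without: stop after silence_stop_s of trailing non-speech.
    """
    captured, silent_s = [], 0.0
    limit = None if duration is None else int(duration * 1000 / frame_ms)
    for i, (chunk, speech) in enumerate(frames):
        captured.append(chunk)
        if limit is not None:
            if i + 1 >= limit:          # fixed-length mode
                break
        else:                           # VAD mode
            silent_s = 0.0 if speech else silent_s + frame_ms / 1000
            if silent_s >= silence_stop_s:
                break
    return captured
```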
speak(text, voice?, speed?, lang?)
Speaks text aloud through the computer's speakers.
voice: Kokoro voice ID (default: af_heart). See /voice for options
speed: playback speed multiplier (default: 1.0)
lang: language code — a = American English, b = British English, e = Spanish, f = French, h = Hindi, i = Italian, j = Japanese, p = Portuguese, z = Mandarin
How it works
The server runs as a stdio subprocess managed by Claude Code. Audio I/O (mic + speakers) happens in the server process; only text crosses the MCP protocol.
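For instance, a speak() invocation arriving on stdin is a JSON-RPC 2.0 tools/call message, per the MCP specification (values here are illustrative):

```python
import json

# Shape of an MCP tool invocation as it crosses stdio (JSON-RPC 2.0,
# newline-delimited). All values below are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "speak",
        "arguments": {"text": "Build finished.", "voice": "af_heart"},
    },
}
wire_line = json.dumps(request)  # audio never appears here, only text
```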
STT: Voxtral Realtime (4B params, int4) — streams audio through a causal encoder-decoder with adaptive RMS normalization
TTS: Kokoro (82M params, bf16) — ALBERT text encoder → prosody predictor → iSTFTNet vocoder
VAD: webrtcvad (mode 3) with energy-based fallback
Audio cues: synthesized tones via sounddevice — rising chime = listening, falling chime = done
Notifications: macOS banners via hooks when listening starts/stops and when speaking
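The VAD arrangement above can be sketched as follows (a minimal sketch, assuming 16 kHz float32 mono frames; note webrtcvad only accepts 10/20/30 ms frames of 16-bit PCM, and the fallback threshold is illustrative):

```python
import numpy as np

try:
    import webrtcvad
    _vad = webrtcvad.Vad(3)   # mode 3 = most aggressive filtering
except ImportError:
    _vad = None               # energy-based fallback takes over

def frame_is_speech(frame: np.ndarray, sample_rate: int = 16_000) -> bool:
    """Classify one 10/20/30 ms float32 mono frame as speech or silence."""
    if _vad is not None:
        pcm16 = (np.clip(frame, -1.0, 1.0) * 32767).astype(np.int16)
        return _vad.is_speech(pcm16.tobytes(), sample_rate)
    # Fallback: crude RMS energy gate (threshold chosen for illustration)
    return float(np.sqrt(np.mean(frame ** 2))) > 0.01
```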
Models are pre-loaded at server startup via FastMCP's lifespan hook, so the first tool call is fast.
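That startup pattern, stripped of FastMCP specifics, looks roughly like this (a plain-asyncio sketch with hypothetical load functions, not the server's actual code):

```python
import asyncio
from contextlib import asynccontextmanager

MODELS: dict[str, str] = {}

def load_stt_model() -> str:
    # Stand-in for loading Voxtral via mlx-audio (hypothetical)
    return "voxtral-mini-4b-realtime-int4"

def load_tts_model() -> str:
    # Stand-in for loading Kokoro via mlx-audio (hypothetical)
    return "kokoro-82m-bf16"

@asynccontextmanager
async def lifespan(server=None):
    """Runs once around the server's lifetime: load before serving,
    release on shutdown. FastMCP calls a hook shaped like this."""
    MODELS["stt"] = load_stt_model()
    MODELS["tts"] = load_tts_model()
    try:
        yield MODELS          # tool calls read MODELS; no per-call loading
    finally:
        MODELS.clear()        # teardown when the server exits

async def demo():
    async with lifespan():
        return dict(MODELS)   # inside the context, models are ready

loaded = asyncio.run(demo())
```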
Demos
Turn your sound on — this demo has voice!
Claude Code
https://github.com/user-attachments/assets/22493ae6-2da9-4a06-b8ef-be203c98b35d
OpenAI Codex
https://github.com/user-attachments/assets/09b6dceb-d83d-46a9-a2fd-d66fa3bc1186
Mistral Vibe
https://github.com/user-attachments/assets/7530373f-719d-46c5-8a35-2337e8121356