Voice Mode
Install via: `uvx voice-mode` | `pip install voice-mode` | getvoicemode.com
Natural voice conversations for AI assistants. Voice Mode brings human-like voice interactions to Claude, ChatGPT, and other LLMs through the Model Context Protocol (MCP).
🖥️ Compatibility
Runs on: Linux • macOS • Windows (WSL) • NixOS | Python: 3.10+
✨ Features
- 🎙️ Voice conversations with Claude - ask questions and hear responses
- 🔄 Multiple transports - local microphone or LiveKit room-based communication
- 🗣️ OpenAI-compatible - works with any STT/TTS service (local or cloud)
- ⚡ Real-time - low-latency voice interactions with automatic transport selection
- 🔧 MCP Integration - seamless with Claude Desktop and other MCP clients
- 🎯 Silence detection - automatically stops recording when you stop speaking (no more waiting!)
🎯 Simple Requirements
All you need to get started:
- 🔑 OpenAI API Key (or compatible service) - for speech-to-text and text-to-speech
- 🎤 Computer with microphone and speakers OR ☁️ LiveKit server (LiveKit Cloud or self-hosted)
Quick Start
📖 Using a different tool? See our Integration Guides for Cursor, VS Code, Gemini CLI, and more!
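As a sketch, the typical flow with Claude Code looks like the following (the `claude mcp add` subcommand is Claude Code's standard way to register an MCP server; verify the exact invocation in the integration guide):

```bash
# Assumed quick-start flow for Claude Code; see the Integration Guides for specifics
export OPENAI_API_KEY="your-openai-key"
claude mcp add voice-mode -- uvx voice-mode
claude   # then ask: "Let's have a voice conversation"
```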
🎬 Demo
Watch Voice Mode in action with Claude Code:
Voice Mode with Gemini CLI
See Voice Mode working with Gemini CLI, Google's Claude Code-style agentic CLI:
Example Usage
Once configured, try these prompts with Claude:
👨‍💻 Programming & Development
- "Let's debug this error together" - Explain the issue verbally, paste code, and discuss solutions
- "Walk me through this code" - Have Claude explain complex code while you ask questions
- "Let's brainstorm the architecture" - Design systems through natural conversation
- "Help me write tests for this function" - Describe requirements and iterate verbally
💡 General Productivity
- "Let's do a daily standup" - Practice presentations or organize your thoughts
- "Interview me about [topic]" - Prepare for interviews with back-and-forth Q&A
- "Be my rubber duck" - Explain problems out loud to find solutions
🎯 Voice Control Features
"Read this error message"
(Claude speaks, then waits for your response)"Just give me a quick summary"
(Claude speaks without waiting)- Use
converse("message", wait_for_response=False)
for one-way announcements
The converse
function makes voice interactions natural - it automatically waits for your response by default, creating a real conversation flow.
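A short sketch in the same call notation used above (parameter names and defaults are taken from the Tools table below; in practice the MCP client invokes the tool on your behalf):

```python
# Common converse patterns (sketch, not a standalone script)
converse("How should we name this module?")                 # speak, then listen (default)
converse("Build finished.", wait_for_response=False)        # one-way announcement
converse("Walk me through your plan.", listen_duration=60)  # allow a longer reply than the 30s default
```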
Supported Tools
Voice Mode works with your favorite AI coding assistants:
- 🤖 Claude Code - Anthropic's official CLI
- 🖥️ Claude Desktop - Desktop application
- 🌟 Gemini CLI - Google's CLI tool
- ⚡ Cursor - AI-first code editor
- 💻 VS Code - With MCP preview support
- 🦘 Roo Code - AI dev team in VS Code
- 🔧 Cline - Autonomous coding agent
- ⚡ Zed - High-performance editor
- 🏄 Windsurf - Agentic IDE by Codeium
- 🔄 Continue - Open-source AI assistant
Installation
Prerequisites
- Python >= 3.10
- Astral UV - Package manager (install with `curl -LsSf https://astral.sh/uv/install.sh | sh`)
- OpenAI API Key (or compatible service)
System Dependencies
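On Ubuntu/Debian, a typical audio stack for a Python voice app looks like the sketch below (the package names are assumptions - PortAudio backs microphone capture and playback, ffmpeg handles format conversion - so verify the exact list in the documentation):

```bash
# Assumed Ubuntu/Debian audio dependencies; confirm against the docs
sudo apt update
sudo apt install -y portaudio19-dev ffmpeg pulseaudio
```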
Note for WSL2 users: WSL2 requires additional audio packages (pulseaudio, libasound2-plugins) for microphone access. See our WSL2 Microphone Access Guide if you encounter issues.
Follow the Ubuntu/Debian instructions above within WSL.
Voice Mode includes a flake.nix with all required dependencies. You can either:
- Use the development shell (temporary), as sketched below
- Install system-wide (see Alternative Installation Options below)
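For example, the temporary development shell (assuming the flake at github.com/mbailey/voicemode exposes a default devShell):

```bash
# Enter a temporary shell with Voice Mode's dependencies
nix develop github:mbailey/voicemode
```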
Quick Install
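Per the install line at the top of this README:

```bash
# Run without installing; uvx fetches voice-mode on demand
uvx voice-mode

# ...or install into the current Python environment
pip install voice-mode
```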
Configuration for AI Coding Assistants
📖 Looking for detailed setup instructions? Check our comprehensive Integration Guides for step-by-step instructions for each tool!
Below are quick configuration snippets. For full installation and setup instructions, see the integration guides above.
Claude Desktop reads MCP servers from claude_desktop_config.json:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
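A typical server entry looks like the sketch below; the same shape works for most of the MCP clients that follow, but treat it as an assumption and confirm key names in each tool's integration guide:

```json
{
  "mcpServers": {
    "voice-mode": {
      "command": "uvx",
      "args": ["voice-mode"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
```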
- Cline: add the server entry to your Cline MCP settings (the settings file location differs between Windows and macOS/Linux)
- Continue: add to `.continue/config.json`
- Cursor: add to `~/.cursor/mcp.json`
- VS Code: add to your VS Code MCP config (MCP preview)
- Zed: add to your Zed `settings.json`
For Roo Code:
1. Open VS Code Settings (`Ctrl/Cmd + ,`)
2. Search for "roo" in the settings search bar
3. Find "Roo-veterinaryinc.roo-cline → settings → Mcp_settings.json"
4. Click "Edit in settings.json"
5. Add the Voice Mode configuration
Alternative Installation Options
1. Install with nix profile (user-wide):
2. Add to NixOS configuration (system-wide):
3. Add to home-manager:
4. Run without installing:
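Sketches for options 1 and 4, assuming the flake at github.com/mbailey/voicemode exposes a default package:

```bash
# 1. User-wide install from the flake
nix profile install github:mbailey/voicemode

# 4. One-off run without installing
nix run github:mbailey/voicemode
```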
Tools
| Tool | Description | Key Parameters |
|------|-------------|----------------|
| `converse` | Have a voice conversation - speak and optionally listen | `message`, `wait_for_response` (default: true), `listen_duration` (default: 30s), `transport` (auto/local/livekit) |
| `listen_for_speech` | Listen for speech and convert to text | `duration` (default: 5s) |
| `check_room_status` | Check LiveKit room status and participants | None |
| `check_audio_devices` | List available audio input/output devices | None |
| `start_kokoro` | Start the Kokoro TTS service | `models_dir` (optional, defaults to `~/Models/kokoro`) |
| `stop_kokoro` | Stop the Kokoro TTS service | None |
| `kokoro_status` | Check the status of the Kokoro TTS service | None |
Note: The `converse` tool is the primary interface for voice interactions, combining speaking and listening in a natural flow.
Configuration
- 📖 Integration Guides - Step-by-step setup for each tool
- 🔧 Configuration Reference - All environment variables
- 📁 Config Examples - Ready-to-use configuration files
Quick Setup
The only required configuration is your OpenAI API key:
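For example:

```bash
export OPENAI_API_KEY="your-openai-key"
```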
Optional Settings
Audio Format Configuration
Voice Mode defaults to the PCM audio format for TTS streaming, which gives the best real-time performance:
- PCM (default for TTS): Zero latency, best streaming performance, uncompressed
- MP3: Wide compatibility, good compression for uploads
- WAV: Uncompressed, good for local processing
- FLAC: Lossless compression, good for archival
- AAC: Good compression, Apple ecosystem
- Opus: Small files but NOT recommended for streaming (quality issues)
The audio format is automatically validated against provider capabilities and will fall back to a supported format if needed.
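Selecting a different format is typically done via an environment variable; the name below is a hypothetical placeholder, so confirm it in the Configuration Reference:

```bash
# Hypothetical variable name; see the Configuration Reference for the real one
export VOICEMODE_AUDIO_FORMAT=mp3
```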
Local STT/TTS Services
For privacy-focused or offline usage, Voice Mode supports local speech services:
- Whisper.cpp - Local speech-to-text with OpenAI-compatible API
- Kokoro - Local text-to-speech with multiple voice options
These services provide the same API interface as OpenAI, allowing seamless switching between cloud and local processing.
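As an illustration, switching to local endpoints amounts to repointing the STT/TTS base URLs. The variable names and ports below are assumptions; see the Whisper.cpp and Kokoro service guides for the real values:

```bash
# Assumed variable names and default ports; verify in the service guides
export VOICEMODE_STT_BASE_URL="http://127.0.0.1:2022/v1"   # whisper.cpp server
export VOICEMODE_TTS_BASE_URL="http://127.0.0.1:8880/v1"   # Kokoro
```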
OpenAI API Compatibility Benefits
By strictly adhering to OpenAI's API standard, Voice Mode enables powerful deployment flexibility:
- 🔀 Transparent Routing: Users can implement their own API proxies or gateways outside of Voice Mode to route requests to different providers based on custom logic (cost, latency, availability, etc.)
- 🎯 Model Selection: Deploy routing layers that select optimal models per request without modifying Voice Mode configuration
- 💰 Cost Optimization: Build intelligent routers that balance between expensive cloud APIs and free local models
- 🔧 No Lock-in: Switch providers by simply changing the `BASE_URL` - no code changes required
Example: Simply set `OPENAI_BASE_URL` to point to your custom router:
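For instance (the router URL is a placeholder):

```bash
# OPENAI_BASE_URL is the standard OpenAI SDK variable
export OPENAI_BASE_URL="https://llm-router.internal.example.com/v1"
export OPENAI_API_KEY="your-key"
```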
The OpenAI SDK handles this automatically - no Voice Mode configuration needed!
Architecture
Troubleshooting
Common Issues
- No microphone access: Check system permissions for terminal/application
- WSL2 Users: See WSL2 Microphone Access Guide
- UV not found: Install with
curl -LsSf https://astral.sh/uv/install.sh | sh
- OpenAI API error: Verify your
OPENAI_API_KEY
is set correctly - No audio output: Check system audio settings and available devices
Debug Mode
Enable detailed logging and audio file saving:
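The snippet below assumes a debug toggle of this shape; confirm the exact variable name in the Configuration Reference:

```bash
# Assumed debug toggle; verify the name in the Configuration Reference
export VOICEMODE_DEBUG=true
```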
Debug audio files are saved to: ~/voicemode_recordings/
Audio Diagnostics
Run the diagnostic script to check your audio setup:
This will check for required packages, audio services, and provide specific recommendations.
Audio Saving
To save all audio files (both TTS output and STT input):
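Assuming a toggle of this shape (verify the name in the Configuration Reference):

```bash
# Assumed variable name; see the Configuration Reference
export VOICEMODE_SAVE_AUDIO=true
```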
Audio files are saved to `~/voicemode_audio/`, with timestamps in the filenames.
Documentation
📚 Read the full documentation at voice-mode.readthedocs.io
Getting Started
- Integration Guides - Step-by-step setup for all supported tools
- Configuration Guide - Complete environment variable reference
Development
- Using uv/uvx - Package management with uv and uvx
- Local Development - Development setup guide
- Audio Formats - Audio format configuration and migration
- Statistics Dashboard - Performance monitoring and metrics
Service Guides
- Whisper.cpp Setup - Local speech-to-text configuration
- Kokoro Setup - Local text-to-speech configuration
- LiveKit Integration - Real-time voice communication
Troubleshooting
- WSL2 Microphone Access - WSL2 audio setup
- Migration Guide - Upgrading from older versions
Links
- Website: getvoicemode.com
- Documentation: voice-mode.readthedocs.io
- GitHub: github.com/mbailey/voicemode
- PyPI: pypi.org/project/voice-mode
- npm: npmjs.com/package/voicemode
Community
- Discord: Join our community
- Twitter/X: @getvoicemode
- YouTube: @getvoicemode
See Also
- 🚀 Integration Guides - Setup instructions for all supported tools
- 🔧 Configuration Reference - Environment variables and options
- 🎤 Local Services Setup - Run TTS/STT locally for privacy
- 🐛 Troubleshooting - Common issues and solutions
License
MIT - A Failmode Project