Voice Mode

by mbailey

Install via: uvx voice-mode | pip install voice-mode | getvoicemode.com

Natural voice conversations for AI assistants. Voice Mode brings human-like voice interactions to Claude, ChatGPT, and other LLMs through the Model Context Protocol (MCP).

🖥️ Compatibility

Runs on: Linux • macOS • Windows (WSL) | Python: 3.10+ | Tested: Ubuntu 24.04 LTS, Fedora 42

✨ Features

  • 🎙️ Voice conversations with Claude - ask questions and hear responses
  • 🔄 Multiple transports - local microphone or LiveKit room-based communication
  • 🗣️ OpenAI-compatible - works with any STT/TTS service (local or cloud)
  • ⚡ Real-time - low-latency voice interactions with automatic transport selection
  • 🔧 MCP Integration - seamless with Claude Desktop and other MCP clients

🎯 Simple Requirements

All you need to get started:

  1. 🔑 OpenAI API Key (or compatible service) - for speech-to-text and text-to-speech
  2. 🎤 Computer with microphone and speakers OR ☁️ LiveKit server (LiveKit Cloud or self-hosted)

Quick Start

claude mcp add --scope user voice-mode uvx voice-mode
export OPENAI_API_KEY=your-openai-key
claude
> /converse

🎬 Demo

Watch Voice Mode in action:

Example Usage

Once configured, try these prompts with Claude:

  • "Let's have a voice conversation"
  • "Ask me about my day using voice"
  • "Tell me a joke" (Claude will speak and wait for your response)
  • "Say goodbye" (Claude will speak without waiting)

The new converse function makes voice interactions more natural - it automatically waits for your response by default.

Claude Desktop Setup

Add to your Claude Desktop configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

If installed via uvx:

{
  "mcpServers": {
    "voicemode": {
      "command": "uvx",
      "args": ["voice-mode"],
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}

If installed via pip (the voicemode command is on your PATH):

{
  "mcpServers": {
    "voicemode": {
      "command": "voicemode",
      "env": {
        "OPENAI_API_KEY": "your-openai-key"
      }
    }
  }
}
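If you already have other MCP servers configured, the voicemode entry has to be merged into the existing file rather than pasted over it. A minimal stdlib-only Python sketch (the helper name is hypothetical; the server entry and file locations are the ones shown above):

```python
import json
from pathlib import Path

def add_voicemode_server(config_path: str, api_key: str) -> dict:
    """Merge the voicemode MCP server entry into an existing
    claude_desktop_config.json, preserving any other servers."""
    path = Path(config_path)
    config = json.loads(path.read_text()) if path.exists() else {}
    servers = config.setdefault("mcpServers", {})
    servers["voicemode"] = {
        "command": "uvx",
        "args": ["voice-mode"],
        "env": {"OPENAI_API_KEY": api_key},
    }
    path.write_text(json.dumps(config, indent=2))
    return config
```

Run it against the platform-specific path listed above; existing entries under "mcpServers" are left untouched.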

Tools

| Tool | Description | Key Parameters |
|------|-------------|----------------|
| converse | Have a voice conversation - speak and optionally listen | message, wait_for_response (default: true), listen_duration (default: 10s), transport (auto/local/livekit) |
| listen_for_speech | Listen for speech and convert to text | duration (default: 5s) |
| check_room_status | Check LiveKit room status and participants | None |
| check_audio_devices | List available audio input/output devices | None |
| start_kokoro | Start the Kokoro TTS service | models_dir (optional, defaults to ~/Models/kokoro) |
| stop_kokoro | Stop the Kokoro TTS service | None |
| kokoro_status | Check the status of Kokoro TTS service | None |

Note: The converse tool is the primary interface for voice interactions, combining speaking and listening in a natural flow.
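Based on the defaults listed in the table above, a client-side converse request could be normalized like this. The helper and constant names are purely illustrative, not part of Voice Mode's API:

```python
# Defaults taken from the tools table; the helper itself is hypothetical.
CONVERSE_DEFAULTS = {
    "wait_for_response": True,  # speak, then listen
    "listen_duration": 10,      # seconds
    "transport": "auto",        # auto / local / livekit
}

def build_converse_request(message: str, **overrides) -> dict:
    """Fill in converse's documented defaults for any omitted parameters."""
    if unknown := set(overrides) - set(CONVERSE_DEFAULTS):
        raise ValueError(f"unknown parameters: {unknown}")
    return {"message": message, **CONVERSE_DEFAULTS, **overrides}
```

Passing wait_for_response=False reproduces the "Say goodbye" behavior above: speak without listening afterwards.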

Configuration

📖 See docs/configuration.md for complete setup instructions for all MCP hosts

📁 Ready-to-use config files in config-examples/

Quick Setup

The only required configuration is your OpenAI API key:

export OPENAI_API_KEY="your-key"

Optional Settings

# Custom STT/TTS services (OpenAI-compatible)
export STT_BASE_URL="http://localhost:2022/v1"  # Local Whisper
export TTS_BASE_URL="http://localhost:8880/v1"  # Local TTS
export TTS_VOICE="alloy"                        # Voice selection

# LiveKit (for room-based communication)
# See docs/livekit/ for setup guide
export LIVEKIT_URL="wss://your-app.livekit.cloud"
export LIVEKIT_API_KEY="your-api-key"
export LIVEKIT_API_SECRET="your-api-secret"

# Debug mode
export VOICE_MCP_DEBUG="true"

# Save all audio (TTS output and STT input)
export VOICE_MCP_SAVE_AUDIO="true"

Local STT/TTS Services

For privacy-focused or offline usage, Voice Mode supports local speech services:

  • Whisper.cpp - Local speech-to-text with OpenAI-compatible API
  • Kokoro - Local text-to-speech with multiple voice options

These services provide the same API interface as OpenAI, allowing seamless switching between cloud and local processing.

OpenAI API Compatibility Benefits

By strictly adhering to OpenAI's API standard, Voice Mode enables powerful deployment flexibility:

  • 🔀 Transparent Routing: Users can implement their own API proxies or gateways outside of Voice Mode to route requests to different providers based on custom logic (cost, latency, availability, etc.)
  • 🎯 Model Selection: Deploy routing layers that select optimal models per request without modifying Voice Mode configuration
  • 💰 Cost Optimization: Build intelligent routers that balance between expensive cloud APIs and free local models
  • 🔧 No Lock-in: Switch providers by simply changing the BASE_URL - no code changes required

Example: Simply set OPENAI_BASE_URL to point to your custom router:

export OPENAI_BASE_URL="https://router.example.com/v1"
export OPENAI_API_KEY="your-key"
# Voice Mode now uses your router for all OpenAI API calls

The OpenAI SDK handles this automatically - no Voice Mode configuration needed!
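A router behind OPENAI_BASE_URL might choose a backend per request with logic like the following. This is purely illustrative: the backend names, prices, and budget rule are invented, and nothing here is part of Voice Mode:

```python
# Illustrative only: backends, prices, and the budget rule are invented.
BACKENDS = {
    "local-whisper": {"url": "http://localhost:2022/v1", "cost_per_min": 0.0},
    "openai":        {"url": "https://api.openai.com/v1", "cost_per_min": 0.006},
}

def pick_backend(minutes: float, budget_usd: float) -> str:
    """Prefer the cloud API while the request fits the budget,
    otherwise fall back to the free local service."""
    if minutes * BACKENDS["openai"]["cost_per_min"] <= budget_usd:
        return BACKENDS["openai"]["url"]
    return BACKENDS["local-whisper"]["url"]
```

The same shape works for latency- or availability-based routing: only the predicate inside pick_backend changes.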

Architecture

┌───────────────────────┐     ┌──────────────────┐     ┌───────────────────┐
│      Claude/LLM       │     │  LiveKit Server  │     │  Voice Frontend   │
│     (MCP Client)      │◄───►│    (Optional)    │◄───►│    (Optional)     │
└───────────────────────┘     └──────────────────┘     └───────────────────┘
            │                          │
            │                          │
            ▼                          ▼
┌───────────────────────┐     ┌──────────────────┐
│  Voice MCP Server     │     │  Audio Services  │
│  • converse           │     │  • OpenAI APIs   │
│  • listen_for_speech  │◄───►│  • Local Whisper │
│  • check_room_status  │     │  • Local TTS     │
│  • check_audio_devices│     └──────────────────┘
└───────────────────────┘

Troubleshooting

Common Issues

  • No microphone access: Check system permissions for terminal/application
  • UV not found: Install with curl -LsSf https://astral.sh/uv/install.sh | sh
  • OpenAI API error: Verify your OPENAI_API_KEY is set correctly
  • No audio output: Check system audio settings and available devices

Debug Mode

Enable detailed logging and audio file saving:

export VOICE_MCP_DEBUG=true

Debug audio files are saved to: ~/voice-mcp_recordings/

Audio Saving

To save all audio files (both TTS output and STT input):

export VOICE_MCP_SAVE_AUDIO=true

Audio files are saved to: ~/voice-mcp_audio/ with timestamps in the filename.
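Because the filenames are timestamped, sorting them lexicographically yields chronological order. A small hypothetical helper for inspecting the saved files (stdlib only):

```python
from pathlib import Path

def list_saved_audio(audio_dir: str = "~/voice-mcp_audio") -> list[str]:
    """Return saved audio filenames, newest first. Assumes the
    timestamped names described above sort chronologically."""
    d = Path(audio_dir).expanduser()
    if not d.is_dir():
        return []
    return sorted((p.name for p in d.iterdir() if p.is_file()), reverse=True)
```

The same helper works for the debug recordings directory by passing ~/voice-mcp_recordings/ instead.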

License

MIT - A Failmode Project

