livechat-mcp
A Model Context Protocol (MCP) server that lets you have a continuous voice conversation with your AI coding assistant. You speak, your speech is transcribed locally with Whisper, and each utterance is delivered to the assistant as if you'd typed it. No tab switching, no copy/paste, no batch recording.
Works with any MCP host. First-class support for:
Claude Code
Codex CLI
Gemini CLI
Requirements
macOS, Linux, or Windows (native via PowerShell, or under WSL2 / Git Bash).
Python 3.10+
An MCP host installed (Claude Code, Codex, Gemini, etc.)
A working microphone
~500 MB disk for Whisper model cache + dependencies
`uv` for project management (recommended)
Quick install (recommended)
From a clone of the repo:
```sh
# macOS / Linux / Git Bash on Windows
./install.sh

# Native Windows PowerShell
.\install.ps1
```

The bootstrap detects the OS, installs portaudio if needed (brew / apt / dnf / pacman / zypper — Windows wheels ship it bundled), installs uv if missing, runs `uv sync`, drops the wizard into `~/.local/bin`, and launches the interactive setup wizard.
Windows: native locking uses `msvcrt` and takeover signaling is file-based, so no `fcntl` dependency. The interactive wizard is a bash script — `install.ps1` invokes it through Git Bash, which it offers to install via `winget` if missing.
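The per-platform locking split described above can be sketched roughly as follows. This is an illustration of the `msvcrt`-vs-`fcntl` pattern, not the project's actual code; the helper name `try_lock` is hypothetical.

```python
import sys

if sys.platform == "win32":
    import msvcrt

    def try_lock(f) -> bool:
        # Windows: lock one byte at the start of the file, non-blocking.
        f.seek(0)
        try:
            msvcrt.locking(f.fileno(), msvcrt.LK_NBLCK, 1)
            return True
        except OSError:
            return False
else:
    import fcntl

    def try_lock(f) -> bool:
        # POSIX: whole-file advisory lock, non-blocking.
        try:
            fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except OSError:
            return False
```

A second process calling `try_lock` on the same file gets `False` instead of blocking, which is what lets a new instance detect a running one.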
Manual setup
If you'd rather install step-by-step, here's what install.sh does:
1. Install portaudio
sounddevice needs portaudio.
macOS:

```sh
brew install portaudio
```

Debian/Ubuntu:

```sh
sudo apt-get install libportaudio2 portaudio19-dev
```

Fedora/RHEL:

```sh
sudo dnf install portaudio portaudio-devel
```

Arch:

```sh
sudo pacman -S portaudio
```
2. Install uv if you don't have it
```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

3. Clone and install dependencies

```sh
cd livechat-mcp
uv sync
```

This will create `.venv/` and install mcp, faster-whisper, sounddevice, silero-vad, torch, etc.
4. Run the setup wizard
```sh
install -m 0755 bin/livechat-mcp ~/.local/bin/livechat-mcp
livechat-mcp setup
```

The wizard will:
Ask which assistants to install for (Claude Code / Codex / Gemini, any combination).
Copy the `/livechat` and `/endlivechat` slash commands to hosts that support custom slash commands. For Codex, it installs both legacy prompt files and a `livechat` skill, because current Codex CLI releases do not expose custom prompts as `/livechat`.
Register the MCP server in each host's config file.
Walk you through the tunable env vars (silence threshold, Whisper model, etc.) — press Enter to keep defaults.
Make sure ~/.local/bin is on your PATH (it already is if you used the
official uv installer).
If you'd rather wire things up by hand, the manual steps for each host are below.
5. Grant microphone permission
macOS: the first time the server tries to capture audio, macOS will prompt your terminal app (Terminal, iTerm, Ghostty, Warp, etc.) for mic access. If you miss the prompt, enable it manually:
System Settings → Privacy & Security → Microphone → enable for your terminal
If you skip this, audio capture silently returns silence and nothing will ever transcribe.
Windows: Settings → Privacy & security → Microphone → allow desktop apps to access the microphone (and ensure your terminal is permitted).
Linux: usually no prompt — just make sure your user has the right ALSA / PulseAudio / Pipewire access (typically the `audio` group).
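A quick way to confirm the capture path actually yields audio, rather than the permission-blocked silence described above, is to record a short clip and check its level. The `looks_silent` helper below is a hypothetical diagnostic, not part of livechat-mcp:

```python
import numpy as np

def looks_silent(samples: np.ndarray, rms_floor: float = 1e-4) -> bool:
    """True if the clip is indistinguishable from digital silence."""
    rms = float(np.sqrt(np.mean(np.square(samples, dtype=np.float64))))
    return rms < rms_floor

# To probe the real capture path (requires a working mic + permission):
#   import sounddevice as sd
#   clip = sd.rec(16_000, samplerate=16_000, channels=1, dtype="float32")
#   sd.wait()
#   looks_silent(clip)  # True suggests the permission problem above
```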
6. Pre-download the Whisper model (optional)
The first run downloads base.en (~150 MB). You can pre-warm it:
```sh
uv run python -c "from faster_whisper import WhisperModel; WhisperModel('base.en', device='cpu', compute_type='int8')"
```

Manual install (skip if you used `livechat-mcp setup`)
Claude Code
Copy the slash commands:
```sh
mkdir -p ~/.claude/commands
cp commands/livechat.md ~/.claude/commands/
cp commands/endlivechat.md ~/.claude/commands/
```

Register the MCP server:

```sh
claude mcp add livechat -- uv --directory "$(pwd)" run livechat-mcp
```

Or edit `~/.claude.json` directly:

```json
{
  "mcpServers": {
    "livechat": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/livechat-mcp", "run", "livechat-mcp"]
    }
  }
}
```

Codex CLI
Install the Codex skill and legacy prompt files:
```sh
mkdir -p ~/.codex/skills/livechat
cp skills/livechat/SKILL.md ~/.codex/skills/livechat/
mkdir -p ~/.codex/prompts
cp commands/livechat.md ~/.codex/prompts/
cp commands/endlivechat.md ~/.codex/prompts/
```

Register the MCP server in `~/.codex/config.toml`:

```toml
[mcp_servers.livechat]
command = "uv"
args = ["--directory", "/absolute/path/to/livechat-mcp", "run", "livechat-mcp"]
```

Gemini CLI
Gemini uses TOML for custom commands. The wizard generates these for you;
to do it by hand, see commands/gemini/livechat.toml.template (created by
running livechat-mcp setup once).
Register the MCP server in ~/.gemini/settings.json:
```json
{
  "mcpServers": {
    "livechat": {
      "command": "uv",
      "args": ["--directory", "/absolute/path/to/livechat-mcp", "run", "livechat-mcp"]
    }
  }
}
```

Usage
Open your assistant's CLI in any terminal:
```sh
claude    # or: codex, or: gemini
```

Then in the assistant prompt:

```
/livechat       # Claude Code, Gemini CLI
use livechat    # Codex CLI
```

Codex restart required. Codex only loads skills and MCP servers at startup. If you ran the wizard while Codex was open, quit and relaunch before using `use livechat`.

Codex 0.128.0 does not support user-defined `/livechat` slash commands; `/` is currently reserved for Codex's built-in commands. The setup installs a discoverable `livechat` skill instead, so you can type `use livechat` or open `/skills` and pick `livechat`.
The assistant will call get_voice_input and start listening. Speak
normally. When you pause for ~1.5 seconds, your utterance is finalized,
transcribed, and sent as a prompt. The assistant responds, then immediately
listens for the next utterance.
While the assistant is generating a response, the mic is still hot — anything
you say during that time queues up and is delivered all at once on the next
get_voice_input call.
Ending a session
Three ways:
`/endlivechat` — cleanest, runs from the assistant prompt. (You'll need to interrupt the current turn first if it's mid-response.)
Wake phrase — say "terminate voice session now". The transcription triggers shutdown. The phrase is intentionally awkward to avoid collisions with real review content. Configurable via `LIVECHAT_END_PHRASE`.
Ctrl+C — kills the MCP server. The assistant will see a tool error on the next call and stop the loop.
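Wake-phrase matching can be as simple as normalizing each transcript and looking for the configured phrase. A sketch follows; the `LIVECHAT_END_PHRASE` variable is the documented one, while the helper name is illustrative:

```python
import os
import re

END_PHRASE = os.environ.get("LIVECHAT_END_PHRASE", "terminate voice session now")

def is_end_command(transcript: str) -> bool:
    """Normalize Whisper output (case, punctuation) and match the end phrase."""
    norm = re.sub(r"[^a-z0-9 ]+", "", transcript.lower())
    norm = re.sub(r"\s+", " ", norm).strip()
    return END_PHRASE in norm
```

Normalization matters because Whisper typically emits capitalization and trailing punctuation ("Terminate voice session now.").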
Configuration
All tunables live in livechat_mcp/config.py and can be overridden via env vars:
| Var | Default | Notes |
| --- | --- | --- |
| `LIVECHAT_WHISPER_MODEL` | `base.en` | English-only: … |
| … | … | Language code (…) |
| `LIVECHAT_SILENCE_SEC` | `1.5` | Silence after speech to end an utterance |
| `LIVECHAT_VAD_THRESHOLD` | … | Silero VAD speech probability threshold |
| … | … | Minimum utterance length (filters coughs) |
| … | … | Force-cut runaway utterances |
| … | … | How long … |
| `LIVECHAT_END_PHRASE` | `terminate voice session now` | Spoken phrase to end the session |
| `LIVECHAT_DEBUG` | unset | Set to `1` to log VAD events to stderr |
The easy way to set these is `livechat-mcp set KEY VALUE` — it edits the env block in every host config it finds (Claude / Codex / Gemini).

```sh
livechat-mcp show                         # print current env block(s)
livechat-mcp set LIVECHAT_SILENCE_SEC 1.5
livechat-mcp unset LIVECHAT_DEBUG
```

Restart your assistant CLI after any change — MCP env vars are read by the server at startup.
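Env-driven config of this kind usually reduces to typed `os.environ` lookups with defaults at import time, which is also why a restart is needed. A minimal sketch; the names and defaults shown are illustrative, and the authoritative values are in `livechat_mcp/config.py`:

```python
import os

def env_float(name: str, default: float) -> float:
    """Read a float tunable from the environment, falling back to a default."""
    raw = os.environ.get(name)
    return float(raw) if raw else default

# Illustrative only: real names/defaults live in livechat_mcp/config.py.
SILENCE_SEC = env_float("LIVECHAT_SILENCE_SEC", 1.5)
VAD_THRESHOLD = env_float("LIVECHAT_VAD_THRESHOLD", 0.5)
DEBUG = os.environ.get("LIVECHAT_DEBUG") is not None  # unset means off
```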
To do it manually, edit the env field of the livechat MCP entry in each
host's config. Example for Claude Code:
```json
{
  "mcpServers": {
    "livechat": {
      "command": "uv",
      "args": ["--directory", "/abs/path", "run", "livechat-mcp"],
      "env": {
        "LIVECHAT_WHISPER_MODEL": "small.en",
        "LIVECHAT_DEBUG": "1"
      }
    }
  }
}
```

Troubleshooting
Nothing happens when I speak.
Check (in order): mic permission for your terminal app, mic input level
(System Settings → Sound), set LIVECHAT_DEBUG=1 and watch stderr for VAD
events, lower LIVECHAT_VAD_THRESHOLD to 0.3.
Transcriptions are inaccurate.
Upgrade model: LIVECHAT_WHISPER_MODEL=small.en or medium.en. medium.en
is noticeably slower on CPU (still real-time-ish) but much better for
technical vocabulary.
Utterance ends too quickly / too slowly.
Tune LIVECHAT_SILENCE_SEC (or run livechat-mcp set LIVECHAT_SILENCE_SEC 1.5).
1.0–4.5 is the useful range — lower feels snappier but risks cutting
mid-thought pauses.
uv not found.
Either install uv (recommended) or change the MCP config command to a
direct invocation of python -m livechat_mcp.server from inside an activated
venv.
The server starts but the assistant never calls the tool.
Make sure /livechat was invoked. Without the slash command, the assistant
has no instruction to enter the loop.
Server logs go into the assistant's UI as garbage / break the protocol.
This shouldn't happen — all server logging goes to stderr. If you see it,
file a bug. Make sure you have not added any print(...) statements without
file=sys.stderr.
portaudio errors on startup.
Install it: brew install portaudio. If it's installed and still failing, try
brew reinstall portaudio and reinstall sounddevice: uv sync --reinstall.
How it works (short version)
```
[mic] → [Silero VAD] → [Whisper] → [queue] ← [get_voice_input tool] ← [Assistant]
   ↑________background thread, always running________↑
```

The audio pipeline is decoupled from the MCP tool, so the mic is always hot while the server is up. Utterances spoken while the assistant is generating a response are queued and delivered on the next tool call.
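The decoupling above boils down to a producer thread feeding a queue that the tool drains. A stripped-down sketch, with fake utterances standing in for the mic/VAD/Whisper stages:

```python
import queue
import threading

utterances: "queue.Queue[str]" = queue.Queue()

def capture_loop() -> None:
    # Stand-in for the always-running mic -> Silero VAD -> Whisper thread.
    for text in ["fix the failing test", "then rerun the suite"]:
        utterances.put(text)

producer = threading.Thread(target=capture_loop, daemon=True)
producer.start()

def get_voice_input(timeout: float = 30.0) -> str:
    """Block for one utterance, then drain anything queued while waiting."""
    parts = [utterances.get(timeout=timeout)]
    while True:
        try:
            parts.append(utterances.get_nowait())
        except queue.Empty:
            return "\n".join(parts)
```

Anything spoken while the assistant was busy is already sitting in the queue, so the next tool call returns it all at once.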
License
MIT.