Reachy Mini MCP
Give your AI a body.
This MCP server lets AI systems control Pollen Robotics' Reachy Mini robot—speak, listen, see, and express emotions through physical movement. Works with Claude, GPT, Grok, or any MCP-compatible AI.
7 tools. 30 minutes to first demo. Zero robotics expertise required.
https://reachy-mini-mcp-969sxyq.gamma.site/
For AI Systems
Token-efficient tool reference for programmatic use:
| Tool | Args | Purpose |
|------|------|---------|
| `speak` | `text`, `listen_after` | Voice + gesture, optionally listen after |
| `listen` | | STT via Deepgram Nova-2 |
| `see` | - | Camera capture (base64 JPEG) |
| `show` | `move` | Express emotion or play recorded move |
| | | Head positioning (degrees) |
| | | neutral / sleep / wake |
| `discover` | - | Find available recorded moves |
speak()
Supports embedded move markers for choreographed speech:
speak("[move:curious1] What's this? [move:surprised1] Oh wow!")Moves fire right before their speech chunk. Use listen_after=5 to hear response.
show()
Built-in emotions (fast, local):
neutral, curious, uncertain, recognition, joy, thinking, listening, agreeing, disagreeing, sleepy, surprised, focused
Recorded moves (81 from Pollen):
show(move="loving1")
show(move="fear1")
show(move="serenity1")Use discover() to see all available moves.
Quick Start
```shell
# Install
cd reachy-mini-mcp
poetry install

# Set API key (required for speak/listen)
export DEEPGRAM_API_KEY=your_key_here

# Start simulator daemon
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

# Run MCP server
poetry run python src/server.py
```

Architecture
```
AI (Claude/GPT/Grok) → MCP Server → SDK → Daemon → Robot/Simulator
```

7 tools following Miller's Law: the full set fits in working memory.
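The dispatch layer in that chain can be pictured as a name-to-handler registry: the AI sends a tool name plus arguments, and the server forwards them to an SDK call. A stdlib-only sketch, where `ToolRegistry` and the wiring are illustrative rather than the server's real code (the actual server registers tools through the MCP SDK):

```python
class ToolRegistry:
    """Toy stand-in for the MCP server's tool layer."""

    def __init__(self):
        self._tools = {}

    def register(self, name, handler):
        """Associate a tool name with a callable (e.g., an SDK method)."""
        self._tools[name] = handler

    def call(self, name, **kwargs):
        """Dispatch a named tool call, as an incoming MCP request would."""
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

# Wiring a fake SDK call, the way speak/show/etc. would be wired:
registry = ToolRegistry()
registry.register("show", lambda move: f"playing {move}")
```

Here `registry.call("show", move="loving1")` forwards to the registered handler, mirroring the AI → server → SDK hop in the diagram.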
Voice Providers
| Provider | Status | Use Case |
|----------|--------|----------|
| Grok Voice | ✅ Supported | xAI's expressive voices (Eve, Ara, Leo, Rex, Sal) |
| Deepgram | ✅ Supported | TTS (Aura 2) + STT (Nova 2) |
Grok Voice is used automatically when `XAI_API_KEY` is set; otherwise the server falls back to Deepgram.
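The documented fallback amounts to a simple environment check. A sketch of that logic, where `pick_tts_provider` is a hypothetical helper name:

```python
import os

def pick_tts_provider(env=None):
    """Prefer Grok Voice when XAI_API_KEY is set; otherwise fall back
    to Deepgram, which is also required for listen()."""
    if env is None:
        env = os.environ
    if env.get("XAI_API_KEY"):
        return "grok"
    if env.get("DEEPGRAM_API_KEY"):
        return "deepgram"
    raise RuntimeError("set XAI_API_KEY or DEEPGRAM_API_KEY to use speak()")
```

With both keys set, Grok Voice wins; with neither, `speak()` has no provider and the helper raises.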
MCP Config
Claude Desktop
~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "reachy-mini": {
      "command": "poetry",
      "args": ["-C", "/path/to/reachy-mini-mcp", "run", "python", "src/server.py"],
      "env": {
        "DEEPGRAM_API_KEY": "your_key_here"
      }
    }
  }
}
```

Claude Code
~/.claude.json:
```json
{
  "mcpServers": {
    "reachy-mini": {
      "command": "poetry",
      "args": ["-C", "/path/to/reachy-mini-mcp", "run", "python", "src/server.py"],
      "env": {
        "DEEPGRAM_API_KEY": "your_key_here"
      }
    }
  }
}
```

Environment Variables
| Variable | Required | Default | Purpose |
|----------|----------|---------|---------|
| `XAI_API_KEY` | No | - | Grok Voice TTS (preferred) |
| | No | | Grok voice: ara, eve, leo, rex, sal |
| `DEEPGRAM_API_KEY` | Yes* | - | STT (always required for listen) + TTS fallback |
| | No | | Daemon API endpoint |

*Required for `listen()`. Also required for `speak()` if `XAI_API_KEY` is not set.
Requirements
- Python 3.10+
- reachy-mini SDK (installed via Poetry)
- MuJoCo (for simulation)
- Deepgram API key (for `speak`/`listen`)
Hardware Notes
- Simulator: `mjpython` is required on macOS for MuJoCo visualization
- Real hardware: same MCP server; the daemon auto-connects
- Port conflicts: Zenoh uses 7447 and the daemon uses 8321 by default
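When diagnosing port conflicts, a quick check for a listener on 7447 or 8321 can save time. A stdlib sketch (the `port_in_use` helper is illustrative, not part of this project):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port.

    connect_ex returns 0 on a successful TCP connect, i.e., a listener
    (such as the Zenoh router on 7447 or the daemon on 8321) is present.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

Running `port_in_use(8321)` before starting the daemon tells you whether an old instance is still holding the default port.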
License
MIT License - see LICENSE
Acknowledgments
Reachy Mini SDK by Pollen Robotics (Apache 2.0)
Grok Voice integration pattern from dillera's reachy_mini_conversation_app
Links
Reachy Mini SDK (Apache 2.0)