Utilizes Ollama as an LLM backend to provide the robot with smart sentiment analysis and AI-generated responses based on the context of coding tasks.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Reachy Claude MCP Celebrate that I finally fixed that tricky bug!"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Reachy Claude MCP
MCP server that brings Reachy Mini to life as your coding companion in Claude Code.
Reachy reacts to your coding sessions with emotions, speech, and celebratory dances, making coding more interactive and fun!
Features
| Feature | Basic | + LLM | + Memory |
|---|---|---|---|
| Robot emotions & animations | ✅ | ✅ | ✅ |
| Text-to-speech (Piper TTS) | ✅ | ✅ | ✅ |
| Session tracking (SQLite) | ✅ | ✅ | ✅ |
| Smart sentiment analysis | ❌ | ✅ | ✅ |
| AI-generated responses | ❌ | ✅ | ✅ |
| Semantic problem search | ❌ | ❌ | ✅ |
| Cross-project memory | ❌ | ❌ | ✅ |
Requirements
Python 3.10+
Reachy Mini robot or the simulation (see below)
Audio output (speakers/headphones)
Platform Support
| Platform | Basic | LLM (MLX) | LLM (Ollama) | Memory |
|---|---|---|---|---|
| macOS Apple Silicon | ✅ | ✅ | ✅ | ✅ |
| macOS Intel | ✅ | ❌ | ✅ | ✅ |
| Linux | ✅ | ❌ | ✅ | ✅ |
| Windows | ⚠️ Experimental | ❌ | ✅ | ✅ |
Quick Start
1. Install the package:

   ```shell
   pip install reachy-claude-mcp
   ```

2. Start the Reachy Mini simulation (if you don't have the physical robot):

   ```shell
   # On macOS with Apple Silicon
   mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

   # On other platforms
   python -m reachy_mini.daemon.app.main --sim --scene minimal
   ```

3. Add to Claude Code (`~/.mcp.json`):

   ```json
   {
     "mcpServers": {
       "reachy-claude": {
         "command": "reachy-claude"
       }
     }
   }
   ```

4. Start Claude Code and Reachy will react to your coding!
Installation Options
Basic (robot + TTS only)
Without LLM features, Reachy falls back to keyword matching for sentiment analysis, which still works well.
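The keyword fallback can be pictured as a small matcher like this. This is an illustrative sketch, not the package's actual implementation; the word lists are invented for the example:

```python
# Illustrative keyword lists -- the real matcher in reachy-claude-mcp may differ.
POSITIVE = {"passed", "fixed", "success", "works", "done"}
NEGATIVE = {"error", "failed", "exception", "broken", "crash"}

def keyword_sentiment(text: str) -> str:
    """Classify text as positive/negative/neutral by counting keyword hits."""
    words = set(text.lower().split())
    pos = len(words & POSITIVE)
    neg = len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"
```

Something this simple already distinguishes "all tests passed" from "build failed with error", which is enough to pick an emotion to play.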
With LLM (Smart Responses)
Option A: MLX (Apple Silicon only, fastest)
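As the Troubleshooting section notes, the MLX backend is installed via the `llm` extra:

```shell
pip install "reachy-claude-mcp[llm]"
```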
Option B: Ollama (cross-platform)
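For Ollama you need the server running and a model pulled (the model name matches the one used in Troubleshooting):

```shell
ollama serve               # in one terminal
ollama pull qwen2.5:1.5b   # in another
```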
The system automatically picks the best available backend: MLX → Ollama → keyword fallback.
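That fallback chain might look roughly like this. It's a sketch under the assumption that MLX availability is detected by module presence and Ollama by an HTTP probe; the package's real detection logic may differ:

```python
import importlib.util
import urllib.request

def pick_backend(ollama_url: str = "http://localhost:11434") -> str:
    """Return the first available backend in priority order: mlx -> ollama -> keyword."""
    # MLX only installs on Apple Silicon, so its presence implies usability.
    if importlib.util.find_spec("mlx_lm") is not None:
        return "mlx"
    try:
        # The Ollama server answers on its root endpoint when it is up.
        urllib.request.urlopen(ollama_url, timeout=1)
        return "ollama"
    except OSError:
        # Nothing listening: fall back to keyword-based sentiment.
        return "keyword"
```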
Full Features (requires Qdrant)
Development Install
Running Reachy Mini
No Robot? Use the Simulation!
You don't need a physical Reachy Mini to use this. The simulation works great:
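The simulation launch command is the same as in Quick Start:

```shell
# On macOS with Apple Silicon
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

# On other platforms
python -m reachy_mini.daemon.app.main --sim --scene minimal
```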
The simulation dashboard will be available at http://localhost:8000.
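To check from a script that the daemon is up, a quick HTTP probe works (illustrative; assumes the default port 8000 mentioned above):

```python
import urllib.request

def dashboard_up(url: str = "http://localhost:8000") -> bool:
    """Return True if the simulation dashboard answers HTTP at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status < 500
    except OSError:
        # Connection refused or timed out: the daemon is not running.
        return False
```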
Physical Robot
Follow the Reachy Mini setup guide to connect to your physical robot.
Configuration
Environment Variables
| Variable | Default | Description |
|---|---|---|
|  |  | Data directory for database, memory, voice models |
| **LLM Settings** |  |  |
|  |  | MLX model (Apple Silicon) |
|  |  | Ollama server URL |
|  |  | Ollama model name |
| **Memory Settings** |  |  |
|  |  | Qdrant server host |
|  |  | Qdrant server port |
| **Voice Settings** |  |  |
|  | (auto-download) | Path to custom Piper voice model |
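Overrides follow the usual environment-variable pattern. The variable names below are placeholders for illustration only, not the package's documented names:

```python
import os
from pathlib import Path

# NOTE: all variable names here are illustrative placeholders.
data_dir = Path(os.environ.get("REACHY_DATA_DIR", str(Path.home() / ".reachy-claude")))
ollama_url = os.environ.get("OLLAMA_URL", "http://localhost:11434")
qdrant_host = os.environ.get("QDRANT_HOST", "localhost")
qdrant_port = int(os.environ.get("QDRANT_PORT", "6333"))
```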
MCP Tools
Basic Interactions
| Tool | Description |
|---|---|
|  | Speak a summary (1-2 sentences) + play emotion |
|  | Play emotion animation only |
|  | Success animation + excited speech |
|  | Thinking/processing animation |
|  | Start-of-session greeting |
|  | End-of-session goodbye |
|  | Error acknowledgment |
|  | Quick nod without speaking |
Dance Moves
| Tool | Description |
|---|---|
|  | Perform a dance move |
|  | Dance while speaking |
|  | Major milestone celebration |
|  | After fixing a tricky bug |
Smart Features
| Tool | Description |
|---|---|
|  | Auto-analyze output and react appropriately |
|  | Context-aware greeting based on history |
|  | Search past solutions across projects |
|  | Save problem-solution pairs for the future |
|  | Mark relationships between projects |
Utilities
| Tool | Description |
|---|---|
|  | List available emotions |
|  | List available dance moves |
|  | Memory statistics across sessions |
|  | All projects Reachy remembers |
Available Emotions
Available Dances
- **Celebrations:** `celebrate`, `victory`, `playful`, `party`
- **Acknowledgments:** `nod`, `agree`, `listening`, `acknowledge`
- **Reactions:** `mind_blown`, `recovered`, `fixed_it`, `whoa`
- **Subtle:** `idle`, `processing`, `waiting`, `thinking_dance`
- **Expressive:** `peek`, `glance`, `sharp`, `funky`, `smooth`, `spiral`
Usage Examples
Claude can call these tools during coding sessions:
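For example, prompts along these lines follow the @-mention pattern shown earlier (the wording is illustrative):

```
@Reachy Claude MCP Celebrate that all the tests finally pass!
@Reachy Claude MCP Greet me, it's the start of a new session.
@Reachy Claude MCP Have we solved an error like this in another project?
```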
Architecture
Troubleshooting
Voice model not found
The voice model auto-downloads on first use. If you have issues:
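If the download was interrupted or corrupted, deleting the cached model forces a fresh download on the next run. The path below is illustrative; your configured data directory may differ:

```shell
# Illustrative path -- adjust to your configured data directory
rm -rf ~/.reachy-claude/voices
```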
No audio on Linux
Install PulseAudio or ALSA utilities:
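On Debian/Ubuntu, for example (package names vary by distribution):

```shell
sudo apt-get install -y alsa-utils pulseaudio-utils

# Quick playback test (ships with alsa-utils)
speaker-test -t wav -c 2
```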
LLM not working
Check which backend is available:
- **MLX:** only works on Apple Silicon Macs. Install with `pip install "reachy-claude-mcp[llm]"`.
- **Ollama:** make sure Ollama is running (`ollama serve`) and that you've pulled a model (`ollama pull qwen2.5:1.5b`).
If neither is available, the system falls back to keyword-based sentiment detection (still works, just less smart).
Qdrant connection failed
Make sure Qdrant is running:
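The quickest way to run Qdrant locally is its official Docker image (standard Qdrant quickstart; the ports are Qdrant's defaults):

```shell
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
```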
Or point to a remote Qdrant instance:
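For a remote instance, point the host/port settings at it. The variable names here are placeholders, not the package's documented names:

```shell
# Placeholder variable names -- substitute the package's actual settings
export QDRANT_HOST=qdrant.example.com
export QDRANT_PORT=6333
```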
Simulation won't start
If `mjpython` isn't found, you may need to install MuJoCo separately or use regular Python:
On Linux, you may need to set `MUJOCO_GL=egl` or `MUJOCO_GL=osmesa` for headless rendering.
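Combining the two workarounds (the command mirrors the Quick Start invocation):

```shell
# Regular Python instead of mjpython, with headless rendering on Linux
MUJOCO_GL=egl python -m reachy_mini.daemon.app.main --sim --scene minimal
```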
License
MIT