Utilizes Ollama as an LLM backend to provide the robot with smart sentiment analysis and AI-generated responses based on the context of coding tasks.
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Reachy Claude MCP Celebrate that I finally fixed that tricky bug!"

That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Reachy Claude MCP
MCP server that brings Reachy Mini to life as your coding companion in Claude Code.
Reachy reacts to your coding sessions with emotions, speech, and celebratory dances - making coding more interactive and fun!
Features
| Feature | Basic | + LLM | + Memory |
|---------|-------|-------|----------|
| Robot emotions & animations | ✅ | ✅ | ✅ |
| Text-to-speech (Piper TTS) | ✅ | ✅ | ✅ |
| Session tracking (SQLite) | ✅ | ✅ | ✅ |
| Smart sentiment analysis | ❌ | ✅ | ✅ |
| AI-generated responses | ❌ | ✅ | ✅ |
| Semantic problem search | ❌ | ❌ | ✅ |
| Cross-project memory | ❌ | ❌ | ✅ |
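As a rough illustration of the session-tracking tier, a minimal SQLite session log might look like the sketch below. The schema is hypothetical; the package's actual `database.py` layout is not documented in this README.

```python
import sqlite3

def init_session_db(path: str = ":memory:") -> sqlite3.Connection:
    """Create a toy sessions table (hypothetical schema for illustration)."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS sessions (
               id INTEGER PRIMARY KEY,
               project TEXT NOT NULL,
               started_at TEXT DEFAULT CURRENT_TIMESTAMP,
               summary TEXT
           )"""
    )
    return conn

def record_session(conn: sqlite3.Connection, project: str, summary: str) -> None:
    """Append one session record for a project."""
    conn.execute(
        "INSERT INTO sessions (project, summary) VALUES (?, ?)",
        (project, summary),
    )
    conn.commit()
```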
Requirements
Python 3.10+
Reachy Mini robot or the simulation (see below)
Audio output (speakers/headphones)
Platform Support
| Platform | Basic | LLM (MLX) | LLM (Ollama) | Memory |
|----------|-------|-----------|--------------|--------|
| macOS Apple Silicon | ✅ | ✅ | ✅ | ✅ |
| macOS Intel | ✅ | ❌ | ✅ | ✅ |
| Linux | ✅ | ❌ | ✅ | ✅ |
| Windows | ⚠️ Experimental | ❌ | ✅ | ✅ |
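The MLX column above boils down to one condition: Apple Silicon macOS. A small capability check could express that rule (this is an illustration, not code from the package):

```python
import platform
import sys

def pick_llm_backend(system: str = sys.platform, machine: str = platform.machine()) -> str:
    """Mirror the platform table: MLX is only viable on Apple Silicon
    macOS; everywhere else, Ollama is the LLM option."""
    if system == "darwin" and machine == "arm64":
        return "mlx"
    return "ollama"
```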
Quick Start
1. Install the package:

```bash
pip install reachy-claude-mcp
```

2. Start the Reachy Mini simulation (if you don't have the physical robot):

```bash
# On macOS with Apple Silicon
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

# On other platforms
python -m reachy_mini.daemon.app.main --sim --scene minimal
```

3. Add to Claude Code (`~/.mcp.json`):

```json
{
  "mcpServers": {
    "reachy-claude": {
      "command": "reachy-claude"
    }
  }
}
```

4. Start Claude Code and Reachy will react to your coding!

5. (Optional) Add instructions for Claude: copy `examples/CLAUDE.md` to your project root or `~/projects/CLAUDE.md`. This teaches Claude when and how to use Reachy's tools effectively.
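If you prefer to script the `~/.mcp.json` edit, a helper along these lines would merge the entry without clobbering servers you already have configured (a hypothetical convenience, not part of the package):

```python
import json
from pathlib import Path

def register_server(config_path: Path) -> dict:
    """Merge the reachy-claude entry into an existing .mcp.json,
    preserving any previously configured MCP servers."""
    config = {}
    if config_path.exists():
        config = json.loads(config_path.read_text())
    config.setdefault("mcpServers", {})["reachy-claude"] = {
        "command": "reachy-claude"
    }
    config_path.write_text(json.dumps(config, indent=2))
    return config
```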
Installation Options
Basic (robot + TTS only)
```bash
pip install reachy-claude-mcp
```

Without LLM features, Reachy uses keyword matching for sentiment detection - it still works well!
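The keyword fallback can be pictured as a toy classifier like the one below: count positive versus negative cue words and emit a coarse label. The word lists here are invented for illustration; the package's actual keyword set is not documented in this README.

```python
# Hypothetical cue-word sets, chosen for illustration only.
POSITIVE = {"fixed", "passed", "passing", "works", "done", "success"}
NEGATIVE = {"error", "failed", "failing", "broken", "exception", "crash"}

def keyword_sentiment(text: str) -> str:
    """Coarse sentiment via keyword matching: positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```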
With LLM (Smart Responses)
Option A: MLX (Apple Silicon only - fastest)
```bash
pip install "reachy-claude-mcp[llm]"
```

Option B: Ollama (cross-platform)

```bash
# Install Ollama from https://ollama.ai
ollama pull qwen2.5:1.5b

# Then just use the basic install - Ollama is auto-detected
pip install reachy-claude-mcp
```

The system automatically picks the best available backend: MLX → Ollama → keyword fallback.
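The preference order reduces to a simple chain. A sketch of that selection logic, with availability flags standing in for whatever runtime probing the package actually does:

```python
def select_backend(mlx_available: bool, ollama_available: bool) -> str:
    """Documented preference order: MLX, then Ollama, then the
    keyword-matching fallback when no LLM backend is available."""
    if mlx_available:
        return "mlx"
    if ollama_available:
        return "ollama"
    return "keyword"
```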
Full Features (requires Qdrant)
```bash
pip install "reachy-claude-mcp[all]"

# Start Qdrant vector database
docker compose up -d
```

Development Install

```bash
git clone https://github.com/mchardysam/reachy-claude-mcp.git
cd reachy-claude-mcp

# Install with all features
pip install -e ".[all]"

# Or specific features
pip install -e ".[llm]"     # MLX sentiment analysis (Apple Silicon)
pip install -e ".[memory]"  # Qdrant vector store
```

Running Reachy Mini
No Robot? Use the Simulation!
You don't need a physical Reachy Mini to use this. The simulation works great:
```bash
# On macOS with Apple Silicon, use mjpython for the MuJoCo GUI
mjpython -m reachy_mini.daemon.app.main --sim --scene minimal

# On Linux/Windows/Intel Mac
python -m reachy_mini.daemon.app.main --sim --scene minimal
```

The simulation dashboard will be available at http://localhost:8000.
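To confirm the daemon is up before starting a session, a rough liveness probe can check whether anything is listening on the dashboard port (this helper is written for this README, not shipped with the package):

```python
import socket

def dashboard_reachable(host: str = "localhost", port: int = 8000,
                        timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to the dashboard port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            return s.connect_ex((host, port)) == 0
        except OSError:
            return False
```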
Physical Robot
Follow the Reachy Mini setup guide to connect to your physical robot.
Configuration
Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| | | Data directory for database, memory, voice models |
| LLM Settings | | |
| | | MLX model (Apple Silicon) |
| | | Ollama server URL |
| | | Ollama model name |
| Memory Settings | | |
| `REACHY_QDRANT_HOST` | | Qdrant server host |
| | | Qdrant server port |
| Voice Settings | | |
| | (auto-download) | Path to custom Piper voice model |
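Configuration like the table above is typically read from the environment with a fallback default. A minimal sketch, using `REACHY_QDRANT_HOST` (the only variable named elsewhere in this README; the `localhost` default is an assumption):

```python
import os

def load_config(env: dict = os.environ) -> dict:
    """Read settings from environment variables, falling back to
    assumed defaults when unset."""
    return {
        # Default of "localhost" is a guess for illustration.
        "qdrant_host": env.get("REACHY_QDRANT_HOST", "localhost"),
    }
```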
MCP Tools
Basic Interactions
| Tool | Description |
|------|-------------|
| `robot_respond` | Speak a summary (1-2 sentences) + play emotion |
| | Play emotion animation only |
| `robot_celebrate` | Success animation + excited speech |
| `robot_thinking` | Thinking/processing animation |
| `robot_wake_up` | Start-of-session greeting |
| `robot_sleep` | End-of-session goodbye |
| | Error acknowledgment |
| | Quick nod without speaking |
Dance Moves
| Tool | Description |
|------|-------------|
| | Perform a dance move |
| | Dance while speaking |
| `robot_big_celebration` | Major milestone celebration |
| | After fixing a tricky bug |
Smart Features
| Tool | Description |
|------|-------------|
| | Auto-analyze output and react appropriately |
| | Context-aware greeting based on history |
| | Search past solutions across projects |
| | Save problem-solution pairs for future use |
| | Mark relationships between projects |
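The idea behind semantic problem search can be illustrated with a toy cosine-similarity lookup. The real server uses Qdrant for this; the short vectors below are stand-ins for actual text embeddings.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search_problems(query_vec: list, stored: list) -> dict:
    """Return the stored problem/solution record whose embedding is
    most similar to the query embedding."""
    return max(stored, key=lambda item: cosine(query_vec, item["vector"]))
```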
Utilities
| Tool | Description |
|------|-------------|
| | List available emotions |
| | List available dance moves |
| | Memory statistics across sessions |
| | All projects Reachy remembers |
Available Emotions
```
amazed, angry, anxious, attentive, bored, calm, celebrate, come, confused,
curious, default, disgusted, done, excited, exhausted, frustrated, go_away,
grateful, happy, helpful, inquiring, irritated, laugh, lonely, lost, loving,
neutral, no, oops, proud, relieved, sad, scared, serene, shy, sleep, success,
surprised, thinking, tired, uncertain, understanding, wake_up, welcoming, yes
```

Available Dances

Celebrations: celebrate, victory, playful, party
Acknowledgments: nod, agree, listening, acknowledge
Reactions: mind_blown, recovered, fixed_it, whoa
Subtle: idle, processing, waiting, thinking_dance
Expressive: peek, glance, sharp, funky, smooth, spiral
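When calling the tools with an emotion name, a defensive pattern is to validate against the list above and fall back to a safe default for unknown names (illustrative only; the server may handle bad names differently):

```python
# Emotion names taken verbatim from the list above.
EMOTIONS = {
    "amazed", "angry", "anxious", "attentive", "bored", "calm", "celebrate",
    "come", "confused", "curious", "default", "disgusted", "done", "excited",
    "exhausted", "frustrated", "go_away", "grateful", "happy", "helpful",
    "inquiring", "irritated", "laugh", "lonely", "lost", "loving", "neutral",
    "no", "oops", "proud", "relieved", "sad", "scared", "serene", "shy",
    "sleep", "success", "surprised", "thinking", "tired", "uncertain",
    "understanding", "wake_up", "welcoming", "yes",
}

def validate_emotion(name: str) -> str:
    """Return the name if it is a known emotion, else fall back to 'neutral'."""
    return name if name in EMOTIONS else "neutral"
```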
Usage Examples
Claude can call these tools during coding sessions:
```python
# After completing a task
robot_respond(summary="Done! Fixed the type error.", emotion="happy")

# When celebrating a win
robot_celebrate(message="Tests are passing!")

# Big milestone
robot_big_celebration(message="All tests passing! Ship it!")

# When starting to think
robot_thinking()

# Session start
robot_wake_up(greeting="Good morning! Let's write some code!")

# Session end
robot_sleep(message="Great session! See you tomorrow.")
```

Architecture
```
src/reachy_claude_mcp/
├── server.py            # MCP server with tools
├── config.py            # Centralized configuration
├── robot_controller.py  # Reachy Mini control
├── tts.py               # Piper TTS (cross-platform)
├── memory.py            # Session memory manager
├── database.py          # SQLite project tracking
├── vector_store.py      # Qdrant semantic search
├── llm_backends.py      # LLM backend abstraction (MLX, Ollama)
└── llm_analyzer.py      # Sentiment analysis and summarization
```

Troubleshooting
Voice model not found
The voice model auto-downloads on first use. If you have issues:
```bash
# Manual download
mkdir -p ~/.reachy-claude/voices
curl -L -o ~/.reachy-claude/voices/en_US-lessac-medium.onnx \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx
curl -L -o ~/.reachy-claude/voices/en_US-lessac-medium.onnx.json \
  https://huggingface.co/rhasspy/piper-voices/resolve/main/en/en_US/lessac/medium/en_US-lessac-medium.onnx.json
```

No audio on Linux
Install PulseAudio or ALSA utilities:
```bash
# Ubuntu/Debian
sudo apt install pulseaudio-utils

# Fedora
sudo dnf install pulseaudio-utils
```

LLM not working
Check which backend is available:
- MLX: Only works on Apple Silicon Macs. Install with `pip install "reachy-claude-mcp[llm]"`.
- Ollama: Make sure Ollama is running (`ollama serve`) and you've pulled a model (`ollama pull qwen2.5:1.5b`).
If neither is available, the system falls back to keyword-based sentiment detection (still works, just less smart).
Qdrant connection failed
Make sure Qdrant is running:
```bash
docker compose up -d
```

Or point to a remote Qdrant instance:

```bash
export REACHY_QDRANT_HOST=your-qdrant-server.com
```

Simulation won't start
If mjpython isn't found, you may need to install MuJoCo separately or use regular Python:
```bash
# Try without mjpython
python -m reachy_mini.daemon.app.main --sim --scene minimal
```

On Linux, you may need to set `MUJOCO_GL=egl` or `MUJOCO_GL=osmesa` for headless rendering.
License
MIT