1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Reachy Mini MCP Server say 'Hello everyone' and perform the groovy_sway_and_roll dance".
4. That's it! The server will respond to your query, and you can continue using it as needed.
Reachy Mini MCP Server
An MCP (Model Context Protocol) server for controlling Reachy Mini robots. This allows Claude Desktop and other MCP clients to interact with Reachy Mini robots through natural language.
Features
- Dance: Play choreographed dance moves
- Emotions: Express pre-recorded emotions
- Head Movement: Move head in different directions
- Camera: Capture images from the robot's camera
- Head Tracking: Enable face tracking mode
- 🎤 Real-Time Local TTS: Text-to-speech runs entirely on-device with streaming audio - no cloud APIs, low latency, no API costs
- Motion Control: Stop motions and query robot status
Installation
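The install commands aren't preserved in this extract. Judging from the `uv pip install -e ".[speech]"` note under the speak tool below, an editable install from the repository root would look like this sketch:

```bash
# Editable install of the server (inferred from the speak tool's note; a sketch, not verbatim docs)
uv pip install -e .

# Optional extra for the speak tool's local text-to-speech
uv pip install -e ".[speech]"
```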
Configuration
Copy .env.example to .env and configure:
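```bash
cp .env.example .env
# then edit .env to set the variables below
```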
Available environment variables:
| Variable | Description | Default |
| --- | --- | --- |
| | Robot name for Zenoh discovery | |
| `REACHY_MINI_ENABLE_CAMERA` | Enable camera capture | |
| | Start with head tracking enabled | |
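As a minimal example, the camera flag (the only variable name preserved in this README, via the camera tool's note below) can be set in `.env` like so:

```bash
# .env
REACHY_MINI_ENABLE_CAMERA=true
```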
Usage
Running the server directly
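The exact launch command isn't preserved in this extract. Assuming a uv-managed checkout with a hypothetical `reachy-mini-mcp` entry point, a direct run would look like:

```bash
# Hypothetical entry point; substitute the project's actual script or module
uv run reachy-mini-mcp
```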
Claude Code CLI
Add the MCP server using the claude mcp add command:
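The exact registration line isn't preserved; with the same hypothetical entry point, it would follow the standard `claude mcp add <name> -- <command>` form:

```bash
claude mcp add reachy-mini -- uv run reachy-mini-mcp
```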
Claude Desktop Integration
Add to your Claude Desktop configuration file:
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
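The JSON snippet itself isn't preserved here; a standard `mcpServers` entry, using the same hypothetical entry point and a placeholder path, would be:

```json
{
  "mcpServers": {
    "reachy-mini": {
      "command": "uv",
      "args": ["--directory", "/path/to/reachy-mini-mcp", "run", "reachy-mini-mcp"]
    }
  }
}
```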
If using a virtual environment, point `command` at the venv's Python interpreter instead; the module name below is a placeholder:
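```json
{
  "mcpServers": {
    "reachy-mini": {
      "command": "/path/to/venv/bin/python",
      "args": ["-m", "reachy_mini_mcp"]
    }
  }
}
```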
Available Tools
dance
Play a dance move on the robot.
Parameters:
- `move` (string, optional): Dance name or "random". Default: "random"
- `repeat` (integer, optional): Number of times to repeat. Default: 1
Available moves: simple_nod, head_tilt_roll, side_to_side_sway, dizzy_spin, stumble_and_recover, interwoven_spirals, sharp_side_tilt, side_peekaboo, yeah_nod, uh_huh_tilt, neck_recoil, chin_lead, groovy_sway_and_roll, chicken_peck, side_glance_flick, polyrhythm_combo, grid_snap, pendulum_swing, jackson_square
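For illustration, an MCP `tools/call` payload for this tool (request shape per the MCP spec; the argument values are examples) might look like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "dance",
    "arguments": { "move": "groovy_sway_and_roll", "repeat": 2 }
  }
}
```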
play_emotion
Play a pre-recorded emotion.
Parameters:
- `emotion` (string, required): Name of the emotion to play
move_head
Move the robot's head in a direction.
Parameters:
- `direction` (string, required): One of "left", "right", "up", "down", "front"
- `duration` (float, optional): Movement duration in seconds. Default: 1.0
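For example, a one-and-a-half-second glance to the left would use the arguments:

```json
{ "direction": "left", "duration": 1.5 }
```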
camera
Capture an image from the robot's camera.
Returns: Base64-encoded JPEG image
Note: Requires `REACHY_MINI_ENABLE_CAMERA=true`
head_tracking
Toggle head tracking mode.
Parameters:
- `enabled` (boolean, required): True to enable, False to disable
stop_motion
Stop all current and queued motions immediately.
speak
Make the robot speak using real-time local text-to-speech with natural head movement animation.
Parameters:
- `text` (string, required): The text to speak
- `voice` (string, optional): Voice to use. Default: "alba"
Available voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma
Note: Requires the `pocket-tts` package. Install with `uv pip install -e ".[speech]"`
Key highlights:
- 100% Local: Runs entirely on your machine - no internet connection required after installation
- Real-Time Streaming: Audio is generated and streamed in real-time for instant response
- Zero API Costs: No cloud TTS services, no per-character fees, unlimited usage
- Low Latency: Direct local processing means minimal delay between text input and speech output
- Privacy: Your text never leaves your device
The robot's head will naturally sway and move while speaking, creating a more lifelike interaction.
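Putting the parameters together, a sample `tools/call` payload (illustrative values, shape per the MCP spec):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "speak",
    "arguments": { "text": "Hello everyone", "voice": "alba" }
  }
}
```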
get_status
Get the current robot status including connection state, queue size, and current pose.
Requirements
- Python 3.10+
- Reachy Mini SDK (`reachy_mini>=1.2.7`)
- Running `reachy-mini-daemon` or simulation
- Zenoh network connectivity to the robot
Development
License
MIT