Reachy Mini MCP Server

by PixelML

An MCP (Model Context Protocol) server for controlling Reachy Mini robots. This allows Claude Desktop and other MCP clients to interact with Reachy Mini robots through natural language.

Features

  • Dance: Play choreographed dance moves

  • Emotions: Express pre-recorded emotions

  • Head Movement: Move head in different directions

  • Camera: Capture images from the robot's camera

  • Head Tracking: Enable face tracking mode

  • Real-Time Local TTS: Text-to-speech runs entirely on-device with streaming audio; no cloud APIs, no API costs, and minimal latency

  • Motion Control: Stop motions and query robot status

Installation

# Clone the repository
cd reachy-mini-mcp

# Create virtual environment
uv venv --python 3.10
source .venv/bin/activate

# Install dependencies
uv pip install -e .

# Optional: Install camera support
uv pip install -e ".[camera]"

# Optional: Install speech support (text-to-speech)
uv pip install -e ".[speech]"

Configuration

Copy .env.example to .env and configure:

cp .env.example .env

Available environment variables:

| Variable | Description | Default |
| --- | --- | --- |
| REACHY_MINI_ROBOT_NAME | Robot name for Zenoh discovery | reachy-mini |
| REACHY_MINI_ENABLE_CAMERA | Enable camera capture | false |
| REACHY_MINI_HEAD_TRACKING_ENABLED | Start with head tracking enabled | false |
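For example, a minimal .env that enables the camera might look like this (the values shown are illustrative, not required):

```shell
# .env: settings read by reachy-mini-mcp at startup
REACHY_MINI_ROBOT_NAME=reachy-mini
REACHY_MINI_ENABLE_CAMERA=true
REACHY_MINI_HEAD_TRACKING_ENABLED=false
```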

Usage

Running the server directly

reachy-mini-mcp

Claude Code CLI

Add the MCP server using the claude mcp add command:

# Build from source (after cloning the repo)
claude mcp add --transport stdio reachy-mini -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

# With camera support enabled
claude mcp add --transport stdio reachy-mini --env REACHY_MINI_ENABLE_CAMERA=true -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

# With custom robot name
claude mcp add --transport stdio reachy-mini --env REACHY_MINI_ROBOT_NAME=my-robot -- bash -c "cd /path/to/reachy-mini-mcp && uv run reachy-mini-mcp"

Claude Desktop Integration

Add to your Claude Desktop configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json

  • Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "reachy-mini": {
      "command": "reachy-mini-mcp",
      "env": {
        "REACHY_MINI_ENABLE_CAMERA": "true"
      }
    }
  }
}

If using a virtual environment:

{
  "mcpServers": {
    "reachy-mini": {
      "command": "/path/to/reachy-mini-mcp/.venv/bin/reachy-mini-mcp",
      "env": {
        "REACHY_MINI_ENABLE_CAMERA": "true"
      }
    }
  }
}
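If you already have other MCP servers configured, merge the reachy-mini entry into the existing file rather than overwriting it. A minimal Python sketch (the helper name and the existing server shown are illustrative placeholders):

```python
import json

def add_reachy_server(config: dict) -> dict:
    """Merge the reachy-mini entry into a Claude Desktop config dict
    without disturbing any other configured servers."""
    servers = config.setdefault("mcpServers", {})
    servers["reachy-mini"] = {
        "command": "reachy-mini-mcp",
        "env": {"REACHY_MINI_ENABLE_CAMERA": "true"},
    }
    return config

# Example: a config that already has another server keeps it after the merge.
existing = {"mcpServers": {"other-server": {"command": "other-mcp"}}}
merged = add_reachy_server(existing)
print(json.dumps(merged, indent=2))
```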

Available Tools

dance

Play a dance move on the robot.

Parameters:

  • move (string, optional): Dance name or "random". Default: "random"

  • repeat (integer, optional): Number of times to repeat. Default: 1

Available moves: simple_nod, head_tilt_roll, side_to_side_sway, dizzy_spin, stumble_and_recover, interwoven_spirals, sharp_side_tilt, side_peekaboo, yeah_nod, uh_huh_tilt, neck_recoil, chin_lead, groovy_sway_and_roll, chicken_peck, side_glance_flick, polyrhythm_combo, grid_snap, pendulum_swing, jackson_square
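Over MCP, a client invokes this tool with a standard tools/call request. A sketch of the JSON-RPC payload a client would send (the request id and chosen move are arbitrary):

```python
import json

# JSON-RPC 2.0 "tools/call" request asking the server to play
# the "dizzy_spin" move twice.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "dance",
        "arguments": {"move": "dizzy_spin", "repeat": 2},
    },
}
print(json.dumps(request, indent=2))
```

The other tools below follow the same shape, differing only in the tool name and arguments.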

play_emotion

Play a pre-recorded emotion.

Parameters:

  • emotion (string, required): Name of the emotion to play

move_head

Move the robot's head in a direction.

Parameters:

  • direction (string, required): One of "left", "right", "up", "down", "front"

  • duration (float, optional): Movement duration in seconds. Default: 1.0

camera

Capture an image from the robot's camera.

Returns: Base64-encoded JPEG image

Note: Requires REACHY_MINI_ENABLE_CAMERA=true
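Clients receive the image as a base64 string. A sketch of decoding and sanity-checking the result (the payload below is a stand-in, not real camera output):

```python
import base64

# Stand-in for the base64 string the camera tool returns; a real
# response is a full JPEG, but the magic bytes are the same.
fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16 + b"\xff\xd9"
payload = base64.b64encode(fake_jpeg).decode("ascii")

# Decode, verify the JPEG start-of-image marker, and save to disk.
data = base64.b64decode(payload)
assert data[:3] == b"\xff\xd8\xff", "not a JPEG"
with open("capture.jpg", "wb") as f:
    f.write(data)
```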

head_tracking

Toggle head tracking mode.

Parameters:

  • enabled (boolean, required): True to enable, False to disable

stop_motion

Stop all current and queued motions immediately.

speak

Make the robot speak using real-time local text-to-speech with natural head movement animation.

Parameters:

  • text (string, required): The text to speak

  • voice (string, optional): Voice to use. Default: "alba"

Available voices: alba, marius, javert, jean, fantine, cosette, eponine, azelma

Note: Requires pocket-tts package. Install with uv pip install -e ".[speech]"

Key highlights:

  • 100% Local: Runs entirely on your machine; no internet connection required after installation

  • Real-Time Streaming: Audio is generated and streamed as it is synthesized, so speech starts quickly

  • Zero API Costs: No cloud TTS services, no per-character fees, unlimited usage

  • Low Latency: Direct local processing keeps the delay between text input and speech output minimal

  • Privacy: Your text never leaves your device

The robot's head will naturally sway and move while speaking, creating a more lifelike interaction.

get_status

Get the current robot status including connection state, queue size, and current pose.

Requirements

  • Python 3.10+

  • Reachy Mini SDK (reachy_mini>=1.2.7)

  • Running reachy-mini-daemon or simulation

  • Zenoh network connectivity to the robot

Development

# Install dev dependencies
uv pip install -e ".[dev]"

# Run linter
ruff check .

# Run type checker
mypy src/

License

MIT
