robot_behavior
Control robot behaviors including animation, camera, navigation, and manipulation operations through a unified interface for physical and virtual robots.
Instructions
Robot behavior control portmanteau - Animation, camera, navigation, and manipulation.
PORTMANTEAU PATTERN: Consolidates animation, camera, navigation, and manipulation operations into a single unified tool. This reduces tool explosion while maintaining full functionality across all behavior categories.
CATEGORIES:
animation: Animation and pose control
camera: Camera feed and visual control
navigation: Path planning and navigation
manipulation: Arm and gripper control
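Every call supplies the same three required arguments — robot_id, category, and action — plus only the optional parameters the chosen category uses. A minimal sketch (the robot_id is an illustrative value taken from the examples below):

```python
# Minimal call: a state query needs no optional parameters.
# "scout_01" is an illustrative robot_id from the examples below.
result = await robot_behavior(
    robot_id="scout_01",
    category="animation",
    action="get_animation_state",
)
```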
ANIMATION ACTIONS:
animate_wheels: Rotate wheels during movement (Scout mecanum wheels)
animate_movement: Play movement animations (walk, turn, etc.)
set_pose: Set robot pose (sitting, standing, etc. for Unitree)
play_animation: Play custom animations
stop_animation: Stop current animation
get_animation_state: Get current animation state
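For instance, the set_pose action above might be invoked as follows; the robot_id and pose name are illustrative (set_pose targets Unitree-style robots per the description):

```python
# Illustrative sketch: put a Unitree-style robot into a named pose.
# "g1_01" and "sitting" are example values, not verified pose names.
result = await robot_behavior(
    robot_id="g1_01",
    category="animation",
    action="set_pose",
    pose="sitting",
)
```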
CAMERA ACTIONS:
get_camera_feed: Get live camera feed (physical Scout camera)
get_virtual_camera: Get Unity camera view from robot perspective
set_camera_angle: Adjust camera angle
capture_image: Capture still image
start_streaming: Start video stream
stop_streaming: Stop video stream
get_camera_status: Get camera status and settings
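As a sketch, adjusting the camera and then starting a stream could look like this; the angle values and stream URL are illustrative placeholders:

```python
# Tilt the camera, then start streaming to a placeholder URL.
result = await robot_behavior(
    robot_id="scout_01",
    category="camera",
    action="set_camera_angle",
    angle_x=-15.0,  # pitch in degrees (illustrative)
    angle_y=30.0,   # yaw in degrees (illustrative)
)
result = await robot_behavior(
    robot_id="scout_01",
    category="camera",
    action="start_streaming",
    stream_url="rtsp://example.local/scout_01",  # placeholder URL
)
```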
NAVIGATION ACTIONS:
plan_path: Plan path from A to B (A* or RRT)
follow_path: Execute planned path
set_waypoint: Set navigation waypoint
clear_waypoints: Clear waypoint list
get_path_status: Check path execution status
avoid_obstacle: Dynamic obstacle avoidance
get_current_path: Get current path being followed
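A typical sequence is to plan a path and then execute it. The path_id below is a placeholder; in practice it would come from the plan_path result, whose exact shape is not specified here:

```python
# Plan a path, then follow it. "path_001" is a placeholder id;
# a real id would be taken from the plan_path result.
plan = await robot_behavior(
    robot_id="scout_01",
    category="navigation",
    action="plan_path",
    start_position={"x": 0, "y": 0, "z": 0},
    goal_position={"x": 5, "y": 0, "z": 0},
)
result = await robot_behavior(
    robot_id="scout_01",
    category="navigation",
    action="follow_path",
    path_id="path_001",  # placeholder; use the id returned by plan_path
)
```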
MANIPULATION ACTIONS:
move_arm: Move arm to target joint positions or end-effector pose
set_joint_positions: Set individual joint positions (dict of joint_name: angle)
set_end_effector_pose: Move end-effector to target pose (position + orientation)
get_arm_state: Get current arm joint positions and end-effector pose
open_gripper: Open gripper fully
close_gripper: Close gripper fully
set_gripper_position: Set gripper position (0.0 = open, 1.0 = closed)
get_gripper_state: Get current gripper position and force feedback
move_to_pose: Move arm to target pose with IK (inverse kinematics)
home_arm: Return arm to home/rest position
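For example, commanding a half-closed gripper with a force limit and then reading the arm state back might look like this; the numeric values are illustrative:

```python
# Illustrative sketch: close the gripper halfway with a force limit,
# then query the arm state. Numeric values are examples only.
result = await robot_behavior(
    robot_id="g1_01",
    category="manipulation",
    action="set_gripper_position",
    gripper_position=0.5,    # 0.0 = open, 1.0 = closed
    force_limit=10.0,        # illustrative force/torque limit
    manipulation_speed=0.3,  # 0.0-1.0
)
state = await robot_behavior(
    robot_id="g1_01",
    category="manipulation",
    action="get_arm_state",
)
```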
Args:
robot_id: Robot identifier (e.g., "scout_01", "vbot_scout_01").
category: Behavior category: "animation", "camera", "navigation", or "manipulation".
action: Action to perform (see category-specific actions above).
Animation parameters (used when category="animation"):
wheel_speeds: Wheel speeds for animate_wheels.
animation_name: Animation name for play_animation.
pose: Pose name for set_pose.
animation_speed: Animation speed multiplier.
loop: Whether to loop the animation.
Camera parameters (used when category="camera"):
angle_x: Camera angle X (pitch) in degrees.
angle_y: Camera angle Y (yaw) in degrees.
output_path: Output file path for capture_image.
stream_url: Stream URL for start_streaming.
Navigation parameters (used when category="navigation"):
start_position: Start position (x, y, z) for plan_path.
goal_position: Goal position (x, y, z) for plan_path.
waypoint: Waypoint position (x, y, z) for set_waypoint.
obstacle_position: Obstacle position (x, y, z) for avoid_obstacle.
path_id: Path identifier for follow_path or get_path_status.
Manipulation parameters (used when category="manipulation"):
joint_positions: Joint positions dict (e.g., {"shoulder": 45.0, "elbow": 90.0}).
end_effector_pose: End-effector pose dict with position and orientation.
gripper_position: Gripper position (0.0 = open, 1.0 = closed).
arm_id: Arm identifier for multi-arm robots (e.g., "left", "right").
force_limit: Maximum force/torque limit for movement.
manipulation_speed: Movement speed (0.0-1.0) for arm/gripper motion.
Returns: Dictionary containing operation result.
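The exact keys depend on the format_success_response and format_error_response helpers shown in the Implementation Reference below; a plausible result shape, with illustrative values, is:

```python
# Illustrative shape only: the exact keys are produced by the
# format_success_response / format_error_response helpers and may differ.
{
    "success": True,                # assumed convention, not confirmed by the source
    "message": "Animation play_animation executed for scout_01",
    "robot_id": "scout_01",
    "category": "animation",
    "action": "play_animation",
    "data": {"animation": "walk"},  # backend-specific payload (illustrative)
}

# Error results include an error_type, e.g. "not_found" or "validation_error":
{
    "success": False,               # assumed convention
    "error": "Robot scout_99 not found",  # illustrative message; key name assumed
    "error_type": "not_found",
}
```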
Examples:

```python
# Animation
result = await robot_behavior(
    robot_id="scout_01",
    category="animation",
    action="play_animation",
    animation_name="walk",
    animation_speed=1.0,
    loop=True,
)

# Camera
result = await robot_behavior(
    robot_id="scout_01",
    category="camera",
    action="capture_image",
    output_path="C:/Images/capture.jpg",
)

# Navigation
result = await robot_behavior(
    robot_id="scout_01",
    category="navigation",
    action="plan_path",
    start_position={"x": 0, "y": 0, "z": 0},
    goal_position={"x": 5, "y": 0, "z": 0},
)

# Manipulation - Move arm
result = await robot_behavior(
    robot_id="g1_01",
    category="manipulation",
    action="move_arm",
    joint_positions={"shoulder": 45.0, "elbow": 90.0, "wrist": 0.0},
)

# Manipulation - Open gripper
result = await robot_behavior(
    robot_id="g1_01",
    category="manipulation",
    action="open_gripper",
)
```
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| robot_id | Yes | Robot identifier (e.g., "scout_01", "vbot_scout_01") | |
| category | Yes | Behavior category: "animation", "camera", "navigation", or "manipulation" | |
| action | Yes | Action to perform (see category-specific actions above) | |
| wheel_speeds | No | Wheel speeds for animate_wheels | None |
| animation_name | No | Animation name for play_animation | None |
| pose | No | Pose name for set_pose | None |
| animation_speed | No | Animation speed multiplier | None |
| loop | No | Whether to loop the animation | False |
| angle_x | No | Camera angle X (pitch) in degrees | None |
| angle_y | No | Camera angle Y (yaw) in degrees | None |
| output_path | No | Output file path for capture_image | None |
| stream_url | No | Stream URL for start_streaming | None |
| start_position | No | Start position (x, y, z) for plan_path | None |
| goal_position | No | Goal position (x, y, z) for plan_path | None |
| waypoint | No | Waypoint position (x, y, z) for set_waypoint | None |
| obstacle_position | No | Obstacle position (x, y, z) for avoid_obstacle | None |
| path_id | No | Path identifier for follow_path or get_path_status | None |
| joint_positions | No | Joint positions dict (e.g., {"shoulder": 45.0, "elbow": 90.0}) | None |
| end_effector_pose | No | End-effector pose dict with position and orientation | None |
| gripper_position | No | Gripper position (0.0 = open, 1.0 = closed) | None |
| arm_id | No | Arm identifier for multi-arm robots (e.g., "left", "right") | None |
| force_limit | No | Maximum force/torque limit for movement | None |
| manipulation_speed | No | Movement speed (0.0-1.0) for arm/gripper motion | None |
Implementation Reference
- Main handler function for the 'robot_behavior' MCP tool. This portmanteau function routes animation, camera, navigation, and manipulation behaviors to specialized sub-handlers based on the 'category' parameter. The full input schema is expressed via type hints, and the docstring (reproduced in the Instructions section above) documents all categories, actions, parameters, and examples.

```python
@self.mcp.tool()
async def robot_behavior(
    robot_id: str,
    category: Literal["animation", "camera", "navigation", "manipulation"],
    action: str,
    # Animation parameters
    wheel_speeds: Optional[Dict[str, float]] = None,
    animation_name: Optional[str] = None,
    pose: Optional[str] = None,
    animation_speed: Optional[float] = None,
    loop: bool = False,
    # Camera parameters
    angle_x: Optional[float] = None,
    angle_y: Optional[float] = None,
    output_path: Optional[str] = None,
    stream_url: Optional[str] = None,
    # Navigation parameters
    start_position: Optional[Dict[str, float]] = None,
    goal_position: Optional[Dict[str, float]] = None,
    waypoint: Optional[Dict[str, float]] = None,
    obstacle_position: Optional[Dict[str, float]] = None,
    path_id: Optional[str] = None,
    # Manipulation parameters
    joint_positions: Optional[Dict[str, float]] = None,
    end_effector_pose: Optional[Dict[str, Any]] = None,
    gripper_position: Optional[float] = None,
    arm_id: Optional[str] = None,
    force_limit: Optional[float] = None,
    manipulation_speed: Optional[float] = None,
) -> Dict[str, Any]:
    """Robot behavior control portmanteau - Animation, camera, navigation, and manipulation.

    Full documentation of categories, actions, parameters, and examples is given
    in the Instructions section above.
    """
    try:
        robot = self.state_manager.get_robot(robot_id)
        if not robot:
            return format_error_response(
                f"Robot {robot_id} not found",
                error_type="not_found",
                robot_id=robot_id,
                category=category,
                action=action,
            )

        # Route to category handler
        if category == "animation":
            return await self._handle_animation(
                robot, action, wheel_speeds, animation_name, pose, animation_speed, loop
            )
        elif category == "camera":
            return await self._handle_camera(robot, action, angle_x, angle_y, output_path, stream_url)
        elif category == "navigation":
            return await self._handle_navigation(
                robot, action, start_position, goal_position, waypoint, obstacle_position, path_id
            )
        elif category == "manipulation":
            return await self._handle_manipulation(
                robot, action, joint_positions, end_effector_pose, gripper_position,
                arm_id, force_limit, manipulation_speed
            )
        else:
            return format_error_response(
                f"Unknown category: {category}",
                error_type="validation_error",
                valid_categories=["animation", "camera", "navigation", "manipulation"],
            )
    except Exception as e:
        return handle_tool_error("robot_behavior", e, robot_id=robot_id, category=category, action=action)
```
- src/robotics_mcp/server.py:477-478 (registration) — Registration of the RobotBehaviorTool instance with the MCP server, called from the _register_tools method of the RoboticsMCP class.

```python
self.robot_behavior.register()  # Behavior: animation, camera, navigation, manipulation
logger.debug("Registered robot_behavior tool")
```
- src/robotics_mcp/server.py:185-185 (registration) — Instantiation of the RobotBehaviorTool class in the RoboticsMCP server __init__ method.

```python
self.robot_behavior = RobotBehaviorTool(self.mcp, self.state_manager, self.mounted_servers)
```
- Detailed input/output schema and usage documentation for the robot_behavior tool, covering all supported categories, actions, parameters, and examples. This is the tool docstring, reproduced in full in the Instructions section above.
- Helper function for handling virtual animation behaviors, dispatching to Unity RobotAnimator methods via mounted-server tool calls, with a mock fallback when Unity/VRChat is unavailable.

```python
async def _handle_virtual_animation(
    self,
    robot: Any,
    action: str,
    wheel_speeds: Optional[Dict[str, float]],
    animation_name: Optional[str],
    pose: Optional[str],
    animation_speed: Optional[float],
    loop: bool,
) -> Dict[str, Any]:
    """Handle virtual robot animation."""
    try:
        if robot.platform == "unity" and "unity" in self.mounted_servers:
            # Map each animation action to a Unity RobotAnimator method and its parameters.
            method_map = {
                "animate_wheels": ("RobotAnimator", "AnimateWheels",
                                   {"robotId": robot.robot_id, "wheelSpeeds": wheel_speeds or {}}),
                "animate_movement": ("RobotAnimator", "AnimateMovement",
                                     {"robotId": robot.robot_id, "animationName": animation_name or "walk",
                                      "speed": animation_speed or 1.0, "loop": loop}),
                "set_pose": ("RobotAnimator", "SetPose",
                             {"robotId": robot.robot_id, "pose": pose or "idle"}),
                "play_animation": ("RobotAnimator", "PlayAnimation",
                                   {"robotId": robot.robot_id, "animationName": animation_name or "idle",
                                    "speed": animation_speed or 1.0, "loop": loop}),
                "stop_animation": ("RobotAnimator", "StopAnimation", {"robotId": robot.robot_id}),
                "get_animation_state": ("RobotAnimator", "GetAnimationState", {"robotId": robot.robot_id}),
            }
            if action not in method_map:
                return format_error_response(f"Unknown animation action: {action}", error_type="validation_error")

            class_name, method_name, params = method_map[action]
            result = await call_mounted_server_tool(
                self.mounted_servers,
                "unity",
                "execute_unity_method",
                {
                    "class_name": class_name,
                    "method_name": method_name,
                    "parameters": params,
                },
            )
            return format_success_response(
                f"Animation {action} executed for {robot.robot_id}",
                robot_id=robot.robot_id,
                category="animation",
                action=action,
                data=result,
            )
        else:
            logger.info("Mock animation", robot_id=robot.robot_id, action=action)
            return format_success_response(
                f"Mock animation: {action} for {robot.robot_id}",
                robot_id=robot.robot_id,
                category="animation",
                action=action,
                data={"note": "Mock mode - Unity/VRChat not available"},
            )
    except Exception as e:
        return handle_tool_error("_handle_virtual_animation", e, robot_id=robot.robot_id, action=action)
```
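Under the hood, each animation action therefore resolves to a Unity RobotAnimator call through the mounted unity server, as the method_map above shows. For illustration, a set_pose request for robot "g1_01" would be forwarded roughly as the following execute_unity_method arguments (the robot id and pose value are illustrative):

```python
# Illustrative payload derived from the method_map above for action="set_pose".
{
    "class_name": "RobotAnimator",
    "method_name": "SetPose",
    "parameters": {"robotId": "g1_01", "pose": "sitting"},  # "sitting" is an example pose
}
```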