human_eye_tool
Request human visual assistance to describe environments, identify objects, or locate specific items through a human-operated interface.
Instructions
A human operator looks at the scene with their own eyes to describe the situation or to search for specific items.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Instruction describing what the human should observe (string) | (none) |
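For reference, a call to this tool takes a single prompt string and resolves to a dictionary with one observation string. The values below are illustrative placeholders, not output from the actual server.

```python
# Illustrative only: example argument and result shapes for human_eye_tool.
example_arguments = {"prompt": "Check whether the meeting room projector is switched on."}

# The handler returns a single-key dictionary once a human responds.
example_result = {"observation": "The projector is on and showing the standby screen."}
```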
Implementation Reference
- human_mcp/mcp_server.py:44-61 (handler) — The primary handler function for the human_eye_tool MCP tool. Decorated with @mcp.tool(), it generates a unique task ID, stores the observation prompt in the database via db_utils, awaits the human-provided result through asynchronous polling in wait_for_task_completion, and returns the observation as a dictionary.

```python
async def human_eye_tool(prompt: str, ctx: Context) -> Dict[str, str]:
    """人間が目で見て状況を説明したり、特定のものを探したりします。"""
    task_id = str(uuid.uuid4())
    instruction = f"👁️ 目を使って観察: {prompt}"  # "Observe with your eyes: {prompt}"

    # Add the task to the database
    db_utils.add_task(task_id, instruction)

    # Log progress
    sys.stderr.write(f"Human task created: {task_id}. Waiting for completion...\n")

    # Wait for the result (asynchronous polling)
    result = await wait_for_task_completion(task_id)

    # Log completion
    sys.stderr.write(f"Human task {task_id} completed.\n")

    return {"observation": result}
```
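The handler delegates the blocking part to wait_for_task_completion, whose body is not reproduced in this section. Below is a minimal sketch of such a polling helper, assuming db_utils exposes a get_task_result(task_id) accessor that returns None until a human operator submits a result; the accessor name and the polling interval are assumptions, not the actual human_mcp implementation.

```python
import asyncio

from human_mcp import db_utils  # module path assumed from the handler above


async def wait_for_task_completion(task_id: str, poll_interval: float = 1.0) -> str:
    """Poll the task store until a human operator records a result (sketch)."""
    while True:
        # get_task_result is a hypothetical accessor; the real db_utils API may differ.
        result = db_utils.get_task_result(task_id)
        if result is not None:
            return result
        await asyncio.sleep(poll_interval)  # yield to the event loop between polls
```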
- human_mcp/tools.py:5-22 (schema) — Explicit JSON schema definition for human_eye_tool, detailing the input (prompt string) and output (observation string) structures, descriptions, and required fields. Defined as an entry in the HUMAN_TOOLS list.

```python
{
    "name": "human_eye_tool",
    "description": "人間が目で見て状況を説明したり、特定のものを探したりします。",
    "input_schema": {
        "type": "object",
        "properties": {
            "prompt": {"type": "string", "description": "観察するための指示"}  # "Instruction for what to observe"
        },
        "required": ["prompt"]
    },
    "output_schema": {
        "type": "object",
        "properties": {
            "observation": {"type": "string", "description": "人間による観察結果"}  # "Observation result from the human"
        },
        "required": ["observation"]
    }
},
```
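Because the schema is declared explicitly, a client can check payloads before issuing a call. The snippet below validates an example argument dict against the input_schema using the third-party jsonschema package; this is an optional illustration, not something human_mcp itself performs.

```python
from jsonschema import validate  # pip install jsonschema

INPUT_SCHEMA = {
    "type": "object",
    "properties": {"prompt": {"type": "string"}},
    "required": ["prompt"],
}

# Raises jsonschema.ValidationError if the payload does not match the schema.
validate(instance={"prompt": "Look for the red toolbox on the shelf."}, schema=INPUT_SCHEMA)
```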
- human_mcp/mcp_server.py:41-43 (registration) — The FastMCP server instantiation and the @mcp.tool() decorator on the handler function, which registers human_eye_tool with the MCP server.

```python
mcp = FastMCP("human-mcp")

@mcp.tool()
```
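For context, a FastMCP server defined this way is typically started by calling its run method. How human_mcp actually launches the server (entry point and transport) is not shown in this section, so the snippet below is only an assumption based on common FastMCP usage.

```python
if __name__ == "__main__":
    # Serve the registered tools over stdio (the usual FastMCP transport).
    mcp.run(transport="stdio")
```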