human_eye_tool
Submit visual observation requests to humans for real-time analysis and descriptions through a Streamlit UI, enabling AI assistants to leverage human visual capabilities.
Instructions
A human visually inspects the scene to describe the situation or to look for a specific object. (The tool's description string is Japanese: 「人間が目で見て状況を説明したり、特定のものを探したりします。」)
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Instruction for what the human should observe (string; "観察するための指示") | — |
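For illustration, an MCP `tools/call` arguments payload for this tool might look like the following (the prompt text is invented):

```json
{
  "name": "human_eye_tool",
  "arguments": {
    "prompt": "Is the status LED on the printer green or red?"
  }
}
```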
Implementation Reference
- human_mcp/mcp_server.py:43-61 (handler) — The main execution handler for `human_eye_tool`. Decorated with `@mcp.tool()` for automatic registration with FastMCP. Creates a DB task from the prompt, polls asynchronously for the human-provided result, and returns it.

```python
@mcp.tool()
async def human_eye_tool(prompt: str, ctx: Context) -> Dict[str, str]:
    """人間が目で見て状況を説明したり、特定のものを探したりします。"""
    task_id = str(uuid.uuid4())
    instruction = f"👁️ 目を使って観察: {prompt}"

    # Add the task to the database
    db_utils.add_task(task_id, instruction)

    # Log task creation
    sys.stderr.write(f"Human task created: {task_id}. Waiting for completion...\n")

    # Wait for the result (asynchronous polling)
    result = await wait_for_task_completion(task_id)

    # Log completion
    sys.stderr.write(f"Human task {task_id} completed.\n")

    return {"observation": result}
```
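The source does not show `wait_for_task_completion` itself. A minimal sketch of the polling it likely performs, assuming the task store can return a row with a `result` field once the human submits an answer (the `get_task` callable, field name, and timeout values here are illustrative, not the project's actual API):

```python
import asyncio

async def wait_for_task_completion(task_id, get_task,
                                   poll_interval=0.5, timeout=300.0):
    """Poll the task store until the human submits a result, then return it.

    get_task(task_id) is assumed to return a dict-like row, with
    row["result"] set to None until the human responds in the UI.
    """
    elapsed = 0.0
    while elapsed < timeout:
        task = get_task(task_id)
        if task and task.get("result") is not None:
            return task["result"]
        await asyncio.sleep(poll_interval)  # yield to the event loop between checks
        elapsed += poll_interval
    raise TimeoutError(f"Human task {task_id} not completed within {timeout}s")
```

Sleeping between checks keeps the handler from blocking the event loop, so the FastMCP server can serve other requests while it waits for the human.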
- human_mcp/tools.py:5-22 (schema) — Input and output JSON Schema for `human_eye_tool`, matching the handler's parameters and return type.

```json
{
  "name": "human_eye_tool",
  "description": "人間が目で見て状況を説明したり、特定のものを探したりします。",
  "input_schema": {
    "type": "object",
    "properties": {
      "prompt": {"type": "string", "description": "観察するための指示"}
    },
    "required": ["prompt"]
  },
  "output_schema": {
    "type": "object",
    "properties": {
      "observation": {"type": "string", "description": "人間による観察結果"}
    },
    "required": ["observation"]
  }
}
```
- human_mcp/mcp_server.py:43 (registration) — The `@mcp.tool()` decorator registers `human_eye_tool` with the FastMCP server.

```python
@mcp.tool()
```