
human_eye_tool

Submit visual observation requests to a human, who provides real-time analysis and descriptions through a Streamlit UI, enabling AI assistants to leverage human visual capabilities.

Instructions

A human looks at the scene with their own eyes and describes the situation or searches for specific items.

Input Schema

| Name   | Required | Description                     | Default |
|--------|----------|---------------------------------|---------|
| prompt | Yes      | Instruction for what to observe | (none)  |

Implementation Reference

  • The main execution handler for human_eye_tool, decorated with @mcp.tool() for automatic registration with FastMCP. It creates a database task from the prompt, asynchronously polls for the human-provided result, and returns it (see the polling sketch after this list).
    @mcp.tool()
    async def human_eye_tool(prompt: str, ctx: Context) -> Dict[str, str]:
        """人間が目で見て状況を説明したり、特定のものを探したりします。"""
        task_id = str(uuid.uuid4())
        instruction = f"👁️ 目を使って観察: {prompt}"
        # Add the task to the database
        db_utils.add_task(task_id, instruction)
        # Log to stderr
        sys.stderr.write(f"Human task created: {task_id}. Waiting for completion...\n")
        # Wait for the result (asynchronous polling)
        result = await wait_for_task_completion(task_id)
        # Log to stderr
        sys.stderr.write(f"Human task {task_id} completed.\n")
        return {"observation": result}
  • Input and output JSON schema for human_eye_tool, matching the handler's parameters and return type (a sample client-side call is sketched after this list).
    { "name": "human_eye_tool", "description": "人間が目で見て状況を説明したり、特定のものを探したりします。", "input_schema": { "type": "object", "properties": { "prompt": {"type": "string", "description": "観察するための指示"} }, "required": ["prompt"] }, "output_schema": { "type": "object", "properties": { "observation": {"type": "string", "description": "人間による観察結果"} }, "required": ["observation"] } },
  • The @mcp.tool() decorator registers human_eye_tool with the FastMCP server (a minimal server setup is sketched after this list).
    @mcp.tool()
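
The handler above relies on two helpers that are not shown on this page: db_utils.add_task and wait_for_task_completion. The following is a minimal sketch of what the polling helper could look like, assuming a db_utils.get_task_result(task_id) function that returns None until the human submits a result through the Streamlit UI; the helper name, polling interval, and timeout are illustrative assumptions rather than the server's actual implementation.

    import asyncio

    import db_utils  # the server's task database module, as referenced in the handler above

    # Illustrative sketch only: db_utils.get_task_result and the interval/timeout
    # defaults are assumptions, not the actual implementation of upamune/human-mcp.
    async def wait_for_task_completion(task_id: str, interval: float = 1.0,
                                        timeout: float = 600.0) -> str:
        """Poll the task database until a human submits a result for the task."""
        elapsed = 0.0
        while elapsed < timeout:
            result = db_utils.get_task_result(task_id)  # assumed helper; returns None while pending
            if result is not None:
                return result
            await asyncio.sleep(interval)  # yield to the event loop between polls
            elapsed += interval
        raise TimeoutError(f"Human task {task_id} was not completed within {timeout} seconds")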
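
As a usage example for the schema above, a client-side call passes a single "prompt" string and reads back the "observation". The sketch below uses the official MCP Python SDK (the mcp package); the server launch command is a placeholder assumption and is not taken from this page.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Placeholder launch command; substitute the actual command used to start human-mcp.
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # The arguments must satisfy input_schema: one required "prompt" string.
                result = await session.call_tool(
                    "human_eye_tool",
                    {"prompt": "Check whether there is a red mug on the desk"},
                )
                print(result.content)  # the human's observation is returned as tool output

    asyncio.run(main())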
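
For context on where the mcp object used by the decorator comes from, here is a minimal, hedged sketch of the server setup, assuming the FastMCP class from the official MCP Python SDK; the server name string is an illustrative assumption.

    from typing import Dict

    from mcp.server.fastmcp import Context, FastMCP

    # The server name is an assumption for illustration.
    mcp = FastMCP("human-mcp")

    @mcp.tool()  # registers the function as an MCP tool; the schema is derived from the signature
    async def human_eye_tool(prompt: str, ctx: Context) -> Dict[str, str]:
        """人間が目で見て状況を説明したり、特定のものを探したりします。"""
        ...

    if __name__ == "__main__":
        mcp.run()  # serve the registered tools, using the stdio transport by default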

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/upamune/human-mcp'
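
The same endpoint can also be queried from Python. The sketch below uses only the standard library and assumes the endpoint returns a JSON document describing the server; the exact response shape is not documented on this page.

    import json
    import urllib.request

    url = "https://glama.ai/api/mcp/v1/servers/upamune/human-mcp"
    with urllib.request.urlopen(url) as response:
        server_info = json.load(response)  # assumed JSON payload describing the server
    print(server_info)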

If you have feedback or need assistance with the MCP directory API, please join our Discord server.