by upamune

human_taste_tool

Analyze food flavors and quality by having humans taste and describe sweetness, sourness, saltiness, bitterness, and umami to evaluate dishes or ingredients.

Instructions

A human tastes food with their mouth and describes the flavor.

Examples:
- Evaluating a dish's taste
- Checking the freshness of ingredients
- Flavor analysis (sweetness, sourness, saltiness, bitterness, umami)

Input Schema

Name         Required   Description   Default
instruction  Yes

Implementation Reference

  • The core handler function for the 'human_taste_tool'. It is registered via the @mcp.tool() decorator. The function creates a unique task ID, formats the taste instruction, stores it in the database using db_utils.add_task, polls for completion using wait_for_task_completion, and returns the human-provided taste description.
    # Imports assumed from the surrounding module:
    # import asyncio, sys, uuid
    # from typing import Dict
    # from mcp.server.fastmcp import Context
    @mcp.tool()
    async def human_taste_tool(instruction: str, ctx: Context) -> Dict[str, str]:
        """A human tastes food with their mouth and describes the flavor.

        Examples:
        - Evaluating a dish's taste
        - Checking the freshness of ingredients
        - Flavor analysis (sweetness, sourness, saltiness, bitterness, umami)
        """
        task_id = str(uuid.uuid4())
        formatted_instruction = f"👅 Taste with your mouth: {instruction}"

        # Add the task to the database
        db_utils.add_task(task_id, formatted_instruction)

        # Log progress to stderr
        sys.stderr.write(f"Human task created: {task_id}. Waiting for completion...\n")

        # Wait for the result (asynchronous polling)
        result = await wait_for_task_completion(task_id)

        sys.stderr.write(f"Human task {task_id} completed.\n")

        return {"taste": result}
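    The db_utils module itself is not shown in this reference. A minimal sketch of what add_task and get_task_result could look like, assuming a simple SQLite table keyed by task_id, is below; the names complete_task, _DB_PATH, and the table layout are illustrative assumptions, not the server's actual implementation.

    ```python
    # Hypothetical sketch of the db_utils helpers referenced above.
    # Assumes one SQLite table of tasks; the real human-mcp code may differ.
    import sqlite3

    _DB_PATH = ":memory:"  # the real server would use a file on disk
    _conn = sqlite3.connect(_DB_PATH)
    _conn.execute(
        "CREATE TABLE IF NOT EXISTS tasks ("
        " task_id TEXT PRIMARY KEY,"
        " instruction TEXT NOT NULL,"
        " status TEXT NOT NULL DEFAULT 'pending',"
        " result TEXT)"
    )

    def add_task(task_id: str, instruction: str) -> None:
        """Insert a new pending task for a human to pick up."""
        _conn.execute(
            "INSERT INTO tasks (task_id, instruction) VALUES (?, ?)",
            (task_id, instruction),
        )
        _conn.commit()

    def get_task_result(task_id: str):
        """Return (status, result) for the given task."""
        row = _conn.execute(
            "SELECT status, result FROM tasks WHERE task_id = ?", (task_id,)
        ).fetchone()
        return row if row else ("unknown", None)

    def complete_task(task_id: str, result: str) -> None:
        """Hypothetical: called by the human-facing UI once tasting is done."""
        _conn.execute(
            "UPDATE tasks SET status = 'completed', result = ? WHERE task_id = ?",
            (result, task_id),
        )
        _conn.commit()
    ```

    With this layout, the polling side only ever reads (status, result) tuples, so the human-facing UI and the MCP server never need to share anything beyond the database file.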
  • Helper function used by human_taste_tool (and other tools) to asynchronously poll the database for task completion with a timeout.
    async def wait_for_task_completion(task_id: str, timeout: int = 300) -> str:
        """Wait for a task to complete (with a timeout)."""
        start_time = asyncio.get_event_loop().time()

        while True:
            # Check how much time has elapsed
            elapsed = asyncio.get_event_loop().time() - start_time
            if elapsed > timeout:
                return f"Timeout: no response after {timeout} seconds."

            # Check the task's status
            status, result = db_utils.get_task_result(task_id)

            if status == 'completed' and result is not None:
                return result

            # Wait one second before checking again
            await asyncio.sleep(1)
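    The polling loop can be exercised end to end without the server. The sketch below is a self-contained demo under stated assumptions: an in-memory dict stands in for db_utils, the poll interval is shortened so the example finishes quickly, and a background coroutine plays the role of the human completing the task.

    ```python
    # Self-contained demo of the polling pattern above. The dict-backed
    # get_task_result and the short sleep are stand-ins for illustration.
    import asyncio

    _tasks = {}  # task_id -> (status, result)

    def get_task_result(task_id):
        return _tasks.get(task_id, ("pending", None))

    async def wait_for_task_completion(task_id, timeout=5.0):
        start = asyncio.get_event_loop().time()
        while True:
            if asyncio.get_event_loop().time() - start > timeout:
                return f"Timeout: no response after {timeout} seconds."
            status, result = get_task_result(task_id)
            if status == "completed" and result is not None:
                return result
            await asyncio.sleep(0.05)  # 1 second in the real server

    async def main():
        _tasks["t1"] = ("pending", None)

        async def human_finishes_tasting():
            # Simulate the human submitting a result after a short delay.
            await asyncio.sleep(0.2)
            _tasks["t1"] = ("completed", "sweet with a sour finish")

        asyncio.create_task(human_finishes_tasting())
        return await wait_for_task_completion("t1")

    result = asyncio.run(main())
    ```

    A fixed one-second sleep is simple and adequate here, since a human response is measured in minutes; exponential backoff would add complexity without saving meaningful work.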
  • The @mcp.tool() decorator registers the human_taste_tool function with the MCP server.
    @mcp.tool()
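    Conceptually, a decorator like @mcp.tool() just records the decorated function in the server's tool registry under its name, so the server can list it and dispatch calls to it. The toy registry below illustrates that mechanism; ToyMCP is an assumption for illustration, not FastMCP's actual implementation.

    ```python
    # Toy sketch of tool registration via a decorator, mirroring the
    # shape of @mcp.tool() without depending on the MCP SDK.
    class ToyMCP:
        def __init__(self, name):
            self.name = name
            self.tools = {}  # tool name -> handler function

        def tool(self):
            def decorator(func):
                # Register under the function's own name.
                self.tools[func.__name__] = func
                return func  # the function itself is unchanged
            return decorator

    mcp = ToyMCP("human-mcp")

    @mcp.tool()
    def human_taste_tool(instruction):
        return {"taste": f"(pretend result for: {instruction})"}
    ```

    After the decorator runs, mcp.tools maps "human_taste_tool" to the handler, which is all the server needs to advertise the tool and route incoming calls.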
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions the action ('taste and describe') but fails to disclose key behavioral traits like whether this is a simulation or real-world interaction, time requirements, accuracy limitations, or any prerequisites (e.g., food availability). This leaves significant gaps in understanding how the tool behaves beyond its basic function.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence clearly states the purpose, followed by concise examples that illustrate usage without redundancy. Every sentence adds value by clarifying application scenarios, making it efficient and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a human sensory tool with no annotations, no output schema, and minimal parameter documentation, the description is incomplete. It covers the basic purpose and examples but lacks details on behavioral aspects (e.g., how results are returned, limitations), parameter specifics, and differentiation from siblings, making it inadequate for full agent understanding without additional context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter with no description coverage, so the description must compensate. It adds meaning by implying that the 'instruction' parameter should specify what to taste (e.g., food items) and which aspects to describe (e.g., flavors), as shown in the examples. However, it does not detail the format or constraints of the instruction, leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: humans use their mouth to taste food and describe its flavor. It specifies the action ('taste and describe') and resource ('food'), making it distinct from generic tools. However, it doesn't explicitly differentiate from sibling tools like 'human_mouth_tool', which might have overlapping functions, leaving room for ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage through examples (e.g., evaluating dish taste, checking ingredient freshness, analyzing flavors), suggesting when to use it. However, it lacks explicit guidance on when not to use it or alternatives, such as distinguishing from 'human_nose_tool' for aroma-related tasks or 'human_mouth_tool' for other oral functions, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/upamune/human-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server