Glama
Ownership verified

Server Details

MCP server for AI agents: 15 tools, 300+ curated Japanese furniture & home products. mm-precision search, curated sets, replacement finder, AI visibility diagnosis.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 4.1/5 across 15 of 15 tools scored.

Server Coherence: Grade A
Disambiguation: 4/5

Most tools have distinct purposes, such as calc_room_layout for layout simulation, compare_products for product comparison, and diagnose_ai_visibility for website visibility analysis. However, there is some overlap between search_products, search_rakuten_products, and search_amazon_products, which could cause confusion in selection, though their descriptions clarify their specific focuses (catalog vs. real-time vs. Amazon).

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case, such as calc_room_layout, compare_products, and get_product_detail. This uniformity makes the tool set predictable and easy to navigate, with no deviations in naming conventions.

Tool Count: 5/5

With 15 tools, the server is well-scoped for its furniture and product discovery domain. Each tool serves a clear purpose, from product search and comparison to layout analysis and curation, providing comprehensive coverage without being overwhelming or redundant.

Completeness: 4/5

The tool set covers a wide range of operations, including search, comparison, detail retrieval, layout simulation, and curation, with good lifecycle support for product discovery. A minor gap is the lack of tools for user-specific actions like saving preferences or managing a shopping cart, but core workflows are well-supported.

Available Tools

18 tools
calc_room_layout: quick simulation of whether furniture fits on the room floor. Grade A

Call this to check whether furniture can be placed, as in "Will a bed and a desk fit in this room?". Runs a grid placement simulation from the room's effective dimensions (mm) and a furniture list (width/depth/quantity), and returns coordinates plus whether each item was rotated. Doors and circulation paths are not considered, so treat the result as a rough guide. Large furniture gets carry_in_warnings (a carry-in route check); if risk=warning/critical, tell the user to watch out during carry-in.

Parameters (JSON Schema)
items (required)
intent (required): [Required] Room purpose and constraints
grid_step_mm (optional)
room_depth_mm (required): Effective room depth (mm)
room_width_mm (required): Effective room width (mm)
margin_between_mm (optional)
wall_clearance_mm (optional)
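The behavior the description outlines (a grid placement simulation over effective dimensions and a furniture list, returning coordinates and rotation) can be approximated in a few lines. The sketch below is hypothetical: parameter names mirror the schema, but the greedy row-packing logic and the item keys (name, width_mm, depth_mm, count) are assumptions, not the server's actual algorithm.

```python
def calc_room_layout(room_width_mm, room_depth_mm, items,
                     wall_clearance_mm=0, margin_between_mm=50):
    """Greedy row packing: place each item left to right, wrap to a new
    row when the width runs out; try the rotated orientation when the
    normal one does not fit. Returns placements or None."""
    usable_w = room_width_mm - 2 * wall_clearance_mm
    usable_d = room_depth_mm - 2 * wall_clearance_mm
    x = y = row_d = 0          # cursor position and current row depth
    placements = []
    for it in items:
        for _ in range(it.get("count", 1)):
            placed = False
            for w, d, rot in ((it["width_mm"], it["depth_mm"], False),
                              (it["depth_mm"], it["width_mm"], True)):
                nx = x if x == 0 else x + margin_between_mm
                if nx + w <= usable_w and y + d <= usable_d:
                    placements.append({"name": it["name"], "x": nx,
                                       "y": y, "rotated": rot})
                    x, row_d, placed = nx + w, max(row_d, d), True
                    break
            if not placed:
                # wrap to a new row and retry in the normal orientation
                x, y, row_d = 0, y + row_d + margin_between_mm, 0
                w, d = it["width_mm"], it["depth_mm"]
                if w <= usable_w and y + d <= usable_d:
                    placements.append({"name": it["name"], "x": 0,
                                       "y": y, "rotated": False})
                    x, row_d = w, d
                else:
                    return None  # does not fit even as a rough estimate
    return placements
```

As with the real tool, this ignores doors and circulation paths, so a non-None result is only a feasibility hint.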
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well: it explains the simulation method ("runs a grid placement simulation"), the output format ("returns coordinates and rotation status"), and important limitations ("doors and circulation paths are not considered"). It doesn't mention performance characteristics like computation time or error conditions, but covers the core behavioral aspects adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly concise and well-structured: two sentences that front-load the use case, explain the process, and state limitations. Every word earns its place with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex simulation tool with 7 parameters, 43% schema coverage, and no output schema, the description does well but has gaps. It explains the core functionality and limitations clearly, but doesn't address the intent parameter's purpose or optional parameters' effects. Given the complexity, it could benefit from slightly more detail about what the simulation actually does.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 43%, but the description adds meaningful context: it explains that inputs include "the room's effective dimensions (mm) and a furniture list (width/depth/quantity)". This clarifies the purpose of room_width_mm, room_depth_mm, and items beyond what the schema provides. However, it doesn't explain intent or optional parameters like grid_step_mm.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose, "check furniture placement feasibility", with a specific example ("Will a bed and a desk fit in this room?"). It distinguishes itself from siblings by focusing on room layout simulation rather than product search or comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ("call when checking furniture placement feasibility"). It explicitly states limitations ("doors and circulation paths are not considered, so treat as a rough guide"), but doesn't mention when NOT to use it or name specific alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_products: product comparison (price, size, reviews, and load capacity side by side). Grade A

Call this to compare 2 to 5 products, as in "Which is better, Nクリック or KALLAX?". Returns price, size, reviews, and load capacity as a side-by-side comparison table. For catalog matches, internal dimensions, compatible storage, and a buy_guide (best_for/avoid_if) are added; the buy_guide's decision_hint is already reflected in the comparison recommendation. Present each product's affiliate_url to the user.

Parameters (JSON Schema)
intent (required): [Required] Why you want to compare
keywords (required): Search keywords for the products to compare (2-5 entries)
compare_aspects (optional): Aspects to compare (all aspects when omitted)
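The 2-to-5 keyword constraint and the optional compare_aspects default can be captured in a small request builder. This is an illustrative sketch, not the server's client code; build_compare_request is a hypothetical helper name, and the field names come from the schema above.

```python
def build_compare_request(intent, keywords, compare_aspects=None):
    """Assemble a compare_products call, enforcing the 2-5 keyword
    range the description states. Omitting compare_aspects leaves the
    server to fall back to its default (all aspects)."""
    if not 2 <= len(keywords) <= 5:
        raise ValueError("keywords must contain 2 to 5 entries")
    request = {"intent": intent, "keywords": list(keywords)}
    if compare_aspects is not None:
        request["compare_aspects"] = list(compare_aspects)
    return request
```

A call like build_compare_request("choosing a shelf", ["Nクリック", "KALLAX"]) yields a payload with only the required fields, matching the "defaults to all aspects" behavior.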
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses key behavioral traits: returns a comparison table, includes affiliate URLs for user presentation, and adds catalog-matched details (internal dimensions, compatible storage, buy_guide). However, it lacks information about error handling, data sources, or performance characteristics like rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core use case and key outputs. Every sentence adds value, though it could be slightly more structured (e.g., separating behavioral details from usage). No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides good context on what the tool does and returns, but lacks details on output structure (beyond 'parallel comparison table'), error cases, or data freshness. For a tool with 3 parameters and complex comparison logic, more behavioral transparency would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema—it mentions the tool handles 2-5 products (matching keywords array constraints) and implies compare_aspects defaults to all items, but doesn't elaborate on intent or keyword usage beyond what the schema describes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: compare 2-5 products across specific attributes (price, size, reviews, load capacity) and return a parallel comparison table. It uses specific verbs ('compare', 'return') and distinguishes from siblings by focusing on multi-product comparison rather than single-product lookup or search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool, "when comparing 2-5 products", with an example query ("Which is better, Nクリック or KALLAX?"). It implies alternatives by specifying the comparison scope, distinguishing it from single-product tools like get_product_detail or the search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

coordinate_storage: shelf + storage box coordination proposals (with quantity calculation). Grade A

Call this for questions like "Which boxes fit this shelf?" or "How do I organize a color box?". Calculates how many storage boxes fit from the shelf's internal dimensions: boxes per level x number of levels = total count and total price. Also provides coordination tips for the installation spot (closet, washroom, kitchen, etc.) plus persona-based recommendations (persona_hints). Large shelves get carry_in (a carry-in route check); if risk=warning/critical, tell the user to watch out during carry-in. Present each product's affiliate_url to the user.

Parameters (JSON Schema)
scene (optional): Installation spot hint, e.g. '押入れ' (closet), '洗面所' (washroom), 'キッチン' (kitchen)
intent (required): [Required] Installation spot, purpose, and situation in detail
keyword (required): Search keyword for the shelf, e.g. 'カラーボックス 3段' (3-tier color box)
price_max (optional): Budget cap for the shelf (yen)
shelf_count (optional): Number of shelves to propose (1-5)
storage_keyword (optional): Search keyword for storage boxes (auto-inferred when omitted)
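The quantity arithmetic the description names (boxes per level x number of levels = total count and total price) is straightforward fitting math. A minimal sketch, assuming boxes are laid flat in a grid per level without rotation; the function and field names are hypothetical.

```python
def storage_plan(inner_w_mm, inner_d_mm, levels,
                 box_w_mm, box_d_mm, box_price_yen):
    """How many boxes fit per shelf level (flat grid, no rotation)
    and what filling the whole shelf costs."""
    per_level = (inner_w_mm // box_w_mm) * (inner_d_mm // box_d_mm)
    total = per_level * levels
    return {"per_level": per_level,
            "total": total,
            "total_price_yen": total * box_price_yen}
```

For example, a shelf with 380x280 mm internal dimensions and 3 levels holds two 190x270 mm boxes per level, six in total.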
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the tool's outputs (quantity calculations, cost totals, coordination tips, persona hints, affiliate URLs) but doesn't mention important behavioral aspects like whether this is a read-only operation, potential rate limits, authentication requirements, or error conditions. The description adds value by explaining what the tool provides but leaves gaps in behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. It efficiently explains the calculation functionality, coordination tips, and persona-based recommendations in a single paragraph. While slightly dense, every sentence contributes meaningful information about the tool's capabilities and outputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, no output schema), the description provides good coverage of what the tool does and when to use it. However, it lacks details about the output format, error handling, and behavioral constraints. For a tool that performs calculations and provides recommendations with affiliate links, more complete behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description mentions 'scene' (location hints), 'persona_hints' (budget, brand recommendations, type advice), and affiliate URLs, but these don't add significant meaning beyond what the schema provides. The description implies persona_hints are part of the output, not a parameter, which is useful context but doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates storage box fit based on shelf dimensions, provides quantity and cost calculations, and offers coordination tips with persona-based recommendations. It uses specific verbs like 'calculate,' 'provide,' and 'offer' and distinguishes itself from siblings by focusing on storage coordination rather than general product search or layout calculation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'when asking "what boxes fit this shelf?" or "how to organize color boxes?"' It provides clear context for usage scenarios involving storage coordination and box fitting calculations, though it doesn't explicitly mention when NOT to use it or name specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

diagnose_ai_visibility: AI visibility diagnosis (AIO diagnosis). Grade A

Given a URL, diagnoses how "visible" that site is to AI agents (GPT/Claude/Gemini, etc.). Checks llms.txt, robots.txt (AI crawler permissions), structured data (JSON-LD), OGP meta tags, dimension data markup, and cross-border readiness, and returns a 0-100 score with an A-F grade. Cross-border readiness (cross_border_readiness) rates visibility to overseas AI agents. Can be presented as an AIO agency demo: "This is how AI sees your products."

Parameters (JSON Schema)
url (required): URL to diagnose
intent (required): [Required] Why the diagnosis is needed
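The 0-100 score and A-F grade can be modeled as a weighted checklist over the factors the description lists. The weights and grade boundaries below are assumptions for illustration only; the server does not document how it actually scores.

```python
# Assumed weights (summing to 100) over the checks the tool describes.
CHECK_WEIGHTS = {
    "llms_txt": 20,
    "robots_ai_allowed": 20,
    "json_ld": 25,
    "ogp_meta": 15,
    "dimension_data": 10,
    "cross_border_readiness": 10,
}

def diagnose(check_results):
    """check_results maps check name -> bool (passed).
    Returns a 0-100 score and an A-F grade (assumed boundaries)."""
    score = sum(weight for name, weight in CHECK_WEIGHTS.items()
                if check_results.get(name))
    for grade, floor in (("A", 90), ("B", 80), ("C", 70), ("D", 60)):
        if score >= floor:
            return {"score": score, "grade": grade}
    return {"score": score, "grade": "F"}
```

A site passing every check scores 100 (grade A); one exposing only JSON-LD scores 25 (grade F) under these assumed weights.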
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it performs checks on multiple technical aspects, returns a 0-100 score and A-F grade, and evaluates cross-border readiness. However, it lacks details on execution (e.g., timeouts, rate limits, authentication needs) and output format specifics beyond the score/grade, leaving gaps for an agent to invoke it correctly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by specifics on checks and outputs, ending with a practical use case. Every sentence adds value, though the final sentence about the demo could be slightly trimmed for conciseness without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete. It covers the purpose, checks performed, and output types (score/grade), but lacks details on behavioral constraints (e.g., rate limits, errors) and exact output structure. For a tool with 2 parameters and no structured output info, more behavioral context would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the 'intent' parameter implicitly through the demo use case ("This is how AI sees your products"), giving context beyond the schema's minimal description. It also clarifies that the URL is the diagnosis target, aligning with the tool's purpose. This elevates the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it diagnoses how visible a website is to AI agents by checking multiple technical factors (llms.txt, robots.txt, structured data, etc.) and returns a score and grade. It specifies the verb 'diagnose' and resource 'website visibility to AI agents,' distinguishing it from all sibling tools which focus on product search, layout calculation, or storage coordination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: when you need to assess a website's visibility to AI agents like GPT/Claude/Gemini, and it mentions a specific use case as a demo for an AI agency. However, it does not explicitly state when NOT to use it or name alternatives among the sibling tools, which are all unrelated to AI visibility diagnosis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_product_gaps: extraction of unmet demand and tight dimension bands. Grade B

Bundles miss and tight_fit entries from demand_signals and returns which scenes, dimension bands, and categories have product gaps. Use it to discover Amazon listing candidates, in-house development candidates, and dimension data bands worth collecting first.

Parameters (JSON Schema)
limit (optional): Number of candidates to return
intent (required): [Required] Why you want to extract gap candidates
scene_name (optional): Restrict to a specific scene, e.g. '押入れ・クローゼット' (closets)
include_tight_fit (optional): Whether to also include tight_fit entries as improvement candidates
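Bundling miss and tight_fit signals into ranked gaps reduces to a grouped count. A sketch under the assumption that each demand signal carries kind, scene, size_band, and category fields (hypothetical record shape; the server's actual aggregation is not documented).

```python
from collections import Counter

def find_product_gaps(signals, scene_name=None,
                      include_tight_fit=True, limit=5):
    """Group demand signals by (scene, size band, category) and
    return the most frequent gaps first."""
    kinds = {"miss"} | ({"tight_fit"} if include_tight_fit else set())
    counts = Counter(
        (s["scene"], s["size_band"], s["category"])
        for s in signals
        if s["kind"] in kinds
        and (scene_name is None or s["scene"] == scene_name))
    return counts.most_common(limit)
```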
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what the tool returns (product gaps by scene/size/category) and some use cases, but doesn't address critical behavioral aspects like whether this is a read-only operation, potential side effects, performance characteristics, error conditions, or response format. The description provides basic functional context but lacks comprehensive behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences that each serve a clear purpose: the first explains what the tool does, and the second explains its practical applications. It's appropriately sized and front-loaded with the core functionality, though the Japanese text mixed with English might slightly impact clarity for some users.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema, no annotations), the description provides adequate functional context but has significant gaps. It explains what the tool does and why to use it, but without annotations or output schema, it doesn't address behavioral characteristics, response format, or error handling. The description is complete enough for basic understanding but insufficient for confident tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all 4 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema - it doesn't explain parameter relationships, provide examples beyond the schema's 'scene_name' example, or clarify how parameters interact. The baseline score of 3 reflects adequate but not enhanced parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it extracts product gaps from demand signals by bundling 'miss' and 'tight_fit' data, returning information about scenes, size ranges, and categories. It specifies the resource ('demand_signals') and verb ('extract'), though it doesn't explicitly differentiate from siblings like 'summarize_demand_signals' or 'search_products' beyond mentioning specific use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for identifying product gaps to inform Amazon listing candidates, in-house development candidates, and priority data collection areas. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, leaving some ambiguity about tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_replacement: find successors and substitutes for discontinued or older models. Grade A

Call this for cases like "this model number is no longer sold" or "a replacement for a discontinued shelf". Returns successor candidates from the catalog (successors) plus Rakuten search results for '後継' (successor) and '新型' (new model). Confirm final details on the manufacturer's official site. Present the Rakuten candidates' affiliate_url to the user.

Parameters (JSON Schema)
query (required): Model number, or product name/feature text
intent (required): [Required] Why a replacement is needed
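Combining the two result streams the description mentions (catalog successors and Rakuten hits) is essentially a de-duplicating merge. A hypothetical sketch: the id field and the catalog-first ordering are assumptions, not documented server behavior.

```python
def merge_replacement_candidates(catalog_successors, rakuten_hits):
    """Combine successor candidates, de-duplicating by product id and
    keeping catalog matches ahead of Rakuten results."""
    seen, merged = set(), []
    for item in list(catalog_successors) + list(rakuten_hits):
        if item["id"] not in seen:
            seen.add(item["id"])
            merged.append(item)
    return merged
```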
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It reveals that the tool returns both catalog successors and Rakuten search results, and mentions that affiliate URLs should be presented to users. However, it doesn't disclose important behavioral aspects like rate limits, authentication requirements, error handling, or whether this is a read-only operation. The description adds some context but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured. Every sentence earns its place: the first establishes the use case, the second describes what the tool returns, and the third provides important usage guidance. There's zero wasted language and the information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there are no annotations and no output schema, the description should do more to compensate. While it covers the purpose and usage guidelines well, it lacks information about return values, error conditions, rate limits, and authentication requirements. For a tool that queries external services and returns affiliate URLs, more behavioral context would be helpful to the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any additional semantic information about the parameters beyond what's in the schema. It mentions the general use case but doesn't provide examples, format guidance, or constraints beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('find replacement', 'return successors and search results') and resources ('catalog successors', 'Rakuten search results'). It distinguishes itself from siblings like 'search_products' or 'search_rakuten_products' by focusing on discontinued/old model replacements rather than general product searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when this model number isn't sold', 'when production has ended') and provides clear alternatives ('final confirmation should be done on the manufacturer's official site'). It also distinguishes usage from siblings by targeting replacement scenarios rather than general product discovery or comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_curated_sets: curated set proposals (bundles / room presets / influencer picks / hack sets). Grade A

Returns set proposals and curation info for requests like "everything needed to start a new life", "products featured in a YouTuber's desk tour", or "I want to build a home office on a 50,000 yen budget". Four types: bundles (bulk-buy sets), room presets (IKEA-style room sets), influencer picks (recommendations from experts, YouTubers, and magazine editors), and hack sets (substitute-item sets). Call get_product_detail or search_rakuten_products with each set's product_ids to get details and purchase links.

Parameters (JSON Schema)
type (optional): Filter: bundle / room_preset / influencer_pick / hack_set
scene (optional): Scene (home office, kitchen, living room, etc.)
intent (required): [Required] Why this proposal is needed
keyword (optional): Free-text search
occasion (optional): Occasion (new life, moving, preparing for a baby, etc.)
budget_max (optional): Budget cap (yen)
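The optional filters above map naturally onto a predicate chain. A sketch assuming each curated set carries type, occasions, and total_price_yen fields (hypothetical names); set_type mirrors the API's "type" parameter, renamed to avoid shadowing the Python builtin.

```python
def filter_curated_sets(sets, set_type=None, occasion=None,
                        budget_max=None):
    """Keep only sets matching every filter that was supplied;
    omitted filters are ignored, like the optional parameters."""
    result = []
    for s in sets:
        if set_type is not None and s["type"] != set_type:
            continue
        if occasion is not None and occasion not in s.get("occasions", []):
            continue
        if budget_max is not None and s["total_price_yen"] > budget_max:
            continue
        result.append(s)
    return result
```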
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains what the tool returns (curated set information) and mentions that product details can be obtained through other tools, but doesn't disclose important behavioral traits like whether this is a read-only operation, potential rate limits, authentication requirements, error conditions, or the format/structure of the returned data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first explains the purpose with concrete examples, the second lists the four set types and mentions follow-up actions. Every sentence adds value, though the second sentence could be slightly more concise by integrating the tool mention more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, no annotations, and no output schema, the description provides adequate context about what the tool does and the types of sets returned. However, it lacks information about the return format, pagination, error handling, and doesn't fully compensate for the absence of annotations. The mention of follow-up tools (get_product_detail, search_rakuten_products) helps with completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so all parameters are documented in the schema itself. The description doesn't add any parameter-specific information beyond what's already in the schema descriptions. It mentions the four types of sets which correspond to the 'type' enum values, but this is redundant with the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns curated set proposals with specific examples ("everything needed to start a new life") and explicitly lists the four types of sets (bundle, room_preset, influencer_pick, hack_set). It distinguishes itself from siblings like get_product_detail and search_rakuten_products by explaining that those tools provide detailed product information, while this one provides curated collections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (for curated set proposals) and mentions alternatives (get_product_detail, search_rakuten_products) for obtaining detailed product information. However, it doesn't explicitly state when NOT to use this tool or how it differs from other sibling tools like suggest_by_space or get_popular_products.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_product_detail: fetch detailed info for a furniture or storage product. Grade A

Given a product ID, fetches the full specs (dimensions, price, stock, materials, etc.) of a specific furniture or storage product. [Important] In intent, describe why the detail is needed (e.g. final check before purchase, detailed size confirmation, comparison with other products). [Monetization] Use the returned affiliate_url as the purchase link shown to the user. Related products (same series, similar sizes) are suggested automatically.

Parameters (JSON Schema)
id (required): Product ID
intent (required): [Required] Reason for viewing details
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it explains the affiliate_url usage for monetization ("use the returned affiliate_url as the purchase link shown to the user"), mentions automatic related product suggestions ("related products are suggested automatically"), and clarifies the intent parameter requirement. It doesn't mention rate limits or authentication needs, but covers the important operational aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose first. The three sentences each serve distinct purposes: (1) core functionality, (2) intent requirement with examples, (3) affiliate_url usage and related products. There's minimal waste, though the Japanese text is slightly verbose in translation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with no annotations and no output schema, the description does well by covering purpose, usage context, behavioral traits (affiliate links, related products), and parameter guidance. It could be more complete by explicitly mentioning the return format or error cases, but given the tool's relative simplicity, it provides sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters. The description adds some value by emphasizing the intent parameter's importance ('【重要】intentには...を記述してください') and providing examples of intent content, but doesn't add significant semantic meaning beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('取得します' - get/retrieve) and resource ('特定の家具・収納商品のフルスペック' - full specifications of specific furniture/storage products), with specific details about what information is retrieved (dimensions, price, stock, materials). It distinguishes from siblings like 'get_popular_products' (list) or 'search_products' (search) by focusing on detailed retrieval of a single product.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('商品IDを指定して' - when you have a product ID) and includes an important note about intent requirements. However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the siblings (e.g., 'identify_product' might be for identification without full specs).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

identify_product — Identify a product from a photo or feature text (with model number, internal dimensions, and consumables info) — Grade A

Call this for queries like "What is the shelf in this photo?" or "I want boxes that fit the shelf I own." Pass feature text extracted from the image by Vision AI (brand, color, number of tiers, material, estimated size) and it returns candidates from the catalog plus Rakuten. When a model number is identified, internal-dimension, consumables, and compatible-box information is included.

Parameters (JSON Schema)
- intent (required): [Required] Why you want to identify the product
- features (required): Feature text read from the image (brand, color, number of tiers, material, estimated size, shape characteristics, etc.)
- brand_hint: Brand-name hint (if a logo is visible)
- dimensions_hint: Estimated dimensions (mm), to the extent known
- include_compatible: Whether to include compatible-storage/consumables info (default: true)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: uses Vision AI for image analysis, searches both catalog and Rakuten, returns candidate products with model numbers and compatibility information. However, it doesn't mention rate limits, authentication needs, error conditions, or what happens when no matches are found.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with three sentences: usage scenarios, input requirements, and output details. Every sentence adds value - the first establishes context, the second explains the processing pipeline, and the third specifies the comprehensive output. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 5-parameter tool with no annotations and no output schema, the description does well by explaining the tool's purpose, usage context, input requirements, and output scope. However, it doesn't describe the return format (structure of candidate results) or error handling, which would be helpful given the absence of an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters well. The description adds meaningful context about the 'features' parameter (specifying it comes from Vision AI extraction of brand/color/material/size characteristics) and implies the tool's behavior with the 'include_compatible' parameter, but doesn't provide additional syntax or format details beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: identify products from photos/feature text, returning model numbers, internal dimensions, and consumable information. It uses specific verbs ('identify products', 'return candidates') and distinguishes from siblings like 'search_products' or 'get_product_detail' by emphasizing visual/feature-based identification rather than general searching or detail retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'when you want to know what shelf is in this photo' or 'when you want to know boxes that match your shelf'. It provides clear context with example scenarios and distinguishes it from general search tools by specifying it works with Vision AI-extracted feature text.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories — List product categories — Grade A

The entry-point tool to call for "What can I search?" or "What categories are there?". Returns all 31 categories with product counts and carried brands. Specifying a category name also retrieves that category's product list. Workflow: grasp the categories with this tool first, present them to the user, then proceed to search_products with the chosen category.

Parameters (JSON Schema)
- intent (required): [Required] Purpose of viewing the categories
- category_filter: Narrow to a specific category (e.g. 'キッチン収納', 'デスク')
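The two-step workflow the description prescribes can be sketched as below. `call_tool` is a stand-in for whatever MCP client function the agent uses to invoke server tools; the function name and argument shapes here are assumptions, not the server's API.

```python
# Sketch of the recommended flow: list_categories first, then
# search_products with the category the user picked.
def browse_then_search(call_tool, chosen=None):
    # Step 1: fetch all categories so the user can see what is searchable.
    cats = call_tool("list_categories",
                     {"intent": "show the user what can be searched"})
    if chosen is None:
        # Step 2 (agent side): present `cats` to the user and wait for a pick.
        return cats
    # Step 3: search within the category the user selected.
    return call_tool("search_products",
                     {"intent": "search within the chosen category",
                      "category": chosen})
```

The point of the split is that the second call is only made once the user has narrowed the scope, which keeps search results focused.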
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does reveal that the tool returns all 31 categories with product counts and handled brands, and can filter to specific categories. However, it doesn't mention response format, pagination, error conditions, or performance characteristics. The description provides basic operational context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: it starts with the primary use case, states what the tool returns, explains the filtering capability, and concludes with the recommended workflow. Every sentence serves a clear purpose with zero wasted words, and the information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description provides strong contextual completeness. It explains the tool's role in the overall workflow, what data it returns, and how to transition to the next step. The main gap is the lack of output format details, but the description compensates well with clear operational guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description mentions category filtering ('カテゴリ名指定でそのカテゴリの製品一覧も取得可能') which aligns with the category_filter parameter, but doesn't add significant semantic value beyond what's in the schema. This meets the baseline expectation when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it's an entry point tool that returns all 31 categories with product counts and handled brands, and can also retrieve product listings for a specific category. It uses specific verbs ('呼ぶ入口ツール', '返す', '取得可能') and distinguishes itself from sibling tools like search_products by being the first step in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('「何が検索できる?」「どんなカテゴリがある?」のときに呼ぶ入口ツール') and outlines the complete workflow: first use this tool to understand categories, present them to the user, then proceed to search_products with the selected category. It clearly differentiates this from the sibling search_products tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

measure_from_photo — Estimate furniture/space dimensions from a photo plus a reference object — Grade A

Call this for "I took a photo and want to measure dimensions" or "I want a shelf that fits this gap." When the user includes a reference object (business card, PET bottle, A4 paper, credit card, etc.) in the photo, the tool back-calculates the target's real dimensions (mm) from the pixel ratio.

[AI's role] Analyze the photo with Vision, read the pixel width/height of both the reference object and the target, and pass them to this tool. Supported reference objects: business card (91×55 mm), credit card (85.6×54 mm), 500 ml PET bottle (65×205 mm), A4 paper (210×297 mm), 500-yen coin (∅26.5 mm), 1-yen coin (∅20 mm), smartphone (71.5×147 mm), tissue box (240×115 mm), 30 cm ruler, ballpoint pen (140 mm).

Pass the resulting search_dimensions directly to suggest_by_space or coordinate_storage to complete the photo → dimensions → product-matching flow. If confidence is low, tell the user to measure with a tape measure.

Parameters (JSON Schema)
- intent (required): [Required] What you want to measure from the photo
- target_px (required): Pixel dimensions of the target (read from the image by the AI via Vision)
- reference_px (required): Pixel dimensions of the reference object (read from the image by the AI via Vision)
- reference_object (required): Name of the reference object in the photo (business card / PET bottle / A4 paper / credit card / 500-yen coin / 1-yen coin / smartphone / tissue box / 30 cm ruler / ballpoint pen)
- estimated_depth_mm: Depth (mm) estimated by the AI; supply the Vision LLM's estimate when depth cannot be read from the photo
- target_description (required): Description of the measurement target (e.g. 'white 3-tier color box', 'gap beside the washstand')
- manual_dimensions_mm: Override with values the user measured with a tape measure/AR, if available (highest precision)
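The pixel-ratio back-calculation the description relies on is straightforward to sketch. The reference dimensions below come from the tool's supported-object list; the function name, argument shapes, and averaging strategy are illustrative assumptions, not the server's implementation.

```python
# Reference objects with known real-world sizes (mm), per the tool docs.
REFERENCE_MM = {
    "名刺": (91, 55),               # business card
    "クレジットカード": (85.6, 54),  # credit card
    "A4用紙": (210, 297),           # A4 paper
}

def estimate_mm(target_px, reference_px, reference_object):
    """Estimate the target's real width/height (mm) from pixel ratios."""
    ref_w_mm, ref_h_mm = REFERENCE_MM[reference_object]
    # mm-per-pixel scale, averaged over both axes of the reference object
    scale_w = ref_w_mm / reference_px["width"]
    scale_h = ref_h_mm / reference_px["height"]
    scale = (scale_w + scale_h) / 2
    return {
        "width_mm": round(target_px["width"] * scale),
        "height_mm": round(target_px["height"] * scale),
    }

# A business card 182 px wide next to a 600 px-wide shelf implies
# the shelf is roughly 300 mm wide.
print(estimate_mm({"width": 600, "height": 1200},
                  {"width": 182, "height": 110}, "名刺"))
```

This also shows why low confidence warrants a tape measure: any error in the reference object's pixel reading scales linearly into the estimate.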
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well. It discloses the tool's reliance on pixel ratios from reference objects, lists specific supported reference objects with their dimensions, explains the confidence mechanism (low confidence triggers manual measurement recommendation), and describes the workflow integration with other tools. It doesn't mention error rates, processing time, or authentication needs, but covers core behavioral aspects adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured and front-loaded: first sentence states the use case, second explains the core mechanism, third specifies the AI's role, fourth lists reference objects, fifth describes integration, sixth gives confidence handling. Every sentence earns its place with no redundancy. The bullet-style reference list is efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 7 parameters, nested objects, no annotations, and no output schema, the description provides strong context. It covers the measurement methodology, reference objects, AI preprocessing role, confidence handling, and integration with sibling tools. It doesn't explain the output format or error cases in detail, but given the clear workflow description and parameter coverage, it's largely complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds significant value by explaining the overall measurement logic (pixel ratio calculation), listing all valid reference objects with their real-world dimensions, and clarifying the AI's role in providing pixel measurements. It contextualizes parameters like 'reference_object' and 'target_description' beyond their schema descriptions, though it doesn't detail each parameter individually.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: estimating real-world dimensions (mm) of furniture/spaces from photos using reference objects. It specifies the verb '逆算する' (reverse calculate) and resource '対象物の実寸' (actual dimensions of target objects). It distinguishes from siblings like 'suggest_by_space' or 'coordinate_storage' by focusing on measurement rather than product matching or storage coordination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: when users want to measure dimensions from photos or find shelves that fit gaps. Provides clear alternatives: when confidence is low, recommend using a physical tape measure ('メジャーで実測を'). Also specifies the AI's role in preprocessing (analyzing photos with Vision) and downstream usage (passing results to 'suggest_by_space' or 'coordinate_storage').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_amazon_products — Search Amazon for furniture/storage products (URL generation) — Grade A

Call this when the user wants to buy on Amazon or when nothing is found on Rakuten. Generates an affiliate URL to an Amazon search-results page (no product data is returned). The SearchIndex is selected automatically from the category. Present the affiliate_url to the user.

Parameters (JSON Schema)
- sort: Sort order
- intent (required): [Required] Search purpose
- keyword (required): Amazon search keyword
- price_max: Maximum price (yen)
- price_min: Minimum price (yen)
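Since the tool returns a URL rather than product data, its core job amounts to assembling query parameters. A minimal sketch, assuming Amazon.co.jp's common search parameters (`k`, `low-price`, `high-price`) and a placeholder affiliate `tag` — the server's actual parameter mapping and tag are not documented here:

```python
from urllib.parse import urlencode

def build_search_url(keyword, price_min=None, price_max=None, tag="example-22"):
    """Assemble an Amazon.co.jp search-results URL with an affiliate tag."""
    params = {"k": keyword, "tag": tag}
    if price_min is not None:
        params["low-price"] = price_min
    if price_max is not None:
        params["high-price"] = price_max
    # urlencode percent-escapes the Japanese keyword for us.
    return "https://www.amazon.co.jp/s?" + urlencode(params)

print(build_search_url("カラーボックス", price_max=5000))
```

Because the output is just a link, the agent's remaining duty is exactly what the description states: present the URL to the user rather than treating it as product data.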
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by stating that it 'generates affiliate URLs' (implying external navigation) and 'doesn't return product data' (clarifying output limitation). However, it doesn't mention rate limits, authentication needs, or potential side effects like affiliate tracking. The description adds useful context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured in three sentences. The first sentence establishes usage context, the second states the core functionality and output, and the third provides implementation details. Every sentence earns its place with no wasted words, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (5 parameters, no output schema, no annotations), the description does well by covering purpose, usage guidelines, and key behavioral aspects. It explains what the tool does, when to use it, and what it returns. The main gap is lack of detailed behavioral context (rate limits, errors, etc.), but for a URL generation tool, this is reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it mentions that SearchIndex is automatically selected from categories, which isn't in the schema, but this is minor. With high schema coverage, the baseline of 3 is appropriate as the description doesn't significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Amazonの検索結果ページへのアフィリエイトURLを生成する' (generates affiliate URLs to Amazon search results pages). It specifies the resource (Amazon products), the verb (search/generate URLs), and distinguishes from siblings by noting it doesn't return product data (unlike search_products or get_product_detail) and is specifically for Amazon (vs. search_rakuten_products).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'ユーザーがAmazonで買いたい場合や楽天で見つからない場合に呼ぶ' (call when users want to buy on Amazon or can't find products on Rakuten). It provides clear alternatives (Rakuten search) and context for usage, making it easy for an agent to decide when this tool is appropriate versus other search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_products — Search furniture, appliance, and gadget products — Grade A

Call this when the user says things like "I want a shelf," "a Dyson hair dryer," or "something that fits a 40 cm width." Searches the 31-category, 80+ brand catalog across keyword, size (mm), price, color, and brand. When results include related_items_hint, the accessory chain can be fetched with get_related_items. When buy_guide is present, relay best_for/avoid_if to the user to support the purchase decision. When seasonal_hints/active_sales are present, relay the sale information. Colors support aliases (白 → ホワイト/アイボリー, etc.). Present each product's affiliate_url to the user.

Parameters (JSON Schema)
- brand: Brand (e.g. ニトリ, IKEA, Dyson, Panasonic)
- color: Color (e.g. ホワイト, 白, ブラウン, 木目); aliases supported: 白 → ホワイト/アイボリー, etc.
- intent (required): [Required] Search purpose
- keyword: Keyword (partial match on product name, brand, and tags; space-separated terms are ANDed)
- category: Category (e.g. デスク, 美容家電, スマートホーム)
- price_max: Price ceiling (yen)
- price_min: Price floor (yen)
- depth_mm_max: Maximum depth (mm)
- depth_mm_min: Minimum depth (mm)
- width_mm_max: Maximum width (mm)
- width_mm_min: Minimum width (mm)
- height_mm_max: Maximum height (mm)
- height_mm_min: Minimum height (mm)
- in_stock_only: In-stock items only (default: true)
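The color-alias behavior (白 → ホワイト/アイボリー) can be pictured as a simple expansion table applied before matching. The table entries and function below are hypothetical; the server's actual alias list is not published here.

```python
# Hypothetical alias table: a user-supplied color expands to every
# catalog color it should match.
COLOR_ALIASES = {
    "白": ["ホワイト", "アイボリー"],
    "黒": ["ブラック"],
}

def expand_color(query_color):
    """Return the list of catalog colors a query color should match."""
    return [query_color] + COLOR_ALIASES.get(query_color, [])

print(expand_color("白"))        # ['白', 'ホワイト', 'アイボリー']
print(expand_color("ブラウン"))  # ['ブラウン'] — no alias, matched as-is
```

For the agent, the practical takeaway is that it can pass the user's casual color word (白) directly instead of normalizing it to a catalog color first.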
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing: the catalog scope (31 categories, 80+ brands), color alias handling, result structure features (related_items_hint, buy_guide, seasonal_hints), and the requirement to present affiliate_urls. It doesn't mention pagination, rate limits, or authentication needs, but provides substantial behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear front-loading of the primary use case, followed by catalog scope, search criteria, and result handling instructions. Every sentence adds value, though it could be slightly more concise by combining some result-handling instructions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex search tool with 14 parameters and no annotations/output schema, the description provides substantial context about catalog scope, search capabilities, and result handling. It mentions related tools and specific response features, though it doesn't describe the exact return format or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds some context about color aliases (mentioned in both description and schema) and implies keyword search supports partial matches and AND logic, but doesn't provide significant additional parameter semantics beyond what the comprehensive schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches across 31 categories and 80+ brands using multiple criteria (keyword, size, price, color, brand), providing specific examples of user queries. It distinguishes from siblings like search_amazon_products and search_rakuten_products by emphasizing cross-catalog search rather than platform-specific searches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('when the user says...'), mentions related tools (get_related_items for accessories), and provides guidance on handling specific result features (buy_guide, seasonal_hints, affiliate_urls). It clearly positions this as the primary search tool for the catalog.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_rakuten_products — Search Rakuten Ichiba for furniture/storage products — Grade A

Call this when a product is not in the catalog or the latest price/stock is needed. Searches the Rakuten Ichiba API in real time and returns results with prices, reviews, and images. Present each product's affiliate_url to the user.

Parameters (JSON Schema)
- hits: Number of results to fetch (1–30)
- sort: Sort order (default: standard)
- intent (required): [Required] Search purpose
- keyword (required): Rakuten search keyword
- price_max: Maximum price (yen)
- price_min: Minimum price (yen)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully mentions that results include affiliate URLs that should be presented to users, which is important behavioral context. However, it doesn't address rate limits, authentication requirements, error conditions, or pagination behavior that would be valuable for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences that convey purpose, usage context, and a key behavioral requirement. It's appropriately sized and front-loaded with the most important information about when to use the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 6 parameters and no output schema, the description provides adequate context about purpose and usage but lacks details about return format, error handling, and API limitations. The mention of affiliate URLs is helpful, but more behavioral context would improve completeness for a tool with real-time API calls.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so all parameters are documented in the schema itself. The description doesn't add any parameter-specific information beyond what's already in the schema descriptions. The baseline of 3 is appropriate when the schema does the heavy lifting for parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches Rakuten Market for furniture/storage products in real-time with price/reviews/images. It specifies the resource (Rakuten Market API) and verb (search), but doesn't explicitly differentiate from sibling tools like 'search_amazon_products' or 'search_products' beyond mentioning Rakuten specifically.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('when products aren't in catalog or real-time price/stock info is needed'). However, it doesn't explicitly state when NOT to use it or mention alternatives like 'search_amazon_products' or 'get_popular_products' that might be more appropriate in certain scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

suggest_by_space — Suggest products that fit a free space, across categories — Grade A

Call this for space-first searches such as "I want to put something in the 45 cm wide × 30 cm deep gap in the washroom." Given dimensions (mm), returns products that fit, across categories. Supports rotation fitting (width and depth are also tested swapped). If both a shelf and boxes are found, a coordination plan is generated automatically. Large items come with carry_in (delivery-route check); if risk is warning/critical, warn the user about delivery. Present each product's affiliate_url to the user.

Parameters (JSON Schema)
- intent (required): [Required] Describe the installation spot, use, and situation in detail
- depth_mm (required): Depth of the free space (mm)
- width_mm (required): Width of the free space (mm)
- height_mm (required): Height of the free space (mm)
- price_max: Budget ceiling (yen)
- categories: Categories to search (auto-inferred if omitted)
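The "rotation fit" the description mentions reduces to a small geometric check: a product fits if it fits as-is, or after swapping its width and depth, with height tested separately. The sketch below is illustrative, not the server's implementation.

```python
def fits_space(product, space):
    """product/space: (width_mm, depth_mm, height_mm) tuples.

    True if the product fits the space footprint either in its normal
    orientation or rotated 90° (width and depth swapped).
    """
    pw, pd, ph = product
    sw, sd, sh = space
    if ph > sh:          # height has no rotated alternative
        return False
    return (pw <= sw and pd <= sd) or (pd <= sw and pw <= sd)

# A 280×420×900 mm rack fits a 450×300×1000 mm gap only when rotated.
print(fits_space((280, 420, 900), (450, 300, 1000)))  # True
```

This is why the tool can surface matches a naive width/depth filter would miss: the 420 mm side exceeds the 300 mm depth as-is, but clears the 450 mm width once rotated.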
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: rotation fitting ('回転フィット対応'), automatic coordination plan generation when both shelves and boxes are found, and the requirement to present affiliate URLs to users. However, it doesn't mention potential limitations like result count, sorting, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences that each serve distinct purposes: use case context, core functionality, and additional features. It's front-loaded with the primary use case and could be slightly more concise by combining some functionality descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good context about the tool's purpose, usage scenarios, and key behavioral features. However, without an output schema, it could benefit from more information about return format, result structure, or what constitutes a 'coordination plan'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema - it mentions dimension specification and rotation fitting, but doesn't provide additional context about parameter interactions or usage patterns beyond what's in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('提案' - suggest, '返す' - return) and resources ('製品' - products, 'コーディネーションプラン' - coordination plans). It distinguishes from siblings by focusing on space-first, cross-category product suggestions rather than layout calculation, product comparison, or category-specific searches mentioned in sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'スペース起点で探すときに呼ぶ' (call when searching from a space-first perspective). It provides a concrete example ('洗面所の幅45cm×奥行30cmの隙間に何か置きたい' — "I want to put something in a 45 cm wide × 30 cm deep gap in the washroom") and distinguishes from alternatives by emphasizing cross-category searching versus category-specific tools like list_categories or search_products.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
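To make the space-first example above concrete, a minimal sketch of what such a tool call might look like follows. The tool name comes from the review; the argument names (`scene`, `width_mm`, `depth_mm`, `intent`) are illustrative assumptions, not taken from the actual suggest_by_space schema.

```python
# Hypothetical MCP tool-call payload for a space-first search.
# Argument names are assumptions for illustration only.
call = {
    "tool": "suggest_by_space",
    "arguments": {
        "scene": "洗面所・脱衣所",   # washroom / changing room
        "width_mm": 450,             # 45 cm gap width
        "depth_mm": 300,             # 30 cm gap depth
        "intent": "fill a narrow washroom gap with storage",
    },
}

# An MCP client would wrap this in a JSON-RPC "tools/call" request.
print(call["tool"])
```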

summarize_demand_signals — Summary of dimension-demand logs (Grade: B)

Summarizes the demand_signals accumulated from suggest_by_space / coordinate_storage. Call this when you want to understand which scenes, which fit states, and which safety flags are most common. Used for analysis, weekly reports, and prioritizing in-house product planning.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoHow many top entries to return
intentYes[Required] Why you want to see the demand summary
scene_nameNoRestrict to a specific scene (e.g. '洗面所・脱衣所', washroom/changing room)
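The parameter table above can be read as a small contract: `intent` is required, the other two are optional. A minimal sketch of building a valid argument set follows; the validation logic is illustrative and is not the server's actual behavior.

```python
# Sketch of assembling arguments for summarize_demand_signals,
# using the parameter names from the schema table above.
def build_args(intent, limit=None, scene_name=None):
    if not intent:
        # "intent" is marked as required in the schema
        raise ValueError("intent is required")
    args = {"intent": intent}
    if limit is not None:
        args["limit"] = limit            # how many top entries to return
    if scene_name is not None:
        args["scene_name"] = scene_name  # e.g. "洗面所・脱衣所"
    return args

args = build_args("weekly demand report", limit=10, scene_name="洗面所・脱衣所")
print(args)
```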
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions the tool provides summaries for analysis/reporting but doesn't disclose behavioral traits like whether it's read-only vs. mutating, authentication requirements, rate limits, pagination, or what format the summary returns. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences that each add value: source attribution, usage context, and application examples. It's front-loaded with the core purpose. There's minimal waste, though the Japanese title ('寸法需要ログの要約', "summary of dimension-demand logs") isn't referenced in the English description, creating minor inconsistency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides adequate purpose and usage context but lacks behavioral details (e.g., return format, safety, limits). For a tool that summarizes data for analysis, the description covers the 'why' but not the 'what' of outputs or operational constraints, leaving room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain 'intent' examples or 'scene_name' format). With high schema coverage, the baseline 3 is appropriate as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'summarize demand_signals accumulated from suggest_by_space / coordinate_storage.' It specifies the verb ('summarize') and resource ('demand_signals') with source attribution. However, it doesn't explicitly differentiate from sibling tools like 'get_popular_products' or 'search_products' that might also provide demand insights.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use it: 'when you want to understand which scenes, fit states, or safety flags are most common.' It lists specific use cases ('analysis, weekly reports, prioritizing in-house product planning'). However, it doesn't explicitly state when NOT to use it or mention alternatives among sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
