Product comparison (side-by-side comparison of price, size, reviews, and load capacity)

compare_products

Compare 2-5 furniture products side-by-side with detailed specifications, reviews, and buying recommendations to help you make informed purchase decisions.

Instructions

Call this when comparing 2-5 products, e.g. "Which is better, Nクリック or KALLAX?". Returns a side-by-side comparison table of price, size, reviews, and load capacity. When a catalog match is found, internal dimensions, compatible storage, and a buy_guide (best_for/avoid_if) are also included. The buy_guide's decision_hint is already reflected in the comparison recommendation. Present each product's affiliate_url to the user.

Input Schema

Name             Required  Description                                              Default
intent           Yes       (Required) Why you want to compare                       —
keywords         Yes       Search keywords for the products to compare (2-5 items)  —
compare_aspects  No        Aspects to compare (all aspects by default if omitted)   —
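Putting the schema together, a valid set of arguments might look like the following Python sketch. The intent text, keyword values, and aspect names are illustrative assumptions, not real catalog data; the actual call goes through an MCP client:

```python
# Illustrative arguments for compare_products, matching the input schema above.
args = {
    "intent": "Choosing shelving for a small apartment",    # required: why you want to compare
    "keywords": ["Nクリック", "KALLAX"],                      # required: 2-5 product search keywords
    "compare_aspects": ["price", "size", "load_capacity"],  # optional: omit to compare all aspects
}

# Minimal client-side checks mirroring the schema's stated constraints.
assert "intent" in args and "keywords" in args   # both fields are required
assert 2 <= len(args["keywords"]) <= 5           # the tool compares 2-5 products
```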
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes key behaviors: returns a comparison table, includes additional catalog-matched data (internal dimensions, compatible storage, buy_guide), incorporates decision_hint into recommendations, and requires affiliate_url presentation. However, it doesn't mention error handling, rate limits, authentication needs, or what happens when products can't be compared.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the primary use case and core functionality. All sentences contribute value, though the final sentence about affiliate_urls could be integrated more smoothly. There's no redundant information or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a comparison tool with 3 parameters and no output schema, the description adequately covers the core functionality but lacks details about the comparison table structure, error cases, or how catalog matching works. With no annotations and no output schema, the description should ideally provide more behavioral context about what the tool returns and its limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description doesn't add significant semantic information beyond what's in the schema descriptions, though it implies the keywords parameter should contain product names or identifiers rather than generic search terms. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: compare 2-5 products across price, size, reviews, and load capacity, returning a parallel comparison table. It specifies the exact resources (products) and verb (compare), distinguishing it from sibling tools like get_product_detail or search_products by focusing on multi-product comparison rather than single-product lookup or general search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ("when comparing 2-5 products") and provides a concrete example ("Which is better, Nクリック or KALLAX?"). It also implicitly distinguishes it from alternatives by specifying the comparison-table output format, unlike sibling tools that return single products or search results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ONE8943/ai-furniture-hub'
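For scripted use, the same lookup can be sketched in Python. The endpoint is copied from the curl example above; no request is actually sent here:

```python
from urllib.parse import urljoin

# Base endpoint and server path taken from the curl example above.
BASE = "https://glama.ai/api/mcp/v1/"
SERVER_PATH = "servers/ONE8943/ai-furniture-hub"

url = urljoin(BASE, SERVER_PATH)
# Pass `url` to any HTTP client (e.g. urllib.request.urlopen) to fetch
# this server's metadata as JSON.
print(url)
```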

If you have feedback or need assistance with the MCP directory API, please join our Discord server.