Glama

Server Details

Real-time Amazon data API built for AI agents. 200M+ products, 1B+ reviews, live BSR, pricing, and competitor data as clean JSON. 10 agent skills for market research, competitor monitoring, pricing analysis, and listing audits.

Status
Healthy
Transport
Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 10 of 10 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes focused on different aspects of Amazon data analysis (categories, products, markets, reviews, etc.), with clear boundaries described in their documentation. However, there is some potential overlap between products_search and competitors tools as both involve product discovery, though their specific focuses differ enough to avoid major confusion.

Naming Consistency: 5/5

All tools follow a perfectly consistent naming pattern: 'openapi_v2_' prefix followed by a descriptive resource/action combination in snake_case (e.g., 'categories', 'products_search', 'reviews_analysis'). This uniformity makes the tool set predictable and easy to navigate.

Tool Count: 5/5

With 10 tools, this is well within the ideal 3-15 range for a comprehensive Amazon data analysis server. Each tool serves a specific, valuable function in the domain, from category exploration to product search, competitor analysis, market research, and review processing, with no obvious redundancy.

Completeness: 4/5

The tool set covers the core workflows for Amazon product research comprehensively: discovery (categories, products, markets), analysis (competitors, history, reviews), and real-time data (product, reviews). The only minor gap is the lack of a dedicated tool for seller or brand analysis, but the existing tools provide robust coverage for most agent tasks.

Available Tools

16 tools
openapi_v2_categories: A

Categories V2

Query Amazon category hierarchy by ID, path, parent, or keyword.

Use this to discover category structure for filtering in other endpoints. Example: pass categoryKeyword="yoga" to find matching categories, or parentCategoryPath=["Sports & Outdoors"] to list child categories.

Query modes (mutually exclusive):

  • No parameters: Returns all root categories

  • categoryId: Get specific category by ID

  • categoryPath: Get specific category by path

  • parentCategoryId: Get children of parent category by ID

  • parentCategoryPath: Get children of parent category by path

  • categoryKeyword: Search categories by keyword

Related: /products/search and /markets/search accept categoryPath for filtering.

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "title": "Data",
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[list[Category]]",
  "examples": []
}
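
Every successful call wraps its payload in the OpenApiResponse envelope above. A minimal sketch of unpacking it follows; the success/data/error/meta field names come from the schema, while the category fields in the sample payload are assumptions for illustration only.

```python
# Sketch: unpack the shared OpenApiResponse envelope. The sample payload
# is made up; only the envelope field names come from the schema above.
import json

raw = """{"success": true,
          "data": [{"id": "1000", "name": "Books"}],
          "meta": {"requestId": "abc-123",
                   "timestamp": "2025-01-01T00:00:00Z",
                   "creditsConsumed": 1}}"""

resp = json.loads(raw)
if not resp.get("success", False):
    # On failure, details live in the optional "error" field.
    raise RuntimeError(resp.get("error"))
records = resp.get("data") or []
print(len(records), resp["meta"].get("creditsConsumed"))  # 1 1
```

Checking `success` before reading `data` matters because only `meta` is required by the schema.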

422: Validation Error (Content-Type: application/json)

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

  • categoryId (optional): Category identifier

  • marketplace (optional): Amazon marketplace code. Default: US

  • categoryPath (optional): Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops'])

  • categoryKeyword (optional): Filter by category name keyword (matches any level in category hierarchy, e.g., 'Electronics' or 'Laptops')

  • parentCategoryId (optional): Parent category ID

  • parentCategoryPath (optional): Parent category path
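
Since the query modes are mutually exclusive, a caller should pick at most one before sending the request. A minimal sketch, assuming nothing beyond the documented parameter names (the helper itself is hypothetical, not part of the API):

```python
# Sketch: build query parameters for the categories tool, enforcing the
# mutually exclusive query modes. Parameter names come from the schema;
# the helper function is illustrative only.

QUERY_MODES = {"categoryId", "categoryPath", "parentCategoryId",
               "parentCategoryPath", "categoryKeyword"}

def build_categories_params(marketplace="US", **kwargs):
    """Return a params dict; reject unknown names and mixed query modes."""
    unknown = set(kwargs) - QUERY_MODES
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    if len(QUERY_MODES & set(kwargs)) > 1:
        raise ValueError("query modes are mutually exclusive")
    return {"marketplace": marketplace, **kwargs}

# No mode selects all root categories; one mode runs that query.
print(build_categories_params(categoryKeyword="yoga"))
# {'marketplace': 'US', 'categoryKeyword': 'yoga'}
```

Calling it with no mode at all is valid and maps to the "all root categories" case.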
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the six query modes and their mutual exclusivity, providing example usage patterns, and mentioning the response structure. However, it doesn't explicitly state whether this is a read-only operation (though implied by 'query'), nor does it mention rate limits, authentication requirements, or pagination behavior that might be relevant for API tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose statement, usage guidance, query modes, and related endpoints. It's appropriately sized for a tool with six parameters and complex query logic. The only minor issue is that the response documentation (200 and 422 sections) is quite detailed and might be better handled by an output schema, but this is reasonable given the context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains the purpose, usage scenarios, query modes with examples, and relationships to other tools. The main gap is the lack of explicit behavioral information about read-only nature, authentication, or rate limits, but the query modes and examples provide substantial practical guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all six parameters thoroughly. The description adds some value by explaining the query modes and providing examples ('categoryKeyword="yoga"' and 'parentCategoryPath=["Sports & Outdoors"]'), but doesn't add significant semantic information beyond what's already in the parameter descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Query Amazon category hierarchy by ID, path, parent, or keyword' and 'Use this to discover category structure for filtering in other endpoints.' It specifies the exact resource (Amazon category hierarchy) and multiple query methods, distinguishing it from sibling tools like products_search or reviews_search which handle different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this to discover category structure for filtering in other endpoints' and names specific related endpoints ('/products/search and /markets/search accept categoryPath for filtering'). It also clearly explains the six mutually exclusive query modes, helping the agent choose the right parameter combination.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_competitors: A

Competitor Lookup V2

Search competitor products by keyword, brand, ASIN, or category with filters.

Use this to identify competing products around a specific listing or brand. Example: pass asin="B07FR2V8SH" to find all products competing in the same keywords and category. Data is based on the latest daily snapshot; results are paginated (max 100 per page). Related: /products/search for broader keyword discovery.

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "title": "Data",
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[list[Product]]",
  "examples": []
}

422: Validation Error (Content-Type: application/json)

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

  • asin (optional): Amazon Standard Identification Number (10-char alphanumeric). Example: 'B07FR2V8SH'.

  • page (optional): Page number

  • badges (optional): Include products with these badges. Example: ['bestSeller', 'amazonChoice', 'newRelease', 'aPlus', 'video'].

  • sortBy (optional): Sort field. Default: monthlySalesFloor

  • keyword (optional): Search keyword

  • pageSize (optional): Page size (max 100)

  • brandName (optional): Filter by brand name.

  • dateRange (optional): Time range filter. Relative ('30d') or month ('2026-01'). Default: 30d

  • sortOrder (optional): Sort direction: asc or desc. Default: desc

  • sellerName (optional): Filter by seller name.

  • marketplace (optional): Amazon marketplace code. Default: US

  • categoryPath (optional): Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops'])

  • fulfillments (optional): Fulfillment filter. Example: ['FBA', 'FBM'].

  • excludeBadges (optional): Exclude products with these badges. Supported: ['aPlus', 'video'].

  • excludeBrands (optional): Brand names to exclude. Example: ['Generic'].

  • includeBrands (optional): Brand names to include. Example: ['Apple', 'Samsung'].

  • excludeSellers (optional): Seller names to exclude.

  • includeSellers (optional): Seller names to include. Example: ['Apple Store'].

  • sellerCountMax (optional): Maximum number of sellers. Example: 20.

  • sellerCountMin (optional): Minimum number of sellers. Example: 1.

  • excludeKeywords (optional): Keywords to exclude from results. Example: ['refurbished', 'used'].

  • keywordMatchType (optional): Keyword match type: 'fuzzy', 'phrase', or 'exact'. Null = fuzzy.
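
With this many optional filters, a request typically starts from the documented defaults and layers filters on top. A minimal sketch, assuming only the parameter names and defaults from the schema (the merge helper is illustrative, not part of the API):

```python
# Sketch: assemble a competitors request body with the documented defaults
# (marketplace US, dateRange 30d, sortBy monthlySalesFloor, sortOrder desc)
# and the pageSize cap of 100. The helper function is hypothetical.

COMPETITOR_DEFAULTS = {
    "marketplace": "US",
    "dateRange": "30d",
    "sortBy": "monthlySalesFloor",
    "sortOrder": "desc",
}

def competitors_payload(**filters):
    """Merge caller filters over the defaults and enforce the page cap."""
    payload = {**COMPETITOR_DEFAULTS, **filters}
    if payload.get("pageSize", 1) > 100:
        raise ValueError("pageSize max is 100")
    return payload

print(competitors_payload(asin="B07FR2V8SH", pageSize=50))
```

Explicit filters override the defaults, so passing `sortOrder="asc"` flips the sort without touching anything else.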
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context beyond basic functionality: it mentions data is 'based on the latest daily snapshot,' results are 'paginated (max 100 per page),' and includes an example response structure. However, it lacks details on permissions, rate limits, or error handling beyond the example, leaving some behavioral gaps for a tool with 22 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the key information in the first few sentences. However, it embeds extensive example-response and output-schema details that add little for tool selection, and some formatting (such as markdown code blocks) adds clutter without proportional value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (22 parameters, no annotations, no output schema), the description is fairly complete. It covers purpose, usage, behavioral traits like data freshness and pagination, and relates to siblings. However, it could improve by explaining parameter interactions or common use cases more explicitly, given the high parameter count and lack of structured output guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 22 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as implying ASIN usage in the example and mentioning pagination limits. It doesn't provide additional syntax, format details, or usage examples for parameters beyond what the schema offers, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Search competitor products by keyword, brand, ASIN, or category with filters' and distinguishes it from sibling tools by mentioning '/products/search for broader keyword discovery.' It specifies the verb ('search'), resource ('competitor products'), and scope ('with filters'), making it highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this to identify competing products around a specific listing or brand' and gives an example with 'asin="B07FR2V8SH".' It also names an alternative ('/products/search for broader keyword discovery') and clarifies when to use this tool versus that alternative, offering clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_ecommerce_rerank: A

Rerank documents for ecommerce search queries

Rerank a list of product documents by relevance to ecommerce search queries.

Use this to improve product search result ordering. Pass one or more search queries and a shared list of product documents (titles, descriptions, or concatenated attributes). The model scores each document against each query and returns them sorted by relevance. Powered by a fine-tuned Qwen3-Reranker model optimized for ecommerce product matching.

Credits: 1 credit per query in the batch. A request with 3 queries costs 3 credits.

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[EcommerceRerankResult]",
  "examples": []
}

422: Validation Error (Content-Type: application/json)

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

  • topK (optional): Return only the top K most relevant documents per query. Omit to return all documents ranked.

  • queries (required): List of search queries to rerank documents against. Max 10 queries per request.

  • documents (required): List of document lists, one per query (documents[i] is reranked against queries[i]). Max 100 documents per query.
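
The pairing of queries[i] with documents[i], the batch limits, and the per-query credit cost can be checked client-side before spending credits. A minimal sketch under those documented constraints (the function name is illustrative, not part of the API):

```python
# Sketch: validate a rerank request against the documented limits
# (max 10 queries, max 100 documents per query, documents[i] pairs
# with queries[i]) and compute the cost at 1 credit per query.

def prepare_rerank(queries, documents, top_k=None):
    if not 1 <= len(queries) <= 10:
        raise ValueError("1-10 queries per request")
    if len(documents) != len(queries):
        raise ValueError("documents[i] must pair with queries[i]")
    if any(len(docs) > 100 for docs in documents):
        raise ValueError("max 100 documents per query")
    body = {"queries": queries, "documents": documents}
    if top_k is not None:
        body["topK"] = top_k
    return body, len(queries)  # 1 credit per query in the batch

body, credits = prepare_rerank(
    ["yoga mat", "running shoes"],
    [["Mat A", "Mat B"], ["Shoe A"]],
    top_k=1,
)
print(credits)  # 2
```

This mirrors the pricing note above: a request with 3 queries would cost 3 credits regardless of document counts.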
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses the underlying model ('fine-tuned Qwen3-Reranker'), cost structure ('1 credit per query'), and operational details like batch processing and scoring methodology. It does not contradict any annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and front-loaded key information, but includes extensive API response examples and schemas that could be considered redundant since they're already in structured fields, slightly reducing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, and no output schema, the description provides good completeness: it explains the tool's purpose, usage context, behavioral traits, and cost. However, it could better integrate with sibling tools by mentioning alternatives.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some context about parameter usage ('Pass one or more search queries and a shared list of product documents') but doesn't provide additional semantic details beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('rerank documents') and resources ('product documents for ecommerce search queries'), distinguishing it from sibling tools like search or analysis tools by focusing on re-ranking rather than retrieval or processing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool ('to improve product search result ordering') and mentions its specific application ('ecommerce product matching'), but does not explicitly state when not to use it or name alternative tools among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_fashion_image_embedding: A

Generate fashion image embeddings (768-dim vectors for similarity search)

Generate fashion-specific image embeddings using fine-tuned SigLIP2.

Encode product images into 768-dim vectors aligned with the text embedding space. Use cases: visual similarity search, image-to-text matching, duplicate detection, catalog indexing. Accepts HTTPS URLs or base64-encoded images. Vectors are L2-normalized by default. Credits: 1 credit per request.

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[FashionImageEmbeddingResult]",
  "examples": []
}

422: Validation Error (Content-Type: application/json)

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

  • imageUrls (required): Product images to encode: HTTPS URLs (e.g. 'https://cdn.example.com/product.jpg') or base64-encoded strings (with optional data URI prefix). Max 8 per request. Supported formats: JPEG, PNG, WebP.

  • normalizeVectors (optional): L2-normalize output vectors to unit length (default true). When true, dot product = cosine similarity.
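
The normalizeVectors default matters downstream: with unit-length vectors, a plain dot product is the cosine similarity, so ranking a catalog needs no extra normalization. A minimal sketch with toy 3-dim vectors standing in for the real 768-dim embeddings (names like img1/img2 are made up):

```python
# Sketch: with normalizeVectors=true (the default), returned vectors are
# unit length, so dot product equals cosine similarity. Toy 3-dim
# vectors stand in for the real 768-dim embeddings.
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

query_vec = l2_normalize([1.0, 2.0, 2.0])
catalog = {
    "img1": l2_normalize([1.0, 2.0, 2.0]),   # same direction as the query
    "img2": l2_normalize([2.0, -1.0, 0.0]),  # orthogonal to the query
}
best = max(catalog, key=lambda name: dot(query_vec, catalog[name]))
print(best)  # img1
```

If normalizeVectors were false, you would divide the dot product by both vector norms to recover cosine similarity.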
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions key behaviors: 768-dim vectors, L2-normalization default, SigLIP2 model, credit cost, and input constraints. However, it omits details on authentication, rate limits, idempotency, or error handling beyond the 422 schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with a clear summary, followed by details, use cases, input format, and credits. The inclusion of full response schemas is verbose but provides completeness. Could be trimmed slightly, but overall well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given two parameters, full schema coverage, and no provided output schema, the description compensates by including output schemas inline. It covers purpose, input constraints, use cases, and credit cost. It omits image file-size limits and detailed error behavior, but is otherwise fairly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for both parameters. The description adds value by clarifying that imageUrls can be HTTPS URLs or base64 with optional data URI prefix, and that normalizeVectors affects dot product interpretation. This goes beyond the schema's descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates fashion image embeddings (768-dim vectors) for similarity search, with specific verb 'Generate' and resource 'fashion image embeddings'. It explicitly lists use cases and the name distinguishes it from general-purpose image embedding tools like openapi_v2_image_embedding.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: accepts URLs/base64, max 8 images, supported formats, credits. However, it does not explicitly compare to sibling tools or state when to use this over alternatives like openapi_v2_fashion_similarity or openapi_v2_image_embedding.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_fashion_similarity (A)

Compute text-image similarity scores for fashion products

Compute cosine similarity between text queries and product images.

Encodes texts and images into the same 768-dim space, returns a score matrix. similarityScores[i][j] = relevance of textQueries[i] to imageUrls[j]. Higher = better match. Equivalent to text-embedding + image-embedding + dot product in one call. Credits: 1 credit per request.
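The indexing convention above (similarityScores[i][j] = relevance of textQueries[i] to imageUrls[j]) can be sketched with a toy score matrix built from already-normalized embeddings — plain Python with 2-d stand-in vectors; the real service works in the 768-dim space:

```python
def score_matrix(text_vecs, image_vecs):
    # similarityScores[i][j] = relevance of text i to image j;
    # with unit-length vectors each entry is a cosine similarity.
    return [[sum(t * v for t, v in zip(tv, iv)) for iv in image_vecs]
            for tv in text_vecs]

texts = [[1.0, 0.0], [0.0, 1.0]]   # stand-ins for query embeddings
images = [[1.0, 0.0], [0.6, 0.8]]  # stand-ins for image embeddings
m = score_matrix(texts, images)
# m == [[1.0, 0.6], [0.0, 0.8]]: query 0 matches image 0 best,
# query 1 matches image 1 best.
```

Rows index text queries and columns index images, so picking the best image for a query is an argmax over a row of the returned matrix.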

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[FashionSimilarityResult]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters

- imageUrls (required): Product images (HTTPS URLs or base64) to compare against text queries. Max 8.
- textQueries (required): Fashion text queries to compare against images (e.g. 'red summer dress'). Max 32.
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description explains return structure, indexing, and credit cost. It does not cover rate limits or auth, but these are not expected for this type of tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with a concise summary followed by details and output schemas. The inclusion of full output schemas makes it slightly lengthy but well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the operation, parameters, output format, and example responses comprehensively. With only two parameters and the output schema provided inline, it is fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the description adds context about fashion-specific queries, image formats, and max items, enhancing understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it computes text-image similarity scores for fashion products, explaining the embedding dimension and similarity matrix. It distinguishes from sibling tools for separate embeddings by noting it's equivalent to combining them in one call.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes the operation and equivalence to separate embeddings, giving context for when to use this combined approach. However, it does not explicitly list when not to use it or name sibling tools directly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_fashion_text_embedding (A)

Generate fashion text embeddings (768-dim vectors for similarity search)

Generate fashion-specific text embeddings using fine-tuned SigLIP2.

Encode fashion text into 768-dim vectors aligned with the image embedding space. Use cases: text-to-image search, semantic product matching, catalog indexing. Vectors are L2-normalized by default (dot product = cosine similarity). Credits: 1 credit per request.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[FashionTextEmbeddingResult]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters

- queries (required): Fashion text queries to encode. Examples: product titles ('Women Red Floral Midi Dress'), search queries ('casual summer outfit'), or attributes ('cotton, v-neck, knee-length'). Max 32 per request.
- normalizeVectors (optional): L2-normalize output vectors to unit length (default true). When true, dot product = cosine similarity.
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that vectors are L2-normalized by default (dot product equals cosine similarity) and mentions credits consumption. However, it does not mention authentication requirements, rate limits, or idempotency. The response schema is included, but behavioral expectations beyond the output are incomplete.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description includes multiple response examples and output schemas that lengthen the text. The essential information (purpose, dimensions, normalization, credits) is front-loaded, but the later sections are redundant given the output schema. It could be more concise by omitting the full response examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, output dimensions, normalization behavior, credits, and response structure. Given the tool’s complexity (text embedding generation), it is fairly comprehensive. However, it lacks authentication context and does not mention any rate limits, which would be helpful for an API tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema coverage is 100%, so baseline is 3. The description adds minimal value beyond the schema: it restates the default for normalizeVectors and implies its effect. The queries parameter is already well-described in the schema. Thus, the description does not significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates fashion text embeddings with 768-dim vectors for similarity search, specifies use cases (text-to-image search, semantic product matching, catalog indexing), and mentions the fine-tuned SigLIP2 model. This effectively differentiates from the sibling tool for image embeddings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear use cases (text-to-image search, semantic product matching, catalog indexing) and credits per request. However, it does not explicitly contrast with the sibling fashion_image_embedding tool or specify when not to use it, so guidance on alternatives is implied rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_image_detection (A)

Detect fashion items in images

Detect fashion items in an image and return their bounding boxes.

Analyzes an image to locate fashion items such as bags, shoes, clothing, watches, glasses, and jewelry. Returns bounding box coordinates, category classification, and confidence scores for each detected item. Use the classes parameter to filter for specific fashion categories. Credits: 1 credit per request.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[ImageDetectionResult]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters

- topK (required): Maximum number of detections to return (1-50).
- image (required): URL of the image to analyze. Must be a publicly accessible HTTPS URL.
- classes (optional): Fashion category class IDs to detect. Omit to detect all categories. Values: 0=Bag, 1=Cap, 2=Down-Clothing, 3=Glasses, 4=Jewelry, 5=Others, 6=Shoes, 7=Sock, 8=Up-Clothing, 9=Watch.
- timeout (optional): Request timeout in seconds (1-120). The request will be aborted if the upstream service does not respond within this time.
- returnImage (optional): Whether to return the annotated image with bounding boxes drawn.
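The numeric IDs in the classes parameter are easy to get wrong, so a client may want to translate labels into IDs before building a request. A small sketch — class_ids_for is a hypothetical client-side helper, not part of the API; the ID-to-label mapping is copied from the parameter documentation:

```python
# Class ID to label mapping, as documented for the classes parameter.
CLASS_LABELS = {
    0: "Bag", 1: "Cap", 2: "Down-Clothing", 3: "Glasses", 4: "Jewelry",
    5: "Others", 6: "Shoes", 7: "Sock", 8: "Up-Clothing", 9: "Watch",
}

def class_ids_for(labels):
    """Translate human-readable labels into the numeric IDs the API expects."""
    by_label = {v: k for k, v in CLASS_LABELS.items()}
    return [by_label[label] for label in labels]

# e.g. restrict detection to bags and shoes (payload shape is illustrative):
# payload = {"image": url, "topK": 10, "classes": class_ids_for(["Bag", "Shoes"])}
```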
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description effectively discloses behavior: it returns bounding boxes, categories, and confidence scores, and optionally an annotated image. It also states the credit cost per request, which is helpful for planning.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections. However, the example response and output schema details are lengthy and potentially redundant given the description already explains returns. Still, it's organized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a five-parameter tool with high schema coverage and no declared output schema, the description is fairly complete. It explains input, output components, and credits. It lacks details on the bounding box coordinate format, but overall is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by explaining that the classes parameter filters specific fashion categories and lists the class ID-to-label mapping. This goes beyond the schema's concise description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool detects fashion items in images, specifies return of bounding boxes, categories, and confidence scores. It distinguishes from siblings (e.g., openapi_v2_categories, openapi_v2_image_embedding) by focusing on detection with location info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit usage guidance: use for fashion item detection with optional filtering by classes. However, no explicit when-to-use vs when-not-to-use or alternatives among siblings are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_image_embedding (A)

Generate fashion item embeddings from images

Generate feature embeddings for fashion items detected in an image.

Automatically detects fashion items (bags, shoes, clothing, watches, etc.) in the image and generates feature embedding vectors for each detected item. Embeddings can be used for visual similarity search, product recommendations, and image-based product matching. Optionally include fashion category tags and text-image relevance scores. Credits: 1 credit per request.

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[ImageEmbeddingResult]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters

- text (optional): Text(s) for computing text-image relevance scores. Omit to skip relevance scoring.
- topK (optional): Maximum number of detected items to return. Omit to return all detections.
- image (required): URL of the image to analyze. Must be a publicly accessible HTTPS URL.
- timeout (optional): Request timeout in seconds (1-120). The request will be aborted if the upstream service does not respond within this time.
- withTag (optional): Whether to include fashion category tags (e.g. Bag, Shoes, Watch) in the response.
- boundingBoxes (optional): Pre-defined bounding boxes as [[x1, y1, x2, y2], ...]. Omit for automatic detection.
- withEmbedding (optional): Whether to include feature embedding vectors in the response.
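The boundingBoxes format ([[x1, y1, x2, y2], ...]) can be sanity-checked client-side before a request. The sketch below assumes the common image convention of a top-left origin with x1 < x2 and y1 < y2; the documentation states only the array shape, so treat the ordering check as an assumption:

```python
def valid_box(box):
    """Check a box follows the documented [x1, y1, x2, y2] shape,
    assuming top-left corner precedes bottom-right corner."""
    if len(box) != 4:
        return False
    x1, y1, x2, y2 = box
    return x1 < x2 and y1 < y2

ok = valid_box([10, 20, 110, 220])   # well-formed box
bad = valid_box([5, 5, 3, 9])        # rejected: x2 < x1
```

Filtering out malformed boxes locally avoids spending a credit on a request that would come back as a 422 validation error.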
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden. It discloses key behaviors: auto-detection of fashion items, generation of embedding vectors, optional inclusion of tags/relevance scores, and credit cost (1 credit per request). The return format is outlined with schema examples.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a summary paragraph, list of use cases, and detailed response schemas. Slightly verbose for the response section which could be summarized more concisely. Overall clear and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (seven parameters, no declared output schema but detailed inline examples), the description provides good context including credit cost, auto-detection behavior, and response structure. It is missing explicit error handling details beyond the validation error example.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description still adds value by clarifying the purpose of detected items and embeddings. It explains optional parameters like withTag and withEmbedding in context. However, it does not explicitly elaborate on each parameter beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it generates fashion item embeddings from images and lists use cases like visual similarity search and product recommendations. Its purpose is distinct from sibling tools, but the description could more explicitly contrast it with similar image tools like openapi_v2_image_detection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what the tool does but does not provide explicit guidance on when to use it versus alternatives. It mentions use cases but no when-not-to-use scenarios or comparisons with sibling tools like image_detection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_products_history (A)

Product History V2

Get historical time-series data for a single ASIN over a date range.

Returns columnar arrays: high-frequency metrics (price, BSR, sales, rating, sellerCount) as daily arrays aligned with timestamps, and low-frequency fields (title, imageUrl, badges, inventoryStatus) as changelog entries that only record changes. Max date range: 730 days. Related: /products/search to discover ASINs first.
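Because changelog fields only record changes, reading a low-frequency value (e.g. the title) at an arbitrary date means replaying the changelog up to that date. A hedged sketch, assuming entries arrive as ascending (timestamp, value) pairs — the real payload keys and shapes may differ:

```python
def value_at(changelog, ts):
    """Return the most recent changelog value at or before ts.

    changelog: list of (timestamp, value) pairs sorted ascending,
    recording only the points where the field changed. ISO 8601
    date strings compare correctly as plain strings.
    """
    current = None
    for entry_ts, value in changelog:
        if entry_ts > ts:
            break
        current = value
    return current

title_log = [("2024-01-01", "Old Title"), ("2024-06-15", "New Title")]
# value_at(title_log, "2024-03-01") returns "Old Title"
```

High-frequency metrics need no such replay: they are daily arrays aligned index-for-index with the timestamps array.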

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[ProductHistoryTimeSeriesItem]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
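Every endpoint shares this 422 shape, so a small helper can flatten the `detail` array into readable messages. A minimal sketch (the function name is ours, not part of the API):

```python
def format_validation_errors(payload: dict) -> list[str]:
    """Flatten a 422 HTTPValidationError payload into readable messages,
    joining each error's loc path with dots."""
    messages = []
    for err in payload.get("detail", []):
        location = ".".join(str(part) for part in err.get("loc", []))
        messages.append(f"{location or '<request>'}: {err.get('msg', '')}")
    return messages

errors = format_validation_errors({
    "detail": [{"loc": ["query", "startDate"],
                "msg": "invalid date format", "type": "value_error"}]
})
# errors == ["query.startDate: invalid date format"]
```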
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| asin | Yes | Amazon Standard Identification Number (10 chars) | |
| endDate | Yes | End date in YYYY-MM-DD format | |
| startDate | Yes | Start date in YYYY-MM-DD format | |
| marketplace | No | Amazon marketplace code. | US |
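The constraints above (10-char ASIN, ISO dates, 730-day max range) can be enforced client-side before spending credits. A minimal sketch, assuming the documented limits; the function name is illustrative and the endpoint itself enforces the real rules:

```python
from datetime import date

MAX_RANGE_DAYS = 730  # documented maximum date range for this endpoint

def build_history_params(asin: str, start_date: str, end_date: str,
                         marketplace: str = "US") -> dict:
    """Validate the documented constraints, then build query params."""
    if len(asin) != 10:
        raise ValueError("asin must be exactly 10 characters")
    start, end = date.fromisoformat(start_date), date.fromisoformat(end_date)
    if (end - start).days > MAX_RANGE_DAYS:
        raise ValueError("date range exceeds the 730-day maximum")
    return {"asin": asin, "startDate": start_date,
            "endDate": end_date, "marketplace": marketplace}

params = build_history_params("B07FR2V8SH", "2024-01-01", "2024-06-30")
```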
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context like the 730-day max range and details on response structure (e.g., columnar arrays vs. changelog entries). However, it omits critical behavioral traits such as rate limits, authentication needs, and error-handling specifics, leaving gaps even for a side-effect-free read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key information in the first two sentences, but it includes extensive, redundant output schema details (e.g., full JSON examples and schemas for 200 and 422 responses) that could be omitted since output schema is noted as false in context. This adds unnecessary length without enhancing tool understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a time-series data tool with no annotations and no output schema, the description does well by explaining the return data structure (columnar arrays vs. changelog entries) and constraints. However, it could be more complete by addressing potential errors or usage limits beyond the date range, slightly reducing the score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (asin, startDate, endDate, marketplace). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining ASIN format or date constraints. This meets the baseline for high schema coverage but offers no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get historical time-series data for a single ASIN over a date range.' It specifies the verb ('Get'), resource ('historical time-series data'), and scope ('single ASIN over a date range'), distinguishing it from siblings like 'products_search' for discovery. The title 'Product History V2' reinforces this focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'Max date range: 730 days' and 'Related: /products/search to discover ASINs first.' This guides the agent on constraints and prerequisites. However, it lacks explicit alternatives or exclusions, such as when to use this versus 'realtime_product' for current data, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_prompt_injection_detect (grade: B)

Detect prompt injection attacks

Detect prompt injection attacks in user input text.

Use this before passing untrusted text to an LLM to guard against instruction override, goal hijacking, data exfiltration, and jailbreak attacks. Example: send user messages, retrieved RAG documents, or tool outputs through this endpoint and block any input where isInjection is true. Most requests resolve in the BERT detection stage (<10 ms); LLM detection (~2 s) only activates when BERT detects an injection. Set useLlmDetection to false to force BERT-only classification. Supports English, Chinese, Japanese, Korean, French, Spanish, and German.
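The guard pattern above can be sketched as a small gate in front of the LLM. A minimal sketch, assuming the result sits under the standard `data` envelope with the documented `isInjection` flag; the function name is ours:

```python
def should_block(response: dict) -> bool:
    """Apply the documented guard: block any input where the
    detector's isInjection flag is true."""
    data = response.get("data") or {}
    return bool(data.get("isInjection"))

# Screening untrusted text before it reaches the LLM:
clean = {"success": True, "data": {"isInjection": False}}
attack = {"success": True, "data": {"isInjection": True}}
```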

Responses:

200: Successful Response (Success Response). Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[PromptInjectionResult]",
  "examples": []
}

422: Validation Error. Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| text | Yes | User input text to analyze for prompt injection attacks. | |
| useLlmDetection | No | Whether to allow the LLM detection stage for injections flagged by DeBERTa. Set to false to force fast DeBERTa-only classification. | |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the model used (fine-tuned DeBERTa) and the types of attacks detected, it lacks critical behavioral details such as rate limits, authentication requirements, performance characteristics, error handling, or what happens when attacks are detected. For a security tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is poorly structured and excessively long, including irrelevant details like full HTTP response examples and output schemas that should be handled separately. The core description is front-loaded but buried under verbose API documentation, wasting space and reducing clarity. Every sentence does not earn its place, as much of the content repeats or extends beyond the tool's functional description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a security detection tool with no annotations and no output schema, the description is incomplete. It fails to explain the return values (e.g., what 'classification label, confidence score, and boolean flag' mean in practice), error conditions, or operational constraints. The inclusion of output schema snippets is confusing and does not compensate for the lack of a proper output schema or behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'text' well-documented in the schema as 'User input text to analyze for prompt injection attacks.' The description adds minimal value beyond this, merely restating that it analyzes 'user input text' without providing additional context like examples, edge cases, or formatting requirements. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('detect', 'analyzes') and resources ('prompt injection attacks', 'user input text'), distinguishing it from sibling tools which focus on categories, competitors, markets, products, and reviews. It explicitly names the types of attacks it identifies, making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it analyzes 'user input text' for prompt injection attacks, suggesting it should be used when processing potentially malicious user inputs. However, it provides no explicit guidance on when to use this tool versus alternatives (e.g., other security tools or manual review) or any prerequisites, leaving the agent to infer appropriate scenarios without clear boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_realtime_product (grade: A)

Realtime Product V2

Get realtime product data for a given ASIN.

Use this for up-to-the-minute data when daily snapshots are insufficient. Example: pass asin="B07FR2V8SH" to get current price, rating, review count, BSR, and availability. Data fetched on demand; latency is higher than snapshot endpoints (2-5 seconds). Related: /products/search for snapshot data, /products/history for trend analysis.
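Since each realtime call incurs 2-5 seconds of latency, it is worth validating the ASIN before sending the request. A minimal sketch (the function name is ours; ASINs are 10-character alphanumeric codes):

```python
def build_realtime_request(asin: str, marketplace: str = "US") -> dict:
    """Validate the ASIN shape client-side, then build the request
    for the realtime product endpoint."""
    if len(asin) != 10 or not asin.isalnum():
        raise ValueError("asin must be a 10-character alphanumeric code")
    return {"asin": asin, "marketplace": marketplace}

req = build_realtime_request("B07FR2V8SH")
```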

Responses:

200: Successful Response (Success Response). Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[RealtimeProduct]",
  "examples": []
}

422: Validation Error. Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| asin | Yes | Amazon Standard Identification Number | |
| marketplace | No | Amazon marketplace code | US |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the real-time scraping mechanism ('Data fetched via Spider API in real time'), performance characteristics ('latency is higher than snapshot endpoints (2-5 seconds)'), and the types of data returned (price, rating, review count, BSR, availability). However, it doesn't mention rate limits, authentication requirements, or error handling specifics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with the core purpose and usage guidelines presented upfront. However, the inclusion of extensive response documentation (200 and 422 examples with schemas) adds bulk that could be streamlined, as some of this information might be redundant with structured output schemas if they were provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a real-time scraping tool with no annotations and no output schema, the description does a good job covering the essential context: purpose, usage scenarios, performance characteristics, and data types. It includes example responses which partially compensate for the missing output schema. However, it lacks details on error handling, rate limits, and authentication requirements that would be important for complete agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (asin and marketplace) with descriptions and constraints. The description adds minimal value beyond the schema by providing an example ASIN value ('B07FR2V8SH') and mentioning the marketplace parameter implicitly through context, but doesn't explain parameter interactions or usage nuances.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Get realtime product details') and resource ('by scraping the Amazon product page live'). It distinguishes itself from siblings by emphasizing real-time data acquisition versus snapshot endpoints, explicitly naming related tools like '/products/search' and '/products/history' for comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when daily snapshots are insufficient') and when not to (implied for snapshot data). It names specific alternatives ('/products/search for snapshot data, /products/history for trend analysis'), giving clear context for tool selection among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_realtime_reviews (grade: A)

Realtime Reviews V2

Fetch realtime reviews for a given ASIN.

Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data. Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights.
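The pagination contract above can be sketched as a drain loop. A minimal sketch under the stated assumptions; the `fetch_page` callable and the `reviews`/`nextCursor` field names stand in for the real endpoint call and response shape:

```python
def collect_all_reviews(fetch_page) -> list:
    """Drain cursor-based pagination: omit the cursor on the first call,
    then pass each response's nextCursor until it comes back null/None."""
    reviews, cursor = [], None
    while True:
        page = fetch_page(cursor)
        reviews.extend(page.get("reviews", []))
        cursor = page.get("nextCursor")
        if cursor is None:
            return reviews

# Stub standing in for the real endpoint call (shape is illustrative).
_pages = {
    None: {"reviews": ["great product", "fast shipping"], "nextCursor": "p2"},
    "p2": {"reviews": ["broke after a week"], "nextCursor": None},
}
all_reviews = collect_all_reviews(lambda cursor: _pages[cursor])
```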

Responses:

200: Successful Response (Success Response). Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[RealtimeReviews]",
  "examples": []
}

422: Validation Error. Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| asin | Yes | Amazon Standard Identification Number | |
| cursor | No | Pagination token from the previous response's nextCursor. Omit for the first page. When the response's nextCursor is null, there are no more pages. | |
| marketplace | No | Amazon marketplace code | US |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by explaining pagination behavior ('Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data'). It also mentions the 'live' nature of the data. However, it doesn't cover potential rate limits, authentication requirements, or data freshness guarantees that would be helpful for a real-time API.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and usage guidelines, but includes extensive response format documentation (200 and 422 responses with schemas) that duplicates what would typically be in an output schema. While the response documentation is valuable, it makes the description longer than necessary for tool selection purposes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, and no annotations, the description provides good context about pagination behavior, sibling tool differentiation, and the real-time nature of the data. The inclusion of response format documentation compensates for the lack of output schema. However, it could better address behavioral aspects like rate limits or data freshness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters (asin, cursor, marketplace). The description adds context about cursor usage ('omit cursor for the first page') but doesn't provide additional semantic meaning beyond what's in the parameter descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch realtime reviews for an ASIN from Amazon live'), identifies the resource (reviews for an ASIN), and distinguishes from siblings by mentioning '/reviews/search for offline data with AI tags' and '/reviews/analysis for aggregated insights'. This provides precise differentiation from related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Fetch realtime reviews') versus alternatives ('Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights'). It also provides clear pagination guidance ('omit cursor for the first page, then pass nextCursor from the previous response'). This gives comprehensive usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_reviews_analysis (grade: A)

Reviews Analyze V2

Analyze reviews by ASIN list or category to surface AI-generated sentiment, ratings, and consumer intelligence.

Use this to understand customer satisfaction and common complaints before sourcing a product. Example: pass asins=["B07FR2V8SH"] with period="6m" for 6-month review analysis. Requires ≥50 reviews for meaningful results. Data sourced from review analysis pipeline; ASIN mode supports max 100 ASINs. Related: /products/search to find ASINs, /categories for category paths.
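The mode/parameter pairing described above (asins required in 'asin' mode, categoryPath in 'category' mode, max 100 ASINs) can be checked client-side. A minimal sketch of those documented rules; the function name is illustrative:

```python
def build_analysis_request(mode: str, asins=None, category_path=None,
                           period: str = "6m", marketplace: str = "US") -> dict:
    """Enforce the documented mode/parameter pairing before calling."""
    if mode == "asin":
        if not asins:
            raise ValueError("asins is required when mode='asin'")
        if len(asins) > 100:
            raise ValueError("ASIN mode supports at most 100 ASINs")
        body = {"mode": mode, "asins": asins}
    elif mode == "category":
        if not category_path:
            raise ValueError("categoryPath is required when mode='category'")
        body = {"mode": mode, "categoryPath": category_path}
    else:
        raise ValueError("mode must be 'asin' or 'category'")
    body.update({"period": period, "marketplace": marketplace})
    return body

req = build_analysis_request("asin", asins=["B07FR2V8SH"], period="6m")
```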

Responses:

200: Successful Response (Success Response). Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[ReviewAnalysis]",
  "examples": []
}

422: Validation Error. Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| mode | Yes | Query mode: 'asin' for ASIN-based, 'category' for category-based | |
| asins | No | List of ASINs to analyze (max 100, required when mode='asin'). Example: ['B07FR2V8SH']. | |
| period | No | Time period for analysis | 6m |
| marketplace | No | Amazon marketplace code. | US |
| categoryPath | No | Category hierarchy from root. Example: ['Electronics', 'Computers']. Required when mode='category'. | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: 'Requires ≥50 reviews for meaningful results,' 'Data sourced from review analysis pipeline,' and 'ASIN mode supports max 100 ASINs.' However, it doesn't cover other important aspects like rate limits, authentication needs, or what happens when those requirements aren't met, leaving gaps for an analysis tool of this complexity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and usage, but it includes extensive, redundant output schema details that belong in structured fields, not the description. This adds unnecessary length and reduces focus. The core content is concise, but the inclusion of schema examples and error responses wastes space and detracts from clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (five parameters, no annotations, and no structured output schema among the context signals), the description is moderately complete. It covers purpose, usage, and some behavioral constraints but lacks detail on output format, error handling, and broader integration context. The output schema embedded in the description compensates partially, but it is verbose and not efficiently integrated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it provides an example ('Example: pass asins=["B07FR2V8SH"] with period="6m"') and hints at parameter interactions (e.g., ASIN mode limits). This meets the baseline for high schema coverage but doesn't significantly enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze reviews by ASIN list or category to extract sentiment, ratings, and consumer insights.' It specifies the resource (reviews) and action (analyze to extract insights). However, it doesn't explicitly differentiate from sibling tools like 'openapi_v2_reviews_search' or 'openapi_v2_realtime_reviews', which might also involve review analysis, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: 'Use this to understand customer satisfaction and common complaints before sourcing a product.' It mentions related tools ('/products/search to find ASINs, /categories for category paths') but doesn't explicitly state when NOT to use it or name direct alternatives among the siblings, so it falls short of a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
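The workflow the description implies (find ASINs with the search tool, then feed them to the analysis tool) can be sketched as follows. `call_tool` is a hypothetical stand-in for whatever MCP client dispatch you use, stubbed here with canned responses; the tool names come from the server's listing:

```python
# Sketch of the two-step workflow: discover ASINs via products_search,
# then analyze their reviews. call_tool is a stub for a real MCP client.

def call_tool(name, arguments):
    # Stub: a real client would send this over the MCP transport.
    if name == "openapi_v2_products_search":
        return {"asins": ["B07FR2V8SH", "B08N5WRWNW"]}
    if name == "openapi_v2_reviews_analysis":
        return {"analyzed": len(arguments["asins"])}
    raise KeyError(name)

found = call_tool("openapi_v2_products_search",
                  {"query": "usb hub", "marketplace": "US"})
result = call_tool("openapi_v2_reviews_analysis",
                   {"mode": "asin", "asins": found["asins"], "period": "6m"})
print(result)
```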
