APIClaw
Server Details
Real-time Amazon data API built for AI agents. 200M+ products, 1B+ reviews, live BSR, pricing, and competitor data as clean JSON. 10 agent skills for market research, competitor monitoring, pricing analysis, and listing audits.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 10 of 10 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes focused on different aspects of Amazon data analysis (categories, products, markets, reviews, etc.), with clear boundaries described in their documentation. However, there is some potential overlap between products_search and competitors tools as both involve product discovery, though their specific focuses differ enough to avoid major confusion.
All tools follow a perfectly consistent naming pattern: 'openapi_v2_' prefix followed by a descriptive resource/action combination in snake_case (e.g., 'categories', 'products_search', 'reviews_analysis'). This uniformity makes the tool set predictable and easy to navigate.
With 10 tools, this is well within the ideal 3-15 range for a comprehensive Amazon data analysis server. Each tool serves a specific, valuable function in the domain, from category exploration to product search, competitor analysis, market research, and review processing, with no obvious redundancy.
The tool set covers the core workflows for Amazon product research comprehensively: discovery (categories, products, markets), analysis (competitors, history, reviews), and real-time data (product, reviews). The only minor gap is the lack of a dedicated tool for seller or brand analysis, but the existing tools provide robust coverage for most agent tasks.
Available Tools
16 tools

openapi_v2_categories
Categories V2
Query Amazon category hierarchy by ID, path, parent, or keyword.
Use this to discover category structure for filtering in other endpoints. Example: pass categoryKeyword="yoga" to find matching categories, or parentCategoryPath=["Sports & Outdoors"] to list child categories.
Query modes (mutually exclusive):
No parameters: Returns all root categories
categoryId: Get specific category by ID
categoryPath: Get specific category by path
parentCategoryId: Get children of parent category by ID
parentCategoryPath: Get children of parent category by path
categoryKeyword: Search categories by keyword
Related: /products/search and /markets/search accept categoryPath for filtering.
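The mutual exclusivity of the query modes can be sketched in plain Python. This is an illustrative helper, not part of the API: the parameter names come from the table below, but the `build_categories_params` function and its validation logic are assumptions.

```python
# Hypothetical helper illustrating the six mutually exclusive query modes
# of the Categories V2 endpoint. Only the parameter names are from the
# tool definition; the helper itself is an illustration.

MODE_PARAMS = [
    "categoryId",
    "categoryPath",
    "parentCategoryId",
    "parentCategoryPath",
    "categoryKeyword",
]

def build_categories_params(**kwargs):
    """Return query params, enforcing at most one query mode."""
    modes = [k for k in kwargs if k in MODE_PARAMS and kwargs[k] is not None]
    if len(modes) > 1:
        raise ValueError(f"Query modes are mutually exclusive, got: {modes}")
    # No mode at all is valid: the endpoint returns all root categories.
    params = {k: v for k, v in kwargs.items() if v is not None}
    params.setdefault("marketplace", "US")
    return params

# Keyword search, as in the example above:
print(build_categories_params(categoryKeyword="yoga"))
# Listing children of a parent category by path:
print(build_categories_params(parentCategoryPath=["Sports & Outdoors"]))
```

Passing two modes at once (e.g. both categoryId and categoryKeyword) would raise, mirroring the "mutually exclusive" constraint stated above.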
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Category]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| categoryId | No | Category identifier | |
| marketplace | No | Amazon marketplace code | US |
| categoryPath | No | Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops']) | |
| categoryKeyword | No | Filter by category name keyword (matches any level in category hierarchy, e.g., 'Electronics' or 'Laptops') | |
| parentCategoryId | No | Parent category ID | |
| parentCategoryPath | No | Parent category path | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the six query modes and their mutual exclusivity, providing example usage patterns, and mentioning the response structure. However, it doesn't explicitly state whether this is a read-only operation (though implied by 'query'), nor does it mention rate limits, authentication requirements, or pagination behavior that might be relevant for API tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidance, query modes, and related endpoints. It's appropriately sized for a tool with six parameters and complex query logic. The only minor issue is that the response documentation (200 and 422 sections) is quite detailed and might be better handled by an output schema, but this is reasonable given the context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains the purpose, usage scenarios, query modes with examples, and relationships to other tools. The main gap is the lack of explicit behavioral information about read-only nature, authentication, or rate limits, but the query modes and examples provide substantial practical guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all six parameters thoroughly. The description adds some value by explaining the query modes and providing examples ('categoryKeyword="yoga"' and 'parentCategoryPath=["Sports & Outdoors"]'), but doesn't add significant semantic information beyond what's already in the parameter descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Query Amazon category hierarchy by ID, path, parent, or keyword' and 'Use this to discover category structure for filtering in other endpoints.' It specifies the exact resource (Amazon category hierarchy) and multiple query methods, distinguishing it from sibling tools like products_search or reviews_search which handle different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this to discover category structure for filtering in other endpoints' and names specific related endpoints ('/products/search and /markets/search accept categoryPath for filtering'). It also clearly explains the six mutually exclusive query modes, helping the agent choose the right parameter combination.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_competitors
Competitor Lookup V2
Search competitor products by keyword, brand, ASIN, or category with filters.
Use this to identify competing products around a specific listing or brand. Example: pass asin="B07FR2V8SH" to find all products competing in the same keywords and category. Data is based on the latest daily snapshot; results are paginated (max 100 per page). Related: /products/search for broader keyword discovery.
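A request against this tool might be assembled as below. The base URL is a placeholder and the helper is hypothetical; only the parameter names, the ASIN example, and the 100-per-page cap come from the tool definition.

```python
# Sketch of building a competitor-lookup query string. BASE_URL is a
# placeholder, not the real endpoint; the pageSize cap (max 100) is
# from the parameter table below.
from urllib.parse import urlencode

BASE_URL = "https://api.example.com/v2/competitors"  # placeholder

def competitor_query(asin, page_size=20, **filters):
    if page_size > 100:
        raise ValueError("pageSize is capped at 100 per page")
    params = {"asin": asin, "pageSize": page_size, "marketplace": "US", **filters}
    return f"{BASE_URL}?{urlencode(params, doseq=True)}"

url = competitor_query("B07FR2V8SH", sortBy="monthlySalesFloor", sortOrder="desc")
print(url)
```

Requesting more than 100 results per page raises before any call is made, matching the documented pagination limit.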
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Product]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | No | Amazon Standard Identification Number (10-char alphanumeric). Example: 'B07FR2V8SH'. | |
| page | No | Page number | |
| badges | No | Include products with these badges. Example: ['bestSeller', 'amazonChoice', 'newRelease', 'aPlus', 'video']. | |
| sortBy | No | Sort field | monthlySalesFloor |
| keyword | No | Search keyword | |
| pageSize | No | Page size (max 100) | |
| brandName | No | Filter by brand name. | |
| dateRange | No | Time range filter. Relative ('30d') or month ('2026-01'). Default '30d'. | 30d |
| sortOrder | No | Sort direction: asc or desc | desc |
| sellerName | No | Filter by seller name. | |
| marketplace | No | Amazon marketplace code | US |
| categoryPath | No | Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops']) | |
| fulfillments | No | Fulfillment filter. Example: ['FBA', 'FBM']. | |
| excludeBadges | No | Exclude products with these badges. Supported: ['aPlus', 'video']. | |
| excludeBrands | No | Brand names to exclude. Example: ['Generic']. | |
| includeBrands | No | Brand names to include. Example: ['Apple', 'Samsung']. | |
| excludeSellers | No | Seller names to exclude. | |
| includeSellers | No | Seller names to include. Example: ['Apple Store']. | |
| sellerCountMax | No | Maximum number of sellers. Example: 20. | |
| sellerCountMin | No | Minimum number of sellers. Example: 1. | |
| excludeKeywords | No | Keywords to exclude from results. Example: ['refurbished', 'used']. | |
| keywordMatchType | No | Keyword match type: 'fuzzy', 'phrase', or 'exact'. Null = fuzzy. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context beyond basic functionality: it mentions data is 'based on the latest daily snapshot,' results are 'paginated (max 100 per page),' and includes an example response structure. However, it lacks details on permissions, rate limits, or error handling beyond the example, leaving some behavioral gaps for a tool with 22 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with key information in the first few sentences. However, it includes extensive example response and output schema details that are redundant since there's no output schema provided in context signals, and some formatting (like markdown code blocks) adds clutter without adding proportional value for tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (22 parameters, no annotations, no output schema), the description is fairly complete. It covers purpose, usage, behavioral traits like data freshness and pagination, and relates to siblings. However, it could improve by explaining parameter interactions or common use cases more explicitly, given the high parameter count and lack of structured output guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 22 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as implying ASIN usage in the example and mentioning pagination limits. It doesn't provide additional syntax, format details, or usage examples for parameters beyond what the schema offers, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Search competitor products by keyword, brand, ASIN, or category with filters' and distinguishes it from sibling tools by mentioning '/products/search for broader keyword discovery.' It specifies the verb ('search'), resource ('competitor products'), and scope ('with filters'), making it highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this to identify competing products around a specific listing or brand' and gives an example with 'asin="B07FR2V8SH".' It also names an alternative ('/products/search for broader keyword discovery') and clarifies when to use this tool versus that alternative, offering clear context and exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_ecommerce_rerank
Rerank documents for ecommerce search queries
Rerank a list of product documents by relevance to ecommerce search queries.
Use this to improve product search result ordering. Pass one or more search queries and a shared list of product documents (titles, descriptions, or concatenated attributes). The model scores each document against each query and returns them sorted by relevance. Powered by a fine-tuned Qwen3-Reranker model optimized for ecommerce product matching.
Credits: 1 credit per query in the batch. A request with 3 queries costs 3 credits.
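The payload shape and credit arithmetic described above can be sketched as follows. The query and document strings are made up for illustration; the structural constraints (one document list per query, max 10 queries, max 100 documents per query, 1 credit per query) are from the tool definition.

```python
# Illustrative rerank payload: documents is a list of lists, with
# documents[i] scored against queries[i]. Content strings are invented.
queries = ["wireless earbuds", "noise cancelling headphones"]
documents = [
    ["Bluetooth 5.3 earbuds with charging case", "USB-C cable 2m"],
    ["Over-ear ANC headphones, 40h battery", "Phone stand, aluminum"],
]

assert len(queries) <= 10, "max 10 queries per request"
assert len(documents) == len(queries), "one document list per query"
assert all(len(d) <= 100 for d in documents), "max 100 documents per query"

payload = {"queries": queries, "documents": documents, "topK": 1}
credits = len(queries)  # 1 credit per query in the batch
print(credits)  # 2
```

A two-query batch like this one would cost 2 credits, matching the pricing note above.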
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[EcommerceRerankResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| topK | No | Return only the top K most relevant documents per query. Omit to return all documents ranked. | |
| queries | Yes | List of search queries to rerank documents against. Max 10 queries per request. | |
| documents | Yes | List of document lists, one per query (documents[i] is reranked against queries[i]). Max 100 documents per query. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses the underlying model ('fine-tuned Qwen3-Reranker'), cost structure ('1 credit per query'), and operational details like batch processing and scoring methodology. It does not contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections and front-loaded key information, but includes extensive API response examples and schemas that could be considered redundant since they're already in structured fields, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 100% schema coverage, and no output schema, the description provides good completeness: it explains the tool's purpose, usage context, behavioral traits, and cost. However, it could better integrate with sibling tools by mentioning alternatives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some context about parameter usage ('Pass one or more search queries and a shared list of product documents') but doesn't provide additional semantic details beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('rerank documents') and resources ('product documents for ecommerce search queries'), distinguishing it from sibling tools like search or analysis tools by focusing on re-ranking rather than retrieval or processing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('to improve product search result ordering') and mentions its specific application ('ecommerce product matching'), but does not explicitly state when not to use it or name alternative tools among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_fashion_image_embedding
Generate fashion image embeddings (768-dim vectors for similarity search)
Generate fashion-specific image embeddings using fine-tuned SigLIP2.
Encode product images into 768-dim vectors aligned with the text embedding space. Use cases: visual similarity search, image-to-text matching, duplicate detection, catalog indexing. Accepts HTTPS URLs or base64-encoded images. Vectors are L2-normalized by default. Credits: 1 credit per request.
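A request body for this tool might be assembled as below. The helper function is hypothetical; the field names and the 8-image limit come from the parameter table.

```python
# Illustrative builder for the image-embedding request body. Only the
# field names (imageUrls, normalizeVectors) and the max-8 limit are
# from the tool definition; the helper itself is an assumption.
def build_embedding_request(image_urls, normalize_vectors=True):
    """Assemble the JSON body; entries may be HTTPS URLs or base64 strings."""
    if not 1 <= len(image_urls) <= 8:
        raise ValueError("imageUrls must contain 1-8 entries")
    return {"imageUrls": image_urls, "normalizeVectors": normalize_vectors}

body = build_embedding_request(["https://cdn.example.com/product.jpg"])
print(body)
```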
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[FashionImageEmbeddingResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| imageUrls | Yes | Product images to encode: HTTPS URLs (e.g. 'https://cdn.example.com/product.jpg') or base64-encoded strings (with optional data URI prefix). Max 8 per request. Supported formats: JPEG, PNG, WebP. | |
| normalizeVectors | No | L2-normalize output vectors to unit length (default true). When true, dot product = cosine similarity. | true |
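The normalizeVectors claim above (that after L2 normalization, dot product equals cosine similarity) can be checked with plain Python; the vectors here are arbitrary two-dimensional examples rather than real 768-dim embeddings.

```python
# After L2 normalization, dot(a, b) equals cosine similarity of the
# original vectors, since each normalized vector has unit length.
import math

def l2_normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = [3.0, 4.0], [4.0, 3.0]
cos_sim = dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
na, nb = l2_normalize(a), l2_normalize(b)
print(round(dot(na, nb), 6), round(cos_sim, 6))  # 0.96 0.96
```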
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions key behaviors: 768-dim vectors, L2-normalization default, SigLIP2 model, credit cost, and input constraints. However, it omits details on authentication, rate limits, idempotency, or error handling beyond the 422 schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with a clear summary, followed by details, use cases, input format, and credits. The inclusion of full response schemas is verbose but provides completeness. Could be trimmed slightly, but overall well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given two parameters, full schema coverage, and no provided output schema, the description compensates by including output schemas inline. It covers purpose, input constraints, use cases, and credit cost. Missing are file size limits for images and more detailed error behavior, but overall it's fairly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for both parameters. The description adds value by clarifying that imageUrls can be HTTPS URLs or base64 with optional data URI prefix, and that normalizeVectors affects dot product interpretation. This goes beyond the schema's descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates fashion image embeddings (768-dim vectors) for similarity search, with specific verb 'Generate' and resource 'fashion image embeddings'. It explicitly lists use cases and the name distinguishes it from general-purpose image embedding tools like openapi_v2_image_embedding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: accepts URLs/base64, max 8 images, supported formats, credits. However, it does not explicitly compare to sibling tools or state when to use this over alternatives like openapi_v2_fashion_similarity or openapi_v2_image_embedding.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_fashion_similarity
Compute text-image similarity scores for fashion products
Compute cosine similarity between text queries and product images.
Encodes texts and images into the same 768-dim space, returns a score matrix. similarityScores[i][j] = relevance of textQueries[i] to imageUrls[j]. Higher = better match. Equivalent to text-embedding + image-embedding + dot product in one call. Credits: 1 credit per request.
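Consuming the score matrix described above is a row-wise argmax: for each query, the best-matching image is the column with the highest score. The matrix values and file names below are made up for illustration; only the indexing convention similarityScores[i][j] is from the tool definition.

```python
# similarityScores[i][j] = relevance of textQueries[i] to imageUrls[j],
# so the best image for each query is the row-wise argmax. Values are
# invented for illustration.
text_queries = ["red summer dress", "leather ankle boots"]
image_urls = ["img_dress.jpg", "img_boots.jpg", "img_hat.jpg"]

similarity_scores = [
    [0.91, 0.12, 0.20],  # scores for "red summer dress"
    [0.08, 0.88, 0.15],  # scores for "leather ankle boots"
]

for query, row in zip(text_queries, similarity_scores):
    best = max(range(len(row)), key=row.__getitem__)
    print(query, "->", image_urls[best])
# red summer dress -> img_dress.jpg
# leather ankle boots -> img_boots.jpg
```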
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[FashionSimilarityResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| imageUrls | Yes | Product images (HTTPS URLs or base64) to compare against text queries. Max 8. | |
| textQueries | Yes | Fashion text queries to compare against images (e.g. 'red summer dress'). Max 32. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description explains return structure, indexing, and credit cost. It does not cover rate limits or auth, but these are not expected for this type of tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with a concise summary followed by details and output schemas. The inclusion of full output schemas makes it slightly lengthy but well-organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the operation, parameters, output format, and example responses comprehensively. With only two parameters and the output schema provided inline, it is fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds context about fashion-specific queries, image formats, and max items, enhancing understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it computes text-image similarity scores for fashion products, explaining the embedding dimension and similarity matrix. It distinguishes from sibling tools for separate embeddings by noting it's equivalent to combining them in one call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes the operation and equivalence to separate embeddings, giving context for when to use this combined approach. However, it does not explicitly list when not to use it or name sibling tools directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_fashion_text_embedding
Generate fashion text embeddings (768-dim vectors for similarity search)
Generate fashion-specific text embeddings using fine-tuned SigLIP2.
Encode fashion text into 768-dim vectors aligned with the image embedding space. Use cases: text-to-image search, semantic product matching, catalog indexing. Vectors are L2-normalized by default (dot product = cosine similarity). Credits: 1 credit per request.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[FashionTextEmbeddingResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| queries | Yes | Fashion text queries to encode. Examples: product titles ('Women Red Floral Midi Dress'), search queries ('casual summer outfit'), or attributes ('cotton, v-neck, knee-length'). Max 32 per request. | |
| normalizeVectors | No | L2-normalize output vectors to unit length (default true). When true, dot product = cosine similarity. | true |
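A sketch of how an agent might assemble a request body and sanity-check a returned vector. Field names follow the parameter table above; the query strings and the helper are illustrative assumptions, not part of the API:

```python
import json
import math

# Hypothetical request body; field names mirror the parameter table above.
payload = {
    "queries": ["Women Red Floral Midi Dress", "casual summer outfit"],
    "normalizeVectors": True,
}
assert len(payload["queries"]) <= 32  # documented per-request limit
body = json.dumps(payload)

def is_unit_length(vec, tol=1e-6):
    # With normalizeVectors=true every returned vector should have L2 norm 1,
    # so dot products can be used directly as cosine similarities.
    return abs(math.sqrt(sum(x * x for x in vec)) - 1.0) < tol
```

If normalizeVectors is false, callers must normalize (or compute full cosine similarity) themselves before comparing vectors.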
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that vectors are L2-normalized by default (dot product equals cosine similarity) and mentions credits consumption. However, it does not mention authentication requirements, rate limits, or idempotency. The response schema is included, but behavioral expectations beyond the output are incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description includes multiple response examples and output schemas that lengthen the text. The essential information (purpose, dimensions, normalization, credits) is front-loaded, but the later sections are redundant given the output schema. It could be more concise by omitting the full response examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers purpose, output dimensions, normalization behavior, credits, and response structure. Given the tool’s complexity (text embedding generation), it is fairly comprehensive. However, it lacks authentication context and does not mention any rate limits, which would be helpful for an API tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, so baseline is 3. The description adds minimal value beyond the schema: it restates the default for normalizeVectors and implies its effect. The queries parameter is already well-described in the schema. Thus, the description does not significantly enhance parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates fashion text embeddings with 768-dim vectors for similarity search, specifies use cases (text-to-image search, semantic product matching, catalog indexing), and mentions the fine-tuned SigLIP2 model. This effectively differentiates from the sibling tool for image embeddings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear use cases (text-to-image search, semantic product matching, catalog indexing) and credits per request. However, it does not explicitly contrast with the sibling fashion_image_embedding tool or specify when not to use it, so guidance on alternatives is implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_image_detection
Detect fashion items in images
Detect fashion items in an image and return their bounding boxes.
Analyzes an image to locate fashion items such as bags, shoes, clothing, watches, glasses, and jewelry. Returns bounding box coordinates, category classification, and confidence scores for each detected item. Use the classes parameter to filter for specific fashion categories. Credits: 1 credit per request.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[ImageDetectionResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| topK | Yes | Maximum number of detections to return (1-50). | |
| image | Yes | URL of the image to analyze. Must be a publicly accessible HTTPS URL. | |
| classes | No | Fashion category class IDs to detect. Omit to detect all categories. Values: 0=Bag, 1=Cap, 2=Down-Clothing, 3=Glasses, 4=Jewelry, 5=Others, 6=Shoes, 7=Sock, 8=Up-Clothing, 9=Watch. | |
| timeout | No | Request timeout in seconds (1-120). The request will be aborted if the upstream service does not respond within this time. | |
| returnImage | No | Whether to return the annotated image with bounding boxes drawn. | |
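A sketch of a request body using the class-ID mapping from the parameter table (0=Bag, 6=Shoes). The image URL and chosen values are illustrative assumptions:

```python
# Hypothetical request: detect only bags and shoes in one image.
payload = {
    "image": "https://example.com/street-style.jpg",
    "topK": 10,          # must be in 1-50 per the parameter table
    "classes": [0, 6],   # 0=Bag, 6=Shoes; omit to detect all categories
    "returnImage": False,
}
assert 1 <= payload["topK"] <= 50
assert all(0 <= c <= 9 for c in payload["classes"])
```

Omitting classes entirely detects all ten categories, which is usually the right default for exploratory calls.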
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description effectively discloses behavior: it returns bounding boxes, categories, and confidence scores, optionally returns an annotated image, and states the credit cost per request, which is helpful for planning.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections. However, the example response and output schema details are lengthy and somewhat redundant, given that the description already explains the return values. Still, it is organized and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-param tool with high schema coverage and no output schema, the description is fairly complete. It explains input, output components, and credits. It lacks details on the bounding box coordinate format, but overall is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by explaining that the classes parameter filters specific fashion categories and lists the class ID-to-label mapping. This goes beyond the schema's concise description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool detects fashion items in images, specifies return of bounding boxes, categories, and confidence scores. It distinguishes from siblings (e.g., openapi_v2_categories, openapi_v2_image_embedding) by focusing on detection with location info.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage guidance: use it for fashion item detection, optionally filtered by classes. However, it gives no explicit when-to-use versus when-not-to-use guidance and names no sibling alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_image_embedding
Generate fashion item embeddings from images
Generate feature embeddings for fashion items detected in an image.
Automatically detects fashion items (bags, shoes, clothing, watches, etc.) in the image and generates feature embedding vectors for each detected item. Embeddings can be used for visual similarity search, product recommendations, and image-based product matching. Optionally include fashion category tags and text-image relevance scores. Credits: 1 credit per request.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[ImageEmbeddingResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| text | No | Text(s) for computing text-image relevance scores. Omit to skip relevance scoring. | |
| topK | No | Maximum number of detected items to return. Omit to return all detections. | |
| image | Yes | URL of the image to analyze. Must be a publicly accessible HTTPS URL. | |
| timeout | No | Request timeout in seconds (1-120). The request will be aborted if the upstream service does not respond within this time. | |
| withTag | No | Whether to include fashion category tags (e.g. Bag, Shoes, Watch) in the response. | |
| boundingBoxes | No | Pre-defined bounding boxes as [[x1, y1, x2, y2], ...]. Omit for automatic detection. | |
| withEmbedding | No | Whether to include feature embedding vectors in the response. | |
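A sketch of a request that bypasses auto-detection by supplying boxes in the documented [[x1, y1, x2, y2], ...] form. Field names follow the parameter table; the URL, coordinates, and text query are illustrative assumptions:

```python
# Hypothetical request with pre-defined bounding boxes instead of auto-detection.
payload = {
    "image": "https://example.com/lookbook.jpg",
    "boundingBoxes": [[34, 50, 210, 400], [220, 60, 380, 300]],
    "withTag": True,                     # include category tags (Bag, Shoes, ...)
    "withEmbedding": True,               # include the embedding vectors
    "text": ["red leather handbag"],     # optional: adds text-image relevance scores
}
# Sanity-check the box convention: top-left corner precedes bottom-right.
for x1, y1, x2, y2 in payload["boundingBoxes"]:
    assert x1 < x2 and y1 < y2
```

Omitting boundingBoxes falls back to automatic detection, which is the simpler path when item locations are unknown.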
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It discloses key behaviors: auto-detection of fashion items, generation of embedding vectors, optional inclusion of tags and relevance scores, and the credit cost (1 credit per request). The return format is outlined with schema examples.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with a summary paragraph, list of use cases, and detailed response schemas. Slightly verbose for the response section which could be summarized more concisely. Overall clear and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 parameters, no strict output schema but detailed inline examples), the description provides good context, including the credit cost, auto-detection behavior, and response structure. It lacks explicit error-handling details beyond the validation-error example.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description still adds value by clarifying the purpose of detected items and embeddings. It explains optional parameters like withTag and withEmbedding in context. However, it does not explicitly elaborate on each parameter beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates fashion item embeddings from images and lists use cases such as visual similarity search and product recommendations. Its purpose is distinct from sibling tools, but it could contrast more explicitly with similar image tools like openapi_v2_image_detection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but does not provide explicit guidance on when to use it versus alternatives. It mentions use cases but no when-not-to-use scenarios or comparisons with sibling tools like image_detection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_markets_search
Markets Search V2
Search market data by category with aggregated demand, competition, and pricing metrics.
Use this to evaluate market size and competition before entering a niche. Example: search "Pet Supplies" with sampleAvgMonthlySalesMin >= 200 to find categories with proven demand. Data is based on top-100 product samples per category from the latest daily snapshot; results paginated (max 100 per page). Related: /products/search for product-level data in a category.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Market]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| topN | No | Number of top products to analyze: '3', '5', '10', or '20'. Affects top* response fields (e.g., topAvgMonthlySales) | 10 |
| sortBy | No | Sort field (matches response field names) | sampleAvgMonthlyRevenue |
| pageSize | No | Page size (max 100) | |
| dateRange | No | Time range filter. Relative ('30d') or month ('2026-02'). | 30d |
| sortOrder | No | Sort direction: asc or desc | desc |
| sampleType | No | Sampling method for market metrics: 'bySale100' = analyze top 100 products by sales volume, 'byBsr100' = top 100 by BSR rank, 'avg' = category-wide average | bySale100 |
| marketplace | No | Amazon marketplace code. | US |
| categoryPath | No | Category hierarchy from root. Example: ['Electronics', 'Computers', 'Laptops']. | |
| topAvgBsrMax | No | Maximum Top N average main-category BSR. | |
| topAvgBsrMin | No | Minimum Top N average main-category BSR. | |
| categoryKeyword | No | Filter by category name keyword (matches any level in category hierarchy, e.g., 'Electronics' or 'Laptops') | |
| sampleAvgBsrMax | No | Maximum sample average main-category BSR. | |
| sampleAvgBsrMin | No | Minimum sample average main-category BSR (lower = better). | |
| topSalesRateMax | No | Maximum Top N sales share as decimal. | |
| topSalesRateMin | No | Minimum Top N sales share as decimal. Example: 0.5 = 50%. | |
| newProductPeriod | No | Define 'new product' as listed within X months: '1', '3', '6', or '12'. Affects sampleNewSku* response fields | 3 |
| sampleAmzRateMax | No | Maximum sample Amazon-sold product rate as decimal. | |
| sampleAmzRateMin | No | Minimum sample Amazon-sold product rate as decimal. | |
| sampleFbaRateMax | No | Maximum sample FBA product rate as decimal. | |
| sampleFbaRateMin | No | Minimum sample FBA product rate as decimal. | |
| totalSkuCountMax | No | Maximum total SKU count in category. | |
| totalSkuCountMin | No | Minimum total SKU count in category. | |
| sampleAvgPriceMax | No | Maximum sample average price. | |
| sampleAvgPriceMin | No | Minimum sample average price. Example: 10.00. | |
| sampleSkuCountMax | No | Maximum sample SKU count. | |
| sampleSkuCountMin | No | Minimum sample SKU count. | |
| sampleAvgRatingMax | No | Maximum sample average star rating (0.0–5.0). | |
| sampleAvgRatingMin | No | Minimum sample average star rating (0.0–5.0). | |
| sampleBrandCountMax | No | Maximum sample unique brand count. | |
| sampleBrandCountMin | No | Minimum sample unique brand count. | |
| sampleNewSkuRateMax | No | Maximum sample new product rate as decimal. | |
| sampleNewSkuRateMin | No | Minimum sample new product rate as decimal. | |
| sampleNewSkuCountMax | No | Maximum sample new product count. | |
| sampleNewSkuCountMin | No | Minimum sample new product count. | |
| sampleSellerCountMax | No | Maximum sample unique seller count. | |
| sampleSellerCountMin | No | Minimum sample unique seller count. | |
| topBrandSalesRateMax | No | Maximum Top N brand concentration ratio as decimal. | |
| topBrandSalesRateMin | No | Minimum Top N brand concentration ratio as decimal. | |
| topAvgMonthlySalesMax | No | Maximum Top N average monthly sales. | |
| topAvgMonthlySalesMin | No | Minimum Top N average monthly sales. Units sold. | |
| topSellerSalesRateMax | No | Maximum Top N seller concentration ratio as decimal. | |
| topSellerSalesRateMin | No | Minimum Top N seller concentration ratio as decimal. | |
| sampleAvgRatingCountMax | No | Maximum sample average rating count per product. | |
| sampleAvgRatingCountMin | No | Minimum sample average rating count per product. | |
| sampleNewSkuAvgPriceMax | No | Maximum new product average price. | |
| sampleNewSkuAvgPriceMin | No | Minimum new product average price. | |
| topAvgMonthlyRevenueMax | No | Maximum Top N average monthly revenue. | |
| topAvgMonthlyRevenueMin | No | Minimum Top N average monthly revenue. | |
| sampleAvgMonthlySalesMax | No | Maximum sample average monthly sales. Units sold. | |
| sampleAvgMonthlySalesMin | No | Minimum sample average monthly sales. Units sold. Example: 100. | |
| sampleAvgPackageVolumeMax | No | Maximum sample average package volume in in³. | |
| sampleAvgPackageVolumeMin | No | Minimum sample average package volume in in³. | |
| sampleAvgPackageWeightMax | No | Maximum sample average package weight in oz. | |
| sampleAvgPackageWeightMin | No | Minimum sample average package weight in oz. | |
| sampleAvgMonthlyRevenueMax | No | Maximum sample average monthly revenue. | |
| sampleAvgMonthlyRevenueMin | No | Minimum sample average monthly revenue. Example: 5000.00. | |
| sampleNewSkuAvgMonthlySalesMax | No | Maximum new product average monthly sales. | |
| sampleNewSkuAvgMonthlySalesMin | No | Minimum new product average monthly sales. | |
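The "Pet Supplies" example from the tool description can be sketched as a request body. Field names follow the parameter table above; the specific thresholds are illustrative:

```python
# Hypothetical request: screen Pet Supplies niches with proven demand.
payload = {
    "marketplace": "US",
    "categoryKeyword": "Pet Supplies",
    "sampleType": "bySale100",         # metrics from the top 100 products by sales
    "sampleAvgMonthlySalesMin": 200,   # >= 200 units/month on average, per the example
    "sortBy": "sampleAvgMonthlyRevenue",
    "sortOrder": "desc",
    "pageSize": 100,                   # documented maximum page size
}
assert payload["pageSize"] <= 100
```

Because results are paginated, iterating page until page equals totalPages in the response meta is needed to exhaust a large result set.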
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: data source ('based on top-100 product samples per category from the latest daily snapshot'), pagination behavior ('results paginated (max 100 per page)'), and usage context ('evaluate market size and competition'). It doesn't mention rate limits or authentication needs, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and usage guidance, but includes extensive API response documentation that belongs in the output schema field. The first paragraph is efficient, but the subsequent HTTP response details and output schema duplication make it unnecessarily long and poorly structured for a tool description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (59 parameters) and no output schema provided, the description does well by explaining the tool's purpose, usage context, data source, and pagination. It could better address the parameter complexity and provide more guidance on navigating the many filtering options, but covers the essential operational context needed to use this search tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema: it mentions 'sampleAvgMonthlySalesMin >= 200' as an example but does not explain parameter relationships or provide additional context for the 59 parameters. The description doesn't compensate for that complexity, but it doesn't need to, since schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Search') and resource ('market data by category'), and distinguishes it from sibling tools by pointing to '/products/search for product-level data in a category'. It also specifies the type of metrics returned (aggregated demand, competition, and pricing metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('evaluate market size and competition before entering a niche') and when to use an alternative ('Related: /products/search for product-level data in a category'). It also includes a concrete example with specific parameter usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_products_history (A)
Product History V2
Get historical time-series data for a single ASIN over a date range.
Returns columnar arrays: high-frequency metrics (price, BSR, sales, rating, sellerCount) as daily arrays aligned with timestamps, and low-frequency fields (title, imageUrl, badges, inventoryStatus) as changelog entries that only record changes. Max date range: 730 days. Related: /products/search to discover ASINs first.
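A minimal sketch of consuming this columnar layout. Since no output schema is provided, the field names (`timestamps`, `price`, `bsr`) are illustrative assumptions, not confirmed keys:

```python
def align_series(data):
    """Pair each timestamp with the same-index values from the
    high-frequency metric arrays. Field names here are assumptions;
    the real payload keys are not documented above."""
    return [
        {"date": ts, "price": data["price"][i], "bsr": data["bsr"][i]}
        for i, ts in enumerate(data["timestamps"])
    ]

sample = {
    "timestamps": ["2024-01-01", "2024-01-02"],
    "price": [19.99, 18.49],
    "bsr": [1200, 1150],
}
rows = align_series(sample)
# rows[0] == {"date": "2024-01-01", "price": 19.99, "bsr": 1200}
```

Low-frequency changelog fields would need a different treatment (carry-forward of the last recorded change) rather than index alignment.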
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[ProductHistoryTimeSeriesItem]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number (10 chars) | |
| endDate | Yes | End date in YYYY-MM-DD format | |
| startDate | Yes | Start date in YYYY-MM-DD format | |
| marketplace | No | Amazon marketplace code. | US |
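The documented constraints (10-character ASIN, YYYY-MM-DD dates, 730-day maximum range) can be checked client-side before spending a request. A hedged sketch; only the constraints come from the table and description above, the helper itself is hypothetical:

```python
from datetime import date

MAX_RANGE_DAYS = 730  # documented maximum date range

def history_params(asin, start_date, end_date, marketplace="US"):
    """Validate inputs and build the query dict for this endpoint."""
    if len(asin) != 10:
        raise ValueError("ASIN must be exactly 10 characters")
    span = (date.fromisoformat(end_date) - date.fromisoformat(start_date)).days
    if span < 0 or span > MAX_RANGE_DAYS:
        raise ValueError(f"date range must be 0-{MAX_RANGE_DAYS} days")
    return {"asin": asin, "startDate": start_date,
            "endDate": end_date, "marketplace": marketplace}

params = history_params("B07FR2V8SH", "2024-01-01", "2024-06-30")
```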
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context, such as the 730-day maximum range and the response structure (columnar arrays vs. changelog entries). However, it omits behavioral traits such as rate limits, authentication needs, and error-handling specifics; even for a read-only tool, these are notable gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information in its first two sentences, but it includes extensive, redundant output-schema details (full JSON examples and schemas for the 200 and 422 responses) that could be omitted, since no structured output schema is provided. This adds length without improving understanding of the tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a time-series data tool with no annotations and no output schema, the description does well by explaining the return data structure (columnar arrays vs. changelog entries) and constraints. However, it could be more complete by addressing potential errors or usage limits beyond the date range, slightly reducing the score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (asin, startDate, endDate, marketplace). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining ASIN format or date constraints. This meets the baseline for high schema coverage but offers no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get historical time-series data for a single ASIN over a date range.' It specifies the verb ('Get'), resource ('historical time-series data'), and scope ('single ASIN over a date range'), distinguishing it from siblings like 'products_search' for discovery. The title 'Product History V2' reinforces this focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Max date range: 730 days' and 'Related: /products/search to discover ASINs first.' This guides the agent on constraints and prerequisites. However, it lacks explicit alternatives or exclusions, such as when to use this versus 'realtime_product' for current data, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_products_search (A)
Products Search V2
Search Amazon products by keyword, category, and multi-dimensional filters.
Use this to discover products in a specific niche, analyze competitor listings, or find high-demand low-competition opportunities. Example: search "yoga mat" in Sports & Outdoors with monthlySalesFloor >= 500 and price <= $30 to find proven sellers in an affordable range. Data is based on the latest daily snapshot; results are paginated (max 100 per page). Related: /products/competitors for competitor analysis, /products/history for trends.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Product]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| badges | No | Include products with these badges. Example: ['bestSeller', 'amazonChoice', 'newRelease', 'aPlus', 'video']. | |
| bsrMax | No | Maximum Best Sellers Rank. Example: 100000. | |
| bsrMin | No | Minimum Best Sellers Rank (lower = better). Example: 1. | |
| lqsMax | No | Maximum Listing Quality Score | |
| lqsMin | No | Minimum Listing Quality Score | |
| sortBy | No | Sort field | monthlySalesFloor |
| keyword | No | Search keyword | |
| pageSize | No | Page size (max 100) | |
| priceMax | No | Maximum product price. Example: 99.99. | |
| priceMin | No | Minimum product price. Example: 9.99. | |
| dateRange | No | Time range filter. Relative ('30d') or month ('2026-02'). Null = latest. | |
| fbaFeeMax | No | Maximum FBA fee. Example: 15.00. | |
| fbaFeeMin | No | Minimum FBA fee. Example: 3.00. | |
| ratingMax | No | Maximum star rating (0.0-5.0). Example: 5.0. | |
| ratingMin | No | Minimum star rating (0.0-5.0). Example: 4.0. | |
| sortOrder | No | Sort direction: asc or desc | desc |
| subBsrMax | No | Maximum sub-category BSR. Example: 50000. | |
| subBsrMin | No | Minimum sub-category BSR. Example: 1. | |
| listingAge | No | Max product age: '30d', '90d', '180d', '1y', '2y'. Null = no limit. | |
| qaCountMax | No | Maximum Q&A count. Example: 100. | |
| qaCountMin | No | Minimum Q&A count. Example: 5. | |
| marketplace | No | Amazon marketplace code | US |
| categoryPath | No | Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops']) | |
| fulfillments | No | Fulfillment filter. Example: ['FBA', 'FBM']. | |
| excludeBadges | No | Exclude products with these badges. Supported: ['aPlus', 'video']. | |
| excludeBrands | No | Brand names to exclude. Example: ['Generic']. | |
| includeBrands | No | Brand names to include. Example: ['Apple', 'Samsung']. | |
| excludeSellers | No | Seller names to exclude. | |
| includeSellers | No | Seller names to include. Example: ['Apple Store']. | |
| ratingCountMax | No | Maximum total rating count. Example: 10000. | |
| ratingCountMin | No | Minimum total rating count. Example: 50. | |
| sellerCountMax | No | Maximum number of sellers. Example: 20. | |
| sellerCountMin | No | Minimum number of sellers. Example: 1. | |
| excludeKeywords | No | Keywords to exclude from results. Example: ['refurbished', 'used']. | |
| monthlySalesMax | No | Maximum monthly sales floor. Units sold. Example: 5000. | |
| monthlySalesMin | No | Minimum monthly sales floor. Units sold. Example: 100. | |
| variantCountMax | No | Maximum number of product variants. Example: 50. | |
| variantCountMin | No | Minimum number of product variants. Example: 2. | |
| bsrGrowthRateMax | No | Maximum BSR growth rate as decimal. | |
| bsrGrowthRateMin | No | Minimum BSR growth rate as decimal. Example: -0.2 = 20% improvement. | |
| keywordMatchType | No | Keyword match type: 'fuzzy', 'phrase', or 'exact'. Null = fuzzy. | |
| onlyCategoryRank | No | If true, only return products ranked in their category BSR. | |
| monthlyRevenueMax | No | Maximum monthly revenue floor. Example: 50000.00. | |
| monthlyRevenueMin | No | Minimum monthly revenue floor. Example: 1000.00. | |
| ratingFilterTarget | No | Choose whether rating-related filters apply to the current product or the most-rated variant. | |
| salesGrowthRateMax | No | Maximum sales growth rate as decimal. Example: 0.5 = 50% growth. | |
| salesGrowthRateMin | No | Minimum sales growth rate as decimal. Example: 0.1 = 10% growth. | |
| ratingToSalesRateMax | No | Maximum rating-to-sales rate as decimal. Example: 0.5. | |
| ratingToSalesRateMin | No | Minimum rating-to-sales rate as decimal. Example: 0.05. | |
| monthlyRatingCountMax | No | Maximum monthly new rating count. Example: 500. | |
| monthlyRatingCountMin | No | Minimum monthly new rating count. Example: 10. | |
| parentMonthlySalesMax | No | Maximum parent ASIN monthly sales floor. Units sold. | |
| parentMonthlySalesMin | No | Minimum parent ASIN monthly sales floor. Units sold. |
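With this many optional filters, it helps to compose the filter set programmatically and drop unset values. A sketch of the yoga-mat example from the description; the key names mirror the table above, but whether they travel as query parameters or a JSON body is not specified here:

```python
def search_filters(keyword, **filters):
    """Compose a products_search filter set, omitting unset values.
    Keys mirror the parameter table; the transport format (query
    string vs. JSON body) is an assumption to verify."""
    out = {"keyword": keyword, "marketplace": "US", "pageSize": 100}
    out.update({k: v for k, v in filters.items() if v is not None})
    return out

# The yoga-mat example from the description:
body = search_filters(
    "yoga mat",
    categoryPath=["Sports & Outdoors"],
    monthlySalesMin=500,
    priceMax=30.0,
    ratingMin=4.0,
    bsrMax=None,  # unset filters are dropped
)
```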
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes key behavioral traits: data freshness ('latest daily snapshot'), pagination behavior ('results are paginated (max 100 per page)'), and provides a concrete example of usage. However, it doesn't mention authentication requirements, rate limits, or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured, burying the core content under excessive technical documentation (HTTP response codes, example responses, output schemas) that belongs in structured fields rather than the description. The real description is only the first paragraph; the appended API documentation bloats the tool definition and makes it hard to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 54 parameters and no annotations, the description provides adequate context about the tool's purpose, usage scenarios, and key behavioral aspects. However, it lacks information about authentication, rate limits, and error handling that would be important for a production API tool. The inclusion of output schema details in the description is redundant since this information should be in structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents all 54 parameters thoroughly. The description mentions filtering by 'keyword, category, and multi-dimensional filters' and provides an example with specific parameters, but doesn't add significant semantic meaning beyond what's already in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Search Amazon products by keyword, category, and multi-dimensional filters,' which is a specific verb+resource combination. It distinguishes itself from sibling tools by mentioning related endpoints for competitor analysis and trends, helping differentiate its search functionality from other product-related operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with 'Use this to discover products in a specific niche, analyze competitor listings, or find high-demand low-competition opportunities.' It also names specific alternatives ('Related: /products/competitors for competitor analysis, /products/history for trends'), giving clear context for when to use this tool versus others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_prompt_injection_detect (B)
Detect prompt injection attacks
Detect prompt injection attacks in user input text.
Use this before passing untrusted text to an LLM to guard against instruction override, goal hijacking, data exfiltration, and jailbreak attacks. Example: send user messages, retrieved RAG documents, or tool outputs through this endpoint and block any input where isInjection is true. Most requests resolve in the BERT detection stage (<10 ms); LLM detection (~2 s) only activates when BERT detects an injection. Set useLlmDetection to false to force BERT-only classification. Supports English, Chinese, Japanese, Korean, French, Spanish, and German.
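The block-on-isInjection pattern described above can be sketched as a small guard. `detect` stands in for the actual endpoint call, and the result shape beyond the `isInjection` field is an assumption:

```python
def guard_llm_input(text, detect):
    """Refuse to forward text the detector flags.

    `detect(text)` must return the endpoint's result payload; only
    the `isInjection` field name comes from the description above."""
    result = detect(text)
    if result.get("isInjection"):
        raise PermissionError("prompt injection detected; input blocked")
    return text

# Stub detector for illustration only:
def fake_detect(text):
    return {"isInjection": "ignore previous instructions" in text.lower()}

safe = guard_llm_input("What is the weather today?", fake_detect)
```

The same guard would be applied to retrieved RAG documents and tool outputs, not just direct user messages.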
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[PromptInjectionResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | User input text to analyze for prompt injection attacks. | |
| useLlmDetection | No | Whether to allow LLM detection stage for injections flagged by DeBERTa. Set to false to force fast DeBERTa-only classification. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the model used (fine-tuned DeBERTa) and the types of attacks detected, it lacks critical behavioral details such as rate limits, authentication requirements, performance characteristics, error handling, or what happens when attacks are detected. For a security tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is excessively long and poorly structured, including material such as full HTTP response examples and output schemas that should live in structured fields. The core description is front-loaded but buried under verbose API documentation. Not every sentence earns its place, as much of the content repeats itself or extends well beyond the tool's functional description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a security detection tool with no annotations and no output schema, the description is incomplete. It fails to explain the return values (e.g., what 'classification label, confidence score, and boolean flag' mean in practice), error conditions, or operational constraints. The inclusion of output schema snippets is confusing and does not compensate for the lack of a proper output schema or behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'text' well-documented in the schema as 'User input text to analyze for prompt injection attacks.' The description adds minimal value beyond this, merely restating that it analyzes 'user input text' without providing additional context like examples, edge cases, or formatting requirements. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('detect', 'analyzes') and resources ('prompt injection attacks', 'user input text'), distinguishing it from sibling tools which focus on categories, competitors, markets, products, and reviews. It explicitly names the types of attacks it identifies, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it analyzes 'user input text' for prompt injection attacks, suggesting it should be used when processing potentially malicious user inputs. However, it provides no explicit guidance on when to use this tool versus alternatives (e.g., other security tools or manual review) or any prerequisites, leaving the agent to infer appropriate scenarios without clear boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_realtime_product (A)
Realtime Product V2
Get realtime product data for a given ASIN.
Use this for up-to-the-minute data when daily snapshots are insufficient. Example: pass asin="B07FR2V8SH" to get current price, rating, review count, BSR, and availability. Data fetched on demand; latency is higher than snapshot endpoints (2-5 seconds). Related: /products/search for snapshot data, /products/history for trend analysis.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[RealtimeProduct]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number | |
| marketplace | No | Amazon marketplace code | US |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the real-time scraping mechanism ('Data fetched via Spider API in real time'), performance characteristics ('latency is higher than snapshot endpoints (2-5 seconds)'), and the types of data returned (price, rating, review count, BSR, availability). However, it doesn't mention rate limits, authentication requirements, or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with the core purpose and usage guidelines presented upfront. However, the inclusion of extensive response documentation (200 and 422 examples with schemas) adds bulk that could be streamlined, as some of this information might be redundant with structured output schemas if they were provided.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a real-time scraping tool with no annotations and no output schema, the description does a good job covering the essential context: purpose, usage scenarios, performance characteristics, and data types. It includes example responses which partially compensate for the missing output schema. However, it lacks details on error handling, rate limits, and authentication requirements that would be important for complete agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (asin and marketplace) with descriptions and constraints. The description adds minimal value beyond the schema by providing an example ASIN value ('B07FR2V8SH') and mentioning the marketplace parameter implicitly through context, but doesn't explain parameter interactions or usage nuances.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Get realtime product details') and resource ('by scraping the Amazon product page live'). It distinguishes itself from siblings by emphasizing real-time data acquisition versus snapshot endpoints, explicitly naming related tools like '/products/search' and '/products/history' for comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when daily snapshots are insufficient') and when not to (implied for snapshot data). It names specific alternatives ('/products/search for snapshot data, /products/history for trend analysis'), giving clear context for tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_realtime_reviews (A)
Realtime Reviews V2
Fetch realtime reviews for a given ASIN.
Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data. Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights.
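The cursor loop above can be sketched with an injected `fetch_page` callable standing in for the realtime_reviews call; the `reviews` key is an assumption, since only `nextCursor` is documented here:

```python
def fetch_all_reviews(fetch_page, max_pages=50):
    """Follow nextCursor until it is null, per the pagination rule above.

    `fetch_page(cursor)` represents one realtime_reviews call; pass
    cursor=None for the first page. max_pages is a safety cap."""
    reviews, cursor = [], None
    for _ in range(max_pages):
        page = fetch_page(cursor)
        reviews.extend(page.get("reviews", []))
        cursor = page.get("nextCursor")
        if cursor is None:
            break
    return reviews

# Stub pages for illustration only:
pages = {None: {"reviews": ["r1", "r2"], "nextCursor": "c1"},
         "c1": {"reviews": ["r3"], "nextCursor": None}}
all_reviews = fetch_all_reviews(lambda cur: pages[cur])
# all_reviews == ["r1", "r2", "r3"]
```

Injecting the fetcher keeps the drain logic testable without a live API; in practice the callable would wrap the authenticated HTTP request.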
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[RealtimeReviews]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number | |
| cursor | No | Pagination token from previous response's nextCursor. Omit for the first page. When the response's nextCursor is null, there are no more pages. | |
| marketplace | No | Amazon marketplace code | US |
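The cursor flow described above can be sketched as a simple loop. This is a hedged illustration, not the server's implementation: `fetch_page` is a hypothetical callable standing in for the actual HTTP request, and the `data["reviews"]` key is an assumption about the payload shape (the schema only documents the envelope).

```python
def fetch_all_reviews(fetch_page, asin, marketplace="US"):
    """Collect every page of realtime reviews for an ASIN.

    fetch_page(asin, marketplace, cursor) is a placeholder for the
    real API call; it must return the parsed JSON envelope, with the
    review list and nextCursor assumed to live under the data field.
    """
    reviews, cursor = [], None
    while True:
        resp = fetch_page(asin, marketplace, cursor)  # omit cursor (None) on first page
        data = resp.get("data") or {}
        reviews.extend(data.get("reviews", []))
        cursor = data.get("nextCursor")
        if cursor is None:  # nextCursor=null means no more data
            break
    return reviews
```

The loop terminates on `nextCursor=null` exactly as the description specifies, so it never issues a request past the last page.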
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by explaining pagination behavior ('Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data'). It also mentions the 'live' nature of the data. However, it doesn't cover potential rate limits, authentication requirements, or data freshness guarantees that would be helpful for a real-time API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and usage guidelines, but includes extensive response format documentation (200 and 422 responses with schemas) that duplicates what would typically be in an output schema. While the response documentation is valuable, it makes the description longer than necessary for tool selection purposes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 100% schema coverage, and no annotations, the description provides good context about pagination behavior, sibling tool differentiation, and the real-time nature of the data. The inclusion of response format documentation compensates for the lack of output schema. However, it could better address behavioral aspects like rate limits or data freshness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters (asin, cursor, marketplace). The description adds context about cursor usage ('omit cursor for the first page') but doesn't provide additional semantic meaning beyond what's in the parameter descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch realtime reviews for an ASIN from Amazon live'), identifies the resource (reviews for an ASIN), and distinguishes from siblings by mentioning '/reviews/search for offline data with AI tags' and '/reviews/analysis for aggregated insights'. This provides precise differentiation from related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Fetch realtime reviews') versus alternatives ('Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights'). It also provides clear pagination guidance ('omit cursor for the first page, then pass nextCursor from the previous response'). This gives comprehensive usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_reviews_analysis
Reviews Analyze V2
Analyze reviews by ASIN list or category to surface AI-generated sentiment, ratings, and consumer intelligence.
Use this to understand customer satisfaction and common complaints before sourcing a product. Example: pass asins=["B07FR2V8SH"] with period="6m" for 6-month review analysis. Requires ≥50 reviews for meaningful results. Data sourced from review analysis pipeline; ASIN mode supports max 100 ASINs. Related: /products/search to find ASINs, /categories for category paths.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[ReviewAnalysis]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | Query mode: 'asin' for ASIN-based, 'category' for category-based | |
| asins | No | List of ASINs to analyze (max 100, required when mode='asin'). Example: ['B07FR2V8SH']. | |
| period | No | Time period for analysis | 6m |
| marketplace | No | Amazon marketplace code. | US |
| categoryPath | No | Category hierarchy from root. Example: ['Electronics', 'Computers']. Required when mode='category'. | |
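The mode-dependent constraints in the table above can be enforced client-side before spending credits. A minimal sketch, assuming the documented field names; the helper itself is hypothetical and only assembles the request body:

```python
def build_analysis_request(mode, asins=None, category_path=None,
                           period="6m", marketplace="US"):
    """Assemble a reviews-analysis payload per the parameter table.

    Enforces the documented constraints: mode='asin' requires an
    asins list (max 100 ASINs); mode='category' requires categoryPath.
    """
    if mode == "asin":
        if not asins:
            raise ValueError("mode='asin' requires a non-empty asins list")
        if len(asins) > 100:
            raise ValueError("ASIN mode supports max 100 ASINs")
        body = {"mode": mode, "asins": asins}
    elif mode == "category":
        if not category_path:
            raise ValueError("mode='category' requires categoryPath")
        body = {"mode": mode, "categoryPath": category_path}
    else:
        raise ValueError("mode must be 'asin' or 'category'")
    body.update({"period": period, "marketplace": marketplace})
    return body
```

For example, `build_analysis_request("asin", asins=["B07FR2V8SH"])` reproduces the documented 6-month analysis call with the US marketplace defaults.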
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: 'Requires ≥50 reviews for meaningful results,' 'Data sourced from review analysis pipeline,' and 'ASIN mode supports max 100 ASINs.' However, it doesn't cover other important aspects like rate limits, authentication needs, or what happens when those requirements aren't met, leaving gaps for an analysis tool of this complexity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and usage, but it includes extensive, redundant output schema details that belong in structured fields, not the description. This adds unnecessary length and reduces focus. The core content is concise, but the inclusion of schema examples and error responses wastes space and detracts from clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, no annotations, and no separate output schema), the description is moderately complete. It covers purpose, usage, and some behavioral constraints but lacks details on output format, error handling, or deeper integration context. The output schema embedded in the description compensates partially, but it's verbose and not efficiently integrated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it provides an example ('Example: pass asins=["B07FR2V8SH"] with period="6m"') and hints at parameter interactions (e.g., ASIN mode limits). This meets the baseline for high schema coverage but doesn't significantly enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Analyze reviews by ASIN list or category to extract sentiment, ratings, and consumer insights.' It specifies the resource (reviews) and action (analyze to extract insights). However, it doesn't explicitly differentiate from sibling tools like 'openapi_v2_reviews_search' or 'openapi_v2_realtime_reviews', which might also involve review analysis, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool: 'Use this to understand customer satisfaction and common complaints before sourcing a product.' It mentions related tools ('/products/search to find ASINs, /categories for category paths') but doesn't explicitly state when NOT to use it or name direct alternatives among the siblings, so it falls short of a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_reviews_search
Reviews Search V2
Search reviews for an ASIN with multi-dimensional filters.
Filters: star rating range, verified purchase, Vine program, helpful vote count, date range, and AI-generated tags. Results sorted by recent/rating/helpfulVoteCount. Page-based pagination (default 10 per page, max 20). Data sourced from daily BigQuery snapshot with AI-generated tags. Related: /realtime/reviews for live data, /reviews/analysis for aggregated insights.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[TaggedReview]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number | |
| page | No | Page number (1-indexed). | |
| sortBy | No | Sort field: 'recent' (date), 'rating' (star rating), 'helpfulVoteCount' (vote count). | recent |
| dateEnd | No | Latest review date (inclusive, YYYY-MM-DD). Example: '2026-04-01'. | |
| pageSize | No | Results per page (1-20, default 10). | |
| vineOnly | No | If true, only return Amazon Vine program reviews. | |
| dateStart | No | Earliest review date (inclusive, YYYY-MM-DD). Example: '2025-01-01'. | |
| ratingMax | No | Maximum star rating (inclusive). Example: 3 for negative reviews. | |
| ratingMin | No | Minimum star rating (inclusive). Example: 1. | |
| sortOrder | No | Sort direction: 'desc' or 'asc'. | desc |
| verifiedOnly | No | If true, only return verified purchase reviews. | |
| helpfulVoteCountMin | No | Minimum helpful vote count. Example: 5. | |
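A filtered query, such as the negative-review pull hinted at by the ratingMax example, can be assembled from the table above. This is a sketch under stated assumptions: the helper is hypothetical, the wire parameter names are taken from the table, and the transport/endpoint details are omitted.

```python
def build_reviews_search_params(asin, *, page=1, page_size=10,
                                sort_by="recent", sort_order="desc",
                                rating_min=None, rating_max=None,
                                verified_only=None, vine_only=None,
                                date_start=None, date_end=None,
                                helpful_vote_count_min=None):
    """Build query parameters for a filtered review search.

    Parameter names mirror the table above (camelCase on the wire);
    only filters that are explicitly set are included.
    """
    if not 1 <= page_size <= 20:
        raise ValueError("pageSize must be between 1 and 20")
    params = {"asin": asin, "page": page, "pageSize": page_size,
              "sortBy": sort_by, "sortOrder": sort_order}
    optional = {"ratingMin": rating_min, "ratingMax": rating_max,
                "verifiedOnly": verified_only, "vineOnly": vine_only,
                "dateStart": date_start, "dateEnd": date_end,
                "helpfulVoteCountMin": helpful_vote_count_min}
    params.update({k: v for k, v in optional.items() if v is not None})
    return params
```

For instance, `build_reviews_search_params("B07FR2V8SH", rating_max=3, verified_only=True)` yields the query for verified negative reviews, leaving unused filters out of the request entirely.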
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: data source ('daily BigQuery snapshot'), pagination behavior ('page-based pagination, default 10 per page, max 20'), sorting options, and filter dimensions. It doesn't mention rate limits or authentication requirements, but covers most operational aspects well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with essential information but becomes bloated with extensive API response documentation that belongs in the output schema. The first paragraph is efficient, but the inclusion of full response schemas and examples makes it unnecessarily long and violates the principle that every sentence should earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (12 parameters, no annotations, no output schema), the description does a good job covering operational context. It explains data source, pagination, sorting, and filter dimensions, and provides usage guidance. The main gap is the lack of output format explanation, which would be helpful since there's no output schema provided in the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 12 parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it lists filter categories and sorting options but doesn't provide additional context about parameter interactions or usage patterns. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search reviews for an ASIN with multi-dimensional filters.' It specifies the resource (reviews for an ASIN) and the action (search with filters), and distinguishes it from sibling tools by mentioning related endpoints for live data and aggregated insights.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Related: /realtime/reviews for live data, /reviews/analysis for aggregated insights.' This clearly differentiates this tool from its siblings, indicating it's for filtered search of snapshot data rather than real-time access or analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.