APIClaw
Server Details
Real-time Amazon data API built for AI agents. 200M+ products, 1B+ reviews, live BSR, pricing, and competitor data as clean JSON. 10 agent skills for market research, competitor monitoring, pricing analysis, and listing audits.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 10 tools scored. Lowest: 3.1/5.
Most tools have distinct purposes focused on different aspects of Amazon data analysis (categories, products, markets, reviews, etc.), with clear boundaries described in their documentation. However, there is some potential overlap between products_search and competitors tools as both involve product discovery, though their specific focuses differ enough to avoid major confusion.
All tools follow a perfectly consistent naming pattern: 'openapi_v2_' prefix followed by a descriptive resource/action combination in snake_case (e.g., 'categories', 'products_search', 'reviews_analysis'). This uniformity makes the tool set predictable and easy to navigate.
With 10 tools, this is well within the ideal 3-15 range for a comprehensive Amazon data analysis server. Each tool serves a specific, valuable function in the domain, from category exploration to product search, competitor analysis, market research, and review processing, with no obvious redundancy.
The tool set covers the core workflows for Amazon product research comprehensively: discovery (categories, products, markets), analysis (competitors, history, reviews), and real-time data (product, reviews). The only minor gap is the lack of a dedicated tool for seller or brand analysis, but the existing tools provide robust coverage for most agent tasks.
Available Tools
10 tools
openapi_v2_categories
Categories V2
Query Amazon category hierarchy by ID, path, parent, or keyword.
Use this to discover category structure for filtering in other endpoints. Example: pass categoryKeyword="yoga" to find matching categories, or parentCategoryPath=["Sports & Outdoors"] to list child categories.
Query modes (mutually exclusive):
No parameters: Returns all root categories
categoryId: Get specific category by ID
categoryPath: Get specific category by path
parentCategoryId: Get children of parent category by ID
parentCategoryPath: Get children of parent category by path
categoryKeyword: Search categories by keyword
Related: /products/search and /markets/search accept categoryPath for filtering.
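The mutually exclusive query modes above can be sketched as follows. This is an illustrative helper, not part of the API; the parameter names come from the tool's schema, and the `build_categories_query` function is hypothetical.

```python
# The five parameterized query modes of openapi_v2_categories; passing none
# of them returns all root categories.
MODES = ("categoryId", "categoryPath", "parentCategoryId",
         "parentCategoryPath", "categoryKeyword")

def build_categories_query(**params):
    """Build a parameter dict, enforcing that only one query mode is used."""
    used = [k for k in MODES if k in params]
    if len(used) > 1:
        raise ValueError(f"query modes are mutually exclusive, got: {used}")
    # marketplace defaults to US per the parameter table
    return {"marketplace": params.pop("marketplace", "US"), **params}

# No mode parameters: returns all root categories
root_query = build_categories_query()
# Keyword search, as in the description's example
yoga_query = build_categories_query(categoryKeyword="yoga")
```

A call mixing two modes (e.g., `categoryId` plus `categoryKeyword`) raises immediately, which mirrors the "mutually exclusive" constraint stated in the description.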
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Category]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| categoryId | No | Category identifier | |
| marketplace | No | Amazon marketplace code | US |
| categoryPath | No | Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops']) | |
| categoryKeyword | No | Filter by category name keyword (matches any level in category hierarchy, e.g., 'Electronics' or 'Laptops') | |
| parentCategoryId | No | Parent category ID | |
| parentCategoryPath | No | Parent category path | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the six query modes and their mutual exclusivity, providing example usage patterns, and mentioning the response structure. However, it doesn't explicitly state whether this is a read-only operation (though implied by 'query'), nor does it mention rate limits, authentication requirements, or pagination behavior that might be relevant for API tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, usage guidance, query modes, and related endpoints. It's appropriately sized for a tool with six parameters and complex query logic. The only minor issue is that the response documentation (200 and 422 sections) is quite detailed and might be better handled by an output schema, but this is reasonable given the context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains the purpose, usage scenarios, query modes with examples, and relationships to other tools. The main gap is the lack of explicit behavioral information about read-only nature, authentication, or rate limits, but the query modes and examples provide substantial practical guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents all six parameters thoroughly. The description adds some value by explaining the query modes and providing examples ('categoryKeyword="yoga"' and 'parentCategoryPath=["Sports & Outdoors"]'), but doesn't add significant semantic information beyond what's already in the parameter descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Query Amazon category hierarchy by ID, path, parent, or keyword' and 'Use this to discover category structure for filtering in other endpoints.' It specifies the exact resource (Amazon category hierarchy) and multiple query methods, distinguishing it from sibling tools like products_search or reviews_search which handle different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this to discover category structure for filtering in other endpoints' and names specific related endpoints ('/products/search and /markets/search accept categoryPath for filtering'). It also clearly explains the six mutually exclusive query modes, helping the agent choose the right parameter combination.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_competitors
Competitor Lookup V2
Search competitor products by keyword, brand, ASIN, or category with filters.
Use this to identify competing products around a specific listing or brand. Example: pass asin="B07FR2V8SH" to find all products competing in the same keywords and category. Data is based on the latest daily snapshot; results are paginated (max 100 per page). Related: /products/search for broader keyword discovery.
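A minimal request sketch for the ASIN-seeded lookup described above, using parameter names and defaults from the table below. The values and the clamping helper are illustrative assumptions, not API requirements.

```python
def clamped_page_size(requested: int, maximum: int = 100) -> int:
    """Clamp pageSize to the documented per-page maximum of 100."""
    return max(1, min(requested, maximum))

competitors_query = {
    "asin": "B07FR2V8SH",            # seed listing from the description's example
    "marketplace": "US",             # default marketplace
    "dateRange": "30d",              # default relative time range
    "sortBy": "monthlySalesFloor",   # default sort field
    "sortOrder": "desc",             # default sort direction
    "page": 1,
    "pageSize": clamped_page_size(250),  # over-the-cap request is clamped to 100
}
```

Since results are paginated at 100 per page, clamping locally avoids a 422 round trip for oversized `pageSize` values.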
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Product]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | No | Amazon Standard Identification Number (10-char alphanumeric). Example: 'B07FR2V8SH'. | |
| page | No | Page number | |
| badges | No | Include products with these badges. Example: ['bestSeller', 'amazonChoice', 'newRelease', 'aPlus', 'video']. | |
| sortBy | No | Sort field | monthlySalesFloor |
| keyword | No | Search keyword | |
| pageSize | No | Page size (max 100) | |
| brandName | No | Filter by brand name. | |
| dateRange | No | Time range filter. Relative ('30d') or month ('2026-01'). Default '30d'. | 30d |
| sortOrder | No | Sort direction: asc or desc | desc |
| sellerName | No | Filter by seller name. | |
| marketplace | No | Amazon marketplace code | US |
| categoryPath | No | Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops']) | |
| fulfillments | No | Fulfillment filter. Example: ['FBA', 'FBM']. | |
| excludeBadges | No | Exclude products with these badges. Supported: ['aPlus', 'video']. | |
| excludeBrands | No | Brand names to exclude. Example: ['Generic']. | |
| includeBrands | No | Brand names to include. Example: ['Apple', 'Samsung']. | |
| excludeSellers | No | Seller names to exclude. | |
| includeSellers | No | Seller names to include. Example: ['Apple Store']. | |
| sellerCountMax | No | Maximum number of sellers. Example: 20. | |
| sellerCountMin | No | Minimum number of sellers. Example: 1. | |
| excludeKeywords | No | Keywords to exclude from results. Example: ['refurbished', 'used']. | |
| keywordMatchType | No | Keyword match type: 'fuzzy', 'phrase', or 'exact'. Null = fuzzy. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context beyond basic functionality: it mentions data is 'based on the latest daily snapshot,' results are 'paginated (max 100 per page),' and includes an example response structure. However, it lacks details on permissions, rate limits, or error handling beyond the example, leaving some behavioral gaps for a tool with 22 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with key information in the first few sentences. However, it includes extensive example response and output schema details that are redundant since there's no output schema provided in context signals, and some formatting (like markdown code blocks) adds clutter without adding proportional value for tool selection.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (22 parameters, no annotations, no output schema), the description is fairly complete. It covers purpose, usage, behavioral traits like data freshness and pagination, and relates to siblings. However, it could improve by explaining parameter interactions or common use cases more explicitly, given the high parameter count and lack of structured output guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 22 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as implying ASIN usage in the example and mentioning pagination limits. It doesn't provide additional syntax, format details, or usage examples for parameters beyond what the schema offers, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Search competitor products by keyword, brand, ASIN, or category with filters' and distinguishes it from sibling tools by mentioning '/products/search for broader keyword discovery.' It specifies the verb ('search'), resource ('competitor products'), and scope ('with filters'), making it highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this to identify competing products around a specific listing or brand' and gives an example with 'asin="B07FR2V8SH".' It also names an alternative ('/products/search for broader keyword discovery') and clarifies when to use this tool versus that alternative, offering clear context and exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_markets_search
Markets Search V2
Search market data by category with aggregated demand, competition, and pricing metrics.
Use this to evaluate market size and competition before entering a niche. Example: search "Pet Supplies" with sampleAvgMonthlySalesMin >= 200 to find categories with proven demand. Data is based on top-100 product samples per category from the latest daily snapshot; results paginated (max 100 per page). Related: /products/search for product-level data in a category.
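The niche-screening example from the description ("Pet Supplies" with a minimum sales threshold) can be expressed as a parameter dict. Parameter names and defaults come from the table below; the specific threshold values are illustrative.

```python
# Screen for categories with proven demand, as in the description's example.
markets_query = {
    "categoryKeyword": "Pet Supplies",
    "sampleAvgMonthlySalesMin": 200,      # proven-demand floor (units sold)
    "sampleType": "bySale100",            # default: top 100 products by sales
    "topN": "10",                         # default; affects top* response fields
    "sortBy": "sampleAvgMonthlyRevenue",  # default sort field
    "sortOrder": "desc",
    "marketplace": "US",
}
```

Because all `sample*` metrics derive from top-100 product samples per category, thresholds like `sampleAvgMonthlySalesMin` filter on sample averages rather than category-wide totals.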
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Market]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| topN | No | Number of top products to analyze: '3', '5', '10', or '20'. Affects top* response fields (e.g., topAvgMonthlySales) | 10 |
| sortBy | No | Sort field (matches response field names) | sampleAvgMonthlyRevenue |
| pageSize | No | Page size (max 100) | |
| dateRange | No | Time range filter. Relative ('30d') or month ('2026-02'). | 30d |
| sortOrder | No | Sort direction: asc or desc | desc |
| sampleType | No | Sampling method for market metrics: 'bySale100' = analyze top 100 products by sales volume, 'byBsr100' = top 100 by BSR rank, 'avg' = category-wide average | bySale100 |
| marketplace | No | Amazon marketplace code. | US |
| categoryPath | No | Category hierarchy from root. Example: ['Electronics', 'Computers', 'Laptops']. | |
| topAvgBsrMax | No | Maximum Top N average main-category BSR. | |
| topAvgBsrMin | No | Minimum Top N average main-category BSR. | |
| categoryKeyword | No | Filter by category name keyword (matches any level in category hierarchy, e.g., 'Electronics' or 'Laptops') | |
| sampleAvgBsrMax | No | Maximum sample average main-category BSR. | |
| sampleAvgBsrMin | No | Minimum sample average main-category BSR (lower = better). | |
| topSalesRateMax | No | Maximum Top N sales share as decimal. | |
| topSalesRateMin | No | Minimum Top N sales share as decimal. Example: 0.5 = 50%. | |
| newProductPeriod | No | Define 'new product' as listed within X months: '1', '3', '6', or '12'. Affects sampleNewSku* response fields | 3 |
| sampleAmzRateMax | No | Maximum sample Amazon-sold product rate as decimal. | |
| sampleAmzRateMin | No | Minimum sample Amazon-sold product rate as decimal. | |
| sampleFbaRateMax | No | Maximum sample FBA product rate as decimal. | |
| sampleFbaRateMin | No | Minimum sample FBA product rate as decimal. | |
| totalSkuCountMax | No | Maximum total SKU count in category. | |
| totalSkuCountMin | No | Minimum total SKU count in category. | |
| sampleAvgPriceMax | No | Maximum sample average price. | |
| sampleAvgPriceMin | No | Minimum sample average price. Example: 10.00. | |
| sampleSkuCountMax | No | Maximum sample SKU count. | |
| sampleSkuCountMin | No | Minimum sample SKU count. | |
| sampleAvgRatingMax | No | Maximum sample average star rating (0.0–5.0). | |
| sampleAvgRatingMin | No | Minimum sample average star rating (0.0–5.0). | |
| sampleBrandCountMax | No | Maximum sample unique brand count. | |
| sampleBrandCountMin | No | Minimum sample unique brand count. | |
| sampleNewSkuRateMax | No | Maximum sample new product rate as decimal. | |
| sampleNewSkuRateMin | No | Minimum sample new product rate as decimal. | |
| sampleNewSkuCountMax | No | Maximum sample new product count. | |
| sampleNewSkuCountMin | No | Minimum sample new product count. | |
| sampleSellerCountMax | No | Maximum sample unique seller count. | |
| sampleSellerCountMin | No | Minimum sample unique seller count. | |
| topBrandSalesRateMax | No | Maximum Top N brand concentration ratio as decimal. | |
| topBrandSalesRateMin | No | Minimum Top N brand concentration ratio as decimal. | |
| topAvgMonthlySalesMax | No | Maximum Top N average monthly sales. | |
| topAvgMonthlySalesMin | No | Minimum Top N average monthly sales. Units sold. | |
| topSellerSalesRateMax | No | Maximum Top N seller concentration ratio as decimal. | |
| topSellerSalesRateMin | No | Minimum Top N seller concentration ratio as decimal. | |
| sampleAvgRatingCountMax | No | Maximum sample average rating count per product. | |
| sampleAvgRatingCountMin | No | Minimum sample average rating count per product. | |
| sampleNewSkuAvgPriceMax | No | Maximum new product average price. | |
| sampleNewSkuAvgPriceMin | No | Minimum new product average price. | |
| topAvgMonthlyRevenueMax | No | Maximum Top N average monthly revenue. | |
| topAvgMonthlyRevenueMin | No | Minimum Top N average monthly revenue. | |
| sampleAvgMonthlySalesMax | No | Maximum sample average monthly sales. Units sold. | |
| sampleAvgMonthlySalesMin | No | Minimum sample average monthly sales. Units sold. Example: 100. | |
| sampleAvgPackageVolumeMax | No | Maximum sample average package volume in in³. | |
| sampleAvgPackageVolumeMin | No | Minimum sample average package volume in in³. | |
| sampleAvgPackageWeightMax | No | Maximum sample average package weight in oz. | |
| sampleAvgPackageWeightMin | No | Minimum sample average package weight in oz. | |
| sampleAvgMonthlyRevenueMax | No | Maximum sample average monthly revenue. | |
| sampleAvgMonthlyRevenueMin | No | Minimum sample average monthly revenue. Example: 5000.00. | |
| sampleNewSkuAvgMonthlySalesMax | No | Maximum new product average monthly sales. | |
| sampleNewSkuAvgMonthlySalesMin | No | Minimum new product average monthly sales. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure and does well by disclosing key behavioral traits: data source ('based on top-100 product samples per category from the latest daily snapshot'), pagination behavior ('results paginated (max 100 per page)'), and usage context ('evaluate market size and competition'). It doesn't mention rate limits or authentication needs, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and usage guidance, but includes extensive API response documentation that belongs in the output schema field. The first paragraph is efficient, but the subsequent HTTP response details and output schema duplication make it unnecessarily long and poorly structured for a tool description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (59 parameters) and no output schema provided, the description does well by explaining the tool's purpose, usage context, data source, and pagination. It could better address the parameter complexity and provide more guidance on navigating the many filtering options, but covers the essential operational context needed to use this search tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal parameter semantics beyond the schema - it mentions 'sampleAvgMonthlySalesMin >= 200' as an example but doesn't explain parameter relationships or provide additional context about the 59 parameters. The description doesn't compensate for the complexity, but doesn't need to since schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Search') and resource ('market data by category'), and distinguishes it from sibling tools by mentioning '/products/search for product-level data in a category'. It also specifies the type of metrics returned (aggregated demand, competition, and pricing metrics).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('evaluate market size and competition before entering a niche') and when to use an alternative ('Related: /products/search for product-level data in a category'). It also includes a concrete example with specific parameter usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_products_history
Product History V2
Get historical time-series data for a single ASIN over a date range.
Returns columnar arrays: high-frequency metrics (price, BSR, sales, rating, sellerCount) as daily arrays aligned with timestamps, and low-frequency fields (title, imageUrl, badges, inventoryStatus) as changelog entries that only record changes. Max date range: 730 days. Related: /products/search to discover ASINs first.
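The split between daily columnar arrays and changelog entries can be sketched as below. The field layout of `history` is an assumption for illustration (the source does not publish the exact payload shape); the 730-day cap and the columnar/changelog distinction come from the description.

```python
from datetime import date

MAX_RANGE_DAYS = 730  # documented cap on startDate..endDate

def validate_range(start: str, end: str) -> None:
    """Reject date ranges longer than the documented 730-day maximum."""
    span = (date.fromisoformat(end) - date.fromisoformat(start)).days
    if not 0 <= span <= MAX_RANGE_DAYS:
        raise ValueError(f"range of {span} days exceeds {MAX_RANGE_DAYS}")

# Hypothetical response fragment: daily arrays aligned with timestamps,
# plus a changelog that only records changes to low-frequency fields.
history = {
    "timestamps": ["2024-01-01", "2024-01-02", "2024-01-03"],
    "price": [19.99, 18.99, 18.99],
    "titleChangelog": [{"date": "2024-01-02", "value": "New Title"}],
}

def value_on(series: str, day: str) -> float:
    """High-frequency metrics are arrays aligned with timestamps: index by date."""
    return history[series][history["timestamps"].index(day)]

def title_on(day: str):
    """Low-frequency fields only record changes: replay entries up to the date."""
    current = None
    for entry in history["titleChangelog"]:
        if entry["date"] <= day:
            current = entry["value"]
    return current
```

Replaying changelog entries up to a date reconstructs the field's value on that date; a date before the first entry yields no recorded value.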
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[ProductHistoryTimeSeriesItem]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number (10 chars) | |
| endDate | Yes | End date in YYYY-MM-DD format | |
| startDate | Yes | Start date in YYYY-MM-DD format | |
| marketplace | No | Amazon marketplace code. | US |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context like the 730-day max range and details on response structure (e.g., columnar arrays vs. changelog entries). However, it omits behavioral traits such as rate limits, authentication needs, and error-handling specifics, leaving gaps even for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with key information in the first two sentences, but it includes extensive, redundant output schema details (e.g., full JSON examples and schemas for the 200 and 422 responses) that could be omitted, since the context already notes that no structured output schema is provided. This adds unnecessary length without enhancing tool understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a time-series data tool with no annotations and no output schema, the description does well by explaining the return data structure (columnar arrays vs. changelog entries) and constraints. However, it could be more complete by addressing potential errors or usage limits beyond the date range, slightly reducing the score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (asin, startDate, endDate, marketplace). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining ASIN format or date constraints. This meets the baseline for high schema coverage but offers no extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get historical time-series data for a single ASIN over a date range.' It specifies the verb ('Get'), resource ('historical time-series data'), and scope ('single ASIN over a date range'), distinguishing it from siblings like 'products_search' for discovery. The title 'Product History V2' reinforces this focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for usage: 'Max date range: 730 days' and 'Related: /products/search to discover ASINs first.' This guides the agent on constraints and prerequisites. However, it lacks explicit alternatives or exclusions, such as when to use this versus 'realtime_product' for current data, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_products_search (A)
Products Search V2
Search Amazon products by keyword, category, and multi-dimensional filters.
Use this to discover products in a specific niche, analyze competitor listings, or find high-demand low-competition opportunities. Example: search "yoga mat" in Sports & Outdoors with monthlySalesFloor >= 500 and price <= $30 to find proven sellers in an affordable range. Data is based on the latest daily snapshot; results are paginated (max 100 per page). Related: /products/competitors for competitor analysis, /products/history for trends.
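The "yoga mat" example in the description can be written out as a request payload. A sketch under stated assumptions: parameter names are taken from the parameter table further down, and the values are illustrative, not recommendations:

```python
# Filter payload mirroring the description's example: proven sellers
# ("yoga mat", >= 500 monthly sales floor) in an affordable range (<= $30).
search_request = {
    "keyword": "yoga mat",
    "categoryPath": ["Sports & Outdoors"],
    "monthlySalesMin": 500,            # at least 500 units/month (floor)
    "priceMax": 30.00,                 # cap price at $30
    "sortBy": "monthlySalesFloor",     # default sort field
    "sortOrder": "desc",
    "pageSize": 100,                   # documented per-page maximum
    "marketplace": "US",
}
```

Unset filters are simply omitted; the endpoint treats missing keys as "no constraint".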
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[Product]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number | |
| badges | No | Include products with these badges. Example: ['bestSeller', 'amazonChoice', 'newRelease', 'aPlus', 'video']. | |
| bsrMax | No | Maximum Best Sellers Rank. Example: 100000. | |
| bsrMin | No | Minimum Best Sellers Rank (lower = better). Example: 1. | |
| lqsMax | No | Maximum Listing Quality Score | |
| lqsMin | No | Minimum Listing Quality Score | |
| sortBy | No | Sort field | monthlySalesFloor |
| keyword | No | Search keyword | |
| pageSize | No | Page size (max 100) | |
| priceMax | No | Maximum product price. Example: 99.99. | |
| priceMin | No | Minimum product price. Example: 9.99. | |
| dateRange | No | Time range filter. Relative ('30d') or month ('2026-02'). Null = latest. | |
| fbaFeeMax | No | Maximum FBA fee. Example: 15.00. | |
| fbaFeeMin | No | Minimum FBA fee. Example: 3.00. | |
| ratingMax | No | Maximum star rating (0.0-5.0). Example: 5.0. | |
| ratingMin | No | Minimum star rating (0.0-5.0). Example: 4.0. | |
| sortOrder | No | Sort direction: asc or desc | desc |
| subBsrMax | No | Maximum sub-category BSR. Example: 50000. | |
| subBsrMin | No | Minimum sub-category BSR. Example: 1. | |
| listingAge | No | Max product age: '30d', '90d', '180d', '1y', '2y'. Null = no limit. | |
| qaCountMax | No | Maximum Q&A count. Example: 100. | |
| qaCountMin | No | Minimum Q&A count. Example: 5. | |
| marketplace | No | Amazon marketplace code | US |
| categoryPath | No | Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops']) | |
| fulfillments | No | Fulfillment filter. Example: ['FBA', 'FBM']. | |
| excludeBadges | No | Exclude products with these badges. Supported: ['aPlus', 'video']. | |
| excludeBrands | No | Brand names to exclude. Example: ['Generic']. | |
| includeBrands | No | Brand names to include. Example: ['Apple', 'Samsung']. | |
| excludeSellers | No | Seller names to exclude. | |
| includeSellers | No | Seller names to include. Example: ['Apple Store']. | |
| ratingCountMax | No | Maximum total rating count. Example: 10000. | |
| ratingCountMin | No | Minimum total rating count. Example: 50. | |
| sellerCountMax | No | Maximum number of sellers. Example: 20. | |
| sellerCountMin | No | Minimum number of sellers. Example: 1. | |
| excludeKeywords | No | Keywords to exclude from results. Example: ['refurbished', 'used']. | |
| monthlySalesMax | No | Maximum monthly sales floor. Units sold. Example: 5000. | |
| monthlySalesMin | No | Minimum monthly sales floor. Units sold. Example: 100. | |
| variantCountMax | No | Maximum number of product variants. Example: 50. | |
| variantCountMin | No | Minimum number of product variants. Example: 2. | |
| bsrGrowthRateMax | No | Maximum BSR growth rate as decimal. | |
| bsrGrowthRateMin | No | Minimum BSR growth rate as decimal. Example: -0.2 = 20% improvement. | |
| keywordMatchType | No | Keyword match type: 'fuzzy', 'phrase', or 'exact'. Null = fuzzy. | |
| onlyCategoryRank | No | If true, only return products ranked in their category BSR. | |
| monthlyRevenueMax | No | Maximum monthly revenue floor. Example: 50000.00. | |
| monthlyRevenueMin | No | Minimum monthly revenue floor. Example: 1000.00. | |
| ratingFilterTarget | No | Choose whether rating-related filters apply to the current product or the most-rated variant. | |
| salesGrowthRateMax | No | Maximum sales growth rate as decimal. Example: 0.5 = 50% growth. | |
| salesGrowthRateMin | No | Minimum sales growth rate as decimal. Example: 0.1 = 10% growth. | |
| ratingToSalesRateMax | No | Maximum rating-to-sales rate as decimal. Example: 0.5. | |
| ratingToSalesRateMin | No | Minimum rating-to-sales rate as decimal. Example: 0.05. | |
| monthlyRatingCountMax | No | Maximum monthly new rating count. Example: 500. | |
| monthlyRatingCountMin | No | Minimum monthly new rating count. Example: 10. | |
| parentMonthlySalesMax | No | Maximum parent ASIN monthly sales floor. Units sold. | |
| parentMonthlySalesMin | No | Minimum parent ASIN monthly sales floor. Units sold. | |
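With this many optional filters, a few client-side sanity checks catch 422s before the call. A minimal sketch assuming only the constraints stated in the table above (`pageSize` max 100, ratings 0.0-5.0, the three `keywordMatchType` values); `validate_search_filters` is a hypothetical helper:

```python
def validate_search_filters(filters: dict) -> list:
    """Return a list of constraint violations, derived from the table above."""
    problems = []
    if filters.get("pageSize", 0) > 100:
        problems.append("pageSize exceeds the documented maximum of 100")
    for key in ("ratingMin", "ratingMax"):
        value = filters.get(key)
        if value is not None and not (0.0 <= value <= 5.0):
            problems.append(f"{key} must be between 0.0 and 5.0")
    if filters.get("keywordMatchType") not in (None, "fuzzy", "phrase", "exact"):
        problems.append("keywordMatchType must be 'fuzzy', 'phrase', or 'exact'")
    return problems
```

An empty list means the payload passes these basic checks; the server remains the authority on full validation.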
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes key behavioral traits: data freshness ('latest daily snapshot'), pagination behavior ('results are paginated (max 100 per page)'), and provides a concrete example of usage. However, it doesn't mention authentication requirements, rate limits, or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with excessive technical documentation included (HTTP response codes, example responses, output schemas) that belongs in structured fields rather than the description. The core description is only the first paragraph, but it's buried under unnecessary API documentation that makes the tool definition bloated and difficult to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 54 parameters and no annotations, the description provides adequate context about the tool's purpose, usage scenarios, and key behavioral aspects. However, it lacks information about authentication, rate limits, and error handling that would be important for a production API tool. The inclusion of output schema details in the description is redundant since this information should be in structured fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents all 54 parameters thoroughly. The description mentions filtering by 'keyword, category, and multi-dimensional filters' and provides an example with specific parameters, but doesn't add significant semantic meaning beyond what's already in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Search Amazon products by keyword, category, and multi-dimensional filters,' which is a specific verb+resource combination. It distinguishes itself from sibling tools by mentioning related endpoints for competitor analysis and trends, helping differentiate its search functionality from other product-related operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with 'Use this to discover products in a specific niche, analyze competitor listings, or find high-demand low-competition opportunities.' It also names specific alternatives ('Related: /products/competitors for competitor analysis, /products/history for trends'), giving clear context for when to use this tool versus others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_prompt_injection_detect (B)
Detect prompt injection attacks
Detect prompt injection attacks in user input text.
Analyzes text using a fine-tuned DeBERTa model to identify potential prompt injection attacks such as instruction override, goal hijacking, data exfiltration, encoding obfuscation, and jailbreak roleplay.
Returns a classification label, confidence score, and boolean flag.
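The label/score/flag triple lends itself to a simple gating policy. A hedged sketch: the field names (`isInjection`, `confidence`) and the 0.8 threshold are assumptions for illustration, since the tool exposes no output schema:

```python
def should_block(result: dict, threshold: float = 0.8) -> bool:
    """Decide whether to reject user input based on the detector's output.

    `result` stands in for the response's data payload; `isInjection` and
    `confidence` are assumed field names, not confirmed by the schema.
    """
    if result.get("isInjection"):
        # Only block when the model is confident; low-confidence hits
        # might instead be routed to logging or human review.
        return result.get("confidence", 0.0) >= threshold
    return False
```

Treating the confidence score as a tunable threshold, rather than trusting the boolean flag alone, lets callers trade false positives against false negatives.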
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[PromptInjectionResult]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | User input text to analyze for prompt injection attacks. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the model used (fine-tuned DeBERTa) and the types of attacks detected, it lacks critical behavioral details such as rate limits, authentication requirements, performance characteristics, error handling, or what happens when attacks are detected. For a security tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and excessively long, including irrelevant details like full HTTP response examples and output schemas that should be handled separately. The core description is front-loaded but buried under verbose API documentation, wasting space and reducing clarity. Not every sentence earns its place, as much of the content repeats or extends beyond the tool's functional description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a security detection tool with no annotations and no output schema, the description is incomplete. It fails to explain the return values (e.g., what 'classification label, confidence score, and boolean flag' mean in practice), error conditions, or operational constraints. The inclusion of output schema snippets is confusing and does not compensate for the lack of a proper output schema or behavioral context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'text' well-documented in the schema as 'User input text to analyze for prompt injection attacks.' The description adds minimal value beyond this, merely restating that it analyzes 'user input text' without providing additional context like examples, edge cases, or formatting requirements. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('detect', 'analyzes') and resources ('prompt injection attacks', 'user input text'), distinguishing it from sibling tools which focus on categories, competitors, markets, products, and reviews. It explicitly names the types of attacks it identifies, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it analyzes 'user input text' for prompt injection attacks, suggesting it should be used when processing potentially malicious user inputs. However, it provides no explicit guidance on when to use this tool versus alternatives (e.g., other security tools or manual review) or any prerequisites, leaving the agent to infer appropriate scenarios without clear boundaries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_realtime_product (A)
Realtime Product V2
Get realtime product details by scraping the Amazon product page live.
Use this for up-to-the-minute data when daily snapshots are insufficient. Example: pass asin="B07FR2V8SH" to get current price, rating, review count, BSR, and availability. Data fetched via Spider API in real time; latency is higher than snapshot endpoints (2-5 seconds). Related: /products/search for snapshot data, /products/history for trend analysis.
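Given the documented 2-5 second scraping latency, callers should build the request with a generous timeout rather than reusing snapshot-endpoint defaults. A minimal sketch; the base URL and endpoint path below are placeholders, only the query parameters come from the table for this tool:

```python
import urllib.parse

BASE_URL = "https://api.example.com/v2"  # placeholder, not the real host

REALTIME_TIMEOUT_SECONDS = 10  # headroom over the stated 2-5 s scrape latency

def realtime_product_url(asin: str, marketplace: str = "US") -> str:
    """Build the realtime-product request URL from the two parameters."""
    query = urllib.parse.urlencode({"asin": asin, "marketplace": marketplace})
    return f"{BASE_URL}/realtime/product?{query}"
```

A caller would pass `REALTIME_TIMEOUT_SECONDS` to its HTTP client; snapshot endpoints like /products/search can keep a tighter timeout.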
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[RealtimeProduct]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number | |
| marketplace | No | Amazon marketplace code | US |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the real-time scraping mechanism ('Data fetched via Spider API in real time'), performance characteristics ('latency is higher than snapshot endpoints (2-5 seconds)'), and the types of data returned (price, rating, review count, BSR, availability). However, it doesn't mention rate limits, authentication requirements, or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with the core purpose and usage guidelines presented upfront. However, the inclusion of extensive response documentation (200 and 422 examples with schemas) adds bulk that could be streamlined, as some of this information might be redundant with structured output schemas if they were provided.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a real-time scraping tool with no annotations and no output schema, the description does a good job covering the essential context: purpose, usage scenarios, performance characteristics, and data types. It includes example responses which partially compensate for the missing output schema. However, it lacks details on error handling, rate limits, and authentication requirements that would be important for complete agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (asin and marketplace) with descriptions and constraints. The description adds minimal value beyond the schema by providing an example ASIN value ('B07FR2V8SH') and mentioning the marketplace parameter implicitly through context, but doesn't explain parameter interactions or usage nuances.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Get realtime product details') and resource ('by scraping the Amazon product page live'). It distinguishes itself from siblings by emphasizing real-time data acquisition versus snapshot endpoints, explicitly naming related tools like '/products/search' and '/products/history' for comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('when daily snapshots are insufficient') and when not to (implied for snapshot data). It names specific alternatives ('/products/search for snapshot data, /products/history for trend analysis'), giving clear context for tool selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_realtime_reviews (A)
Realtime Reviews V2
Fetch realtime reviews for an ASIN from Amazon live.
Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data. Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights.
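The cursor protocol above maps directly onto a fetch loop. A sketch under stated assumptions: `fetch_page` stands in for the actual tool call, and the `reviews`/`nextCursor` field names in its return value are assumed from the description, since no output schema is exposed:

```python
def fetch_all_reviews(fetch_page, asin: str, marketplace: str = "US",
                      max_pages: int = 50) -> list:
    """Walk cursor-based pagination: omit the cursor on the first page,
    then pass each response's nextCursor until it comes back null."""
    reviews, cursor = [], None
    for _ in range(max_pages):   # safety cap against runaway pagination
        page = fetch_page(asin, marketplace, cursor)
        reviews.extend(page.get("reviews", []))
        cursor = page.get("nextCursor")
        if cursor is None:       # null nextCursor means no more data
            break
    return reviews
```

The `max_pages` cap is a client-side safeguard, useful because each live page costs credits and latency.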
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[RealtimeReviews]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number | |
| cursor | No | Pagination token from previous response's nextCursor. Omit for the first page. When the response's nextCursor is null, there are no more pages. | |
| marketplace | No | Amazon marketplace code | US |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by explaining pagination behavior ('Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data'). It also mentions the 'live' nature of the data. However, it doesn't cover potential rate limits, authentication requirements, or data freshness guarantees that would be helpful for a real-time API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and usage guidelines, but includes extensive response format documentation (200 and 422 responses with schemas) that duplicates what would typically be in an output schema. While the response documentation is valuable, it makes the description longer than necessary for tool selection purposes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 100% schema coverage, and no annotations, the description provides good context about pagination behavior, sibling tool differentiation, and the real-time nature of the data. The inclusion of response format documentation compensates for the lack of output schema. However, it could better address behavioral aspects like rate limits or data freshness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all three parameters (asin, cursor, marketplace). The description adds context about cursor usage ('omit cursor for the first page') but doesn't provide additional semantic meaning beyond what's in the parameter descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fetch realtime reviews for an ASIN from Amazon live'), identifies the resource (reviews for an ASIN), and distinguishes from siblings by mentioning '/reviews/search for offline data with AI tags' and '/reviews/analysis for aggregated insights'. This provides precise differentiation from related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Fetch realtime reviews') versus alternatives ('Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights'). It also provides clear pagination guidance ('omit cursor for the first page, then pass nextCursor from the previous response'). This gives comprehensive usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
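The cursor pagination pattern praised above ("omit cursor for the first page, then pass nextCursor from the previous response") can be sketched as a loop. This is a minimal illustration, not the API's client library: the `fetch_reviews_page` stub stands in for the HTTP call to the realtime reviews endpoint, and only the `cursor`/`nextCursor` field names and the `success` envelope flag come from the documentation above.

```python
# Sketch of the cursor pagination loop described above. The helper name and
# canned responses are assumptions; cursor/nextCursor and the success flag
# follow the tool documentation.

def fetch_reviews_page(asin, marketplace="US", cursor=None):
    """Stand-in for the realtime reviews call; returns a canned envelope."""
    pages = {
        None: {"success": True, "data": {"reviews": ["r1", "r2"], "nextCursor": "abc"}},
        "abc": {"success": True, "data": {"reviews": ["r3"], "nextCursor": None}},
    }
    return pages[cursor]

def all_reviews(asin, marketplace="US"):
    reviews, cursor = [], None  # omit cursor for the first page
    while True:
        resp = fetch_reviews_page(asin, marketplace, cursor)
        if not resp.get("success"):
            raise RuntimeError(resp.get("error"))
        reviews.extend(resp["data"]["reviews"])
        cursor = resp["data"].get("nextCursor")  # pass nextCursor from the previous response
        if not cursor:
            return reviews

print(all_reviews("B07FR2V8SH"))  # → ['r1', 'r2', 'r3']
```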
openapi_v2_reviews_analysis
Reviews Analyze V2
Analyze reviews by ASIN list or category to extract sentiment, ratings, and consumer insights.
Use this to understand customer satisfaction and common complaints before sourcing a product. Example: pass asins=["B07FR2V8SH"] with period="6m" for 6-month review analysis. Requires ≥50 reviews for meaningful results. Data sourced from review analysis pipeline; ASIN mode supports max 100 ASINs. Related: /products/search to find ASINs, /categories for category paths.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[ReviewAnalysis]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| mode | Yes | Query mode: 'asin' for ASIN-based, 'category' for category-based | |
| asins | No | List of ASINs to analyze (max 100, required when mode='asin'). Example: ['B07FR2V8SH']. | |
| period | No | Time period for analysis | 6m |
| marketplace | No | Amazon marketplace code. | US |
| categoryPath | No | Category hierarchy from root. Example: ['Electronics', 'Computers']. Required when mode='category'. | |
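The parameter interactions in the table above (asins required in 'asin' mode with a 100-ASIN cap, categoryPath required in 'category' mode) can be enforced client-side before calling the tool. This is an illustrative sketch: the helper name and payload shape are assumptions, while the field names, defaults, and limits come from the table.

```python
# Minimal sketch of building a /reviews/analysis request body, enforcing the
# mode-dependent requirements documented above. Not an official client.

def build_analysis_payload(mode, asins=None, category_path=None,
                           period="6m", marketplace="US"):
    if mode == "asin":
        if not asins:
            raise ValueError("mode='asin' requires a non-empty asins list")
        if len(asins) > 100:
            raise ValueError("ASIN mode supports at most 100 ASINs")
        payload = {"mode": mode, "asins": asins}
    elif mode == "category":
        if not category_path:
            raise ValueError("mode='category' requires categoryPath")
        payload = {"mode": mode, "categoryPath": category_path}
    else:
        raise ValueError("mode must be 'asin' or 'category'")
    payload.update({"period": period, "marketplace": marketplace})
    return payload

print(build_analysis_payload("asin", asins=["B07FR2V8SH"]))
```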
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: 'Requires ≥50 reviews for meaningful results,' 'Data sourced from review analysis pipeline,' and 'ASIN mode supports max 100 ASINs.' However, it doesn't cover other important aspects like rate limits, authentication needs, or what happens when requirements aren't met, leaving gaps for an analysis tool of this complexity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with purpose and usage, but it includes extensive, redundant output schema details that belong in structured fields, not the description. This adds unnecessary length and reduces focus. The core content is concise, but the inclusion of schema examples and error responses wastes space and detracts from clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters, no annotations, no structured output schema), the description is moderately complete. It covers purpose, usage, and some behavioral constraints but lacks details on output format, error handling, or deeper integration context. The response documentation embedded in the description compensates partially, but it is verbose and not efficiently integrated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it provides an example ('Example: pass asins=["B07FR2V8SH"] with period="6m"') and hints at parameter interactions (e.g., ASIN mode limits). This meets the baseline for high schema coverage but doesn't significantly enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Analyze reviews by ASIN list or category to extract sentiment, ratings, and consumer insights.' It specifies the resource (reviews) and action (analyze to extract insights). However, it doesn't explicitly differentiate from sibling tools like 'openapi_v2_reviews_search' or 'openapi_v2_realtime_reviews', which might also involve review analysis, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool: 'Use this to understand customer satisfaction and common complaints before sourcing a product.' It mentions related tools ('/products/search to find ASINs, /categories for category paths') but doesn't explicitly state when NOT to use it or name direct alternatives among the siblings, so it falls short of a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openapi_v2_reviews_search
Reviews Search V2
Search reviews for an ASIN with multi-dimensional filters.
Filters: star rating range, verified purchase, Vine program, helpful vote count, date range, and AI-generated tags. Results sorted by recent/rating/helpfulVoteCount. Page-based pagination (default 10 per page, max 20). Data sourced from daily BigQuery snapshot with AI-generated tags. Related: /realtime/reviews for live data, /reviews/analysis for aggregated insights.
Responses:
200: Successful Response (Success Response) Content-Type: application/json
Example Response:
{
"success": true,
"meta": {
"requestId": "Requestid",
"timestamp": "Timestamp"
}
}
Output Schema:
{
"properties": {
"success": {
"type": "boolean",
"title": "Success",
"description": "Whether the request was successful",
"default": true
},
"data": {
"title": "Data",
"description": "Response data payload"
},
"error": {
"description": "Error details if request failed"
},
"meta": {
"description": "Metadata for API responses.",
"properties": {
"requestId": {
"type": "string",
"title": "Requestid",
"description": "Unique request identifier"
},
"timestamp": {
"type": "string",
"title": "Timestamp",
"description": "Response timestamp in ISO 8601 format"
},
"total": {
"title": "Total",
"description": "Total number of records"
},
"page": {
"title": "Page",
"description": "Current page number"
},
"pageSize": {
"title": "Pagesize",
"description": "Number of records per page"
},
"totalPages": {
"title": "Totalpages",
"description": "Total number of pages"
},
"creditsRemaining": {
"title": "Creditsremaining",
"description": "Remaining API credits"
},
"creditsConsumed": {
"title": "Creditsconsumed",
"description": "Credits consumed by this request"
}
},
"type": "object",
"required": [
"requestId",
"timestamp"
],
"title": "ResponseMeta"
}
},
"type": "object",
"required": [
"meta"
],
"title": "OpenApiResponse[list[TaggedReview]]",
"examples": []
}
422: Validation Error Content-Type: application/json
Example Response:
{
"detail": [
{
"loc": [],
"msg": "Message",
"type": "Error Type",
"ctx": {}
}
]
}
Output Schema:
{
"properties": {
"detail": {
"items": {
"properties": {
"loc": {
"items": {},
"type": "array",
"title": "Location"
},
"msg": {
"type": "string",
"title": "Message"
},
"type": {
"type": "string",
"title": "Error Type"
},
"input": {
"title": "Input"
},
"ctx": {
"type": "object",
"title": "Context"
}
},
"type": "object",
"required": [
"loc",
"msg",
"type"
],
"title": "ValidationError"
},
"type": "array",
"title": "Detail"
}
},
"type": "object",
"title": "HTTPValidationError"
}
| Name | Required | Description | Default |
|---|---|---|---|
| asin | Yes | Amazon Standard Identification Number | |
| page | No | Page number (1-indexed). | |
| sortBy | No | Sort field: 'recent' (date), 'rating' (star rating), 'helpfulVoteCount' (vote count). | recent |
| dateEnd | No | Latest review date (inclusive, YYYY-MM-DD). Example: '2026-04-01'. | |
| pageSize | No | Results per page (1-20, default 10). | |
| vineOnly | No | If true, only return Amazon Vine program reviews. | |
| dateStart | No | Earliest review date (inclusive, YYYY-MM-DD). Example: '2025-01-01'. | |
| ratingMax | No | Maximum star rating (inclusive). Example: 3 for negative reviews. | |
| ratingMin | No | Minimum star rating (inclusive). Example: 1. | |
| sortOrder | No | Sort direction: 'desc' or 'asc'. | desc |
| verifiedOnly | No | If true, only return verified purchase reviews. | |
| helpfulVoteCountMin | No | Minimum helpful vote count. Example: 5. | |
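As a usage sketch, the filter parameters above compose naturally into a query string, for example to pull recent negative verified reviews. The parameter names, ranges, and defaults come from the table; the endpoint path is taken from the description's '/reviews/search' reference, and the overall request construction is illustrative rather than an official client.

```python
# Illustrative query for /reviews/search: recent 1-3 star verified reviews,
# at the maximum page size. Only query-string construction is shown.
from urllib.parse import urlencode

params = {
    "asin": "B07FR2V8SH",   # required ASIN
    "ratingMin": 1,
    "ratingMax": 3,          # 1-3 stars: negative reviews
    "verifiedOnly": "true",
    "sortBy": "recent",      # default sort field
    "sortOrder": "desc",     # default sort direction
    "pageSize": 20,          # max allowed per page
    "page": 1,
}
query = urlencode(params)
print(f"/reviews/search?{query}")
```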
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: data source ('daily BigQuery snapshot'), pagination behavior ('page-based pagination, default 10 per page, max 20'), sorting options, and filter dimensions. It doesn't mention rate limits or authentication requirements, but covers most operational aspects well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with essential information but becomes bloated with extensive API response documentation that belongs in the output schema. The first paragraph is efficient, but the inclusion of full response schemas and examples makes it unnecessarily long and violates the principle that every sentence should earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (12 parameters, no annotations, no output schema), the description does a good job covering operational context. It explains data source, pagination, sorting, and filter dimensions, and provides usage guidance. The main gap is the lack of output format explanation, which would be helpful since there's no output schema provided in the context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 12 parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it lists filter categories and sorting options but doesn't provide additional context about parameter interactions or usage patterns. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search reviews for an ASIN with multi-dimensional filters.' It specifies the resource (reviews for an ASIN) and the action (search with filters), and distinguishes it from sibling tools by mentioning related endpoints for live data and aggregated insights.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'Related: /realtime/reviews for live data, /reviews/analysis for aggregated insights.' This clearly differentiates this tool from its siblings, indicating it's for filtered search of snapshot data rather than real-time access or analysis.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.