Glama
Ownership verified

Server Details

Real-time Amazon data API built for AI agents. 200M+ products, 1B+ reviews, live BSR, pricing, and competitor data as clean JSON. 10 agent skills for market research, competitor monitoring, pricing analysis, and listing audits.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4/5 across 10 of 10 tools scored. Lowest: 3.1/5.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes focused on different aspects of Amazon data analysis (categories, products, markets, reviews, etc.), with clear boundaries described in their documentation. However, there is some potential overlap between products_search and competitors tools as both involve product discovery, though their specific focuses differ enough to avoid major confusion.

Naming Consistency: 5/5

All tools follow a perfectly consistent naming pattern: 'openapi_v2_' prefix followed by a descriptive resource/action combination in snake_case (e.g., 'categories', 'products_search', 'reviews_analysis'). This uniformity makes the tool set predictable and easy to navigate.

Tool Count: 5/5

With 10 tools, this is well within the ideal 3-15 range for a comprehensive Amazon data analysis server. Each tool serves a specific, valuable function in the domain, from category exploration to product search, competitor analysis, market research, and review processing, with no obvious redundancy.

Completeness: 4/5

The tool set covers the core workflows for Amazon product research comprehensively: discovery (categories, products, markets), analysis (competitors, history, reviews), and real-time data (product, reviews). The only minor gap is the lack of a dedicated tool for seller or brand analysis, but the existing tools provide robust coverage for most agent tasks.

Available Tools

10 tools
openapi_v2_categories (Grade: A)

Categories V2

Query Amazon category hierarchy by ID, path, parent, or keyword.

Use this to discover category structure for filtering in other endpoints. Example: pass categoryKeyword="yoga" to find matching categories, or parentCategoryPath=["Sports & Outdoors"] to list child categories.

Query modes (mutually exclusive):

  • No parameters: Returns all root categories

  • categoryId: Get specific category by ID

  • categoryPath: Get specific category by path

  • parentCategoryId: Get children of parent category by ID

  • parentCategoryPath: Get children of parent category by path

  • categoryKeyword: Search categories by keyword

Related: /products/search and /markets/search accept categoryPath for filtering.
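Because the six query modes are mutually exclusive, an agent should send at most one of them per call. A minimal sketch of that guard, validating a params dict before it is handed to the categories tool (the validation helper is our own illustration, not part of the API):

```python
# The five explicit query modes; sending none of them returns root categories.
MODES = {"categoryId", "categoryPath", "parentCategoryId",
         "parentCategoryPath", "categoryKeyword"}

def pick_query_mode(params: dict):
    """Return the single query mode in use, or None for 'all root categories'."""
    used = MODES & params.keys()
    if len(used) > 1:
        raise ValueError(f"query modes are mutually exclusive, got: {sorted(used)}")
    return next(iter(used), None)

# Examples taken from the description above:
print(pick_query_mode({"categoryKeyword": "yoga"}))                    # categoryKeyword
print(pick_query_mode({"parentCategoryPath": ["Sports & Outdoors"]}))  # parentCategoryPath
print(pick_query_mode({"marketplace": "US"}))                          # None
```

Parameters outside the mode set (such as marketplace) can always be combined with whichever mode is chosen.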

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "title": "Data",
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[list[Category]]",
  "examples": []
}
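Every tool on this server returns the OpenApiResponse envelope shown above (success / data / error / meta). A minimal unwrapping sketch; the sample payload is illustrative, shaped like the schema rather than copied from a live response:

```python
def unwrap(resp: dict):
    """Raise on a failed envelope, otherwise return (data, meta)."""
    if not resp.get("success", True):
        raise RuntimeError(resp.get("error"))
    return resp.get("data"), resp["meta"]

# Hypothetical categories response following the schema above:
sample = {
    "success": True,
    "data": [{"categoryPath": ["Electronics"]}],
    "meta": {"requestId": "abc-123", "timestamp": "2025-01-01T00:00:00Z",
             "creditsConsumed": 1},
}
data, meta = unwrap(sample)
print(len(data), meta["creditsConsumed"])  # 1 1
```

Checking meta.creditsConsumed after each call is also how an agent can track spend against creditsRemaining.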

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

• categoryId (optional): Category identifier
• marketplace (optional, default: US): Amazon marketplace code
• categoryPath (optional): Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops'])
• categoryKeyword (optional): Filter by category name keyword (matches any level in the category hierarchy, e.g., 'Electronics' or 'Laptops')
• parentCategoryId (optional): Parent category ID
• parentCategoryPath (optional): Parent category path

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining the six query modes and their mutual exclusivity, providing example usage patterns, and mentioning the response structure. However, it doesn't explicitly state whether this is a read-only operation (though implied by 'query'), nor does it mention rate limits, authentication requirements, or pagination behavior that might be relevant for API tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose statement, usage guidance, query modes, and related endpoints. It's appropriately sized for a tool with six parameters and complex query logic. The only minor issue is that the response documentation (200 and 422 sections) is quite detailed and might be better handled by an output schema, but this is reasonable given the context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, 100% schema coverage, and no output schema, the description provides good contextual completeness. It explains the purpose, usage scenarios, query modes with examples, and relationships to other tools. The main gap is the lack of explicit behavioral information about read-only nature, authentication, or rate limits, but the query modes and examples provide substantial practical guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all six parameters thoroughly. The description adds some value by explaining the query modes and providing examples ('categoryKeyword="yoga"' and 'parentCategoryPath=["Sports & Outdoors"]'), but doesn't add significant semantic information beyond what's already in the parameter descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Query Amazon category hierarchy by ID, path, parent, or keyword' and 'Use this to discover category structure for filtering in other endpoints.' It specifies the exact resource (Amazon category hierarchy) and multiple query methods, distinguishing it from sibling tools like products_search or reviews_search which handle different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Use this to discover category structure for filtering in other endpoints' and names specific related endpoints ('/products/search and /markets/search accept categoryPath for filtering'). It also clearly explains the six mutually exclusive query modes, helping the agent choose the right parameter combination.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_competitors (Grade: A)

Competitor Lookup V2

Search competitor products by keyword, brand, ASIN, or category with filters.

Use this to identify competing products around a specific listing or brand. Example: pass asin="B07FR2V8SH" to find all products competing in the same keywords and category. Data is based on the latest daily snapshot; results are paginated (max 100 per page). Related: /products/search for broader keyword discovery.
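Since results are paginated at up to 100 per page, collecting the full competitor set means walking meta.totalPages. A sketch of that loop; fetch_page stands in for the actual tool call and is stubbed here so the paging logic runs standalone:

```python
def fetch_all(fetch_page, params, page_size=100):
    """Collect data across all pages of a paginated envelope response."""
    page, out = 1, []
    while True:
        resp = fetch_page({**params, "page": page, "pageSize": page_size})
        out.extend(resp["data"])
        if page >= resp["meta"]["totalPages"]:
            return out
        page += 1

# Stub simulating two pages of fake ASINs (not real API data):
def fake_fetch(p):
    items = ["B000000001", "B000000002", "B000000003"]
    return {"data": items if p["page"] == 1 else items[:1],
            "meta": {"totalPages": 2}}

print(len(fetch_all(fake_fetch, {"asin": "B07FR2V8SH"})))  # 4
```

In practice fetch_page would wrap the openapi_v2_competitors call, and each page consumes credits, so agents may prefer to stop early once enough competitors are gathered.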

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "title": "Data",
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[list[Product]]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

• asin (optional): Amazon Standard Identification Number (10-char alphanumeric). Example: 'B07FR2V8SH'.
• page (optional): Page number
• badges (optional): Include products with these badges. Example: ['bestSeller', 'amazonChoice', 'newRelease', 'aPlus', 'video'].
• sortBy (optional, default: monthlySalesFloor): Sort field
• keyword (optional): Search keyword
• pageSize (optional): Page size (max 100)
• brandName (optional): Filter by brand name.
• dateRange (optional, default: 30d): Time range filter. Relative ('30d') or month ('2026-01').
• sortOrder (optional, default: desc): Sort direction: asc or desc
• sellerName (optional): Filter by seller name.
• marketplace (optional, default: US): Amazon marketplace code
• categoryPath (optional): Category hierarchy from root to current level (e.g., ['Electronics', 'Computers', 'Laptops'])
• fulfillments (optional): Fulfillment filter. Example: ['FBA', 'FBM'].
• excludeBadges (optional): Exclude products with these badges. Supported: ['aPlus', 'video'].
• excludeBrands (optional): Brand names to exclude. Example: ['Generic'].
• includeBrands (optional): Brand names to include. Example: ['Apple', 'Samsung'].
• excludeSellers (optional): Seller names to exclude.
• includeSellers (optional): Seller names to include. Example: ['Apple Store'].
• sellerCountMax (optional): Maximum number of sellers. Example: 20.
• sellerCountMin (optional): Minimum number of sellers. Example: 1.
• excludeKeywords (optional): Keywords to exclude from results. Example: ['refurbished', 'used'].
• keywordMatchType (optional): Keyword match type: 'fuzzy', 'phrase', or 'exact'. Null = fuzzy.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context beyond basic functionality: it mentions data is 'based on the latest daily snapshot,' results are 'paginated (max 100 per page),' and includes an example response structure. However, it lacks details on permissions, rate limits, or error handling beyond the example, leaving some behavioral gaps for a tool with 22 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with key information in the first few sentences. However, it includes extensive example response and output schema details that are redundant since there's no output schema provided in context signals, and some formatting (like markdown code blocks) adds clutter without adding proportional value for tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (22 parameters, no annotations, no output schema), the description is fairly complete. It covers purpose, usage, behavioral traits like data freshness and pagination, and relates to siblings. However, it could improve by explaining parameter interactions or common use cases more explicitly, given the high parameter count and lack of structured output guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 22 parameters thoroughly. The description adds minimal parameter semantics beyond the schema, such as implying ASIN usage in the example and mentioning pagination limits. It doesn't provide additional syntax, format details, or usage examples for parameters beyond what the schema offers, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Search competitor products by keyword, brand, ASIN, or category with filters' and distinguishes it from sibling tools by mentioning '/products/search for broader keyword discovery.' It specifies the verb ('search'), resource ('competitor products'), and scope ('with filters'), making it highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this to identify competing products around a specific listing or brand' and gives an example with 'asin="B07FR2V8SH".' It also names an alternative ('/products/search for broader keyword discovery') and clarifies when to use this tool versus that alternative, offering clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_products_history (Grade: A)

Product History V2

Get historical time-series data for a single ASIN over a date range.

Returns columnar arrays: high-frequency metrics (price, BSR, sales, rating, sellerCount) as daily arrays aligned with timestamps, and low-frequency fields (title, imageUrl, badges, inventoryStatus) as changelog entries that only record changes. Max date range: 730 days. Related: /products/search to discover ASINs first.
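The columnar layout means daily metrics come back as parallel arrays indexed by a timestamps array. A sketch of turning those columns into per-day records; the field names and sample payload are assumptions based on the description above, not a verbatim API response:

```python
# Hypothetical columnar history payload (field names assumed for illustration):
sample = {
    "timestamps": ["2025-01-01", "2025-01-02", "2025-01-03"],
    "price": [19.99, 18.99, 18.99],
    "bsr": [1520, 1480, 1490],
}

def rows(history, metrics=("price", "bsr")):
    """Zip the parallel metric arrays into one record per day."""
    return [
        {"date": ts, **{m: history[m][i] for m in metrics}}
        for i, ts in enumerate(history["timestamps"])
    ]

print(rows(sample)[1])  # {'date': '2025-01-02', 'price': 18.99, 'bsr': 1480}
```

Low-frequency fields (title, badges, etc.) arrive as changelog entries rather than daily arrays, so they should be joined by date range, not by index.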

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[ProductHistoryTimeSeriesItem]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

• asin (required): Amazon Standard Identification Number (10 chars)
• startDate (required): Start date in YYYY-MM-DD format
• endDate (required): End date in YYYY-MM-DD format
• marketplace (optional, default: US): Amazon marketplace code

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context like the 730-day max range and details on response structure (e.g., columnar arrays vs. changelog entries). However, it omits critical behavioral traits such as rate limits, authentication needs, or error handling specifics, leaving gaps for a mutation-free read tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with key information in the first two sentences, but it includes extensive, redundant output schema details (e.g., full JSON examples and schemas for 200 and 422 responses) that could be omitted since output schema is noted as false in context. This adds unnecessary length without enhancing tool understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a time-series data tool with no annotations and no output schema, the description does well by explaining the return data structure (columnar arrays vs. changelog entries) and constraints. However, it could be more complete by addressing potential errors or usage limits beyond the date range, slightly reducing the score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters (asin, startDate, endDate, marketplace). The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining ASIN format or date constraints. This meets the baseline for high schema coverage but offers no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get historical time-series data for a single ASIN over a date range.' It specifies the verb ('Get'), resource ('historical time-series data'), and scope ('single ASIN over a date range'), distinguishing it from siblings like 'products_search' for discovery. The title 'Product History V2' reinforces this focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'Max date range: 730 days' and 'Related: /products/search to discover ASINs first.' This guides the agent on constraints and prerequisites. However, it lacks explicit alternatives or exclusions, such as when to use this versus 'realtime_product' for current data, which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_prompt_injection_detect (Grade: B)

Detect prompt injection attacks

Detect prompt injection attacks in user input text.

Analyzes text using a fine-tuned DeBERTa model to identify potential prompt injection attacks such as instruction override, goal hijacking, data exfiltration, encoding obfuscation, and jailbreak roleplay.

Returns a classification label, confidence score, and boolean flag.
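A typical use is gating untrusted text before it reaches an agent. A sketch under stated assumptions: the result field names (label, confidence, isInjection) are our guess at the "classification label, confidence score, and boolean flag", and detect is a stand-in for the tool call, stubbed here so the gating logic is runnable:

```python
def gate(text, detect, threshold=0.8):
    """Forward text only if the detector does not flag it confidently."""
    r = detect(text)
    if r["isInjection"] and r["confidence"] >= threshold:
        raise ValueError(f"blocked: {r['label']} ({r['confidence']:.2f})")
    return text

# Stub detector: flags one obvious override phrase (illustrative only).
def stub_detect(text):
    hit = "ignore previous" in text.lower()
    return {"label": "injection" if hit else "benign",
            "confidence": 0.97, "isInjection": hit}

print(gate("What is the BSR of B07FR2V8SH?", stub_detect))  # passes through
```

The confidence threshold is a caller-side policy choice; the tool itself only reports the score.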

Responses:

200: Successful Response (Content-Type: application/json)

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[PromptInjectionResult]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema):

• text (required): User input text to analyze for prompt injection attacks.

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the model used (fine-tuned DeBERTa) and the types of attacks detected, it lacks critical behavioral details such as rate limits, authentication requirements, performance characteristics, error handling, or what happens when attacks are detected. For a security tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is poorly structured and excessively long, including irrelevant details like full HTTP response examples and output schemas that should be handled separately. The core description is front-loaded but buried under verbose API documentation, wasting space and reducing clarity. Every sentence does not earn its place, as much of the content repeats or extends beyond the tool's functional description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a security detection tool with no annotations and no output schema, the description is incomplete. It fails to explain the return values (e.g., what 'classification label, confidence score, and boolean flag' mean in practice), error conditions, or operational constraints. The inclusion of output schema snippets is confusing and does not compensate for the lack of a proper output schema or behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'text' well-documented in the schema as 'User input text to analyze for prompt injection attacks.' The description adds minimal value beyond this, merely restating that it analyzes 'user input text' without providing additional context like examples, edge cases, or formatting requirements. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('detect', 'analyzes') and resources ('prompt injection attacks', 'user input text'), distinguishing it from sibling tools which focus on categories, competitors, markets, products, and reviews. It explicitly names the types of attacks it identifies, making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it analyzes 'user input text' for prompt injection attacks, suggesting it should be used when processing potentially malicious user inputs. However, it provides no explicit guidance on when to use this tool versus alternatives (e.g., other security tools or manual review) or any prerequisites, leaving the agent to infer appropriate scenarios without clear boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

openapi_v2_realtime_product (Grade: A)

Realtime Product V2

Get realtime product details by scraping the Amazon product page live.

Use this for up-to-the-minute data when daily snapshots are insufficient. Example: pass asin="B07FR2V8SH" to get current price, rating, review count, BSR, and availability. Data fetched via Spider API in real time; latency is higher than snapshot endpoints (2-5 seconds). Related: /products/search for snapshot data, /products/history for trend analysis.
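Because each call triggers a live scrape with 2-5 second latency, it is worth rejecting malformed IDs before spending the call. A minimal client-side sketch; the helper name and the strict ASIN pattern are illustrative assumptions, not part of the API:

```python
import re

def build_realtime_product_params(asin: str, marketplace: str = "US") -> dict:
    # ASINs are 10 uppercase alphanumeric characters; validating locally
    # avoids spending a slow (2-5 s) live-scrape call on a malformed ID.
    if not re.fullmatch(r"[A-Z0-9]{10}", asin):
        raise ValueError(f"not a valid ASIN: {asin!r}")
    return {"asin": asin, "marketplace": marketplace}

params = build_realtime_product_params("B07FR2V8SH")
# params == {"asin": "B07FR2V8SH", "marketplace": "US"}
```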

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[RealtimeProduct]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
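Taken together, the success envelope (200) and validation-error shape (422) above suggest a small unwrapping helper. A minimal sketch, assuming the response body has already been decoded to a dict; `unwrap_response` is a hypothetical client-side helper, not part of the API:

```python
def unwrap_response(resp: dict):
    # FastAPI-style 422 payloads arrive as {"detail": [...]}
    if "detail" in resp:
        msgs = "; ".join(err.get("msg", "") for err in resp["detail"])
        raise ValueError(f"validation error: {msgs}")
    # The envelope carries success/data/error/meta; surface the payload
    # on success and the error details otherwise.
    if not resp.get("success", True):
        raise RuntimeError(f"request failed: {resp.get('error')}")
    return resp.get("data")
```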
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| asin | Yes | Amazon Standard Identification Number | |
| marketplace | No | Amazon marketplace code | US |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the real-time scraping mechanism ('Data fetched via Spider API in real time'), performance characteristics ('latency is higher than snapshot endpoints (2-5 seconds)'), and the types of data returned (price, rating, review count, BSR, availability). However, it doesn't mention rate limits, authentication requirements, or error handling specifics.


Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with the core purpose and usage guidelines presented upfront. However, the inclusion of extensive response documentation (200 and 422 examples with schemas) adds bulk that could be streamlined, as some of this information might be redundant with structured output schemas if they were provided.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a real-time scraping tool with no annotations and no output schema, the description does a good job covering the essential context: purpose, usage scenarios, performance characteristics, and data types. It includes example responses which partially compensate for the missing output schema. However, it lacks details on error handling, rate limits, and authentication requirements that would be important for complete agent understanding.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (asin and marketplace) with descriptions and constraints. The description adds minimal value beyond the schema by providing an example ASIN value ('B07FR2V8SH') and mentioning the marketplace parameter implicitly through context, but doesn't explain parameter interactions or usage nuances.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get realtime product details') and resource ('by scraping the Amazon product page live'). It distinguishes itself from siblings by emphasizing real-time data acquisition versus snapshot endpoints, explicitly naming related tools like '/products/search' and '/products/history' for comparison.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('when daily snapshots are insufficient') and when not to (implied for snapshot data). It names specific alternatives ('/products/search for snapshot data, /products/history for trend analysis'), giving clear context for tool selection among siblings.


openapi_v2_realtime_reviews (Grade: A)

Realtime Reviews V2

Fetch realtime reviews for an ASIN from Amazon live.

Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data. Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights.
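The pagination contract above can be sketched as a drain loop. `call_tool` is a hypothetical wrapper around the openapi_v2_realtime_reviews call, and the placement of `reviews` and `nextCursor` under `data` is an assumption based on the generic envelope, not a documented field layout:

```python
def fetch_all_reviews(call_tool, asin, marketplace="US", max_pages=50):
    # Drain cursor-based pagination: omit cursor on the first call,
    # then pass back the previous response's nextCursor until it is null.
    reviews, cursor = [], None
    for _ in range(max_pages):
        params = {"asin": asin, "marketplace": marketplace}
        if cursor is not None:
            params["cursor"] = cursor
        resp = call_tool(params)
        data = resp.get("data") or {}
        reviews.extend(data.get("reviews", []))
        cursor = data.get("nextCursor")
        if cursor is None:  # nextCursor=null means no more pages
            break
    return reviews
```

The `max_pages` cap is a client-side safety bound so a misbehaving cursor cannot loop forever.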

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[RealtimeReviews]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| asin | Yes | Amazon Standard Identification Number | |
| cursor | No | Pagination token from previous response's nextCursor. Omit for the first page. When the response's nextCursor is null, there are no more pages. | |
| marketplace | No | Amazon marketplace code | US |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well, explaining pagination behavior ('Cursor-based pagination: omit cursor for the first page, then pass nextCursor from the previous response for subsequent pages. nextCursor=null means no more data'). It also mentions the 'live' nature of the data. However, it doesn't cover potential rate limits, authentication requirements, or data freshness guarantees that would be helpful for a real-time API.


Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and usage guidelines, but includes extensive response format documentation (200 and 422 responses with schemas) that duplicates what would typically be in an output schema. While the response documentation is valuable, it makes the description longer than necessary for tool selection purposes.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, and no annotations, the description provides good context about pagination behavior, sibling tool differentiation, and the real-time nature of the data. The inclusion of response format documentation compensates for the lack of output schema. However, it could better address behavioral aspects like rate limits or data freshness.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters (asin, cursor, marketplace). The description adds context about cursor usage ('omit cursor for the first page') but doesn't provide additional semantic meaning beyond what's in the parameter descriptions. This meets the baseline for high schema coverage.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch realtime reviews for an ASIN from Amazon live'), identifies the resource (reviews for an ASIN), and distinguishes from siblings by mentioning '/reviews/search for offline data with AI tags' and '/reviews/analysis for aggregated insights'. This provides precise differentiation from related tools.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Fetch realtime reviews') versus alternatives ('Related: /reviews/search for offline data with AI tags, /reviews/analysis for aggregated insights'). It also provides clear pagination guidance ('omit cursor for the first page, then pass nextCursor from the previous response'). This gives comprehensive usage context.


openapi_v2_reviews_analysis (Grade: A)

Reviews Analyze V2

Analyze reviews by ASIN list or category to extract sentiment, ratings, and consumer insights.

Use this to understand customer satisfaction and common complaints before sourcing a product. Example: pass asins=["B07FR2V8SH"] with period="6m" for 6-month review analysis. Requires ≥50 reviews for meaningful results. Data sourced from review analysis pipeline; ASIN mode supports max 100 ASINs. Related: /products/search to find ASINs, /categories for category paths.
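The mode-dependent requirements above (asins required and capped at 100 when mode='asin', categoryPath required when mode='category') can be enforced client-side before spending a call. A minimal sketch; the builder function is a hypothetical helper, not part of the API:

```python
def build_reviews_analysis_params(mode, asins=None, category_path=None,
                                  period="6m", marketplace="US"):
    # Validate the mode-dependent requirements of openapi_v2_reviews_analysis
    # locally so malformed requests never reach the API.
    if mode == "asin":
        if not asins:
            raise ValueError("mode='asin' requires a non-empty asins list")
        if len(asins) > 100:
            raise ValueError("ASIN mode supports at most 100 ASINs")
        extra = {"asins": asins}
    elif mode == "category":
        if not category_path:
            raise ValueError("mode='category' requires categoryPath")
        extra = {"categoryPath": category_path}
    else:
        raise ValueError(f"unknown mode: {mode!r}")
    return {"mode": mode, "period": period, "marketplace": marketplace, **extra}
```

Note that the ≥50-review threshold for meaningful results cannot be checked locally; only the structural constraints are validated here.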

Responses:

200: Successful Response (Success Response) Content-Type: application/json

Example Response:

{
  "success": true,
  "meta": {
    "requestId": "Requestid",
    "timestamp": "Timestamp"
  }
}

Output Schema:

{
  "properties": {
    "success": {
      "type": "boolean",
      "title": "Success",
      "description": "Whether the request was successful",
      "default": true
    },
    "data": {
      "description": "Response data payload"
    },
    "error": {
      "description": "Error details if request failed"
    },
    "meta": {
      "description": "Metadata for API responses.",
      "properties": {
        "requestId": {
          "type": "string",
          "title": "Requestid",
          "description": "Unique request identifier"
        },
        "timestamp": {
          "type": "string",
          "title": "Timestamp",
          "description": "Response timestamp in ISO 8601 format"
        },
        "total": {
          "title": "Total",
          "description": "Total number of records"
        },
        "page": {
          "title": "Page",
          "description": "Current page number"
        },
        "pageSize": {
          "title": "Pagesize",
          "description": "Number of records per page"
        },
        "totalPages": {
          "title": "Totalpages",
          "description": "Total number of pages"
        },
        "creditsRemaining": {
          "title": "Creditsremaining",
          "description": "Remaining API credits"
        },
        "creditsConsumed": {
          "title": "Creditsconsumed",
          "description": "Credits consumed by this request"
        }
      },
      "type": "object",
      "required": [
        "requestId",
        "timestamp"
      ],
      "title": "ResponseMeta"
    }
  },
  "type": "object",
  "required": [
    "meta"
  ],
  "title": "OpenApiResponse[ReviewAnalysis]",
  "examples": []
}

422: Validation Error Content-Type: application/json

Example Response:

{
  "detail": [
    {
      "loc": [],
      "msg": "Message",
      "type": "Error Type",
      "ctx": {}
    }
  ]
}

Output Schema:

{
  "properties": {
    "detail": {
      "items": {
        "properties": {
          "loc": {
            "items": {},
            "type": "array",
            "title": "Location"
          },
          "msg": {
            "type": "string",
            "title": "Message"
          },
          "type": {
            "type": "string",
            "title": "Error Type"
          },
          "input": {
            "title": "Input"
          },
          "ctx": {
            "type": "object",
            "title": "Context"
          }
        },
        "type": "object",
        "required": [
          "loc",
          "msg",
          "type"
        ],
        "title": "ValidationError"
      },
      "type": "array",
      "title": "Detail"
    }
  },
  "type": "object",
  "title": "HTTPValidationError"
}
Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| mode | Yes | Query mode: 'asin' for ASIN-based, 'category' for category-based | |
| asins | No | List of ASINs to analyze (max 100, required when mode='asin'). Example: ['B07FR2V8SH']. | |
| period | No | Time period for analysis | 6m |
| marketplace | No | Amazon marketplace code. | US |
| categoryPath | No | Category hierarchy from root. Example: ['Electronics', 'Computers']. Required when mode='category'. | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: 'Requires ≥50 reviews for meaningful results,' 'Data sourced from review analysis pipeline,' and 'ASIN mode supports max 100 ASINs.' However, it doesn't cover other important aspects like rate limits, authentication needs, or what happens if requirements aren't met, which leaves gaps even for a read-only analysis tool.


Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with purpose and usage, but it includes extensive, redundant output schema details that belong in structured fields, not the description. This adds unnecessary length and reduces focus. The core content is concise, but the inclusion of schema examples and error responses wastes space and detracts from clarity.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no annotations, no structured output schema), the description is moderately complete. It covers purpose, usage, and some behavioral constraints but lacks details on output format, error handling, or deeper integration context. The output schema embedded in the description compensates partially, but it's verbose and not efficiently integrated.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it provides an example ('Example: pass asins=["B07FR2V8SH"] with period="6m"') and hints at parameter interactions (e.g., ASIN mode limits). This meets the baseline for high schema coverage but doesn't significantly enhance understanding.


Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze reviews by ASIN list or category to extract sentiment, ratings, and consumer insights.' It specifies the resource (reviews) and action (analyze to extract insights). However, it doesn't explicitly differentiate from sibling tools like 'openapi_v2_reviews_search' or 'openapi_v2_realtime_reviews', which might also involve review analysis, so it doesn't reach the highest score.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool: 'Use this to understand customer satisfaction and common complaints before sourcing a product.' It mentions related tools ('/products/search to find ASINs, /categories for category paths') but doesn't explicitly state when NOT to use it or name direct alternatives among the siblings, so it falls short of a perfect score.

