
Server Details

MCP server for IT hardware parts research: normalize PNs, search listings, get subs/comps.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Available Tools

9 tools
fetch-page (A)
Read-only

Fetch a web page and return its content as text, Markdown, or HTML. Includes rate limiting (2s per domain, max 10 req/min) for legal compliance. Automatically handles HTML-to-text conversion. Max response size: 1MB. Use for OEM verification and manufacturer website scraping.

Parameters
- url (required): URL to fetch
- format: Output format (text, markdown, or html)
- headers: Custom HTTP headers
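The rate limits the description states (2 s between requests to the same domain, at most 10 requests per minute) can be sketched as a small limiter. The class below is an illustrative assumption about how such limits might be enforced, not the server's actual implementation:

```python
import time
from collections import deque
from urllib.parse import urlparse

class DomainRateLimiter:
    """Sketch of the stated limits: >= 2 s between requests to the same
    domain, and at most 10 requests per minute overall. Only the numbers
    come from the page; the mechanism is an assumption."""

    def __init__(self, per_domain_delay=2.0, max_per_minute=10):
        self.per_domain_delay = per_domain_delay
        self.max_per_minute = max_per_minute
        self.last_hit = {}     # domain -> timestamp of last request
        self.recent = deque()  # timestamps of all requests in the last 60 s

    def wait_time(self, url, now=None):
        """Seconds to sleep before it is safe to fetch `url`."""
        now = time.monotonic() if now is None else now
        domain = urlparse(url).netloc
        wait = 0.0
        # Per-domain spacing.
        if domain in self.last_hit:
            wait = max(wait, self.last_hit[domain] + self.per_domain_delay - now)
        # Global requests-per-minute cap.
        while self.recent and now - self.recent[0] >= 60.0:
            self.recent.popleft()
        if len(self.recent) >= self.max_per_minute:
            wait = max(wait, self.recent[0] + 60.0 - now)
        return max(wait, 0.0)

    def record(self, url, now=None):
        now = time.monotonic() if now is None else now
        self.recent.append(now)
        self.last_hit[urlparse(url).netloc] = now
```

A caller would check `wait_time` before each fetch, sleep that long, then `record` the request.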
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: rate limiting details (2s per domain, max 10 req/min), legal compliance mention, automatic HTML-to-text conversion, and max response size (1MB). Annotations cover read-only and non-destructive aspects, but the description provides practical constraints and capabilities that aren't in structured fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with two sentences that each serve distinct purposes: first stating core functionality, then adding important behavioral constraints and use cases. Every element earns its place with no wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with good annotations (read-only, non-destructive) and complete schema coverage, the description provides excellent contextual completeness. It adds crucial behavioral details (rate limits, size limits, conversion handling) and use cases that make the tool's practical application clear, despite the lack of output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters well. The description mentions output formats (text, markdown, html) which aligns with the 'format' parameter enum, but doesn't add meaningful semantic context beyond what the schema provides. The baseline score of 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetch', 'return') and resources ('web page', 'content as text, Markdown, or HTML'). It distinguishes from sibling tools by focusing on web page retrieval rather than price history, part validation, or other domain-specific operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('for OEM verification and manufacturer website scraping'), which helps differentiate it from siblings. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get-price-history (A)
Read-only, Idempotent

Get purchase price history for a part number. Returns individual transactions with dates, prices, conditions, quantities, and vendor IDs, plus summary statistics (average, min, max, median, trend direction). Trend analysis compares recent vs older purchases (rising/falling/stable). Essential for fair market value assessment and negotiation.

Parameters
- limit: Maximum number of transactions to return (default: 50, max: 200)
- partNumber (required): Part number to get price history for
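The rising/falling/stable trend described above can be sketched by comparing window averages. The page says only that recent purchases are compared with older ones; the half-split and 5% threshold below are illustrative assumptions:

```python
def price_trend(prices, threshold=0.05):
    """Classify a price series (ordered oldest-first) as 'rising',
    'falling', or 'stable' by comparing the average of the recent half
    against the older half. Split point and threshold are assumptions."""
    if len(prices) < 2:
        return "stable"
    mid = len(prices) // 2
    older_avg = sum(prices[:mid]) / mid
    recent_avg = sum(prices[mid:]) / (len(prices) - mid)
    change = (recent_avg - older_avg) / older_avg
    if change > threshold:
        return "rising"
    if change < -threshold:
        return "falling"
    return "stable"
```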
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, but the description adds valuable context beyond this: it specifies the return data includes individual transactions with detailed fields and summary statistics, and mentions trend analysis comparing recent vs. older purchases. This enhances understanding of the tool's output and analytical features, though it doesn't cover aspects like rate limits or authentication needs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by details on returns and usage context in subsequent sentences. Each sentence adds value: the first defines the tool, the second elaborates on output, and the third explains application. There is no redundant or wasted text, making it efficient and well-structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema), the description is largely complete: it covers purpose, output details, and usage context. However, without an output schema, it could benefit from more specifics on return format (e.g., structure of transactions or statistics). Annotations provide safety and idempotency, but the description compensates well by detailing behavioral aspects like trend analysis.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters ('partNumber' and 'limit') well-documented in the schema. The description does not add any parameter-specific details beyond what the schema provides (e.g., it doesn't explain the pattern for 'partNumber' or usage of 'limit' in context). Baseline score of 3 is appropriate as the schema carries the full burden of parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('purchase price history for a part number'), distinguishing it from siblings like 'search-parts' or 'get-substitutes' by focusing on historical transaction data rather than search or substitution. It explicitly mentions what is returned (transactions with specific fields and summary statistics), making the purpose unambiguous and distinct.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('Essential for fair market value assessment and negotiation'), implying it's suited for pricing analysis. However, it does not explicitly state when not to use it or name alternatives among siblings (e.g., 'get-substitutes' for finding alternatives or 'search-parts' for broader searches), leaving some guidance gaps.

get-substitutes (A)
Read-only, Idempotent

Find substitute, equivalent, or cross-referenced part numbers. Queries PartsIQ database (34K+ cross-references from IQreseller) with static fallback. Covers HPE option/spare mappings, generation cross-refs, and Dell SKU/DP/N equivalents.

Parameters
- partNumber (required): Part number to find substitutes/equivalents for
- manufacturer: Manufacturer hint (default: auto)
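The "database with static fallback" behavior described above follows a common lookup pattern, sketched below. The part numbers and table contents are made-up examples, not real PartsIQ data:

```python
# Hypothetical static cross-reference table (example data only).
STATIC_XREF = {
    "804331-B21": ["804334-001"],  # made-up HPE option -> spare mapping
}

def find_substitutes(part_number, db_lookup=None):
    """Try the live database first; fall back to the static table on
    failure or an empty result. The fallback order comes from the page;
    everything else here is an assumption."""
    if db_lookup is not None:
        try:
            result = db_lookup(part_number)
            if result:
                return {"source": "partsiq", "substitutes": result}
        except ConnectionError:
            pass  # database unreachable: fall through to the static table
    return {
        "source": "static",
        "substitutes": STATIC_XREF.get(part_number, []),
    }
```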
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context beyond this: it specifies the data source ('PartsIQ database'), its size ('34K+ cross-references'), fallback mechanism ('static fallback'), and coverage details, which helps the agent understand the tool's reliability and scope.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by supporting details in a second sentence, with no wasted words. Each sentence adds value by specifying the database, fallback, and coverage examples.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (database querying with fallback) and rich annotations, the description is mostly complete. However, without an output schema, it could benefit from mentioning what the return values look like (e.g., list of substitutes), though the annotations and context provide adequate guidance for agent use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description does not add any specific parameter semantics beyond what the schema provides, such as examples or constraints, but it implies the 'partNumber' parameter is central to the querying process.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find substitute, equivalent, or cross-referenced part numbers') and resource ('PartsIQ database'), distinguishing it from siblings like 'normalize-pn' or 'validate-pn' by focusing on cross-referencing rather than validation or normalization.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Find substitute, equivalent, or cross-referenced part numbers') and mentions specific use cases ('HPE option/spare mappings, generation cross-refs, and Dell SKU/DP/N equivalents'), but does not explicitly state when not to use it or name alternatives among sibling tools.

get-vendor-trust (A)
Read-only, Idempotent

Look up vendor/seller trust information from IQreseller purchase history. Returns trust tier (preferred/neutral/avoid), transaction count, total spend, and satisfaction score. Use '*' as vendor name to get summary statistics. Essential for evaluating eBay sellers before purchasing.

Parameters
- vendorName (required): Vendor or eBay seller name to look up trust info for. Use '*' to get summary stats.
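The tier names (preferred/neutral/avoid) come from the description above; how a tier is derived from transaction history is not published. The thresholds below are purely illustrative assumptions:

```python
def trust_tier(satisfaction_score, transaction_count):
    """Map a vendor's history to a trust tier. Tier names come from the
    page; the 0.9/0.5 score cutoffs and 5-transaction minimum are
    invented for illustration."""
    if transaction_count == 0:
        return "neutral"  # no purchase history: no opinion either way
    if satisfaction_score >= 0.9 and transaction_count >= 5:
        return "preferred"
    if satisfaction_score < 0.5:
        return "avoid"
    return "neutral"
```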
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world). The description adds valuable context beyond annotations by specifying the source ('IQreseller purchase history'), the special '*' parameter behavior, and the practical use case for eBay seller evaluation, though it doesn't mention rate limits or authentication needs.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with core functionality, uses two efficient sentences with zero waste, and every part (purpose, returns, special case, usage context) earns its place without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema), rich annotations, and 100% schema coverage, the description is mostly complete. It covers purpose, returns, special parameter behavior, and usage context, but lacks details on output format or error handling, which would be helpful despite annotations.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the single parameter. The description repeats the '*' usage note but doesn't add significant meaning beyond what the schema provides, such as format examples or edge cases. Baseline 3 is appropriate when schema does the heavy lifting.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('look up', 'returns') and resources ('vendor/seller trust information from IQreseller purchase history'), including detailed return values (trust tier, transaction count, total spend, satisfaction score). It distinguishes from siblings by focusing on vendor trust evaluation rather than part searching, price history, or validation.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('essential for evaluating eBay sellers before purchasing') and mentions the special case of using '*' for summary statistics. However, it does not explicitly state when not to use it or name alternative tools among siblings for similar purposes.

normalize-pn (A)
Read-only, Idempotent

Normalize an IT hardware part number into its canonical form. Handles HPE (B21/001/spare), Dell (400-XXXX, DP/N), and IBM/Lenovo (FRU/CCIN) formats. Critical for deduplication and accurate lookups in the TPM market.

Parameters
- partNumber (required): Part number is required
- manufacturer: Manufacturer hint
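The server's actual canonicalization rules are not published on this page, but a normalization pass of the kind described (casing, whitespace, punctuation cleanup across HPE/Dell/IBM-style part numbers) might look like this sketch:

```python
import re

def normalize_pn(part_number):
    """Illustrative normalization only: uppercase, strip whitespace, and
    drop characters outside the punctuation commonly seen in part numbers
    (hyphens for HPE '-B21'/'-001' and Dell '400-XXXX', '/' for 'DP/N').
    The real rule set is an assumption."""
    pn = part_number.strip().upper()
    pn = re.sub(r"\s+", "", pn)            # drop internal spaces
    pn = re.sub(r"[^A-Z0-9./#-]", "", pn)  # keep common PN punctuation only
    return pn
```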
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, covering safety and idempotency. The description adds useful context about what the tool does (normalization for deduplication) and the specific manufacturer formats handled, which helps anticipate behavior. However, it doesn't disclose additional traits like rate limits, error handling, or performance characteristics beyond what annotations provide.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the core functionality with specific examples, and the second explains the business value. Every sentence earns its place by adding distinct information (format handling and purpose), with zero redundant or vague phrasing, making it front-loaded and appropriately sized.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (normalization with manufacturer hints), rich annotations (covering safety and idempotency), and no output schema, the description is mostly complete. It explains what the tool does and why, but doesn't detail output format or error cases. However, annotations help fill gaps, making it sufficient for an agent to understand the tool's role without being fully exhaustive.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both parameters clearly documented in the schema. The description doesn't add any parameter-specific details beyond what's in the schema (e.g., it doesn't explain format expectations for 'partNumber' or valid values for 'manufacturer'). Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description doesn't compensate with extra semantic information.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('normalize') and resource ('IT hardware part number'), and distinguishes it from siblings by specifying the canonical form transformation for specific manufacturer formats (HPE, Dell, IBM/Lenovo). It explicitly mentions the business context (deduplication and accurate lookups in TPM market), which further clarifies its unique role.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (for normalizing part numbers into canonical form for deduplication and lookups), but doesn't explicitly state when not to use it or name alternatives among siblings like 'validate-pn' or 'verify-pn-oem'. The manufacturer formats listed give implicit guidance, but no explicit exclusions or comparisons are provided.

score-listing (A)
Read-only, Idempotent

Score and rank eBay listings using PartsTable's 5-factor composite scoring engine. Factors: Price (30%), Vendor Trust (25%), Distance (20%), Condition (15%), Quantity (10%). Vendor trust scores are looked up from IQreseller purchase history (435 vendors). Distance is calculated from warehouse ZIP to seller location using haversine formula. Condition signals are extracted from listing text (warranty, tested, ships today). Returns items sorted by composite score (0-100) with full factor breakdown.

Parameters
- weights: Custom scoring weights (must sum to 1.0). Omit to use defaults: price=0.30, vendorTrust=0.25, distance=0.20, condition=0.15, quantity=0.10
- listings (required): Array of eBay listing objects to score (1-200 items)
- warehouseZip: 5-digit US ZIP code for distance calculation (default: 07054, Parsippany NJ)
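The composite score is a weighted sum of the five factors. The weight keys and defaults below come from the parameter documentation above; treating each factor as a 0-100 score and the exact aggregation are assumptions:

```python
DEFAULT_WEIGHTS = {  # defaults stated in the parameter docs above
    "price": 0.30, "vendorTrust": 0.25, "distance": 0.20,
    "condition": 0.15, "quantity": 0.10,
}

def composite_score(factors, weights=None):
    """Weighted sum of per-factor scores, each assumed to be on a 0-100
    scale. Enforces the documented 'must sum to 1.0' constraint on
    custom weights."""
    weights = weights or DEFAULT_WEIGHTS
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    return sum(weights[k] * factors[k] for k in weights)
```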
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond annotations: it explains the scoring engine details (5 factors, weight defaults, vendor trust source, distance calculation method, condition extraction), and mentions sorting behavior and output format. This enriches understanding without contradicting annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first explains the scoring engine and factors, the second details implementation specifics and output. Every sentence adds essential information without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters with nested objects, no output schema), the description provides comprehensive context: it explains the scoring methodology, factor details, and output behavior. However, it lacks explicit error handling or edge-case guidance, which slightly reduces completeness. Annotations cover safety aspects well.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some semantic context by mentioning default weights and the warehouse ZIP for distance calculation, but does not provide significant additional meaning beyond what the schema specifies. This meets the baseline for high schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Score and rank eBay listings') using a particular method ('PartsTable's 5-factor composite scoring engine'), and distinguishes this tool from siblings by focusing on scoring rather than fetching, searching, or validating. It specifies the exact factors and their default weights, making the purpose highly specific.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when scoring and ranking eBay listings is needed, but does not explicitly state when to use this tool versus alternatives like 'search-parts' or 'get-substitutes'. It provides context about the scoring factors but lacks explicit guidance on prerequisites or exclusions.

search-parts (A)
Read-only

Search for IT hardware parts on eBay using the Browse API. Automatically normalizes the part number before searching. Requires EBAY_CLIENT_ID and EBAY_CLIENT_SECRET environment variables. Returns prices, conditions, sellers, and item URLs.

Parameters
- limit: Max results to return (1-50)
- condition: Filter by item condition (default: any)
- partNumber (required): IT hardware part number to search for
- manufacturer: Manufacturer hint for better results (default: auto)
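A call to this tool might be assembled as below. The argument names and the EBAY_CLIENT_ID/EBAY_CLIENT_SECRET requirement come from this page; the JSON-RPC framing is the standard MCP tools/call shape, shown here as a sketch rather than a verified client snippet:

```python
import os

def build_search_call(part_number, limit=25, condition="any", manufacturer="auto"):
    """Build an MCP tools/call request for search-parts. Fails fast if
    the documented eBay credentials are missing from the environment."""
    if not (os.environ.get("EBAY_CLIENT_ID") and os.environ.get("EBAY_CLIENT_SECRET")):
        raise RuntimeError("search-parts needs EBAY_CLIENT_ID and EBAY_CLIENT_SECRET")
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "search-parts",
            "arguments": {
                "partNumber": part_number,
                "limit": limit,        # 1-50 per the schema
                "condition": condition,
                "manufacturer": manufacturer,
            },
        },
    }
```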
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, non-idempotent, and non-destructive behavior. The description adds valuable context beyond annotations: it discloses authentication requirements ('Requires EBAY_CLIENT_ID and EBAY_CLIENT_SECRET environment variables'), the automatic normalization feature, and details about the return values (prices, conditions, sellers, URLs), which enhances transparency.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with three sentences that efficiently convey purpose, key behavior (normalization), requirements, and return values. Every sentence adds value without redundancy, making it concise and well-structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and 100% schema coverage, the description is mostly complete. It covers purpose, behavior, requirements, and return values. However, without an output schema, it could benefit from more detail on response structure or error handling, leaving a minor gap in completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal parameter semantics by mentioning 'Automatically normalizes the part number before searching,' which relates to 'partNumber,' but does not provide additional meaning beyond the schema's descriptions. Baseline 3 is appropriate as the schema handles most of the parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for IT hardware parts on eBay'), the resource ('parts'), and the method ('using the Browse API'). It distinguishes from siblings by specifying the eBay search functionality, unlike tools like 'normalize-pn' or 'get-price-history' which focus on different operations.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (searching eBay for IT hardware parts) and implies usage by mentioning automatic normalization. However, it does not explicitly state when not to use it or name alternatives among siblings, such as 'get-substitutes' for finding alternatives.

validate-pn (A)
Read-only, Idempotent

Validate a part number against known manufacturer format rules. Returns structural validity, matched format rule, and warnings. IMPORTANT: Format validation only - does not confirm the part exists.

Parameters
- partNumber (required): The part number to validate
- manufacturer: Expected manufacturer, or omit to auto-detect
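Format-rule validation of this kind reduces to matching against per-manufacturer patterns. The server's actual rule set is not shown on this page; the two regexes below are invented examples of common part-number shapes, and the return pair mirrors the "structural validity" and "matched format rule" fields described above:

```python
import re

# Illustrative format rules only; not the server's actual rule set.
FORMAT_RULES = {
    "hpe":  re.compile(r"^\d{6}-(B21|001)$"),            # e.g. 804331-B21
    "dell": re.compile(r"^(400-[A-Z]{4}|[A-Z0-9]{5,7})$"),
}

def validate_pn(part_number, manufacturer=None):
    """Return (is_valid, matched_rule). Format check only: a True result
    does not mean the part actually exists."""
    rules = {manufacturer: FORMAT_RULES[manufacturer]} if manufacturer else FORMAT_RULES
    for name, pattern in rules.items():
        if pattern.match(part_number):
            return True, name
    return False, None
```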
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description aligns with by not contradicting. The description adds valuable context beyond annotations: it specifies the return values ('structural validity, matched format rule, and warnings') and clarifies the scope ('format validation only'), which helps the agent understand what to expect without an output schema.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by key clarifications in two concise sentences. Every sentence earns its place by adding critical information (e.g., return values and scope limitation) without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (validation with format rules), rich annotations (covering safety and behavior), and lack of output schema, the description is largely complete. It explains the purpose, usage, and return values, though it could slightly enhance completeness by mentioning potential error cases or input constraints (e.g., format of 'partNumber'), but this is minor given the schema coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description does not add any parameter-specific details beyond what the schema provides (e.g., no extra syntax or format rules for 'partNumber' or 'manufacturer'), meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('validate a part number'), the resource ('against known manufacturer format rules'), and distinguishes it from siblings by emphasizing it's 'format validation only - does not confirm the part exists', differentiating from tools like 'verify-pn-oem' or 'search-parts' that might check existence.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly states when to use this tool ('format validation only') and when not to use it ('does not confirm the part exists'), providing clear alternatives by implication (e.g., use other tools for existence checks). This helps the agent choose this over siblings like 'verify-pn-oem' or 'search-parts' for pure format validation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify-pn-oem (A)
Read-only

Verify a part number against OEM manufacturer websites for maximum confidence (1.0). Checks HPE PartSurfer, Dell Support, and Lenovo Parts Lookup. Returns verification status, OEM data, and confidence boost (+0.2 from DB level). Critical for achieving 100% PN confidence before quoting. Rate-limited for legal compliance (2s per domain, max 10 req/min).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| partNumber | Yes | Part number to verify | |
| manufacturer | No | Force specific manufacturer (hpe, dell, or lenovo) | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it discloses rate limits ('2s per domain, max 10 req/min') and legal compliance needs, which annotations don't cover. Annotations indicate read-only and non-destructive operations, and the description doesn't contradict this. However, it doesn't detail error handling or response formats, leaving some gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently structured into sentences that each add value: verification process, OEMs checked, return details, usage context, and rate limits. There is no wasted text, making it highly concise and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (verification across multiple OEMs with rate limits), the annotations' coverage of safety (read-only, non-destructive), and the schema's full documentation of inputs, the description is mostly complete. It lacks an output schema and does not detail the return value structure, but compensates by explaining the verification status and confidence boost. Not covering error cases or the exact output format is a slight gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description conveys how 'partNumber' is used for verification but adds no meaning beyond the schema's text. For 'manufacturer', it lists the OEMs checked, which aligns with the enum values without providing extra semantics. The baseline score of 3 is appropriate, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('verify a part number') and resources ('against OEM manufacturer websites'), listing the exact OEMs checked (HPE PartSurfer, Dell Support, Lenovo Parts Lookup). It distinguishes from siblings by focusing on OEM verification for confidence boosting, unlike tools like 'validate-pn' or 'get-price-history'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('critical for achieving 100% PN confidence before quoting'), providing clear context for its application. It implies alternatives by specifying it's for OEM verification, distinguishing it from other validation or search tools in the sibling list, though it doesn't name specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
