freightgate-mcp-server
Server Details
Shipping intelligence — D&D charges, local charges, inland haulage. x402 USDC payments.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging — Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control — Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials — Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics — See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
24 tools

shippingrates_cfs_tariffs — Get CFS Handling Tariffs · A · Read-only · Idempotent
Get Container Freight Station (CFS) handling tariffs — charges for LCL (Less than Container Load) cargo consolidation and deconsolidation at port warehouses.
Use this for LCL shipments to estimate warehouse handling costs. Returns per-unit handling rates, minimum charges, and storage fees at the specified port. Not relevant for FCL (Full Container Load) shipments.
PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { facility, service_type, cargo_type, rate_per_unit, unit, minimum_charge, currency }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | Yes | UN/LOCODE port code (e.g. INMAA, INMUN) | |
| service | No | Filter by service type | |
| x_payment | No | x402 payment proof header | |
| cargo_type | No | Filter by cargo type | |
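The pay-per-call flow described above (no payment → HTTP 402 with payment instructions → retry with proof) can be sketched as follows. This is an illustrative sketch only: `http_get` and `make_payment_proof` are hypothetical stand-ins, since actually settling a USDC transfer on Base or Solana is out of scope here.

```python
import json

def call_paid_tool(http_get, url, params, make_payment_proof):
    """Call an x402-gated endpoint: on HTTP 402, obtain a payment
    proof from the returned instructions and retry with the
    x_payment header. Both callables are injected stand-ins so the
    control flow can be exercised without a live server."""
    status, body = http_get(url, params, headers={})
    if status != 402:
        return body
    # The 402 body carries payment instructions (amount, chain, recipient).
    instructions = json.loads(body)
    proof = make_payment_proof(instructions)  # settle USDC transfer out of band
    status, body = http_get(url, params, headers={"x_payment": proof})
    if status != 200:
        raise RuntimeError(f"payment not accepted: HTTP {status}")
    return body
```

The same two-step shape applies to every PAID tool on this server; only the per-call amount differs.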
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical cost information ($0.05 per call, x402 payment via USDC on Base/Solana) that annotations omit. Discloses return structure (per-unit rates, minimum charges, storage fees) beyond the read-only safety profile already declared in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded, followed by return values, cost warning, then parameter details. Args section partially duplicates schema but justifies its space by including specific examples not present in the JSON schema descriptions.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage for a parameter-rich tool: explains domain-specific terminology (CFS, LCL), discloses paid nature critical for API usage, specifies return data types, and provides parameter examples. No output schema present, but description adequately covers expected returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, description adds concrete value examples: port codes (INMAA, INMUN), service types (import, export), and cargo types (general, hazardous) that clarify expected parameter values beyond the schema's generic descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' with resource 'Container Freight Station (CFS) handling tariffs' and domain scope 'charges for LCL cargo consolidation/deconsolidation'. Explicitly targets CFS warehouse operations, distinguishing from sibling tools that cover general shipping rates, congestion, and port facilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides domain context (CFS/LCL cargo) and identifies as 'PAID endpoint' which affects usage decisions, but lacks explicit comparison to siblings like 'shippingrates_local_charges' or guidance on when to prefer this over other charge-related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_congestion — Port Congestion Data · A · Read-only · Idempotent
Get port congestion metrics — vessel waiting times, berth occupancy, and delay trends for a specific port.
Use this to assess port efficiency and anticipate detention risk. High congestion often leads to longer container dwell times and higher D&D costs. For shipping disruption news and alerts (Red Sea, Suez, chokepoints), use shippingrates_congestion_news instead.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { port, congestion_level, avg_waiting_hours, berth_occupancy_pct, vessel_count, trend, period_days }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | Yes | UN/LOCODE port code — e.g. INNSA (Nhava Sheva), AEJEA (Jebel Ali), SGSIN (Singapore) | |
| days_back | No | Days of historical data | 30 |
| x_payment | No | x402 payment proof header | |
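As the description notes, high congestion feeds into detention risk. A rough, purely illustrative heuristic for turning the returned metrics into an extra-dwell estimate that could seed shippingrates_dd_calculate's `days` input — the 85% threshold and 1.5x multiplier are assumptions for the sketch, not part of the API:

```python
def extra_dwell_days(avg_waiting_hours, berth_occupancy_pct):
    """Translate congestion metrics (fields from the tool's response)
    into an estimated number of extra container dwell days.
    Thresholds are illustrative assumptions, not API semantics."""
    days = avg_waiting_hours / 24
    if berth_occupancy_pct > 85:  # assumed "heavily congested" cutoff
        days *= 1.5
    return round(days)
```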
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent status, while the description adds critical behavioral context not covered elsewhere: the $0.02 per-call cost, x402 payment mechanism, and blockchain settlement details (USDC on Base or Solana).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear visual hierarchy (purpose → use case → cost → args → returns). The Args list and Returns line compensate for missing output schema. Minor verbosity in the bullet formatting prevents a perfect 5.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete coverage for a paid 3-parameter endpoint: explains the data returned (compensating for no output schema), documents payment requirements critical for invocation, and specifies historical data range capabilities.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by providing concrete UN/LOCODE examples ('INNSA', 'AEJEA') and clarifying the x_payment parameter's purpose as 'x402 payment proof header' beyond the raw schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('port congestion metrics'), then enumerates specific data points ('vessel waiting times, berth occupancy, and delays') that distinguish it from siblings like shippingrates_port or shippingrates_congestion_news.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context on when to use ('assessing port efficiency and planning for potential detention costs'), but does not explicitly name sibling alternatives or when-not-to-use scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_congestion_news — Shipping Disruption News · A · Read-only · Idempotent
Get shipping disruption news aggregated from 7 trade press sources — with port tagging and severity classification. Covers Hormuz Strait, Red Sea/Houthi, Suez Canal, Bab el-Mandeb, port congestion, and weather events.
Use this for situational awareness — answers "are there any active disruptions affecting my route?" For quantitative port congestion metrics (waiting times, berth occupancy), use shippingrates_congestion instead. For route-level risk scoring, use shippingrates_risk_score.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { headline, source, published_at, severity, affected_ports[], chokepoint, summary }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | No | Port UN/LOCODE filter | |
| limit | No | Maximum number of results | |
| severity | No | Severity classification filter | |
| days_back | No | Days of historical news | 7 |
| x_payment | No | x402 payment proof header | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (read-only, idempotent). Description adds crucial behavioral context absent from annotations: exact pricing ($0.02), payment mechanism (x402, USDC on Base/Solana), data sources (7 trade press), and geographic coverage specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded value proposition (sources, regions), prominent cost disclosure, and organized Args section. Slightly redundant with schema but justified by the payment context integration. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong coverage of inputs and cost model, but lacks description of return values/news format despite having no output schema. For a paid endpoint, the omission of response structure (e.g., article format, timestamp granularity) is a notable gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3). Description enhances semantics by contextualizing x_payment within the pricing model and explicitly enumerating severity filter values ('normal', 'elevated', 'congested'), adding value beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('shipping disruption news') with specific scope (7 trade press sources, Hormuz/Red Sea/Suez regions, port tagging). Distinguishes from generic 'congestion' sibling by emphasizing 'news' and media sources, though could explicitly contrast with shippingrates_congestion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical cost constraint ($0.02 per call, x402 payment method) which governs when to invoke, but lacks explicit guidance on when to prefer this over shippingrates_congestion (data vs. news) or other alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_dd_calculate — Calculate Demurrage & Detention Costs · A · Read-only · Idempotent
Calculate demurrage and detention (D&D) costs for one carrier in one country.
Use this when the user needs a detailed cost breakdown for a specific carrier. Returns free days, per-diem rates for each tariff slab, and total cost. This is the core tool for logistics cost analysis — it answers "how much will I pay if my container is detained X days?"
To compare D&D costs across all carriers at once, use shippingrates_dd_compare instead.
PAID: $0.10/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { line, country, container_type, days, free_days, slabs: [{ from, to, rate_per_day, days, cost }], total_cost, currency }
| Name | Required | Description | Default |
|---|---|---|---|
| days | Yes | Number of detention days | |
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | Yes | ISO 2-letter country code (e.g. IN, AE, SG) | |
| x_payment | No | x402 payment proof header — omit to receive a 402 with payment instructions; required for paid access | |
| container_type | Yes | ISO 6346 container type — 20DV, 40DV, 40HC, 20RF, 40RF, 20OT, 40OT, 20FR, 40FR | |
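The slab arithmetic behind the returned breakdown can be reproduced locally from the response shape above. A minimal sketch, assuming slabs are contiguous, 1-indexed chargeable-day ranges `(from, to, rate_per_day)` applied after free days expire:

```python
def dd_cost(days, free_days, slabs):
    """Demurrage/detention total from tiered per-diem slabs.
    `slabs` is a list of (from_day, to_day, rate_per_day) tuples,
    contiguous and inclusive, counted from day 1 after free time.
    Illustrative reconstruction, mirroring the tool's response shape."""
    billable = max(0, days - free_days)
    total = 0.0
    for start, end, rate in slabs:
        if billable <= 0:
            break
        span = min(billable, end - start + 1)  # days consumed by this slab
        total += span * rate
        billable -= span
    return total
```

For example, 12 detention days with 4 free days leaves 8 billable days: 5 at the first slab rate and 3 at the second.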
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds significant behavioral context not in annotations: the $0.10 pricing, the x402/USDC payment method, and what happens when paid (returns detailed slab breakdown). It explains the business logic (free days, per-diem slabs) that the output will contain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structured with clear sections: purpose statement, behavioral notes (PAID), Args, and Returns. While lengthy due to including a full JSON return example, every sentence earns its place given the absence of an output schema in the structured data. The front-loading of the paid nature is appropriate for a tool with financial requirements.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness for a complex tool. Compensates for missing output schema by providing detailed JSON return structure showing the slab breakdown format, total_cost, and currency fields. Covers the paid authentication requirement comprehensively. For a tool with payment obligations and specific domain logic (D&D calculations), the description provides sufficient detail for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description adds substantial value by providing concrete enum examples for the 'line' parameter (maersk, msc, cma-cgm), valid country codes (IN, AE, SG), and container codes (20DV, 40HC). It also clarifies that x_payment is 'optional' but 'required for paid access,' adding nuance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Calculate') and clearly identifies the resource (demurrage and detention costs). It distinguishes from siblings like 'shippingrates_rates' or 'shippingrates_total_cost' by specifying 'D&D' costs and listing the specific inputs required (line, country, container_type, days).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context that this is a 'PAID endpoint' costing '$0.10 per call' and explains the x402 payment mechanism ('include the payment proof in x_payment'). States it is the 'core tool for logistics cost analysis.' However, it does not explicitly differentiate from sibling 'shippingrates_dd_compare' or when to use one versus the other.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_dd_compare — Compare D&D Across Shipping Lines · A · Read-only · Idempotent
Compare demurrage and detention costs across ALL available carriers for the same country, container type, and detention days.
Use this for freight procurement and carrier selection — it answers "which carrier has the cheapest D&D in this country?" Returns a side-by-side comparison with each carrier's free days, slab rates, and total cost sorted cheapest first.
For a single carrier's detailed D&D breakdown, use shippingrates_dd_calculate instead.
PAID: $0.25/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { line, free_days, total_cost, currency, slabs } for each available carrier, sorted by total_cost ascending.
| Name | Required | Description | Default |
|---|---|---|---|
| days | Yes | Number of detention days | |
| country | Yes | ISO 2-letter country code | |
| x_payment | No | x402 payment proof header | |
| container_type | Yes | ISO 6346 container type — 20DV, 40DV, 40HC, 20RF, 40RF, 20OT, 40OT, 20FR, 40FR | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical behavioral information not in annotations: '$0.25 per call via x402' payment requirement and 'Returns a side-by-side comparison' output format. Annotations cover safety profile (readOnly/idempotent), so description adds cost and response context without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear information hierarchy: purpose → output → use case → cost warning → parameters. Every sentence earns its place; payment disclosure is critical and front-loaded in its own line.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a paid comparison endpoint: covers payment mechanism, return format (even without output schema), and domain context. Appropriate for the complexity level, though it could briefly expand the D&D acronym for non-expert agents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage (baseline 3), description adds value by providing example values ('e.g. "40HC", "20DV"') for container_type and clarifying 'to compare' context for days parameter in the Args section.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Compare demurrage and detention costs across multiple shipping lines' provides clear verb, resource, and scope. Explicitly distinguishes from sibling 'shippingrates_dd_calculate' by emphasizing 'across multiple' and 'side-by-side comparison'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear contextual signal with 'Essential for freight procurement and carrier selection' indicating when to use. Lacks explicit 'when not to use' or direct sibling comparison (e.g., 'use this instead of calculate when comparing options'), but use case is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_facilities — India ICD/CFS Facility Directory · A · Read-only · Idempotent
Search India's Inland Container Depot (ICD) and Container Freight Station (CFS) facility directory — GPS coordinates, rail connectivity, operator details, and capacity.
Use this to find facilities near an inland destination in India, or to check if a specific ICD/CFS has rail connectivity. Useful for inland logistics planning in combination with shippingrates_inland_haulage.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { code, name, type, state, city, lat, lon, operator, rail_connected, capacity }.
| Name | Required | Description | Default |
|---|---|---|---|
| code | No | Facility code filter | |
| type | No | Facility type filter | |
| state | No | Indian state name filter | |
| x_payment | No | x402 payment proof header | |
| rail_connected | No | Rail connectivity filter — 'true' or 'false' | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent safety (readOnlyHint: true), so the description appropriately focuses on adding the financial cost model ($0.02 per call) and hints at return payload contents (GPS coordinates, capacity, operator details) to compensate for the missing output schema. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the purpose front-loaded, followed by the critical cost warning. While the 'Args:' section largely mirrors the schema, it earns its place by linking x_payment to the payment cost explanation and keeping parameter documentation accessible.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by listing the specific data fields returned (GPS, capacity, operator details). Combined with the payment disclosure and complete parameter coverage, it provides sufficient context for a directory lookup tool with optional filters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by contextualizing the x_payment parameter within the explicit paid endpoint warning, clarifying that it relates to the $0.02 cost. The Args section also reinforces valid values for rail_connected ('true' or 'false') and the enum constraint for type.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb (Search), resource (India ICD/CFS facility directory), and return data (GPS coordinates, rail connectivity, operator details, capacity). It effectively distinguishes from shipping-rate-focused siblings like shippingrates_rates or shippingrates_port by specifying Indian inland container facilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage context by flagging the PAID endpoint ($0.02 per call) and payment mechanism (x402 via USDC), which acts as a prerequisite warning. However, it lacks explicit guidance on when to use this versus sibling tools like shippingrates_inland_search or shippingrates_port.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_fx — Currency Exchange Rates · A · Read-only · Idempotent
Get current exchange rate between two currencies — useful for converting shipping costs quoted in different currencies (USD, EUR, INR, AED, SGD, CNY, etc.).
Use this to normalize costs from different carriers/countries to a common currency for comparison. Rates are updated daily.
FREE — no payment required.
Returns: { from, to, rate, timestamp }
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target currency code — e.g. "INR", "AED" | |
| from | Yes | Source currency code — e.g. "USD", "EUR" | |
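Normalizing multi-currency quotes, as suggested above, reduces to one multiplication per cost item once rates are fetched. A sketch assuming a `rates` dict keyed by `(from, to)` pairs, populated from individual shippingrates_fx calls (the tool returns one pair at a time):

```python
def normalize_costs(costs, rates, target="USD"):
    """Convert a {currency: amount} mapping of quotes into a single
    target-currency total, using pre-fetched rates keyed
    (from_ccy, to_ccy) -> rate. Illustrative sketch only."""
    total = 0.0
    for ccy, amount in costs.items():
        if ccy == target:
            total += amount
        else:
            total += amount * rates[(ccy, target)]
    return total
```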
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations: explicitly states 'This is a FREE endpoint — no payment required' and documents the JSON return structure with field types. Annotations cover read-only/idempotent safety; description adds cost transparency and response contract.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear visual hierarchy: one-line purpose, use-case context, pricing note, Args block, and Returns block. Every sentence earns its place; no redundancy or verbose filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter read-only utility, description is complete. It covers shipping domain context, cost/usage terms, input parameters, and response format despite the lack of formal output schema. Annotations adequately cover safety properties.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete descriptions for both 'from' and 'to' parameters. Description mirrors this with an Args section but adds no additional semantic meaning, format constraints, or validation rules beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Get' and clear resource 'exchange rates between two currencies.' It clearly distinguishes from shipping-focused siblings (tariffs, congestion, schedules) by identifying this as a currency conversion utility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states use case: 'Useful for converting shipping costs quoted in different currencies,' providing clear context for when to invoke within the shipping domain. Lacks explicit 'when-not-to-use' or alternative tool guidance, but the shipping cost context is specific enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_inland_compare — Compare Inland Haulage Rates · A · Read-only · Idempotent
Compare inland haulage rates across ALL available carriers for a port-to-ICD/city pair — sorted cheapest first.
Use this for carrier selection on inland legs — answers "which carrier offers the cheapest trucking/rail from port X to city Y?" For a single carrier's rates, use shippingrates_inland_haulage instead. To discover what routes exist, use shippingrates_inland_search first.
PAID: $0.08/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, mode, container_type, rate, currency, transit_days, weight_bracket } sorted by rate ascending.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination city or ICD code | |
| container_type | No | Container type | 20GP |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations (readOnly/idempotent), description discloses payment mechanism (x402, USDC chains), return sorting ('cheapest first'), and return payload structure ('carrier, mode, weight bracket').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
First sentence is dense and informative. Payment disclosure is appropriately prominent. Args section is structured but somewhat redundant with complete schema documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite missing output schema, description adequately explains return format and sorting. Payment disclosure is complete. No significant gaps for a read-only comparison tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is met. The 'Args' section largely duplicates the schema descriptions and adds minimal new value, since the schema already documents defaults and constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description specifies exact action ('Compare'), resource ('inland haulage rates'), scope ('across all carriers for an ICD-port pair'), and distinguishes from sibling 'shippingrates_inland_haulage' by emphasizing cross-carrier comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraint ($0.08 payment requirement via x402) but lacks explicit guidance on when to use this versus siblings like 'shippingrates_inland_haulage' or 'shippingrates_inland_search'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_inland_haulage — Get Inland Haulage Rates (Grade A) · Read-only · Idempotent
Get inland haulage (trucking/rail) rates for moving containers between a port and an inland location.
Use this when you know the specific origin port and destination and need rate quotes. Returns route-specific rates by container type including base rate, fuel surcharges, and estimated transit times.
To discover what routes exist first, use shippingrates_inland_search. To compare rates across all carriers for the same route, use shippingrates_inland_compare.
PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, origin, destination, container_type, rate, fuel_surcharge, total, currency, transit_days, mode }.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Transport mode filter (PRE or ONC) | |
| origin | Yes | Origin port UN/LOCODE (e.g. INNSA, INMAA) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Inland destination city name (e.g. Ahmedabad, Delhi) | |
| container_type | No | Container type filter — e.g. 20DV, 40HC, 20RF | |
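Since the documented return is an array of quotes carrying a `total` and `currency` per row, a minimal post-processing sketch (field names taken from the Returns line above) can pick the cheapest all-in quote in a single currency:

```python
def cheapest_quote(rates, currency="USD"):
    """Lowest all-in haulage quote, using the documented return shape:
    [{carrier, total, currency, transit_days, ...}].

    Totals in different currencies are not comparable, so filter to
    one currency first.
    """
    candidates = [r for r in rates if r["currency"] == currency]
    return min(candidates, key=lambda r: r["total"], default=None)
```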
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive status. The description adds crucial behavioral context not in annotations: the $0.05 per call cost via x402 payment protocol and a clear breakdown of what the response includes (base rate, fuel surcharges, transit times). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded, return values second, payment disclosure third. The Args section is somewhat redundant given 100% schema coverage but may aid LLM parsing. Minimal waste, appropriate length for complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters and no output schema, the description adequately compensates by detailing return contents (rates, surcharges, transit times) and critical payment requirements. Could benefit from noting error behavior (e.g., route not found) but covers essential invocation context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The Args section duplicates schema descriptions (e.g., origin port UN/LOCODE, x402 payment proof header) without adding semantic depth beyond the schema, though it maintains parity with structured documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states specific action ('Get') and domain resource ('inland haulage (trucking/rail) rates'). The scope is precisely defined ('moving containers between ports and inland locations'), effectively distinguishing from sibling tools like shippingrates_inland_compare and shippingrates_inland_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied through the defined scope (ports to inland locations) and parameter specifics (UN/LOCODE for origin), but there is no explicit guidance on when to use this versus shippingrates_inland_compare or shippingrates_inland_search, nor when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_inland_search — Search Inland Transport Routes (Grade A) · Read-only · Idempotent
Search for available inland transport routes (road/rail haulage) from port to inland destinations for a specific carrier.
Use this to discover what haulage routes a carrier offers in a country. For example, search "ahmedabad" to find routes from Nhava Sheva to Ahmedabad via Maersk. Returns route options with ICD/CFS codes and available container types.
For actual haulage rate quotes, use shippingrates_inland_haulage. For cross-carrier rate comparison, use shippingrates_inland_compare.
PAID: $0.03/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { origin, destination, mode, container_types, icd_code } matching the search criteria.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | Yes | ISO 2-letter country code | |
| keyword | No | Search term — city name, region, or route | |
| x_payment | No | x402 payment proof header | |
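A small sketch of the discover-first workflow: filter search results (documented shape: `{origin, destination, mode, container_types, icd_code}`) down to routes that support a given container type before paying for rate quotes:

```python
def routes_for_container(routes, container_type):
    """Keep only shippingrates_inland_search results whose
    container_types list includes the requested type."""
    return [r for r in routes if container_type in r.get("container_types", [])]
```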
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds essential behavioral context absent from annotations: monetary cost per invocation, payment protocol (x402), accepted networks (Base/Solana), and return data categories (haulage options, estimated costs). Annotations sufficiently cover safety profile (read-only/idempotent).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
First two sentences overlap in meaning ('inland transport routes' vs 'haulage options, routes'). The Args section redundantly documents parameters already fully described in the schema. Payment information is appropriately placed and essential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a 4-parameter tool with simple types. Critically discloses paid endpoint nature and payment header requirement. Mentions return data categories (costs, routes) to compensate for missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete parameter descriptions. The Args section duplicates these definitions without adding semantic depth, usage examples, or validation rules beyond the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (search) and resource (inland transport routes) with transport modes (road/rail) and scope (port to inland destination). Lacks explicit differentiation from similar siblings like 'shippingrates_inland_compare' or 'shippingrates_inland_haulage'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage constraint (pay-per-use at $0.03/call via x402/USDC) but lacks guidance on when to select this versus alternative tools like 'shippingrates_inland_compare' or general 'shippingrates_search'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_lines — List Shipping Lines (Grade A) · Read-only · Idempotent
List all shipping lines in the ShippingRates database with per-country record counts.
Use this to discover which carriers and countries have data before querying specific tools. Returns each carrier's name, slug, SCAC code, and a breakdown of available D&D tariff and local charge records per country.
FREE — no payment required.
Returns: Array of { line, slug, scac, countries: [{ code, name, dd_records, lc_records }] }
Related tools: Use shippingrates_stats for aggregate totals, shippingrates_search for keyword-based discovery.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable billing context ('FREE endpoint') and data scope ('6 major shipping lines') beyond annotations. Discloses return structure (array with country breakdowns) which is critical given no output schema exists. Aligns with readOnlyHint=true annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five distinct information chunks each earning their place: purpose, specific carriers, data structure, billing notice, and return type. Line breaks separate concerns effectively. No redundancy or templated fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thoroughly compensates for missing output schema by detailing exact return structure (6 named carriers with per-country counts) and billing status. Appropriate completeness for a simple discovery endpoint with strong safety annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present (confirmed by context signals), triggering baseline score of 4. Description appropriately focuses on return value semantics rather than inventing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'List' with clear resource 'shipping lines' and scope 'with per-country record counts'. Explicitly names the 6 specific carriers covered (Maersk, MSC, etc.), distinguishing it from calculation/query siblings like shippingrates_rates or shippingrates_calculate.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context through 'FREE endpoint — no payment required' (billing guidance) and specific scope (returns 6 major lines with counts), though lacks explicit 'when to use vs shippingrates_search' comparison. The per-country record count detail implies discovery/coverage checking use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_local_charges — Get Port Local Charges (Grade B) · Read-only · Idempotent
Get local charges at a port for a specific carrier — Terminal Handling Charges (THC), documentation fees (BL/DO), seal fees, and other port-specific charges.
Use this when calculating total shipping costs at origin or destination. Combine with shippingrates_dd_calculate for a complete port cost picture, or use shippingrates_total_cost for an all-in-one landed cost estimate.
PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { charge_type, charge_name, amount, currency, container_type, direction } for all applicable charges at the port.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | Yes | ISO 2-letter country code | |
| port_code | No | Port code to filter (e.g. INMUN for Mumbai) | |
| x_payment | No | x402 payment proof header | |
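For the "total shipping costs" use case the description mentions, the documented per-charge rows need summing. A sketch, assuming the documented shape `{charge_type, charge_name, amount, currency, container_type, direction}`, that totals the applicable charges grouped by currency:

```python
def total_local_charges(charges, direction, container_type):
    """Sum charges matching a direction and container type.
    Results are keyed by currency, since THC and documentation fees
    may be quoted in different currencies at the same port."""
    totals = {}
    for charge in charges:
        if charge["direction"] == direction and charge["container_type"] == container_type:
            totals[charge["currency"]] = totals.get(charge["currency"], 0) + charge["amount"]
    return totals
```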
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety hints (readOnly, idempotent, non-destructive). Description adds critical payment disclosure ('PAID endpoint: $0.05 per call via x402') and briefly mentions return format ('detailed breakdown'), which agents need to know before invoking.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear paragraph breaks: purpose first, return value second, pricing third, then Args. No wasted words. The Args section is slightly redundant given the schema but serves as a quick reference.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks output schema, though description briefly states what is returned. Omits guidance on behavior when optional port_code is omitted (returns all ports in country?). Payment mechanism is disclosed but not explained. Adequate given parameter richness but gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions and examples (e.g., 'INMUN for Mumbai'). The description's 'Args' section repeats parameter names and types but actually provides less detail than the schema (omits the Mumbai example), adding no semantic value beyond the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' with specific resource 'local charges' and concrete examples (THC, documentation fees, seal fees). However, it does not distinguish from sibling tools like 'shippingrates_surcharges' or 'shippingrates_total_cost' which may overlap conceptually.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., when to use local_charges vs surcharges vs total_cost). No prerequisites or contextual triggers provided despite 21 sibling tools existing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_port — Port Lookup (Grade A) · Read-only · Idempotent
Look up port details by UN/LOCODE — name, country, coordinates, timezone, and terminal facilities.
Use this to validate port codes or get port metadata. If you don't know the UN/LOCODE, use shippingrates_search with the port or city name first.
PAID: $0.01/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { port_code, port_name, country, country_code, lat, lon, timezone, facilities }
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | UN/LOCODE port code — e.g. "INNSA", "AEJEA", "SGSIN" | |
| x_payment | No | x402 payment proof header | |
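Because each lookup costs $0.01, a cheap local sanity check on the code's shape can avoid paying for obviously malformed input. UN/LOCODE is five characters: a two-letter ISO country prefix plus a three-character location part (letters, with digits 2-9 permitted in newer assignments):

```python
import re

# 2-letter country prefix + 3-char location part (A-Z or digits 2-9)
UNLOCODE = re.compile(r"^[A-Z]{2}[A-Z2-9]{3}$")

def looks_like_unlocode(code):
    """Local format check before spending a paid call on a lookup.
    Validates shape only; it cannot confirm the code is assigned."""
    return bool(UNLOCODE.match(code))
```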
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent behavioral disclosure beyond annotations. Critically exposes the paid nature ($0.01/call via x402), payment currencies (USDC on Base/Solana), and return payload structure (coordinates, timezone, facilities) despite lack of formal output schema. No contradiction with read-only annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear visual hierarchy (description, cost warning, args, returns). Information-dense but organized. The 'Args:' and 'Returns:' labeling is slightly verbose but functional. Payment disclosure is appropriately prominent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a lookup tool. Compensates for missing output schema by explicitly documenting return fields (port_code, port_name, lat, lon, etc.). Payment requirement documentation is complete. No gaps given tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is met. Description adds substantial value via concrete examples mapping codes to human-readable port names ('INNSA' = Nhava Sheva), clarifying the addressing semantics beyond raw string type.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Look up' + resource 'port details' + method 'by UN/LOCODE'. Uniquely identifies this as a port metadata lookup distinct from rate-finding or transit-scheduling siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance provided. Lacks comparison to siblings like shippingrates_search or shippingrates_facilities that might overlap in functionality. Usage must be inferred from the UN/LOCODE specificity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_rates — Freight Rates (Grade A) · Read-only · Idempotent
Get ocean freight rates between two ports, optionally filtered by container type.
Use this to compare base freight costs across carriers for a specific trade lane. Returns current spot rates and contract rate indicators with trend data. For a complete cost picture including surcharges and local charges, use shippingrates_total_cost instead.
PAID: $0.03/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, origin, destination, container_type, rate, currency, effective_date, trend }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port UN/LOCODE — e.g. AEJEA (Jebel Ali), NLRTM (Rotterdam), USNYC (New York) | |
| container_type | No | Container type filter — e.g. 20DV, 40HC, 20RF | |
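For the cross-carrier comparison use case above, the documented array can be reduced to the best rate per equipment type. A sketch over the documented row shape `{carrier, container_type, rate, ...}`, assuming a single quoting currency:

```python
def cheapest_by_container(rates):
    """Group freight-rate rows by container_type and keep the
    lowest-rate row for each type."""
    best = {}
    for row in rates:
        current = best.get(row["container_type"])
        if current is None or row["rate"] < current["rate"]:
            best[row["container_type"]] = row
    return best
```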
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds substantial behavioral context not present in annotations: explicit pricing ($0.03 per call), payment mechanism (x402 protocol, USDC on Base/Solana networks), return data structure (current/historical rates with trend indicators), and specific return fields (carrier, rate, currency, effective dates). While annotations declare readOnly/idempotent status, the description explains the financial and data contract implications critical for agent decision-making.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Employs a structured Args/Returns format that efficiently organizes 4 parameters and explains the return value despite no output schema. The cost warning is appropriately prominent. Examples (INNSA, AEJEA, 40HC) are embedded directly with parameters where they provide maximum value. No redundant prose; every line conveys distinct information (purpose, pricing, parameters, return structure).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacking an output schema, the description compensates by detailing the return structure ('Array of freight rates with carrier, rate, currency, effective dates'). It covers the critical business constraint (paid endpoint) and authentication requirement (x402). With 100% schema coverage and simple flat parameters, the description provides sufficient context for an agent to invoke this financial tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% providing baseline documentation, but the description adds valuable semantic examples for UN/LOCODEs (INNSA, AEJEA) and container codes (40HC, 20DV) that clarify expected string formats. It correctly identifies x_payment as an 'x402 payment proof header' linking it to the protocol mentioned in the pricing section. Score reflects value added above the complete schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') + specific resource ('ocean freight rates') + scope constraint ('between two ports'). Effectively distinguishes from 20+ siblings including inland-focused tools (shippingrates_inland_haulage) and aggregate cost tools (shippingrates_total_cost) by specifying 'ocean' and port-to-port scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit critical usage constraints: the paid requirement ($0.03/call), specific blockchain payment rails (USDC on Base/Solana), and optional vs required parameters. While it does not explicitly name sibling alternatives (e.g., 'use shippingrates_total_cost for all-in pricing'), the 'ocean freight rates' specificity and cost disclosure effectively guide appropriate invocation contexts.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_regulatory — Regulatory Updates (Grade A) · Read-only · Idempotent
Get recent shipping regulatory updates and compliance requirements for a specific country — customs regulations, documentation requirements, trade restrictions, and policy changes.
Use this to stay current on regulatory changes that may affect shipments to/from a country.
PAID: $0.01/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { title, description, effective_date, impact_level, category, country }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default: 10) | |
| country | Yes | ISO 2-letter country code | |
| x_payment | No | x402 payment proof header | |
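The documented rows carry an `impact_level` and `effective_date`, so a natural post-processing step is triaging updates by severity. The level names used here are assumptions for illustration; check the actual API values:

```python
def high_impact_updates(updates, levels=("high", "critical")):
    """Keep updates whose impact_level is in `levels`, newest first,
    using the documented shape {title, effective_date, impact_level, ...}.
    ISO 8601 date strings sort correctly as plain strings."""
    hits = [u for u in updates if u.get("impact_level") in levels]
    return sorted(hits, key=lambda u: u["effective_date"], reverse=True)
```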
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety, but description adds critical behavioral context: $0.01 cost per call, x402 payment mechanism (USDC on Base/Solana), and return value structure. Does not disclose rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections. Slightly redundant in Args/Returns pseudo-documentation since schema is complete, but Returns section is necessary given lack of output schema. Payment warning appropriately placed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for complexity: covers payment requirement (critical for invocation), describes return structure (compensating for missing output schema), and explains parameter examples. No output schema exists, and description adequately documents the array return type and its fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage (baseline 3). Description adds value by providing concrete country code examples ('IN', 'AE', 'SG') and clarifying x_payment as 'x402 payment proof header' beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent purpose statement with specific verb 'Get', clear resource 'regulatory updates', and scope 'shipping in a specific country'. Clearly distinguishes from sibling tools which focus on rates, schedules, and congestion rather than compliance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists coverage areas (customs, documentation, trade restrictions) which implies usage context, but lacks explicit when-to-use guidance or contrast with alternatives. Siblings like shippingrates_cfs_tariffs or shippingrates_local_charges might overlap in trade domain but aren't differentiated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_reliability — Schedule Reliability (Grade A) · Read-only · Idempotent
Get schedule reliability metrics for a carrier — on-time performance percentage, average delay in days, and sample size.
Use this for carrier selection and benchmarking — answers "how reliable is this carrier on this trade lane?" On-time is defined as arriving within ±1 day of scheduled ETA (industry standard per Sea-Intelligence).
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { line, trade_lane, on_time_pct, avg_delay_days, sample_size, period }.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| x_payment | No | x402 payment proof header | |
| trade_lane | No | Trade lane filter — e.g. 'Asia-Europe', 'Transpacific', 'Asia-Middle East' | |
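For the carrier-selection use case, on-time percentages should only be compared where the sample size is meaningful. A sketch over the documented shape `{line, on_time_pct, avg_delay_days, sample_size, ...}`; the sample-size floor of 30 is an illustrative assumption, not a value from the API:

```python
def most_reliable(metrics, min_sample=30):
    """Best on-time percentage among carriers with at least
    `min_sample` observations; None if nothing qualifies."""
    eligible = [m for m in metrics if m["sample_size"] >= min_sample]
    return max(eligible, key=lambda m: m["on_time_pct"], default=None)
```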
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds critical cost information absent from annotations: '$0.02 per call via x402 (USDC on Base or Solana)'. Also describes return structure (on-time %, average delay, sample size) compensating for missing output schema. Aligns perfectly with readOnly/idempotent annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure: purpose → use case → payment warning → args → returns. Front-loads critical payment constraint. No redundancy; every line delivers unique information not derivable from schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for tool complexity: covers 3 parameters (1 required), payment requirements, and return structure despite no output schema. Safety profile covered by annotations, cost profile by description. Sufficient for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage, adds valuable specifics: enumerates valid line slugs (maersk, msc, etc.), provides concrete trade_lane example ('Asia-Europe'), and clarifies x_payment as 'proof header'. Enhances raw schema with actionable examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action (Get), resource (schedule reliability metrics), and scope (shipping line) with concrete outputs (on-time performance, average delays). Distinguished from sibling tools like shippingrates_rates or shippingrates_transit by focusing on reliability/benchmarking metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear business context ('Useful for carrier selection and benchmarking') helping users understand when to invoke. Lacks explicit 'when not to use' or sibling comparisons, but the specific metric focus sufficiently positions it among 20+ shipping tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_risk_score · Route Risk Assessment (A · Read-only · Idempotent)
Get a composite risk score (0-100) for a shipping route — combines port congestion, active disruption news, and chokepoint impact analysis (Hormuz, Suez, Bab el-Mandeb, Panama Canal).
Use this for route risk screening — answers "how risky is this trade lane right now?" Scores above 70 indicate elevated risk. For detailed congestion metrics, use shippingrates_congestion. For news detail, use shippingrates_congestion_news.
PAID: $0.10/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { origin, destination, risk_score, risk_level, congestion_factor, disruption_factor, chokepoints_affected[], recommendation }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port UN/LOCODE — e.g. AEJEA (Jebel Ali), NLRTM (Rotterdam), USNYC (New York) | |
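The x402 flow these paid tools describe (a bare call returns 402 with payment instructions; retrying with a payment proof in `x_payment` succeeds) can be sketched as a generic retry helper. This is a hypothetical illustration of the documented behavior, not the server's actual client code; the `call` and `pay` callables stand in for transport and wallet logic.

```python
from typing import Callable, Dict, Tuple

def call_with_x402(
    call: Callable[[Dict], Tuple[int, Dict]],   # transport stub: params -> (status, body)
    params: Dict,
    pay: Callable[[Dict], str],                 # wallet stub: payment instructions -> proof
) -> Dict:
    """Call once; on 402, settle per the returned instructions and retry."""
    status, body = call(params)
    if status == 402:
        # The 402 body carries x402 payment instructions; attach the proof.
        proof = pay(body)
        status, body = call({**params, "x_payment": proof})
    if status != 200:
        raise RuntimeError(f"tool call failed with status {status}")
    return body
```

A stubbed transport makes the retry path easy to exercise without spending real USDC.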
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive hints, but description adds crucial behavioral context: cost structure ($0.10 per call), payment authentication requirements (x402 header), and specific return value details (composite score 0-100, chokepoint analysis). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded value: first sentence defines purpose and outputs, second sentence provides critical cost warning, followed by clear Args list. No filler text. The Args section slightly duplicates the schema but is formatted cleanly for LLM parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description adequately covers expected returns (congestion data, news alerts, chokepoint analysis, composite score). Includes critical payment context required for a paid API. With annotations present and 100% param coverage, the description sufficiently prepares an agent for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (origin: 'Origin port UN/LOCODE', destination: 'Destination port UN/LOCODE', x_payment: 'x402 payment proof header'). The Args section in the description merely repeats these exact descriptions without adding format syntax, validation rules, or examples beyond the schema. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with clear resource 'route risk assessment' and distinguishes from siblings by detailing unique features: congestion data, active news alerts, chokepoint impact analysis (specifically naming Hormuz, Suez, Bab el-Mandeb), and composite risk score 0-100. This differentiates it from shippingrates_congestion (just congestion) and shippingrates_congestion_news (just news).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly warns this is a paid call ('PAID: $0.10/call') with specific payment mechanism details (x402, USDC on Base or Solana), which is critical usage guidance for an LLM agent. It also names sibling alternatives directly: shippingrates_congestion for detailed congestion metrics and shippingrates_congestion_news for news detail, clarifying when this premium composite endpoint is the right choice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_search · Search ShippingRates Data (A · Read-only · Idempotent)
Search the ShippingRates database by keyword — matches against carrier names, port names, country names, and charge types.
Use this for exploratory queries when you don't know exact codes. For example, search "mumbai" to find port codes, or "hapag" to find Hapag-Lloyd data coverage. Returns matching trade lanes, local charges, and shipping line information.
FREE — no payment required.
Returns: { trade_lanes: [...], local_charges: [...], lines: [...] } matching the keyword.
Related tools: Use shippingrates_port for structured port lookup by UN/LOCODE, shippingrates_lines for full carrier listing.
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | Search term — e.g. "maersk", "mumbai", "hapag-lloyd" | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnlyHint=true, destructiveHint=false) and idempotency. Description adds valuable behavioral context not in annotations: 'FREE — no payment required' (cost model) and an explicit listing of the returned collections (trade_lanes, local_charges, lines), clarifying what data structures to expect despite no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement followed by usage guidance, cost notice, and structured Args/Returns sections. No redundant fluff. Each sentence delivers distinct value (purpose, usage, cost, parameter semantics, return structure).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter search tool with no output schema, the description adequately compensates by detailing the specific charge types and data structures returned. The 'FREE endpoint' note is critical business context. No significant gaps given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with description and examples. The description adds semantic categories beyond the schema's examples: it states the keyword matches against 'carrier names, port names, country names, and charge types', which helps the agent understand valid search domains.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with the specific verb 'Search' and resource 'the ShippingRates database', and clarifies the return scope (trade lanes, local charges, line info). This distinguishes it from sibling tools like shippingrates_local_charges or shippingrates_port, which presumably return only one entity type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this for exploratory queries when you don't know exact codes' with concrete examples ('mumbai' to find port codes, 'hapag' to find carrier coverage), contrasting it against the specific lookup siblings. The 'Related tools' note names shippingrates_port for structured port lookup and shippingrates_lines for the full carrier listing, making the exploratory vs. specific distinction clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_stats · ShippingRates Database Statistics (A · Read-only · Idempotent)
Get current statistics for the ShippingRates shipping intelligence database.
Use this as a starting point to understand what data is available before calling other tools. Returns record counts for D&D tariffs, local charges, transit schedules, freight rates, surcharges, ports, shipping lines, countries, and the last data refresh timestamp.
FREE — no payment required.
Returns: { tariff_records, ports, transit_schedules, freight_rates, local_charges, shipping_lines, countries, last_scrape (ISO datetime) }
Related tools: Use shippingrates_lines for per-carrier breakdowns, shippingrates_search for keyword discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
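Since the stats payload documents a `last_scrape` ISO timestamp, a client can gate downstream paid calls on data freshness. A minimal sketch, assuming the timestamp parses with `datetime.fromisoformat` and treating a naive value as UTC; the 48-hour threshold is an arbitrary choice, not a server guarantee.

```python
from datetime import datetime, timezone

def is_stale(stats: dict, max_age_hours: float = 48.0) -> bool:
    """True if the database's last_scrape is older than the given budget."""
    last = datetime.fromisoformat(stats["last_scrape"])
    if last.tzinfo is None:
        last = last.replace(tzinfo=timezone.utc)  # assume UTC for naive stamps
    age = datetime.now(timezone.utc) - last
    return age.total_seconds() > max_age_hours * 3600
```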
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context beyond annotations: explicitly states 'FREE endpoint — no payment required,' and documents return structure including 'last_scrape' freshness indicator. Annotations cover safety profile (readOnly/idempotent), description adds economic and data-freshness context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose, return values, usage guidance, and cost. The Returns JSON block is slightly bulky for a description field but compensates effectively for missing output schema. Every sentence provides distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage for a simple metadata tool. Despite no output schema, description fully documents return structure including all fields (tariff_records, last_scrape, etc.). Covers purpose, usage, cost, and response format completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present, meeting baseline 4. No parameter explanation needed or expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific purpose: 'Get current statistics for the ShippingRates shipping intelligence database.' Uses specific verb+resource and distinguishes from siblings (which query specific data like rates, tariffs, congestion) by focusing on metadata/scope/freshness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance: 'Use this as a starting point to understand what data is available before calling other tools.' Also highlights the cost consideration ('FREE — no payment required') and names alternatives: shippingrates_lines for per-carrier breakdowns and shippingrates_search for keyword discovery.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_surcharges · Shipping Surcharges (A · Read-only · Idempotent)
Get carrier-specific surcharges — BAF (Bunker Adjustment Factor), CAF (Currency Adjustment Factor), PSS (Peak Season Surcharge), EBS (Emergency Bunker Surcharge), and more.
Use this to understand surcharge exposure for a carrier in a specific country/direction. These are charges added on top of base freight rates. For a complete cost breakdown, use shippingrates_total_cost which includes surcharges automatically.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { surcharge_type, surcharge_name, amount, currency, per_unit, effective_from, effective_to, direction }.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | No | ISO 2-letter country code | |
| direction | No | Trade direction — 'import' or 'export' | |
| x_payment | No | x402 payment proof header | |
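Because each documented surcharge row carries `effective_from`/`effective_to` dates, a client can total only the surcharges active on a given date. A sketch under the assumption that the date fields are ISO `YYYY-MM-DD` strings (which compare correctly as strings) and that an open-ended surcharge has a null `effective_to`.

```python
def active_total(rows: list, on: str) -> dict:
    """Sum surcharge amounts active on ISO date `on`, grouped by currency."""
    totals = {}
    for r in rows:
        open_ended = r.get("effective_to") is None
        if r["effective_from"] <= on and (open_ended or on <= r["effective_to"]):
            totals[r["currency"]] = totals.get(r["currency"], 0) + r["amount"]
    return totals
```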
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations declaring readOnly/idempotent status, the description adds crucial behavioral context: the monetary cost per call, the authentication mechanism (x402 payment proof), and the exact structure of returned data (type, amount, currency, effective dates). This provides essential context for agent decision-making.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a clear structured format with distinct sections for purpose, cost warning, Args, and Returns. While information-dense, every sentence serves a necessary function (cost disclosure, parameter specification, return value documentation).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately documents the return structure (array with type, amount, currency, dates). The payment requirement is clearly stated. It could potentially clarify error behavior for missing payment, but covers all essential invocation and return semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is elevated. The description adds value by enumerating the specific shipping line options (maersk, msc, etc.) and clarifying that x_payment is a 'proof header' for the x402 payment system, adding domain context beyond the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get surcharges') and resource, listing exact surcharge types (BAF, CAF, PSS, EBS) that distinguish this from sibling tools like shippingrates_rates or shippingrates_local_charges. The scope is precisely defined by shipping line, country, and direction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly identifies this as a paid call ('PAID: $0.02/call') and specifies the payment mechanism (x402 via USDC on Base or Solana), which is critical prerequisite information for invocation. It also points to shippingrates_total_cost for a complete cost breakdown that includes surcharges automatically, positioning this tool for surcharge-only queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_total_cost · Full Landed Cost Calculator (A · Read-only · Idempotent)
Calculate the full landed cost of shipping a container — combines freight rates, surcharges, local charges (origin + destination), demurrage/detention estimates, and transit time into one comprehensive estimate.
This is the most comprehensive tool — a single call replaces 5-6 individual queries. Use this when the user needs an all-in cost estimate for a specific shipment. For individual cost components, use the dedicated tools: shippingrates_rates (freight), shippingrates_surcharges, shippingrates_local_charges, shippingrates_dd_calculate (detention).
PAID: $0.15/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { freight: { rate, currency }, surcharges: { total, items[] }, local_charges: { origin: { total, items[] }, destination: { total, items[] } }, detention: { days, cost, currency }, transit: { days, service }, total_landed_cost, currency }
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port or inland location | |
| container_type | Yes | ISO 6346 container type — 20DV, 40DV, 40HC, 20RF, 40RF, 20OT, 40OT, 20FR, 40FR | |
| detention_days | No | Expected detention days | 0 |
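The documented return shape makes an easy client-side sanity check possible: the grand total should equal the sum of its components. A sketch assuming all components share one currency (real responses may not):

```python
def check_total(resp: dict, tol: float = 0.01) -> bool:
    """True if total_landed_cost matches the sum of its documented parts."""
    parts = (
        resp["freight"]["rate"]
        + resp["surcharges"]["total"]
        + resp["local_charges"]["origin"]["total"]
        + resp["local_charges"]["destination"]["total"]
        + resp["detention"]["cost"]
    )
    return abs(parts - resp["total_landed_cost"]) <= tol
```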
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive status; description adds critical behavioral context missing from structured data: $0.15 cost per call, x402 payment requirement (USDC on Base/Solana), and the specific aggregation logic (what components are summed). No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear Args/Returns sections. The Returns block compensates for missing output schema without being wasteful. First sentence front-loads value proposition. Minor verbosity in enumerating all cost components.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent coverage for a paid tool: documents pricing model ($0.15), payment mechanism (x402), and provides complete return structure documentation despite no formal output schema. Could mention idempotency implications or retry safety hinted by annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (baseline 3), but description adds value by providing concrete examples (INNSA, AEJEA, 40HC) that clarify format expectations, and explicitly documents the x_payment purpose as 'x402 payment proof header' which aids agent reasoning about auth.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Calculate the full landed cost of shipping a container' clearly states the action and resource, while 'combines freight rates, surcharges, local charges... into one comprehensive estimate' explicitly differentiates from individual component siblings like shippingrates_rates or shippingrates_surcharges.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides strong contextual guidance by stating 'a single call replaces 5-6 individual queries' and calling this 'the most comprehensive tool', and it explicitly routes component-level needs to named siblings: shippingrates_rates, shippingrates_surcharges, shippingrates_local_charges, and shippingrates_dd_calculate. That sibling list doubles as 'when not to use' guidance for users who need only one component.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_transit · Transit Time Lookup (A · Read-only · Idempotent)
Get estimated ocean transit times between two ports across all available carriers.
Use this for quick transit time comparison between ports — answers "how long does it take to ship from A to B?" Returns carrier-specific transit durations, service types, and frequencies.
For detailed routing with transhipment ports and service codes, use shippingrates_transit_schedules instead.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, transit_days, service_type, frequency, direct_or_transhipment }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port UN/LOCODE — e.g. AEJEA (Jebel Ali), NLRTM (Rotterdam), USNYC (New York) | |
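Given the documented array shape, picking the best option is a one-liner; this sketch prefers fewer transit days and breaks ties in favor of direct services (the tie-break policy is an assumption, not server behavior).

```python
def fastest(options: list) -> dict:
    """Fastest transit option; direct services win ties on transit_days."""
    return min(
        options,
        key=lambda o: (o["transit_days"], o["direct_or_transhipment"] != "direct"),
    )
```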
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent/destructive safety, so description appropriately focuses on adding cost behavior ($0.02 per call, x402 payment method) and return structure (Array of transit options with carrier, duration, service type). Does not mention rate limits or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: operation description, payment critical info, Args, and Returns. Every sentence serves a distinct purpose—no redundancy with structured fields. Front-loaded with the core action (Get estimated transit times).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a lookup tool: covers the paid nature (crucial for this endpoint), required port codes, the optional payment header, and the return format despite the lack of an output schema. Sibling differentiation is explicit: users needing transhipment routing are directed to shippingrates_transit_schedules.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete property descriptions. Description adds concrete UN/LOCODE examples ('INNSA', 'AEJEA') that clarify the expected port code format beyond the schema's generic string type, and reinforces the optional nature of x_payment through the 'optional' label and payment context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' + resource 'estimated transit times' + scope 'between two ports'. Distinguishes from sibling shippingrates_transit_schedules by focusing on duration/frequency estimates rather than specific schedules, and from general rate tools by focusing on time rather than cost.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical usage context regarding the $0.02 payment requirement (USDC on Base/Solana) in the 'PAID' warning, and gives explicit sibling guidance: 'For detailed routing with transhipment ports and service codes, use shippingrates_transit_schedules instead.' This positions the tool for quick time comparisons rather than routing detail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_transit_schedules · Transit Schedules by Carrier (A · Read-only · Idempotent)
Get detailed transit schedules for a specific carrier — service codes, routing via transhipment ports, transit days, and sailing frequency.
Use this when you need routing details beyond just transit time — e.g., which transhipment ports are used, what service string applies, or weekly frequency. For a quick transit time comparison across all carriers, use shippingrates_transit instead.
PAID: $0.03/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, service_code, origin, destination, transit_days, transhipment_ports[], frequency, direct }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | No | Origin port UN/LOCODE filter | |
| carrier | Yes | Carrier SCAC code or slug | |
| max_days | No | Maximum transit days filter | |
| x_payment | No | x402 payment proof header | |
| destination | No | Destination port UN/LOCODE filter | |
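The `max_days` filter is applied server-side, but a client can post-filter the documented rows as well, for example keeping only direct services inside a day budget. A minimal sketch over the documented fields:

```python
def direct_within(schedules: list, max_days: int) -> list:
    """Direct services whose transit_days fit the budget."""
    return [s for s in schedules if s["direct"] and s["transit_days"] <= max_days]
```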
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description provides critical behavioral information absent from annotations: explicit cost disclosure ('$0.03 per call via x402') and payment requirements ('USDC on Base or Solana'). This supports the readOnly/idempotent hints in annotations by clarifying the financial barrier to read access.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description mixes narrative and structured Args documentation. While the paid endpoint warning is appropriately front-loaded, the Args section largely duplicates the JSON schema. The format is readable but slightly redundant for an agent that can parse schemas.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 5-parameter read-only tool with no output schema, the description adequately covers the critical cost/payment dimension and implies return content (service codes, routing). The combination of annotations (safety) and description (cost/semantics) provides sufficient context for invocation decisions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 100% schema coverage setting a baseline of 3, the description adds value by providing concrete examples ('MAEU', 'maersk') for the carrier parameter and explicitly labeling optional vs required parameters in the Args section, aiding LLM interpretation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'detailed transit schedules' with specific attributes (service codes, routing, transhipment ports, frequency), and it distinguishes itself from the sibling shippingrates_transit by explicitly directing quick cross-carrier time comparisons there.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit selection guidance is present: use this tool when routing detail matters (transhipment ports, service strings, weekly frequency), and shippingrates_transit for a quick transit time comparison across carriers. The payment prerequisite ($0.03/call via x402) is also stated up front.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_vessel_schedule · Vessel Schedule (A · Read-only · Idempotent)
Get upcoming vessel arrivals and departures at a specific port.
Use this to check what vessels are expected at a port — useful for booking planning and tracking. Returns vessel names, carriers, ETAs/ETDs, and service routes.
For transit time estimates between two ports, use shippingrates_transit. For detailed service-level routing, use shippingrates_transit_schedules.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { vessel_name, carrier, voyage, eta, etd, service, from_port, to_port }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | Yes | UN/LOCODE port code — e.g. INNSA (Nhava Sheva), AEJEA (Jebel Ali), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| days_ahead | No | Days to look ahead | 14 |
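Over the documented vessel rows, upcoming arrivals can be ordered by ETA; this sketch assumes the `eta` values are ISO 8601 strings, which sort chronologically when compared lexicographically.

```python
def next_arrivals(rows: list, n: int = 5) -> list:
    """The next n vessel calls by ETA (ISO strings sort chronologically)."""
    return sorted(rows, key=lambda r: r["eta"])[:n]
```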
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the readOnly/idempotent annotations, the description adds crucial behavioral context: the specific cost ($0.02 per call), the payment mechanism (x402 via USDC on Base or Solana), and the return format (Array of vessel arrivals/departures with vessel name, carrier, ETA/ETD), which compensates for the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections: purpose statement, pricing warning, Args list with types and examples, and Returns description. Every sentence earns its place; even the pricing information is front-loaded given its importance for agent decision-making.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 100% schema coverage and present annotations, the description is complete: it describes the return structure (compensating for no output schema), provides payment details for this paid endpoint, and includes format examples. No additional behavioral warnings or side effects are expected for this read-only operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage, the description adds valuable semantic context including concrete UN/LOCODE examples ('INNSA', 'AEJEA') for the port parameter and clarifies that x_payment relates to the x402 payment proof header, helping the agent understand the parameter's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'upcoming vessel schedules at a port' with specific scope (expected arrivals, departures, and vessel details). However, it does not explicitly differentiate itself from the sibling tool 'shippingrates_transit_schedules', which could cause confusion between port-specific vessel calls and route-based transit schedules.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage constraints by noting this is a 'PAID endpoint: $0.02 per call' and requires an x402 payment proof header. However, it lacks explicit guidance on when to use this versus alternatives like 'shippingrates_transit_schedules' or the general search tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
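Before publishing, it can help to sanity-check the file against the structure shown above. The following is a small, hedged sketch that checks only the two fields the instructions mention; Glama's actual verifier may check more, and the function name is an invention for this example.

```python
import json

def check_glama_json(text: str) -> list:
    """Return a list of problems found in a /.well-known/glama.json payload."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    problems = []
    # The $schema URL is taken verbatim from the instructions above.
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("missing or wrong $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty array")
    elif not all(isinstance(m, dict) and "@" in m.get("email", "")
                 for m in maintainers):
        problems.append("each maintainer needs an email address")
    return problems
```

An empty return list means the payload at least matches the documented shape; the email must additionally match your Glama account for verification to succeed.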
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.