FreightGate MCP Server
Server Details
Container shipping intelligence for AI agents — demurrage & detention charges, local charges, inland haulage, CFS tariffs across 800+ ports and 45+ shipping lines. Pay-per-request with USDC via x402 protocol on Base and Solana networks. 24 tools including 3 free endpoints.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 24 of 24 tools scored.
Most tools have distinct purposes with clear boundaries, such as shippingrates_dd_calculate for single-carrier D&D costs versus shippingrates_dd_compare for multi-carrier comparisons. However, some tools like shippingrates_congestion and shippingrates_congestion_news have overlapping domains (port congestion) that could cause confusion, though descriptions help differentiate them by focusing on metrics versus news.
All tool names follow a consistent snake_case pattern with a 'shippingrates_' prefix and descriptive verb_noun combinations, such as shippingrates_inland_haulage and shippingrates_total_cost. This uniformity makes the tool set predictable and easy to navigate.
With 24 tools, the set is comprehensive for the shipping logistics domain, covering areas like rates, congestion, D&D, and inland transport. While slightly high, each tool serves a specific function, and the count is reasonable given the broad scope of freight intelligence.
The tool set provides complete coverage for shipping logistics, including core components like freight rates, surcharges, local charges, D&D, inland haulage, and risk assessment. Tools like shippingrates_total_cost integrate multiple aspects, ensuring no significant gaps for agent workflows in cost estimation and route planning.
Available Tools
24 tools

shippingrates_cfs_tariffs: Get CFS Handling Tariffs (Read-only, Idempotent)
Get Container Freight Station (CFS) handling tariffs — charges for LCL (Less than Container Load) cargo consolidation and deconsolidation at port warehouses.
Use this for LCL shipments to estimate warehouse handling costs. Returns per-unit handling rates, minimum charges, and storage fees at the specified port. Not relevant for FCL (Full Container Load) shipments.
PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { facility, service_type, cargo_type, rate_per_unit, unit, minimum_charge, currency }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | Yes | UN/LOCODE port code (e.g. INMAA, INMUN) | |
| service | No | Filter by service type | |
| x_payment | No | x402 payment proof header | |
| cargo_type | No | Filter by cargo type | |
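The minimum-charge mechanics described above can be sketched as follows. The field names mirror the documented return shape; the helper itself and the example tariff values are illustrative, not part of the API.

```python
# Sketch: estimating LCL handling cost from one CFS tariff row, assuming the
# documented { rate_per_unit, unit, minimum_charge } shape. Illustrative only.

def estimate_cfs_charge(tariff: dict, quantity: float) -> float:
    """Apply the per-unit rate, but never bill below the minimum charge."""
    billed = tariff["rate_per_unit"] * quantity
    return max(billed, tariff["minimum_charge"])

# Hypothetical example: general cargo at 450 INR/CBM with a 1000 INR minimum.
tariff = {"rate_per_unit": 450, "unit": "CBM", "minimum_charge": 1000, "currency": "INR"}
print(estimate_cfs_charge(tariff, 3))    # 3 CBM clears the minimum: 1350
print(estimate_cfs_charge(tariff, 1.5))  # 1.5 CBM falls below it: 1000
```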
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent status, while the description adds crucial operational context: the $0.05 per-call cost via x402, the specific return data structure (per-unit rates, minimum charges, storage fees), and the payment header requirement. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally structured: purpose statement, return value specification, cost warning, then parameter details. Every sentence delivers distinct value (scope, output, cost, inputs) with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately explains what data is returned (handling rates, minimums, storage fees). The paid nature of the endpoint is clearly disclosed. Minor gap: beyond the documented 402 for unpaid calls, there is no mention of how invalid port codes are handled, though openWorldHint=false implies standard validation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds value by providing concrete example values for 'service' (import, export) and 'cargo_type' (general, hazardous) that the schema lacks, helping the agent construct valid filter parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb-resource pair ('Get Container Freight Station handling tariffs') and clarifies the specific domain (LCL cargo consolidation/deconsolidation at port warehouses). This distinguishes it from sibling tools like shippingrates_local_charges or shippingrates_inland_haulage by specifying the CFS/LCL context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states when to use the tool ('Use this for LCL shipments to estimate warehouse handling costs') and explicitly excludes FCL ('Not relevant for FCL shipments'), though it does not name sibling alternatives such as shippingrates_local_charges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_congestion: Port Congestion Data (Read-only, Idempotent)
Get port congestion metrics — vessel waiting times, berth occupancy, and delay trends for a specific port.
Use this to assess port efficiency and anticipate detention risk. High congestion often leads to longer container dwell times and higher D&D costs. For shipping disruption news and alerts (Red Sea, Suez, chokepoints), use shippingrates_congestion_news instead.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { port, congestion_level, avg_waiting_hours, berth_occupancy_pct, vessel_count, trend, period_days }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | Yes | UN/LOCODE port code — e.g. INNSA (Nhava Sheva), AEJEA (Jebel Ali), SGSIN (Singapore) | |
| days_back | No | Days of historical data (default: 30) | |
| x_payment | No | x402 payment proof header | |
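The 402-then-pay flow these paid endpoints describe can be sketched as below. The exact x402 payment-proof format is an assumption here; `call_api` and `make_usdc_payment` are hypothetical stand-ins for a real HTTP client and a wallet integration.

```python
# Sketch of the "without payment, returns 402 with payment instructions" flow.
# The payment_instructions key and proof format are assumptions for illustration.

def fetch_with_x402(call_api, make_usdc_payment, params):
    """Call once; on HTTP 402, pay per the returned instructions and retry."""
    status, body = call_api(params, x_payment=None)
    if status != 402:
        return body
    # The 402 body is assumed to carry instructions (amount, network, recipient).
    proof = make_usdc_payment(body["payment_instructions"])
    status, body = call_api(params, x_payment=proof)
    if status != 200:
        raise RuntimeError(f"payment retry failed with HTTP {status}")
    return body
```

Injecting the transport and wallet as callables keeps the retry logic testable without a live server or chain.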
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations declare the operation is readOnly and idempotent, the description adds critical behavioral context not found in structured fields: the payment model ($0.02 per call via x402), accepted currencies/networks (USDC on Base or Solana), and a preview of the return values (waiting times, berth occupancy, trend data). No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description follows a logical structure: purpose → use case → cost → parameters → returns. Every sentence earns its place; the payment warning is critical for a paid endpoint, and the parameter examples prevent encoding errors. No redundant or wasted language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters (all documented), no output schema, and unique cost requirements, the description provides complete coverage: it explains what data is returned, distinguishes the tool from general port queries, and warns about the micropayment requirement—essential information for successful invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by providing concrete examples for the port parameter ('INNSA', 'AEJEA'), clarifying the days_back default, and contextualizing the x_payment parameter within the broader payment workflow. This exceeds the schema's basic definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and clearly identifies the resource (port congestion metrics) with concrete data points (vessel waiting times, berth occupancy, delays). This distinguishes it from siblings like shippingrates_rates or shippingrates_vessel_schedule which focus on pricing and schedules rather than operational congestion metrics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool ('assess port efficiency and anticipate detention risk') and explicitly routes agents to an alternative, pointing to shippingrates_congestion_news for disruption news and alerts rather than quantitative metrics.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_congestion_news: Shipping Disruption News (Read-only, Idempotent)
Get shipping disruption news aggregated from 7 trade press sources — with port tagging and severity classification. Covers Hormuz Strait, Red Sea/Houthi, Suez Canal, Bab el-Mandeb, port congestion, and weather events.
Use this for situational awareness — answers "are there any active disruptions affecting my route?" For quantitative port congestion metrics (waiting times, berth occupancy), use shippingrates_congestion instead. For route-level risk scoring, use shippingrates_risk_score.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { headline, source, published_at, severity, affected_ports[], chokepoint, summary }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | No | Port UN/LOCODE filter | |
| limit | No | Maximum number of results | |
| severity | No | Severity classification filter | |
| days_back | No | Days of historical news (default: 7) | |
| x_payment | No | x402 payment proof header | |
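Client-side triage of the returned news array can be sketched as follows, assuming the documented item shape and a simple "low"/"medium"/"high" severity vocabulary (the actual severity values are not documented here).

```python
# Sketch: filtering disruption news for one port at or above a severity floor.
# Severity labels are assumed; the tool's real classification may differ.

def alerts_for_port(items, port, min_severity="high"):
    """Keep items tagged with the port whose severity meets the floor."""
    rank = {"low": 0, "medium": 1, "high": 2}
    floor = rank[min_severity]
    return [
        item for item in items
        if port in item.get("affected_ports", [])
        and rank.get(item.get("severity"), 0) >= floor
    ]
```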
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds substantial behavioral context beyond annotations: cost structure ($0.02/call), payment mechanism (USDC on Base/Solana via x402), data provenance (7 trade press sources), and processing features (port tagging, severity classification). Annotations confirm read-only/idempotent safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear front-loading (sources, coverage, cost). Three distinct sections (description, pricing, parameters). Minor redundancy between Args list and schema, but presents information in LLM-friendly format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a paid retrieval tool: covers payment requirements, filtering capabilities, data scope, and the return structure of news items (headline, source, severity, affected ports), compensating for the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters. The Args list restates parameter purposes without adding significant semantic depth (e.g., no format examples or validation rules beyond schema).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' with resource 'shipping disruption news', scope '7 trade press sources', and clear geographic coverage (Hormuz, Red Sea, Suez). Distinguishes from sibling 'shippingrates_congestion' by emphasizing news content and media sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance: the tool is framed for situational awareness, with agents directed to shippingrates_congestion for quantitative metrics and shippingrates_risk_score for route-level risk scoring, alongside the paid-endpoint warning ($0.02 per call via x402).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_dd_calculate: Calculate Demurrage & Detention Costs (Read-only, Idempotent)
Calculate demurrage and detention (D&D) costs for one carrier in one country.
Use this when the user needs a detailed cost breakdown for a specific carrier. Returns free days, per-diem rates for each tariff slab, and total cost. This is the core tool for logistics cost analysis — it answers "how much will I pay if my container is detained X days?"
To compare D&D costs across all carriers at once, use shippingrates_dd_compare instead.
PAID: $0.10/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { line, country, container_type, days, free_days, slabs: [{ from, to, rate_per_day, days, cost }], total_cost, currency }
| Name | Required | Description | Default |
|---|---|---|---|
| days | Yes | Number of detention days | |
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | Yes | ISO 2-letter country code (e.g. IN, AE, SG) | |
| x_payment | No | x402 payment proof header (optional — required for paid access) | |
| container_type | Yes | ISO 6346 container type — 20DV, 40DV, 40HC, 20RF, 40RF, 20OT, 40OT, 20FR, 40FR | |
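How the documented slab breakdown composes into a total can be sketched as below, assuming inclusive day ranges per slab and free days consuming the earliest days. The function mirrors the documented { from, to, rate_per_day } slab shape but is illustrative, not the server's implementation.

```python
# Sketch: summing per-diem D&D charges across tariff slabs after free days.
# Slab ranges are assumed inclusive; example rates are hypothetical.

def dd_total(days: int, free_days: int, slabs: list) -> float:
    """Sum per-diem charges for detention days past the free period."""
    total = 0.0
    for slab in slabs:
        # Chargeable days falling inside this slab's [from, to] window.
        hi = min(days, slab["to"])
        lo = max(free_days + 1, slab["from"])
        if hi >= lo:
            total += (hi - lo + 1) * slab["rate_per_day"]
    return total

# Example: 10 days held, 4 free, then days 5-7 @ 75/day and 8+ @ 150/day.
slabs = [{"from": 5, "to": 7, "rate_per_day": 75},
         {"from": 8, "to": 9999, "rate_per_day": 150}]
print(dd_total(10, 4, slabs))  # 3*75 + 3*150 = 675.0
```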
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations confirm read-only/idempotent safety, the description adds essential behavioral context: the payment model (cost, currency, blockchain networks), authentication mechanism (x402 proof header), and comprehensive return structure (slab breakdown schema). It does not contradict annotations (readOnlyHint=true aligns with calculation semantics), and it documents the 402 behavior for unpaid requests.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections for purpose, pricing, and arguments/returns. While the Args/Returns blocks are verbose, they are justified by the absence of a formal output schema in the structured fields. The front-loading of the payment requirement ensures critical cost information is not missed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (paid API, domain-specific slab calculations), the description provides comprehensive coverage including input examples and a full JSON return structure. It adequately compensates for the missing output schema. Minor gap: invalid parameter combinations are not documented, though the 402 response for unpaid requests is.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds value by enumerating concrete examples for the 'line' parameter (maersk, msc, cma-cgm, etc.) and 'country' codes (IN, AE, SG), and clarifies that 'x_payment' is specifically for 'authenticated access' via payment proof, enhancing the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a precise action verb ('Calculate') and specific resource ('demurrage and detention (D&D) costs'), including exact scope modifiers (shipping line, country, container type, days). It distinguishes itself from siblings like 'shippingrates_dd_compare' by emphasizing the 'detailed slab breakdown' output specific to this calculation tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description identifies this as 'the core tool for logistics cost analysis', provides critical usage constraints regarding the paid endpoint ($0.10 per call via x402), and explicitly directs agents to shippingrates_dd_compare for multi-carrier comparisons. It does not, however, mention shippingrates_total_cost as a broader alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_dd_compare: Compare D&D Across Shipping Lines (Read-only, Idempotent)
Compare demurrage and detention costs across ALL available carriers for the same country, container type, and detention days.
Use this for freight procurement and carrier selection — it answers "which carrier has the cheapest D&D in this country?" Returns a side-by-side comparison with each carrier's free days, slab rates, and total cost sorted cheapest first.
For a single carrier's detailed D&D breakdown, use shippingrates_dd_calculate instead.
PAID: $0.25/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { line, free_days, total_cost, currency, slabs } for each available carrier, sorted by total_cost ascending.
| Name | Required | Description | Default |
|---|---|---|---|
| days | Yes | Number of detention days | |
| country | Yes | ISO 2-letter country code | |
| x_payment | No | x402 payment proof header | |
| container_type | Yes | ISO 6346 container type — 20DV, 40DV, 40HC, 20RF, 40RF, 20OT, 40OT, 20FR, 40FR | |
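Working with the sorted comparison array described above can be sketched as follows. Field names follow the documented return shape; the savings helper is illustrative.

```python
# Sketch: since results arrive sorted by total_cost ascending, the first
# element is the cheapest carrier. Illustrative helper, not part of the API.

def cheapest_and_savings(results: list) -> dict:
    """Report the best carrier and the spread against the priciest option."""
    best, worst = results[0], results[-1]
    return {
        "carrier": best["line"],
        "total_cost": best["total_cost"],
        "savings_vs_worst": worst["total_cost"] - best["total_cost"],
    }
```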
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent hints, but description adds critical behavioral context not in structured data: the $0.25 payment requirement per call, the x402 payment mechanism, and that it 'Returns a side-by-side comparison' indicating the output format. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with purpose first, output format second, use case third, and pricing warning fourth. The Args section is well-organized. No redundant words; every sentence delivers distinct value (scope, output, use case, cost).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-parameter read-only comparison tool with 100% schema coverage, the description adequately covers the paid nature, return format, and parameter semantics, and explicitly contrasts with shippingrates_dd_calculate for single-carrier breakdowns. Sufficient for correct agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3). The schema itself enumerates the container_type values (20DV, 40HC, etc.), while the description reinforces the x_payment parameter's role as an x402 payment proof header tied to the 402 behavior described earlier.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb ('Compare') and resource ('demurrage and detention costs'), explicitly scopes to 'across multiple shipping lines' which distinguishes it from sibling shippingrates_dd_calculate, and clarifies the comparison dimensions (country, container type, days).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context ('Use this for freight procurement and carrier selection') and names the alternative, directing agents to shippingrates_dd_calculate for a single carrier's detailed D&D breakdown.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_facilities: India ICD/CFS Facility Directory (Read-only, Idempotent)
Search India's Inland Container Depot (ICD) and Container Freight Station (CFS) facility directory — GPS coordinates, rail connectivity, operator details, and capacity.
Use this to find facilities near an inland destination in India, or to check if a specific ICD/CFS has rail connectivity. Useful for inland logistics planning in combination with shippingrates_inland_haulage.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { code, name, type, state, city, lat, lon, operator, rail_connected, capacity }.
| Name | Required | Description | Default |
|---|---|---|---|
| code | No | Facility code filter | |
| type | No | Facility type filter | |
| state | No | Indian state name filter | |
| x_payment | No | x402 payment proof header | |
| rail_connected | No | Rail connectivity filter — 'true' or 'false' | |
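Finding facilities near an inland destination, as the description suggests, can be sketched with the documented lat/lon and rail_connected fields. The haversine distance (mean Earth radius 6371 km) and the helper itself are illustrative.

```python
# Sketch: ranking rail-connected facilities by great-circle distance to a
# destination, using the documented { lat, lon, rail_connected } fields.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearest_rail_facility(facilities, lat, lon):
    """Closest facility that has rail connectivity."""
    candidates = [f for f in facilities if f.get("rail_connected")]
    return min(candidates, key=lambda f: haversine_km(lat, lon, f["lat"], f["lon"]))
```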
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only and safe operation; the description adds critical behavioral context not covered by annotations: the $0.02 per-call cost, x402 payment requirement, and acceptable cryptocurrencies (USDC on Base or Solana), which is essential for agent decision-making.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with the core purpose front-loaded, followed by critical payment information. While the Args section duplicates schema content, it is cleanly formatted and the payment disclosure earns its place as vital invocation context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description compensates by listing return data fields (GPS, operator details, etc.) and includes essential payment context for a paid endpoint. It covers necessary invocation information, though explicit sibling differentiation would further improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured schema already fully documents all 5 parameters. The Args section in the description largely repeats schema information without adding significant semantic depth or usage examples beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches an India ICD/CFS facility directory and specifies what data is returned (GPS coordinates, rail connectivity, operator details, capacity), using specific verbs and resource identification that distinguishes it from port or general shipping rate tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description defines the specific domain (India ICD/CFS facilities), implies usage context for inland container logistics, and points to shippingrates_inland_haulage as a companion tool, but lacks explicit contrast with siblings such as shippingrates_inland_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_fx: Currency Exchange Rates (Read-only, Idempotent)
Get current exchange rate between two currencies — useful for converting shipping costs quoted in different currencies (USD, EUR, INR, AED, SGD, CNY, etc.).
Use this to normalize costs from different carriers/countries to a common currency for comparison. Rates are updated daily.
FREE — no payment required.
Returns: { from, to, rate, timestamp }
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Target currency code — e.g. "INR", "AED" | |
| from | Yes | Source currency code — e.g. "USD", "EUR" | |
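The normalization use case above can be sketched as follows. `get_rate` stands in for a call to this tool; here it is any callable returning the from-to multiplier, so the logic stays testable offline.

```python
# Sketch: converting mixed-currency costs to one target currency using the
# { rate } value described above. get_rate is a hypothetical stand-in.

def normalize_costs(costs, target, get_rate):
    """Sum a list of (amount, currency) pairs in the target currency."""
    total = 0.0
    for amount, currency in costs:
        rate = 1.0 if currency == target else get_rate(currency, target)
        total += amount * rate
    return total
```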
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable operational context beyond annotations: explicitly states the endpoint is free with no payment required and documents the return object structure ({ from, to, rate, timestamp }), which is critical given the absence of an output schema. No contradictions with the readOnlyHint and idempotentHint annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, use case, cost note, args, returns). Minor redundancy between the Args section and input schema, but the Returns documentation is essential given no output schema exists. Each sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter read-only tool with complete schema coverage, the description provides comprehensive context: domain application (shipping), cost model (free), input examples, and output structure. Fully sufficient for correct agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the structured fields already define the parameters completely. The description's Args section essentially mirrors the schema (source/target currency codes with examples), adding only minor additional currency examples (INR, USD). Baseline score appropriate when schema carries the full semantic load.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb and resource ('Get current exchange rates between two currencies'), clearly distinguishing it from shipping-specific siblings like shippingrates_rates or shippingrates_congestion. The shipping cost conversion use case further clarifies its specific role in the tool suite.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use ('converting shipping costs quoted in different currencies'), linking it to the shipping domain while distinguishing its currency-specific function. Lacks explicit 'when not to use' guidance, though the sibling tools' distinct purposes (congestion, schedules, tariffs) make misuse unlikely.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_inland_compare: Compare Inland Haulage Rates (Read-only, Idempotent)
Compare inland haulage rates across ALL available carriers for a port-to-ICD/city pair — sorted cheapest first.
Use this for carrier selection on inland legs — answers "which carrier offers the cheapest trucking/rail from port X to city Y?" For a single carrier's rates, use shippingrates_inland_haulage instead. To discover what routes exist, use shippingrates_inland_search first.
PAID: $0.08/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, mode, container_type, rate, currency, transit_days, weight_bracket } sorted by rate ascending.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination city or ICD code | |
| container_type | No | Container type | 20GP |
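The pay-per-call mechanics above (call, receive 402 with payment instructions, settle in USDC, retry with proof) can be sketched as a small wrapper. This is a hypothetical sketch: `post` and `settle` stand in for whatever HTTP client and x402 wallet integration the agent actually uses; neither is part of the documented API.

```python
def call_with_x402(post, settle, args):
    """Call a paid FreightGate tool; on HTTP 402, settle once and retry.

    post(args) -> (status, body) performs the tool call;
    settle(instructions) -> proof pays the quoted USDC amount on Base or
    Solana and returns the x402 payment proof. Both are assumed helpers,
    not part of the documented API.
    """
    status, body = post(args)
    if status != 402:
        return status, body
    # Per the tool description, the 402 body carries payment instructions.
    proof = settle(body)
    return post({**args, "x_payment": proof})
```

The same wrapper applies to every PAID endpoint in this catalog, since they all follow the 402-then-retry convention.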
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond annotations declaring read-only/idempotent safety, the description adds crucial behavioral context: return sorting (cheapest first), specific output fields (carrier, mode, weight bracket), and detailed payment mechanics (x402 protocol, USDC on Base/Solana). This adequately covers the paid nature of the endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear information hierarchy: purpose first, return format second, payment warning third, then parameter reference. However, the Args section is redundant given complete schema coverage, and could be condensed or eliminated to reduce duplication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description appropriately compensates by detailing the return structure (sorted list with specific fields). It covers the payment requirement essential for a paid endpoint. Minor gap: no mention of error states (e.g., no routes found) or rate limiting.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is appropriately met. The Args section restates parameter meanings already present in the schema without adding new semantic details (examples, validation rules, or format constraints), serving as readable documentation but not compensating beyond the structured schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb (Compare), resource (inland haulage rates), and scope (across all carriers for an ICD-port pair). It distinguishes from siblings like 'shippingrates_inland_haulage' by emphasizing comparison across carriers and 'sorted cheapest first'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description identifies this as a 'PAID' endpoint with specific pricing ($0.08), which is critical usage context, and gives explicit sibling routing: shippingrates_inland_haulage for a single carrier's rates, shippingrates_inland_search to discover routes first. 'When not to use' cases beyond those two siblings are left implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_inland_haulage: Get Inland Haulage Rates (Read-only, Idempotent)
Get inland haulage (trucking/rail) rates for moving containers between a port and an inland location.
Use this when you know the specific origin port and destination and need rate quotes. Returns route-specific rates by container type including base rate, fuel surcharges, and estimated transit times.
To discover what routes exist first, use shippingrates_inland_search. To compare rates across all carriers for the same route, use shippingrates_inland_compare.
PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, origin, destination, container_type, rate, fuel_surcharge, total, currency, transit_days, mode }.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Transport mode filter (PRE or ONC) | |
| origin | Yes | Origin port UN/LOCODE (e.g. INNSA, INMAA) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Inland destination city name (e.g. Ahmedabad, Delhi) | |
| container_type | No | Container type filter — e.g. 20DV, 40HC, 20RF | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Excellent disclosure beyond annotations: explicitly states pricing ($0.05, x402 payment), return value structure (base rate, fuel surcharges, transit times), and payment mechanism (USDC on Base/Solana). Annotations confirm read-only/idempotent, but description adds the commercial and response format context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear visual hierarchy: purpose statement, return value description, cost warning, and parameter list. Front-loaded with critical information (PAID endpoint prominently placed). No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a paid lookup tool: explains return values (base rates, surcharges, transit times) since no output schema exists, documents payment requirements, and covers all 5 parameters. Minor gap: no mention of rate limits or error conditions, but acceptable given complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds helpful examples (e.g., 'INNSA', 'Ahmedabad', '20DV') in the Args section, but largely mirrors schema definitions without adding significant semantic depth or parameter interdependencies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('inland haulage rates') with specific scope ('between a port and an inland location'). Includes transport modes (trucking/rail) in parentheses and explicitly routes to siblings: shippingrates_inland_search for route discovery and shippingrates_inland_compare for cross-carrier comparison.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical cost context ('PAID: $0.05/call') that constrains usage, plus explicit routing guidance: use shippingrates_inland_search to discover routes first and shippingrates_inland_compare for cross-carrier comparison, leaving little room for sibling confusion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_inland_search: Search Inland Transport Routes (Read-only, Idempotent)
Search for available inland transport routes (road/rail haulage) from port to inland destinations for a specific carrier.
Use this to discover what haulage routes a carrier offers in a country. For example, search "ahmedabad" to find routes from Nhava Sheva to Ahmedabad via Maersk. Returns route options with ICD/CFS codes and available container types.
For actual haulage rate quotes, use shippingrates_inland_haulage. For cross-carrier rate comparison, use shippingrates_inland_compare.
PAID: $0.03/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { origin, destination, mode, container_types, icd_code } matching the search criteria.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | Yes | ISO 2-letter country code | |
| keyword | No | Search term — city name, region, or route | |
| x_payment | No | x402 payment proof header | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover the safety profile (readOnly/idempotent), the description adds crucial behavioral context: the cost per call ($0.03 via x402) and what the search returns (route options with ICD/CFS codes and available container types). This supplements the annotations effectively.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded, followed by behavioral details and parameter listing. Slightly redundant to repeat all parameter descriptions when schema coverage is complete, but the payment warning is appropriately prominent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description adequately explains return values (route options with ICD/CFS codes and container types) and discloses the paid nature. Complete enough for a 4-parameter search tool with clear annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing complete param documentation. The description's 'Args' section largely mirrors the schema definitions without adding significant semantic depth, though it correctly marks optional params. Baseline 3 is appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for 'inland transport routes (road/rail haulage)' for a specific carrier, and explicitly defers to siblings: shippingrates_inland_haulage for actual rate quotes and shippingrates_inland_compare for cross-carrier comparison, reducing selection confusion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage constraints regarding payment ('PAID: $0.03/call') and routes agents to the right sibling for rate quotes (shippingrates_inland_haulage) and comparisons (shippingrates_inland_compare), though it omits prerequisites for obtaining the x402 payment proof.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_lines: List Shipping Lines (Read-only, Idempotent)
List all shipping lines in the ShippingRates database with per-country record counts.
Use this to discover which carriers and countries have data before querying specific tools. Returns each carrier's name, slug, SCAC code, and a breakdown of available D&D tariff and local charge records per country.
FREE — no payment required.
Returns: Array of { line, slug, scac, countries: [{ code, name, dd_records, lc_records }] }
Related tools: Use shippingrates_stats for aggregate totals, shippingrates_search for keyword-based discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
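Because the return shape is documented ({ line, slug, scac, countries: [...] }) and the endpoint is free, an agent can cheaply aggregate coverage before spending on paid lookups. A minimal sketch, assuming the JSON response has already been parsed into Python structures:

```python
def records_by_country(lines):
    """Sum D&D and local-charge record counts per country across carriers.

    Expects the documented shippingrates_lines return shape:
    [{ line, slug, scac, countries: [{ code, name, dd_records, lc_records }] }]
    Returns {country_code: (total_dd_records, total_lc_records)}.
    """
    totals = {}
    for carrier in lines:
        for c in carrier["countries"]:
            dd, lc = totals.get(c["code"], (0, 0))
            totals[c["code"]] = (dd + c["dd_records"], lc + c["lc_records"])
    return totals
```

A country with zero records in the result is a signal to skip the paid per-country tools entirely.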
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover safety profile (readOnly, idempotent, non-destructive). Description adds valuable billing context ('FREE — no payment required') and data scope transparency (per-country D&D tariff and local-charge record counts per carrier). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences, all front-loaded with critical information. Sentence 1 states purpose, 2-3 specify data content, 4 covers billing, 5 describes return type. Zero redundancy; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, so the description must explain return values. It specifies the exact return fields ({ line, slug, scac, countries: [{ code, name, dd_records, lc_records }] }), including per-country tariff and charge record counts, which is more than adequate for a simple zero-parameter list operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (empty properties object). Per calibration rules, 0 params = baseline 4. Description correctly implies no filtering is possible or required for this global list operation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides a specific verb ('List'), resource ('shipping lines'), and scope ('in the ShippingRates database with per-country record counts'). It explicitly names related tools (shippingrates_stats for aggregate totals, shippingrates_search for keyword discovery), clearly distinguishing this discovery tool from siblings that handle different data types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States 'FREE — no payment required,' providing crucial context for selecting this over paid alternatives among the 20+ sibling tools, and names alternatives explicitly (shippingrates_stats for aggregate totals, shippingrates_search for keyword-based discovery). Only explicit 'when not to use' guidance is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_local_charges: Get Port Local Charges (Read-only, Idempotent)
Get local charges at a port for a specific carrier — Terminal Handling Charges (THC), documentation fees (BL/DO), seal fees, and other port-specific charges.
Use this when calculating total shipping costs at origin or destination. Combine with shippingrates_dd_calculate for a complete port cost picture, or use shippingrates_total_cost for an all-in-one landed cost estimate.
PAID: $0.05/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { charge_type, charge_name, amount, currency, container_type, direction } for all applicable charges at the port.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | Yes | ISO 2-letter country code | |
| port_code | No | Port code to filter (e.g. INMUN for Mumbai) | |
| x_payment | No | x402 payment proof header | |
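Given the documented per-charge return shape, an agent can total the applicable charges for one container type and direction locally. A sketch under the assumption that all applicable charges share one currency; mixed currencies would need conversion first:

```python
def total_local_charges(charges, container_type, direction):
    """Sum applicable local charges for one container type and direction.

    Expects the documented shape per entry: { charge_type, charge_name,
    amount, currency, container_type, direction }. Raises if the
    applicable charges mix currencies, since a raw sum would be wrong.
    """
    applicable = [c for c in charges
                  if c["container_type"] == container_type
                  and c["direction"] == direction]
    currencies = {c["currency"] for c in applicable}
    if len(currencies) > 1:
        raise ValueError("mixed currencies; convert before summing")
    total = sum(c["amount"] for c in applicable)
    return total, (currencies.pop() if currencies else None)
```

Combining this total with a D&D estimate from shippingrates_dd_calculate gives the "complete port cost picture" the description refers to.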
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare the operation as read-only and idempotent. The description adds crucial behavioral context not in annotations: the payment requirement (x402/USDC), cost per call, and the return value structure (per-charge breakdown with amount, currency, container type, and direction). It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded: opening with the core purpose, followed by return value, payment warning, and parameter listing. Every sentence provides distinct information (action, output, cost, inputs) with no redundancy or waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, paid endpoint, no output schema), the description adequately covers the essential context: payment mechanism, return description, and required vs optional parameters. It appropriately compensates for the missing output schema by describing the return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all parameters thoroughly (including the INMUN example for port_code and the x402 explanation). The Args section in the description largely repeats the schema without adding significant semantic value or usage examples beyond the structured data.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get local charges') and resource (port/shipping line charges), with concrete examples (THC, documentation fees, seal fees) that distinguish it from freight rates or general surcharges. The scope is precisely defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description identifies this as a 'PAID' endpoint with specific pricing ($0.05), which is critical usage information, and positions itself relative to siblings: combine with shippingrates_dd_calculate for a complete port cost picture, or use shippingrates_total_cost for an all-in landed cost estimate. It does not say when the optional port_code should be provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_port: Port Lookup (Read-only, Idempotent)
Look up port details by UN/LOCODE — name, country, coordinates, timezone, and terminal facilities.
Use this to validate port codes or get port metadata. If you don't know the UN/LOCODE, use shippingrates_search with the port or city name first.
PAID: $0.01/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { port_code, port_name, country, country_code, lat, lon, timezone, facilities }
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | UN/LOCODE port code — e.g. "INNSA", "AEJEA", "SGSIN" | |
| x_payment | No | x402 payment proof header | |
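Since each lookup bills $0.01 regardless of outcome (the description does not say whether invalid codes are refunded, so assume they are not), it is worth validating the UN/LOCODE shape locally before calling. UN/LOCODEs are five characters: a 2-letter ISO country code plus a 3-character location code; UNECE excludes the digits 0 and 1 from location codes to avoid confusion with O and I.

```python
import re

# UN/LOCODE: 2-letter country code + 3-char location code (no 0 or 1).
UNLOCODE = re.compile(r"^[A-Z]{2}[A-Z2-9]{3}$")

def is_valid_unlocode(code: str) -> bool:
    """Cheap local shape check before spending on a paid port lookup."""
    return bool(UNLOCODE.fullmatch(code.strip().upper()))
```

This only checks the format, not existence; a well-formed but unassigned code will still reach the API.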
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only, idempotent safety properties, so the description appropriately focuses on adding the critical payment context ('PAID endpoint: $0.01 per call via x402') and return value structure (coordinates, timezone, facilities) not present in annotations. It does not cover error cases (e.g., invalid codes).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses a clear structured format (purpose, cost warning, Args, Returns) with the most critical information (lookup purpose and cost) front-loaded. The Args/Returns sections are slightly redundant with the schema but justified by the lack of output schema and added examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description comprehensively documents the return structure (port name, country, coordinates, etc.) and crucially explains the x402 payment mechanism (USDC on Base/Solana) required for invocation, providing sufficient context for a paid API tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% coverage, the description's example codes ('INNSA', 'AEJEA', 'SGSIN') illustrate the expected UN/LOCODE format concretely, helping agents understand the parameter better than a bare type definition would.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Look up') and resource ('port details') scoped by the unique identifier type ('UN/LOCODE'), clearly distinguishing it from sibling tools like 'shippingrates_search' or 'shippingrates_rates' which handle broader queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description establishes clear context by specifying UN/LOCODE-based lookup and explicitly warns about the paid nature of the endpoint ($0.01 per call). It also routes agents to shippingrates_search when the UN/LOCODE is unknown, covering the main alternative path.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_rates: Freight Rates (Read-only, Idempotent)
Get ocean freight rates between two ports, optionally filtered by container type.
Use this to compare base freight costs across carriers for a specific trade lane. Returns current spot rates and contract rate indicators with trend data. For a complete cost picture including surcharges and local charges, use shippingrates_total_cost instead.
PAID: $0.03/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, origin, destination, container_type, rate, currency, effective_date, trend }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port UN/LOCODE — e.g. AEJEA (Jebel Ali), NLRTM (Rotterdam), USNYC (New York) | |
| container_type | No | Container type filter — e.g. 20DV, 40HC, 20RF | |
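With the documented array shape, picking the cheapest carrier per container type is a single pass. A sketch that assumes all returned rates are quoted in one currency; since each entry carries a currency field, mixed-currency results would need conversion before comparison:

```python
def cheapest_by_container(rates):
    """Return the lowest-rate entry per container_type.

    Expects the documented shippingrates_rates shape per entry:
    { carrier, origin, destination, container_type, rate, currency,
      effective_date, trend }. Rates are compared numerically as-is.
    """
    best = {}
    for r in rates:
        ct = r["container_type"]
        if ct not in best or r["rate"] < best[ct]["rate"]:
            best[ct] = r
    return best
```

Note this compares base freight only; per the description, an all-in figure should come from shippingrates_total_cost instead.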
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety (readOnlyHint, idempotentHint), so the description appropriately focuses on business logic. It critically discloses the payment requirement ('PAID endpoint: $0.03 per call via x402'), which is absent from annotations. It also describes the return data structure ('Array of freight rates with carrier, rate, currency...') compensating for the lack of output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections for purpose, payment warning, arguments, and returns. It is front-loaded with the core action. Minor deduction for slight redundancy between the Args section and the input schema, though the examples justify the repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a paid endpoint with no output schema, the description is comprehensive. It discloses pricing ($0.03, USDC, networks), describes the return value structure and content (spot rates, contract rate indicators, trend data), and provides examples for the non-obvious port code format. This covers all critical gaps left by the structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description adds significant value by providing concrete examples for the UN/LOCODE format ('INNSA', 'AEJEA') and container codes ('40HC', '20DV'), clarifying the expected parameter syntax beyond the schema's basic type definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] ocean freight rates between two ports, optionally filtered by container type,' using a specific verb and resource. It specifies the domain (ocean freight), distinguishing it from inland haulage siblings, and explicitly defers to shippingrates_total_cost for an all-in cost picture.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through scope ('ocean freight rates between two ports') and through the stated use case ('compare base freight costs across carriers for a specific trade lane'), and it names shippingrates_total_cost as the alternative for surcharge-inclusive costs. Broader 'when not to use' guidance is implicit rather than stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_regulatory: Regulatory Updates (Read-only, Idempotent)
Get recent shipping regulatory updates and compliance requirements for a specific country — customs regulations, documentation requirements, trade restrictions, and policy changes.
Use this to stay current on regulatory changes that may affect shipments to/from a country.
PAID: $0.01/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { title, description, effective_date, impact_level, category, country }.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results | 10 |
| country | Yes | ISO 2-letter country code | |
| x_payment | No | x402 payment proof header | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent hints, so the description appropriately focuses on additional behavioral traits: the payment model ('PAID endpoint: $0.01 per call via x402'), accepted currencies/chains (USDC on Base or Solana), and return value structure ('Array of regulatory updates with title, description...'). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and front-loaded: purpose first, coverage areas second, cost warning third, then Args/Returns. The Args section repeats schema info but justifies its existence with helpful examples. The Returns section compensates for the missing output schema. No wasted sentences, though the Args list is slightly redundant given perfect schema coverage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters (one requiring external payment) and no output schema, the description successfully covers all critical gaps: it explains the return format, documents the cost model, and provides parameter examples. It could mention rate limits or error scenarios, but it covers the essentials for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is 3. The description clarifies that x_payment carries the x402 payment proof and notes the default behavior for limit (10 results), but otherwise adds little parameter semantics beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Get') and resource ('regulatory updates and compliance requirements'), clearly scoped to 'shipping in a specific country.' It distinguishes from siblings (which focus on rates, congestion, schedules) by explicitly listing coverage areas: 'customs regulations, documentation requirements, trade restrictions, and policy changes.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear domain context through its list of coverage areas, implying when to use this tool (for compliance/customs questions). However, it lacks explicit guidance on when NOT to use it or direct comparison to siblings like shippingrates_rates or shippingrates_search, leaving the agent to infer from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_reliability · Schedule Reliability · A · Read-only · Idempotent
Get schedule reliability metrics for a carrier — on-time performance percentage, average delay in days, and sample size.
Use this for carrier selection and benchmarking — answers "how reliable is this carrier on this trade lane?" On-time is defined as arriving within ±1 day of scheduled ETA (industry standard per Sea-Intelligence).
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { line, trade_lane, on_time_pct, avg_delay_days, sample_size, period }.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| x_payment | No | x402 payment proof header | |
| trade_lane | No | Trade lane filter — e.g. 'Asia-Europe', 'Transpacific', 'Asia-Middle East' | |
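The "returns 402 with payment instructions" behavior shared by all paid endpoints implies a retry loop on the client side. A minimal sketch of that control flow, with hypothetical stand-ins (`call_tool`, `make_payment_proof`) for a real MCP client and x402 wallet integration:

```python
# Sketch of the retry-on-402 flow. The function names and the shape of the
# 402 body are assumptions; only the "402 with payment instructions, then
# retry with x_payment" behavior is documented.

def call_with_x402(call_tool, make_payment_proof):
    """Invoke a paid tool: try without payment, retry once with proof on 402."""
    status, body = call_tool(x_payment=None)
    if status != 402:
        return status, body
    # The 402 body is assumed to carry payment instructions (amount, network).
    proof = make_payment_proof(body)
    return call_tool(x_payment=proof)


# Stub server: demands payment on the first pass, returns data once paid.
def fake_call(x_payment=None):
    if x_payment is None:
        return 402, {"amount": "0.02", "currency": "USDC", "network": "base"}
    return 200, {"line": "maersk", "on_time_pct": 78.4, "avg_delay_days": 1.2}


status, body = call_with_x402(fake_call, lambda instructions: "signed-proof")
```

The single retry keeps the agent from paying twice for the same failed call; a production client would also verify the instructed amount against its budget before signing.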
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent, while description adds critical cost behavior ('PAID endpoint: $0.02 per call via x402') and return value structure ('on-time %, average delay, sample size') not present in structured fields. No contradictions with safety annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections for purpose, use case, cost warning, arguments, and returns. Slightly redundant with schema in Args section, but earns its place by including examples and payment context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a paid API tool: discloses pricing mechanism (x402/USDC), describes return payload despite no output schema, and covers all 3 parameters including optional payment header. Missing only error/payment-failure behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage (baseline 3), but description adds concrete examples beyond schema: lists enum values for line and provides 'Asia-Europe' example for trade_lane, aiding LLM inference.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Get' + resource 'schedule reliability metrics' with specific data points 'on-time performance, average delays'. Distinct from siblings like shippingrates_transit_schedules or shippingrates_vessel_schedule which focus on timetables rather than performance analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides use case context ('Use this for carrier selection and benchmarking') implying when to use, but lacks explicit when-not-to-use guidance or differentiation from similar tools like shippingrates_transit that might also provide timing data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_risk_score · Route Risk Assessment · A · Read-only · Idempotent
Get a composite risk score (0-100) for a shipping route — combines port congestion, active disruption news, and chokepoint impact analysis (Hormuz, Suez, Bab el-Mandeb, Panama Canal).
Use this for route risk screening — answers "how risky is this trade lane right now?" Scores above 70 indicate elevated risk. For detailed congestion metrics, use shippingrates_congestion. For news detail, use shippingrates_congestion_news.
PAID: $0.10/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { origin, destination, risk_score, risk_level, congestion_factor, disruption_factor, chokepoints_affected[], recommendation }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port UN/LOCODE — e.g. AEJEA (Jebel Ali), NLRTM (Rotterdam), USNYC (New York) | |
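The description pins only one threshold (scores above 70 indicate elevated risk); a client still needs to turn the 0-100 score into a decision. A minimal sketch, where the 40/70 banding below 70 is an assumption rather than documented behavior:

```python
def classify_risk(score: int) -> str:
    """Map a shippingrates_risk_score result to a coarse label."""
    if not 0 <= score <= 100:
        raise ValueError("risk_score is documented as 0-100")
    if score > 70:   # documented: scores above 70 indicate elevated risk
        return "elevated"
    if score > 40:   # assumed intermediate band, not documented
        return "moderate"
    return "low"
```

An agent could use "elevated" as a gate before calling shippingrates_congestion or shippingrates_congestion_news for the underlying detail.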
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Strong disclosure beyond annotations: explicitly states $0.10 cost per call, payment mechanism (x402, USDC on Base/Solana), and describes the analytical scope (specific maritime chokepoints). Annotations only indicate safety (readOnly/idempotent); description adds essential commercial and domain context. Minor gap: no mention of rate limits or failure behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear hierarchy: functionality first, payment warning second, parameters third. The payment cost is front-loaded where it should be. Args section is somewhat redundant with complete schema coverage, but formatted readably. No wasted words; every sentence conveys distinct information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a paid, complex analytical tool, the description adequately covers the unusual aspects (payment details, specific chokepoints monitored, return value range 0-100). Without output schema, it hints at return structure sufficiently. Could benefit from mentioning authentication requirements or rate limits, but covers the critical commercial constraint (paid endpoint) that annotations miss.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3 applies. The Args section repeats schema descriptions exactly (UN/LOCODE for ports, x402 header definition) without adding syntactic details, validation rules, or examples beyond what the schema already provides. The 'optional' flag for x_payment is implied by schema but explicitly stated, providing marginal value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states it retrieves route risk assessment with specific data components (congestion, news alerts, chokepoint analysis) and distinguishes from siblings via unique mention of Hormuz/Suez/Bab el-Mandeb chokepoints and composite 0-100 risk score. Verb-resource combination is precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use framing ('route risk screening') and names alternatives directly: shippingrates_congestion for detailed congestion metrics and shippingrates_congestion_news for news detail. Prerequisites (such as having x402 payment set up) are the only gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_search · Search ShippingRates Data · A · Read-only · Idempotent
Search the ShippingRates database by keyword — matches against carrier names, port names, country names, and charge types.
Use this for exploratory queries when you don't know exact codes. For example, search "mumbai" to find port codes, or "hapag" to find Hapag-Lloyd data coverage. Returns matching trade lanes, local charges, and shipping line information.
FREE — no payment required.
Returns: { trade_lanes: [...], local_charges: [...], lines: [...] } matching the keyword.
Related tools: Use shippingrates_port for structured port lookup by UN/LOCODE, shippingrates_lines for full carrier listing.
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | Search term — e.g. "maersk", "mumbai", "hapag-lloyd" | |
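The documented return shape only fixes the three top-level keys (trade_lanes, local_charges, lines); the fields inside each entry are not specified. A sketch of pulling carrier slugs out of a search result, where the `slug` and `name` fields are assumptions:

```python
def find_line_slugs(search_result: dict) -> list[str]:
    # Only the top-level keys (trade_lanes, local_charges, lines) are
    # documented; the 'slug' field on each line entry is an assumption.
    return [entry["slug"] for entry in search_result.get("lines", [])]


sample = {
    "trade_lanes": [],
    "local_charges": [],
    "lines": [{"slug": "hapag-lloyd", "name": "Hapag-Lloyd"}],
}
```

This matches the exploratory workflow the description suggests: search "hapag" first, then feed the resolved slug into a structured tool like shippingrates_reliability.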
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint. Description adds valuable business context not in annotations: 'FREE — no payment required' and the specific return categories (trade_lanes, local_charges, lines). No contradictions with annotation flags.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded, followed by usage guidance, billing note, and Args/Returns documentation. Minor redundancy exists in re-documenting the keyword parameter, but this is offset by additional examples and semantic clarification. No wasted sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately enumerates the return data categories (trade lanes, local charges, shipping line info). With 22 sibling tools in a complex domain, the 'exploratory' framing provides sufficient context for selection, though the explicit output structure (list vs object) is not detailed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. Description adds value by expanding valid keyword semantics beyond schema examples: explicitly mentioning 'charge type' and reinforcing that ports/countries are valid search terms alongside shipping lines, helping the agent understand the broad search surface.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with a specific verb ('Search') + resource ('the ShippingRates database'), and explicitly defines scope (trade lanes, local charges, shipping line info). The 'exploratory queries' framing effectively distinguishes this from sibling calculation/lookup tools like shippingrates_dd_calculate or shippingrates_rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance ('Use this for exploratory queries') with concrete examples (shipping line name, port, country) and names alternatives in its 'Related tools' line (shippingrates_port for structured lookup, shippingrates_lines for the full carrier listing). Only explicit exclusion criteria are missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_stats · ShippingRates Database Statistics · A · Read-only · Idempotent
Get current statistics for the ShippingRates shipping intelligence database.
Use this as a starting point to understand what data is available before calling other tools. Returns record counts for D&D tariffs, local charges, transit schedules, freight rates, surcharges, ports, shipping lines, countries, and the last data refresh timestamp.
FREE — no payment required.
Returns: { tariff_records, ports, transit_schedules, freight_rates, local_charges, shipping_lines, countries, last_scrape (ISO datetime) }
Related tools: Use shippingrates_lines for per-carrier breakdowns, shippingrates_search for keyword discovery.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent safety. Description adds valuable behavioral context not in annotations: cost ('FREE endpoint — no payment required') and detailed return structure (JSON example with tariff_records, last_scrape fields). No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured: purpose statement, return value enumeration, usage context, cost note, and output schema documentation. Every sentence adds value; no redundancy despite including inline return structure documentation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates perfectly for missing output schema by providing complete JSON return structure inline. Annotations fully cover behavioral hints. For a zero-parameter statistics tool, description provides sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains 0 parameters. Per rubric baseline, 0 params warrants a score of 4. Description correctly implies no configuration is needed for this statistics endpoint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Get' + resource 'statistics for the ShippingRates shipping intelligence database' clearly defines scope. Lists specific entities counted (tariff records, ports, trade lanes, etc.) which distinguishes this metadata tool from operational siblings like shippingrates_search or shippingrates_rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly positions itself as 'a starting point to understand what data is available before calling other tools,' providing clear context for when to invoke it (pre-query validation). Notes 'FREE — no payment required' for cost-sensitive decisions and names related tools (shippingrates_lines, shippingrates_search), though it lacks an explicit 'when not to use' contrast with data-retrieval siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_surcharges · Shipping Surcharges · A · Read-only · Idempotent
Get carrier-specific surcharges — BAF (Bunker Adjustment Factor), CAF (Currency Adjustment Factor), PSS (Peak Season Surcharge), EBS (Emergency Bunker Surcharge), and more.
Use this to understand surcharge exposure for a carrier in a specific country/direction. These are charges added on top of base freight rates. For a complete cost breakdown, use shippingrates_total_cost which includes surcharges automatically.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { surcharge_type, surcharge_name, amount, currency, per_unit, effective_from, effective_to, direction }.
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| country | No | ISO 2-letter country code | |
| direction | No | Trade direction — 'import' or 'export' | |
| x_payment | No | x402 payment proof header | |
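The documented return array carries effective_from/effective_to windows, so a client typically needs to sum only the surcharges active on a given sailing date. A sketch, assuming those fields are ISO date strings (their exact format is not documented):

```python
from datetime import date


def active_surcharge_total(surcharges: list[dict], on: date) -> dict[str, float]:
    """Sum surcharge amounts active on a date, grouped by currency.

    effective_from/effective_to are assumed to be ISO date strings.
    """
    totals: dict[str, float] = {}
    for s in surcharges:
        start = date.fromisoformat(s["effective_from"])
        end = date.fromisoformat(s["effective_to"])
        if start <= on <= end:
            totals[s["currency"]] = totals.get(s["currency"], 0.0) + s["amount"]
    return totals


sample = [
    {"surcharge_type": "BAF", "amount": 150.0, "currency": "USD",
     "effective_from": "2025-01-01", "effective_to": "2025-03-31"},
    {"surcharge_type": "PSS", "amount": 200.0, "currency": "USD",
     "effective_from": "2025-06-01", "effective_to": "2025-08-31"},
]
```

Grouping by currency avoids silently adding USD and EUR line items together; the per_unit field would also need handling in a full implementation.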
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent status. Description adds crucial behavioral context not in annotations: payment requirement ('PAID endpoint: $0.02 per call'), payment mechanism ('x402'), and return structure ('Array of surcharges with type, amount, currency, effective dates').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient 5-section structure: purpose, return summary, payment notice, args specification, return specification. No filler text; payment disclosure is critical and earns its place. Well front-loaded with the essential action statement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by detailing return format ('Array of surcharges...'). Covers all 4 parameters including optional payment header. Given the read-only lookup nature and good annotations, coverage is sufficient though output schema would improve it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with complete property descriptions. Description mirrors schema definitions in Args section without adding semantic depth (e.g., no explanation of what x402 is, no business context for direction). Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb ('Get') + resource ('surcharges') with parenthetical examples (BAF, CAF, PSS, EBS, etc.) + scope ('shipping line in specific country/direction'). Clearly distinguishes from sibling rate/transit tools by focusing specifically on surcharge types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage context by listing specific surcharge categories (BAF, CAF, etc.) and pointing to shippingrates_total_cost for a complete cost breakdown, but lacks explicit guidance versus other siblings like shippingrates_rates or shippingrates_local_charges.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_total_cost · Full Landed Cost Calculator · A · Read-only · Idempotent
Calculate the full landed cost of shipping a container — combines freight rates, surcharges, local charges (origin + destination), demurrage/detention estimates, and transit time into one comprehensive estimate.
This is the most comprehensive tool — a single call replaces 5-6 individual queries. Use this when the user needs an all-in cost estimate for a specific shipment. For individual cost components, use the dedicated tools: shippingrates_rates (freight), shippingrates_surcharges, shippingrates_local_charges, shippingrates_dd_calculate (detention).
PAID: $0.15/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: { freight: { rate, currency }, surcharges: { total, items[] }, local_charges: { origin: { total, items[] }, destination: { total, items[] } }, detention: { days, cost, currency }, transit: { days, service }, total_landed_cost, currency }
| Name | Required | Description | Default |
|---|---|---|---|
| line | Yes | Shipping line slug — one of: maersk, msc, cma-cgm, hapag-lloyd, one, cosco | |
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port or inland location | |
| container_type | Yes | ISO 6346 container type — 20DV, 40DV, 40HC, 20RF, 40RF, 20OT, 40OT, 20FR, 40FR | |
| detention_days | No | Expected detention days | 0 |
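Since the documented return carries both per-component totals and a total_landed_cost, a client can sanity-check the aggregate before acting on it. A sketch against that return shape, assuming all components share the single top-level currency:

```python
def component_sum(resp: dict) -> float:
    """Re-add the documented cost components of a total_cost response.

    Assumes a single shared currency; a multi-currency response would
    need conversion before summing.
    """
    return (resp["freight"]["rate"]
            + resp["surcharges"]["total"]
            + resp["local_charges"]["origin"]["total"]
            + resp["local_charges"]["destination"]["total"]
            + resp["detention"]["cost"])


resp = {
    "freight": {"rate": 1800.0, "currency": "USD"},
    "surcharges": {"total": 350.0, "items": []},
    "local_charges": {"origin": {"total": 240.0, "items": []},
                      "destination": {"total": 310.0, "items": []}},
    "detention": {"days": 2, "cost": 150.0, "currency": "USD"},
    "transit": {"days": 18, "service": "AE7"},
    "total_landed_cost": 2850.0,
    "currency": "USD",
}
```

The sample values here are illustrative, not real tariffs; the point is that the five components should reconcile with total_landed_cost.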
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations establish the operation is read-only and idempotent. The description adds essential behavioral context not found in annotations: the payment requirement (financial cost per call), the aggregation logic (what 5-6 queries are being combined), and a detailed JSON structure of the return value despite no formal output schema being present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose, followed by value proposition, cost warning, and documented Args/Returns sections. While the Args section duplicates schema information, it adds examples that justify its inclusion. The Returns section is valuable given the lack of formal output schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 6 parameters, payment requirements, and a rich return object, the description is comprehensive. It compensates for the absence of a formal output schema by providing a complete JSON structure in the Returns section, documents all parameters with examples, and discloses the financial cost model.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is met. The description adds significant value by providing concrete examples for origin ('INNSA'), destination ('AEJEA', 'DELHI'), and container types ('40HC'), and clarifying the x_payment parameter's purpose as 'x402 payment proof header' beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Calculate') and clear resource ('full landed cost of shipping a container'), and explicitly lists the five cost components aggregated (freight, surcharges, local charges, demurrage/detention, transit time), clearly distinguishing it from sibling tools that likely handle these individually.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description effectively positions this as an aggregation tool ('single call replaces 5-6 individual queries'), provides critical usage context via the 'PAID' warning with specific cost ($0.15) and payment mechanism (x402), and explicitly names the sibling tools for individual component lookups (shippingrates_rates, shippingrates_surcharges, shippingrates_local_charges, shippingrates_dd_calculate).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_transit · Transit Time Lookup · A · Read-only · Idempotent
Get estimated ocean transit times between two ports across all available carriers.
Use this for quick transit time comparison between ports — answers "how long does it take to ship from A to B?" Returns carrier-specific transit durations, service types, and frequencies.
For detailed routing with transhipment ports and service codes, use shippingrates_transit_schedules instead.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, transit_days, service_type, frequency, direct_or_transhipment }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | Yes | Origin port UN/LOCODE — e.g. INNSA (Nhava Sheva), CNSHA (Shanghai), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| destination | Yes | Destination port UN/LOCODE — e.g. AEJEA (Jebel Ali), NLRTM (Rotterdam), USNYC (New York) | |
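The "quick transit time comparison" use case boils down to picking the best entry from the documented return array. A sketch over that shape, where the "direct" value for direct_or_transhipment is an assumption (the field name is documented, its values are not):

```python
def fastest_option(options: list[dict], direct_only: bool = False) -> dict:
    """Pick the lowest-transit-days entry from a transit response."""
    pool = [o for o in options
            if not direct_only or o["direct_or_transhipment"] == "direct"]
    return min(pool, key=lambda o: o["transit_days"])


options = [
    {"carrier": "msc", "transit_days": 21, "direct_or_transhipment": "transhipment"},
    {"carrier": "maersk", "transit_days": 17, "direct_or_transhipment": "direct"},
    {"carrier": "one", "transit_days": 15, "direct_or_transhipment": "transhipment"},
]
```

The direct_only flag matters because the fastest option overall may still route via a transhipment port.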
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations declare readOnlyHint=true and idempotentHint=true, confirming safe, repeatable reads. The description adds crucial behavioral context not in annotations: the payment requirement (cost, currency, blockchain), and the return format ('Array of transit options with carrier, duration, service type') which compensates for the missing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (purpose, cost, args, returns) and front-loaded with the essential action. While the Args list partially duplicates the schema, it earns its place by adding examples. The payment warning is prominently placed. No sentences feel wasted, though the formatting is slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately describes the return structure. It covers the payment requirement, provides parameter examples, and explains what data is returned. It could improve by noting error conditions (e.g., invalid port codes) or rate limits, but it covers the essential behavioral and contractual context for a paid API.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is met. The description adds value by providing concrete UN/LOCODE examples ('INNSA', 'AEJEA') that clarify the expected format, and explicitly noting that x_payment is 'optional' despite being a paid endpoint, which helps the agent understand the parameter is not strictly required in the schema sense (though payment is required for success).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the core function ('Get estimated ocean transit times between two ports') with a specific verb and resource, mentioning 'carrier-specific transit durations' and 'frequencies.' It also explicitly contrasts with the similarly-named shippingrates_transit_schedules sibling, directing agents there for detailed routing with transhipment ports and service codes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage context by flagging this as a PAID endpoint with specific pricing ($0.02) and payment mechanism (x402 via USDC), and explicitly points to shippingrates_transit_schedules for detailed routing. It still lacks guidance versus other siblings such as shippingrates_rates, which may overlap in domain.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_transit_schedules · Transit Schedules by Carrier · A · Read-only · Idempotent
Get detailed transit schedules for a specific carrier — service codes, routing via transhipment ports, transit days, and sailing frequency.
Use this when you need routing details beyond just transit time — e.g., which transhipment ports are used, what service string applies, or weekly frequency. For a quick transit time comparison across all carriers, use shippingrates_transit instead.
PAID: $0.03/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { carrier, service_code, origin, destination, transit_days, transhipment_ports[], frequency, direct }.
| Name | Required | Description | Default |
|---|---|---|---|
| origin | No | Origin port UN/LOCODE filter | |
| carrier | Yes | Carrier SCAC code or slug | |
| max_days | No | Maximum transit days filter | |
| x_payment | No | x402 payment proof header | |
| destination | No | Destination port UN/LOCODE filter | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only, safe operation, but the description adds crucial behavioral context: the per-call cost and cryptocurrency payment requirements (x402 header). It also clarifies the data scope (service codes, routing details) beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: purpose statement first, cost warning second, followed by organized Args list. Every sentence earns its place with no redundancy or repetition of structured schema data.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description appropriately hints at return content (service codes, routing). Combined with complete parameter documentation and payment disclosure, it provides sufficient context for invocation despite missing rate limit or error handling details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, the baseline is met. The description adds value by providing concrete examples (e.g., 'MAEU', 'maersk' for carrier) and clarifying the optional vs required nature of filters, which aids the LLM in parameter construction.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'detailed transit schedules for a carrier' and lists specific data returned (service codes, routing, transhipment ports, frequency). However, it does not explicitly distinguish when to use this versus siblings like 'shippingrates_transit' or 'shippingrates_vessel_schedule'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides critical usage context by flagging this as a 'PAID endpoint' with specific cost ($0.03) and payment mechanism (x402, USDC). However, it lacks explicit guidance on when to choose this over alternative shipping tools or what prerequisites (like having x402 set up) are needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shippingrates_vessel_schedule (Vessel Schedule) · Read-only · Idempotent
Get upcoming vessel arrivals and departures at a specific port.
Use this to check what vessels are expected at a port — useful for booking planning and tracking. Returns vessel names, carriers, ETAs/ETDs, and service routes.
For transit time estimates between two ports, use shippingrates_transit. For detailed service-level routing, use shippingrates_transit_schedules.
PAID: $0.02/call via x402 (USDC on Base or Solana). Without payment, returns 402 with payment instructions.
Returns: Array of { vessel_name, carrier, voyage, eta, etd, service, from_port, to_port }.
| Name | Required | Description | Default |
|---|---|---|---|
| port | Yes | UN/LOCODE port code — e.g. INNSA (Nhava Sheva), AEJEA (Jebel Ali), SGSIN (Singapore) | |
| x_payment | No | x402 payment proof header | |
| days_ahead | No | Days to look ahead | 14 |
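The Returns line above documents an array of vessel records with an `eta` field; a consumer will typically window those records by the same `days_ahead` horizon it requested. A minimal sketch, assuming ISO-8601 timestamps in `eta` (the actual wire format is not specified here):

```python
from datetime import datetime, timedelta

# Hypothetical post-processing of the documented return array: keep
# only vessels whose ETA falls within the next `days_ahead` days.
# Field names follow the Returns line above; ISO-8601 ETA strings
# are an assumption.
def upcoming_arrivals(schedule: list, now: datetime, days_ahead: int = 14) -> list:
    cutoff = now + timedelta(days=days_ahead)
    return [
        v for v in schedule
        if now <= datetime.fromisoformat(v["eta"]) <= cutoff
    ]
```

The default of 14 mirrors the tool's own `days_ahead` default, so the client-side window matches what the server was asked for.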
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare read-only/idempotent properties, so the description adds essential behavioral context not found elsewhere: explicit pricing ($0.02 per call), payment mechanism details (x402, USDC on Base/Solana), and return value structure. No contradictions with annotations detected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently structured with purpose statement first, followed by cost warning (critical for paid endpoints), then organized Args and Returns sections. No redundant words; every sentence provides distinct information not duplicated in schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Excellent completeness given no output schema exists. The description compensates by fully documenting the return array structure and fields. Additionally covers the financial/cost implications essential for a paid API, leaving no critical gaps for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds significant value by providing concrete, valid examples for the port parameter ('INNSA', 'AEJEA') and reinforcing the payment context for x_payment that the schema only describes structurally.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description opens with specific verb 'Get' + clear resource 'upcoming vessel schedules' + scope 'at a port', and distinguishes from siblings (e.g., shippingrates_rates, shippingrates_congestion) by focusing on vessel arrival/departure movements rather than pricing or port conditions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context through the 'Returns' section explaining exactly what data structure to expect (vessel name, carrier, ETA/ETD). While it doesn't explicitly name sibling alternatives to avoid, the return specification effectively guides when to use this tool (when vessel movement details are needed). Critically warns that this is a 'PAID endpoint' with specific cost ($0.02).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
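Before publishing, it can help to sanity-check the claim file locally. This is an informal check based only on the example above, not an official validator; the required-field rules are assumptions drawn from that example:

```python
import json

# Informal sanity check for a /.well-known/glama.json claim file,
# assuming only the fields shown in the example: the connector
# $schema URL and at least one maintainer whose email matches the
# Glama account email.
def check_claim_file(text: str, account_email: str) -> None:
    doc = json.loads(text)
    assert doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json", \
        "unexpected $schema value"
    maintainers = doc.get("maintainers", [])
    assert any(m.get("email") == account_email for m in maintainers), \
        "maintainer email must match your Glama account email"
```

Serving the validated file at `/.well-known/glama.json` on your server's domain is then all that remains; Glama polls for it automatically.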
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.