
Chuk MCP Maritime Archives

by IBM

maritime_compare_speed_groups

Compare sailing speed distributions between two time periods using Mann-Whitney U test and Cohen's d effect size.

Instructions

Compare sailing speed distributions between two time periods.

Computes daily speeds for each period, then runs a Mann-Whitney U test to determine if the difference is statistically significant. Also returns Cohen's d effect size.
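The statistics the tool reports can be reproduced directly. Below is a minimal sketch, assuming scipy and numpy are available; the tool's actual internals are not published, and the speed values here are invented for illustration. It runs a two-sided Mann-Whitney U test and computes Cohen's d with a pooled standard deviation.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_speed_groups(speeds1, speeds2):
    """Mann-Whitney U test plus Cohen's d for two speed samples (km/day)."""
    s1 = np.asarray(speeds1, dtype=float)
    s2 = np.asarray(speeds2, dtype=float)
    # Two-sided Mann-Whitney U: does one period tend to be faster?
    u_stat, p_value = mannwhitneyu(s1, s2, alternative="two-sided")
    # Cohen's d using the pooled standard deviation of both samples
    n1, n2 = len(s1), len(s2)
    pooled_sd = np.sqrt(
        ((n1 - 1) * s1.var(ddof=1) + (n2 - 1) * s2.var(ddof=1)) / (n1 + n2 - 2)
    )
    d = (s1.mean() - s2.mean()) / pooled_sd
    return {"U": float(u_stat), "p_value": float(p_value), "cohens_d": float(d)}

# Invented daily speeds for two periods (km/day)
result = compare_speed_groups([120, 135, 150, 160, 148], [95, 110, 102, 118, 99])
print(result)
```

With fully separated samples like these, U equals n1*n2 and both p-value and effect size signal a clear difference, matching the interpretation thresholds in the tips below (p < 0.05, d > 0.8).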

Args:
- period1_years: First period as "YYYY/YYYY" range or "YYYY,YYYY,..." list
- period2_years: Second period as "YYYY/YYYY" range or "YYYY,YYYY,..." list
- lat_min: Minimum latitude for position bounding box
- lat_max: Maximum latitude for position bounding box
- lon_min: Minimum longitude for position bounding box
- lon_max: Maximum longitude for position bounding box
- nationality: Filter tracks by nationality code
- direction: Filter by "eastbound" or "westbound"
- month_start: Filter by start month (1-12). Supports wrap-around
- month_end: Filter by end month (1-12). Used with month_start
- aggregate_by: Unit of analysis: "observation" (default) or "voyage" (one mean per voyage, statistically independent)
- include_samples: If True, include raw speed arrays in response
- min_speed_km_day: Minimum speed filter (default: 5.0)
- max_speed_km_day: Maximum speed filter (default: 400.0)
- wind_force_min: Minimum Beaufort force (0-12). Requires wind data
- wind_force_max: Maximum Beaufort force (0-12). Requires wind data
- exclude_years: Years to exclude from both periods, as "YYYY/YYYY" range or "YYYY,YYYY,..." list
- output_mode: Response format: "json" (default) or "text"

Returns: JSON or text with group statistics, Mann-Whitney U, z-score, p-value, and Cohen's d effect size

Tips for LLMs:
- Use aggregate_by="voyage" for statistically independent samples
- Use wind_force_min/max to condition on Beaufort force
- Use maritime_did_speed_test for a formal direction × period interaction
- p < 0.05 indicates a statistically significant difference
- Cohen's d > 0.8 indicates a large effect size
- Periods accept comma-separated year lists for non-contiguous years (e.g., "1720,1728,1747" for ENSO El Niño years)
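Putting the period syntax and tips together, a call might pass arguments like the following. This is a hypothetical argument set: the parameter names come from the Args list above, but the specific years and bounding box are invented for illustration.

```python
# Hypothetical arguments for maritime_compare_speed_groups: compare an
# 18th-century baseline range against non-contiguous El Niño years,
# restricted to eastbound voyages inside an invented bounding box.
arguments = {
    "period1_years": "1750/1770",        # contiguous "YYYY/YYYY" range
    "period2_years": "1720,1728,1747",   # non-contiguous "YYYY,YYYY,..." list
    "lat_min": -40.0, "lat_max": -20.0,  # position bounding box (illustrative)
    "lon_min": 20.0,  "lon_max": 60.0,
    "direction": "eastbound",
    "aggregate_by": "voyage",            # one mean per voyage: independent samples
    "output_mode": "json",
}
print(sorted(arguments))
```

Using aggregate_by="voyage" here follows the first tip: voyage-level means are statistically independent, which the Mann-Whitney U test assumes.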

Input Schema

Name              Required  Default
period1_years     Yes       -
period2_years     Yes       -
lat_min           No        -
lat_max           No        -
lon_min           No        -
lon_max           No        -
nationality       No        -
direction         No        -
month_start       No        -
month_end         No        -
aggregate_by      No        observation
include_samples   No        -
min_speed_km_day  No        -
max_speed_km_day  No        -
wind_force_min    No        -
wind_force_max    No        -
exclude_years     No        -
output_mode       No        json
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full behavioral burden. It explains the computation steps (daily speeds, statistical test) and optional return of raw arrays, but does not explicitly declare read-only or non-destructive nature. Given the analytical context, it is adequately transparent but could be more explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured: purpose, method, parameter list, returns, and tips. It is front-loaded with the main objective. While not extremely concise, each sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 18 parameters and no output schema, the description covers the core functionality, parameter semantics, and return format. Tips address common use cases. Minor gaps: it doesn't mention data prerequisites (e.g., wind data required for wind_force filters) or edge cases like missing values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 18 parameters are documented in the Args section with descriptions that add meaning beyond schema names. Some descriptions are brief (e.g., 'nationality' does not specify the code format), but overall they clarify usage significantly, fully compensating for the schema's lack of per-parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare sailing speed distributions between two time periods.' It specifies the statistical method (Mann-Whitney U test) and effect size (Cohen's d), and differentiates from sibling tool maritime_did_speed_test via tips.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides usage context through a 'Tips' section, advising when to use aggregate_by='voyage' for independent samples, and mentions an alternative tool (maritime_did_speed_test). However, it does not explicitly state when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/IBM/chuk-mcp-maritime-archives'
