DaedalMap Hurricane and Tropical Cyclone Data

Server Details

Global tropical cyclone tracks from IBTrACS, 1842-present. Wind, pressure, and paths. Free.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: xyver/daedal-map
GitHub Stars: 0
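
The server speaks MCP over Streamable HTTP, so any MCP client can connect directly. Below is a minimal connection sketch using the official MCP Python SDK (the mcp package); the endpoint URL is a placeholder, since the page does not display the real one:

    # Minimal connection sketch, assuming the MCP Python SDK ("mcp" package).
    # SERVER_URL is a placeholder; the page does not show the real endpoint.
    import asyncio

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    SERVER_URL = "https://example.invalid/mcp"  # placeholder, not the real URL

    async def main() -> None:
        # Open a streamable-HTTP transport, then an MCP session over it.
        async with streamablehttp_client(SERVER_URL) as (read, write, _get_session_id):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()
                print([t.name for t in tools.tools])  # expect the three tools below

    asyncio.run(main())

The per-tool sketches further down assume an initialized session like the one opened here.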

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 3.9/5 across 3 of 3 tools scored.

Server Coherence: Grade A

Disambiguation: 4/5

get_catalog lists available packs, get_pack returns metadata for one pack, and query_dataset runs queries. Their purposes are distinct; get_catalog and get_pack both involve discovery, but at different granularities.

Naming Consistency: 5/5

All three tool names follow a clear verb_noun pattern: get_catalog, get_pack, query_dataset. No mixing of conventions.

Tool Count: 4/5

Three tools is minimal but appropriate for a data discovery and query server. The set is focused, though one could imagine an additional tool to list sample queries or handle subscriptions.

Completeness: 3/5

Covers catalog browsing, pack metadata retrieval, and dataset querying. Missing update/delete operations, but that is expected for read-only data access. Could benefit from a tool to get available query parameters or sample data.

Available Tools

3 tools
get_catalog (Get Catalog): Grade A
Read-only

Free discovery. Returns the list of live agent-ready data packs available on DaedalMap.

Parameters (JSON Schema)

No parameters.
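
As a sketch, calling get_catalog from an initialized ClientSession (see the connection example under Server Details) takes no arguments:

    # Sketch: get_catalog takes no arguments; the result lists the
    # available data packs. Session comes from the connection example above.
    from mcp import ClientSession

    async def list_packs(session: ClientSession):
        result = await session.call_tool("get_catalog", arguments={})
        return result.content  # list of live agent-ready data packs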

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond the readOnlyHint annotation by specifying that it returns 'live agent-ready data packs' and that this is for 'discovery' purposes. While annotations cover the safety aspect (read-only), the description provides operational context about what kind of data is returned and the tool's discovery role.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with two clear, front-loaded sentences. 'Free discovery' immediately establishes context, and 'Returns the list...' completes the functional explanation. Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read-only tool without an output schema, the description provides adequate context about what the tool does and what it returns. However, it could be more complete by specifying the return format (e.g., list structure, metadata included) or any limitations of the 'live agent-ready' qualification.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't waste space discussing non-existent parameters, and the phrase 'Free discovery' provides useful semantic context about the tool's zero-parameter nature as an exploration function.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Free discovery', 'Returns') and resources ('list of live agent-ready data packs available on DaedalMap'). It distinguishes itself from siblings like get_pack and query_dataset by focusing on catalog-level discovery rather than per-pack metadata or data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for discovering available data packs, but provides no explicit guidance on when to use this tool versus alternatives like get_pack or query_dataset. The phrase 'Free discovery' suggests a preliminary exploration function, but lacks clear when/when-not instructions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pack (Get Pack): Grade A
Read-only

Free discovery. Returns detailed metadata, coverage, metrics, and first-query guidance for one pack.

Parameters (JSON Schema)

pack_id (required): Pack identifier such as 'currency', 'earthquakes', 'volcanoes', 'tsunamis', 'hurricanes', 'un_sdg', or 'world_factbook'.
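
A sketch of the corresponding call, again assuming an initialized session as in the connection example; 'hurricanes' is one of the pack ids enumerated in the schema and matches this server's focus:

    # Sketch: fetch metadata, coverage, metrics, and first-query guidance
    # for one pack, using a pack id listed in the schema above.
    from mcp import ClientSession

    async def describe_pack(session: ClientSession, pack_id: str = "hurricanes"):
        result = await session.call_tool("get_pack", arguments={"pack_id": pack_id})
        return result.content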
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable context beyond this: it specifies 'Free discovery' (implying no cost or authentication barriers) and details the return content (metadata, coverage, metrics, guidance), which helps the agent understand the tool's behavior and output scope without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and efficient in two concise sentences: the first sets the context ('Free discovery'), and the second specifies the return values. Every word earns its place, with no redundancy or unnecessary elaboration, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), the description is mostly complete: it covers purpose, behavior, and return content. However, it could be slightly improved by mentioning any limitations (e.g., pack availability) or linking more explicitly to sibling tools, though annotations and schema provide adequate support for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter pack_id well-documented in the schema. The description does not add any parameter-specific semantics beyond what the schema provides, such as examples or usage tips, so it meets the baseline for high schema coverage without compensating with extra information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Free discovery', 'Returns') and resources ('detailed metadata, coverage, metrics, and first-query guidance for one pack'), distinguishing it from siblings like get_catalog (which likely lists multiple packs) or query_dataset (which queries data rather than providing metadata).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it's for 'one pack' and mentions 'first-query guidance', suggesting it's for initial exploration. However, it lacks explicit guidance on when to use this versus alternatives like get_catalog or query_dataset, though the focus on a single pack's metadata provides some differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_dataset (Query Dataset): Grade B
Read-only

Generic structured query for direct source_id or pack_id access using the same contract as POST /api/v1/query/dataset. Free packs: currency, hurricanes, un_sdg, volcanoes, world_factbook. Paid packs: earthquakes, tsunamis (x402 Base USDC).

Parameters (JSON Schema)

sort (optional): Sort instructions for row-returning queries.
limit (optional): Maximum number of rows to return for the requested source or pack.
output (optional): Output controls such as response format hints.
filters (optional): Structured filters including time, region_ids, and compare clauses.
metrics (optional): Metric ids to return. Use event_count for aggregate counts when supported.
pack_id (optional): Pack id such as 'currency', 'earthquakes', 'volcanoes', 'tsunamis', 'hurricanes', 'un_sdg', or 'world_factbook'.
source_id (optional): Concrete source id such as 'earthquakes_events', 'volcanoes_events', 'hurricanes_events', or 'un_sdg/01'.
request_id (optional): Caller-supplied request id for tracing and idempotency.
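
A sketch of a structured query against the free hurricanes source. The top-level keys come from the parameter list above, but the nested shapes of filters and sort are assumptions, since the page does not show the full JSON Schema:

    # Sketch: query the hurricanes event source. Top-level keys follow the
    # parameter list above; the nested shape of "filters" is an assumption.
    from mcp import ClientSession

    async def count_hurricane_events(session: ClientSession):
        result = await session.call_tool(
            "query_dataset",
            arguments={
                "source_id": "hurricanes_events",  # free source listed above
                "filters": {"time": {"start": "2005-01-01", "end": "2005-12-31"}},  # assumed shape
                "metrics": ["event_count"],  # aggregate count, per the metrics note
                "limit": 100,
                "request_id": "demo-001",  # optional tracing/idempotency id
            },
        )
        return result.content

Because the tool advertises the same contract as POST /api/v1/query/dataset, the same argument object should, in principle, serve as that endpoint's JSON body.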
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotation readOnlyHint=true already indicates this is a safe read operation. The description adds useful context about the API contract and pack types (free vs paid with pricing), but doesn't disclose behavioral traits like rate limits, authentication requirements, response format, or pagination behavior beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey key information: the tool's purpose/contract and pack examples. It's front-loaded with the core functionality, though the second sentence could be more structured. Every sentence earns its place by adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, nested objects) and rich schema coverage (100%), the description provides adequate context about packs and API contract. However, with no output schema and no annotations beyond readOnlyHint, it lacks information about return values, error handling, or detailed behavioral constraints that would be helpful for a query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 8 parameters thoroughly. The description adds marginal value by mentioning pack_id examples and the API contract, but doesn't provide additional parameter semantics beyond what's in the schema descriptions. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'generic structured query for direct source_id or pack_id access' and mentions it uses the same contract as a specific API endpoint. It distinguishes itself from siblings by flagging which packs are free versus paid, but doesn't explicitly differentiate itself from get_catalog or get_pack.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by listing specific pack examples (free: currency, hurricanes, etc.; paid: earthquakes, tsunamis) and mentions pricing for paid packs. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like get_catalog or get_pack, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

