
DaedalMap UN Sustainable Development Goals

Server Details

UN SDG country indicators across all 17 goals: poverty, health, education, climate. Free.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: xyver/daedal-map
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.9/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose: discover available packs (get_catalog), inspect a specific pack's details (get_pack), and query data (query_dataset). No overlap.

Naming Consistency: 4/5

The 'get_' prefix for descriptive tools and 'query_' for data access is consistent. All three names follow the same verb_noun pattern; the only divergence is the choice of verb ('get' versus 'query'), which is a minor style difference.

Tool Count: 4/5

3 tools is a reasonable minimal set for the apparent scope: discovery, inspection, and query. Slightly thin but appropriate for a domain centered on accessing pre-defined datasets.

Completeness: 3/5

The surface covers discovery and querying, but there are potential gaps: no tool for listing available query endpoints, no authentication/account management, and no way to retrieve raw metadata beyond packs. Adequate for basic use.

Available Tools

3 tools
get_catalog (Get Catalog): A
Read-only

Free discovery. Returns the list of live agent-ready data packs available on DaedalMap.

Parameters (JSON Schema)

No parameters
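
For orientation, here is a minimal sketch of what a client-side call to get_catalog could look like, expressed as a raw MCP tools/call payload in Python. The JSON-RPC envelope follows the standard MCP convention; the request id and the idea of posting it over the server's Streamable HTTP endpoint are illustrative assumptions, not DaedalMap documentation.

import json

# Hypothetical MCP "tools/call" request for get_catalog.
# The tool takes no arguments, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_catalog",
        "arguments": {},
    },
}
print(json.dumps(request, indent=2))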

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by specifying 'Free discovery' and 'live agent-ready data packs,' which gives context about cost and readiness. It doesn't contradict annotations, and while it could mention more behavioral traits like response format, it compensates well given the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads key information ('Free discovery') and states the core action and resource. There is no wasted text, making it easy for an agent to parse quickly and understand the tool's essence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only, no output schema), the description is complete enough for an agent to use it correctly. It covers purpose and context; slight elaboration on the output or on usage scenarios would merit a perfect score, but it is adequate for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline is high. The description adds no parameter details, which is fine since there are none. It effectively communicates that no inputs are needed, aligning with the empty schema, so it meets expectations without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with 'Returns the list of live agent-ready data packs available on DaedalMap,' specifying the verb 'returns' and the resource 'list of data packs.' It is distinguished from siblings such as get_pack and query_dataset by focusing on catalog discovery rather than pack details or data queries, though it could be more explicit about that distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for discovery of available data packs, which suggests it's for initial exploration or listing resources. However, it lacks explicit guidance on when to use this versus alternatives like get_pack (for specific packs) or query_dataset (for querying data), leaving some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pack (Get Pack): A
Read-only

Free discovery. Returns detailed metadata, coverage, metrics, and first-query guidance for one pack.

Parameters (JSON Schema)

pack_id (required): Pack identifier such as 'currency', 'earthquakes', 'volcanoes', 'tsunamis', 'hurricanes', 'un_sdg', or 'world_factbook'.
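
As a sketch of how the single required parameter might be supplied, the payload below asks get_pack for the un_sdg pack. The MCP tools/call envelope is the only assumption here; the pack identifier is one of those listed in the schema above.

import json

# Hypothetical MCP "tools/call" request for get_pack, requesting the
# UN SDG pack's metadata, coverage, metrics, and first-query guidance.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_pack",
        "arguments": {"pack_id": "un_sdg"},
    },
}
print(json.dumps(request, indent=2))
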
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it specifies the return content (metadata, coverage, metrics, guidance) and hints at cost-free access ('Free discovery'), which isn't covered by annotations. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys purpose, behavior, and context without unnecessary words. It's front-loaded with key information ('Free discovery') and every phrase adds value, making it highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, read-only, no output schema), the description is mostly complete. It covers purpose, behavior, and usage context adequately. However, it could benefit from more explicit guidance on when to use versus siblings, and details on output format (e.g., structure of returned metadata) are missing, though not critical without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter pack_id fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, such as examples or usage notes. Baseline 3 is appropriate since the schema handles parameter documentation effectively.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Returns detailed metadata, coverage, metrics, and first-query guidance') and resource ('for one pack'), distinguishing it from siblings like get_catalog (likely returns multiple packs) or query_dataset (likely queries data within packs). The phrase 'Free discovery' adds context about cost/accessibility.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Free discovery' and 'first-query guidance', suggesting this is for initial exploration of a pack. However, it doesn't explicitly state when to use this tool versus alternatives like get_catalog (for browsing all packs) or query_dataset (for actual data queries), missing explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_dataset (Query Dataset): A
Read-only

Generic structured query for direct source_id or pack_id access using the same contract as POST /api/v1/query/dataset. Free packs: currency, hurricanes, un_sdg, volcanoes, world_factbook. Paid packs: earthquakes, tsunamis (x402 Base USDC).

Parameters (JSON Schema)

sort (optional): Optional sort instructions for row-returning queries.
limit (optional): Maximum number of rows to return for the requested source or pack.
output (optional): Optional output controls such as response format hints.
filters (optional): Structured filters including time, region_ids, and compare clauses.
metrics (optional): Metric ids to return. Use event_count for aggregate counts when supported.
pack_id (optional): Pack id such as 'currency', 'earthquakes', 'volcanoes', 'tsunamis', 'hurricanes', 'un_sdg', or 'world_factbook'.
source_id (optional): Concrete source id such as 'earthquakes_events', 'volcanoes_events', 'hurricanes_events', or 'un_sdg/01'.
request_id (optional): Optional caller-supplied request id for tracing and idempotency.
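
Because the tool mirrors the POST /api/v1/query/dataset contract, a structured query can be sketched as below. The source_id, limit, and request_id values follow the parameter descriptions above, but the inner shape of the filters object (keys such as time) is an assumption based on those descriptions rather than documented syntax.

import json

# Hypothetical MCP "tools/call" request for query_dataset against the
# free un_sdg pack. The structure inside "filters" is assumed, not documented.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "query_dataset",
        "arguments": {
            "source_id": "un_sdg/01",
            "filters": {"time": {"start": "2015", "end": "2023"}},
            "limit": 50,
            "request_id": "example-trace-001",
        },
    },
}
print(json.dumps(request, indent=2))
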
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotation readOnlyHint=true already indicates this is a safe read operation. The description adds useful context about the API contract and pack pricing, but doesn't disclose behavioral traits like rate limits, authentication requirements, pagination behavior, or error handling. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating the purpose and API contract, and another listing available packs. It's front-loaded with the core functionality. However, the pack listing could be more concise, and there's some redundancy in listing pack names that also appear in the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, nested objects, no output schema) and the annotation covering only read-only status, the description is moderately complete. It covers the purpose and available packs but lacks information about return values, error conditions, authentication, or performance characteristics that would be helpful for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 8 parameters thoroughly. The description adds minimal parameter semantics by mentioning 'direct source_id or pack_id access' and listing pack examples, but doesn't provide additional syntax, format details, or usage examples beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs a 'generic structured query for direct source_id or pack_id access' and references a specific API endpoint, providing a specific verb ('query') and resource ('dataset'). It distinguishes itself from siblings by emphasizing direct source_id or pack_id access rather than catalog browsing or pack inspection, but doesn't explicitly contrast itself with get_catalog or get_pack.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by listing available free and paid packs with pricing details, which helps determine when this tool is appropriate. However, it doesn't explicitly state when to use this tool versus alternatives like get_catalog or get_pack, nor does it provide exclusion criteria or prerequisites for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
