
DaedalMap Historical FX Rates

Server Details

Historical foreign exchange rates and currency comparisons from the DaedalMap MCP lane.

Status: Healthy
Transport: Streamable HTTP
Repository: xyver/daedal-map
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.5/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 4/5

The three tools have distinct purposes: get_catalog lists available packs, get_pack returns metadata for one pack, and get_fx_rates retrieves actual FX rate data. Each targets a different use case, though get_catalog and get_pack both involve pack discovery, which could cause minor confusion.

Naming Consistency: 4/5

All tool names follow a consistent verb_noun pattern with underscores ('get_catalog', 'get_fx_rates', 'get_pack'). 'get' is used across all, and nouns are specific. Minor deviation: 'get_fx_rates' uses 'fx_rates' while others use 'catalog' and 'pack', but still clear.

Tool Count: 4/5

With 3 tools, the count is minimal but appropriate for a focused domain of discovering and querying FX data packs. It is not too thin, because the tools cover the core workflow: discover packs, inspect one pack, and fetch rates. The set could still benefit from a metadata-only query tool.

Completeness: 3/5

The tools cover the basic lifecycle: discovery (catalog and pack details) and data retrieval (rates). However, there is no tool for listing specific currencies, filtering by date range beyond granularity, or performing other common operations like metadata-only queries, which may limit agents.

Available Tools

3 tools
get_catalog (Get Catalog): B
Read-only

Free discovery. Returns the list of live agent-ready data packs available on DaedalMap.

Parameters

No parameters.
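Since get_catalog takes no parameters, the MCP tools/call request for it is minimal. A sketch of the JSON-RPC 2.0 payload an MCP client would send over the Streamable HTTP transport (the envelope shape follows the MCP specification; the id value is arbitrary):

```python
import json

# JSON-RPC 2.0 envelope for an MCP tools/call request.
# get_catalog takes no arguments, so "arguments" is an empty object.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_catalog",
        "arguments": {},
    },
}

print(json.dumps(payload, indent=2))
```

In practice an MCP client library builds this envelope for you; the sketch only shows what crosses the wire for a zero-parameter tool.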

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds some behavioral context beyond the annotations: it specifies that the list includes 'live agent-ready data packs' and implies a discovery function. The annotations already declare readOnlyHint=true, so the agent knows it's a safe read operation. However, the description doesn't disclose details like rate limits, authentication needs, or pagination behavior, leaving gaps in behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded with essential information in just two sentences. Every word earns its place: 'Free discovery' sets the context, and 'Returns the list...' clearly states the action and resource. There is no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only, no output schema), the description is adequate but not fully complete. It explains what the tool does but lacks details on output format (e.g., structure of the returned list), error handling, or how it integrates with siblings. For a basic listing tool, this is minimally viable but leaves room for improvement in contextual guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Since there are 0 parameters and schema description coverage is 100%, the baseline is high. The description doesn't need to explain parameters, but it does clarify that this is a 'Free discovery' tool with no inputs required, which aligns with the empty schema. No additional parameter semantics are needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('list of live agent-ready data packs available on DaedalMap'), making it easy to understand what it does. However, it doesn't explicitly differentiate itself from sibling tools like 'get_pack' or 'query_dataset', which might have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance on when to use this tool. It mentions 'Free discovery' which implies a general-purpose listing function, but offers no explicit advice on when to choose this over alternatives like 'get_pack' (which might retrieve specific packs) or 'query_dataset' (which might allow filtering). No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_fx_rates (Get FX Rates): B
Read-only

Free tool. Queries the currency pack using filters.region_ids plus filters.time.granularity to return daily, weekly, or monthly FX data.

Parameters

- sort (optional): Sort instructions for row-returning queries.
- limit (optional): Maximum number of rows to return for the requested granularity and time span.
- output (optional): Output controls such as response format hints.
- filters (required): Structured filters including region_ids with loc_id country codes, time range, and granularity.
- metrics (optional): Metric ids; defaults to 'local_per_usd' for FX rate queries.
- request_id (optional): Caller-supplied request id for tracing and idempotency.
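The parameter list above can be sketched as a small argument builder. Note the hedges: only filters, region_ids, time, and granularity are named in the documentation; the nested start/end key names and the "USA"/"JPN" loc_id values are assumptions for illustration, not confirmed field names.

```python
# Sketch of a get_fx_rates arguments object. The "start"/"end" keys inside
# "time" and the "USA"/"JPN" region codes are assumed for illustration.
ALLOWED_GRANULARITIES = {"daily", "weekly", "monthly"}


def build_fx_args(region_ids, start, end, granularity,
                  metrics=None, limit=None):
    """Build a get_fx_rates arguments dict. 'filters' is the only required
    parameter; metrics defaults server-side to 'local_per_usd'."""
    if granularity not in ALLOWED_GRANULARITIES:
        raise ValueError(
            f"granularity must be one of {sorted(ALLOWED_GRANULARITIES)}")
    args = {
        "filters": {
            "region_ids": region_ids,
            "time": {"start": start, "end": end, "granularity": granularity},
        }
    }
    if metrics is not None:
        args["metrics"] = metrics
    if limit is not None:
        args["limit"] = limit
    return args


# Example: monthly USD/JPY-style rates for H1 2024, capped at 100 rows.
args = build_fx_args(["USA", "JPN"], "2024-01-01", "2024-06-30",
                     "monthly", limit=100)
```

Keeping optional keys out of the payload entirely (rather than sending nulls) lets the server apply its own defaults, such as the documented 'local_per_usd' metric.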
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by stating 'Free tool,' which implies no authentication or payment requirements, and specifies the data type ('FX data') and granularity options ('daily, weekly, or monthly'). However, it doesn't disclose rate limits, error handling, or data freshness, which are important behavioral traits beyond the annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences: the first states the tool's free nature and core functionality, and the second specifies key filters and output. It's front-loaded with essential information and avoids redundancy. However, the second sentence could be slightly more structured for clarity, and there's minor room for improvement in flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (6 parameters, nested objects, no output schema) and annotations covering safety, the description is adequate but has gaps. It explains the purpose and key filters but doesn't cover output format, error cases, or how results are structured. With no output schema, more detail on return values would be beneficial, making it minimally complete but not fully comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds minimal semantics by mentioning 'filters.region_ids' and 'filters.time.granularity,' which aligns with the schema's filter description. It doesn't provide additional details like format examples or default behaviors beyond the schema, so it meets the baseline for high coverage without significant enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Queries the currency pack using filters.region_ids plus filters.time.granularity to return daily, weekly, or monthly FX data.' It specifies the verb ('queries'), resource ('currency pack'), and output type ('FX data'), distinguishing it from siblings like get_earthquake_events or get_volcanic_activity. However, it doesn't explicitly differentiate from get_catalog or get_pack, which might also query data packs, so it's not fully sibling-distinctive.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'filters.region_ids' and 'filters.time.granularity', suggesting it's for filtered FX rate queries. It starts with 'Free tool,' which hints at no cost constraints. However, it lacks explicit guidance on when to use this tool versus alternatives like query_dataset or get_pack, and doesn't specify prerequisites or exclusions, leaving usage somewhat open to interpretation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_pack (Get Pack): A
Read-only

Free discovery. Returns detailed metadata, coverage, metrics, and first-query guidance for one pack.

Parameters

- pack_id (required): Pack identifier such as 'currency', 'earthquakes', 'volcanoes', 'tsunamis', 'hurricanes', 'un_sdg', or 'world_factbook'.
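Because pack_id is the only parameter and its known values are enumerated in the schema description above, a caller can validate it client-side before issuing the call. A minimal sketch (the pack id set is taken verbatim from the parameter description; the helper name is hypothetical):

```python
# Known pack_id values, copied from the pack_id parameter description.
KNOWN_PACKS = {
    "currency", "earthquakes", "volcanoes", "tsunamis",
    "hurricanes", "un_sdg", "world_factbook",
}


def build_pack_args(pack_id):
    """Arguments for get_pack; pack_id is the sole (required) parameter."""
    if pack_id not in KNOWN_PACKS:
        raise ValueError(f"unknown pack_id: {pack_id!r}")
    return {"pack_id": pack_id}


# The FX workflow would fetch the 'currency' pack's metadata first.
args = build_pack_args("currency")
```

This mirrors the workflow the coherence review describes: get_catalog to discover packs, get_pack to inspect one, then get_fx_rates to fetch data.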
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds context by specifying it returns metadata and guidance, which is useful beyond the annotation. However, it does not disclose other behavioral traits like rate limits, authentication needs, or error handling. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads key information ('Free discovery. Returns detailed metadata...'). It avoids unnecessary words and clearly communicates the core functionality without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, read-only, no output schema), the description is reasonably complete. It covers the purpose and output types (metadata, coverage, metrics, guidance). However, it could be more specific about the return format or how it differs from siblings to enhance completeness for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'pack_id' fully documented in the schema. The description does not add any additional meaning or details about the parameter beyond what the schema provides, such as examples or usage tips. Baseline score of 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns detailed metadata, coverage, metrics, and first-query guidance for one pack.' It specifies the verb ('Returns') and resource ('detailed metadata... for one pack'), but does not explicitly distinguish it from sibling tools like 'get_catalog' or 'query_dataset', which might have overlapping functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning 'Free discovery' and 'first-query guidance', suggesting it's for initial exploration of a pack. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_catalog' (which might list packs) or 'query_dataset' (which might query data within packs). No exclusions or prerequisites are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
