DaedalMap Tsunami Data

Server Details

Historical tsunami event data and structured tsunami queries from the DaedalMap MCP lane.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: xyver/daedal-map
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 7 of 7 tools scored.

Server Coherence: A

Disambiguation: 4/5

Most tools have distinct purposes targeting specific datasets (earthquakes, FX rates, tsunamis, volcanic activity), but 'get_catalog' and 'get_pack' both serve discovery functions, which could cause mild confusion. The 'query_dataset' tool is generic and overlaps with the specific get_* tools, though its description clarifies it's for direct source/pack access.

Naming Consistency: 5/5

All tool names follow a consistent 'get_*' or 'query_*' verb_noun pattern with snake_case, making them predictable and easy to parse. The naming convention is uniform across all seven tools, with no deviations in style or structure.

Tool Count: 5/5

With 7 tools, the count is well-scoped for a data query server covering multiple natural disaster and financial datasets. Each tool appears to serve a distinct query or discovery function, and the number is neither too sparse nor overwhelming for the domain.

Completeness: 4/5

The tool set provides comprehensive query coverage for the main datasets (earthquakes, tsunamis, volcanic activity, FX rates) and includes discovery tools for catalog and pack metadata. A minor gap is the lack of update, delete, or creation tools, but this is appropriate for a read-only data query service, and the generic 'query_dataset' offers flexibility for edge cases.

Available Tools (7)

get_catalog (Get Catalog): A
Read-only

Free discovery. Returns the list of live agent-ready data packs available on DaedalMap.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, so the agent knows this is a safe read operation. The description adds value by specifying 'live agent-ready data packs' (implying current, usable items) and 'Free discovery' (suggesting no cost or restrictions), but does not detail rate limits, authentication needs, or response format beyond the list.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and highly concise with two clear phrases ('Free discovery' and 'Returns the list...'), each earning its place by setting usage context and specifying the output without any wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only, no output schema), the description is mostly complete by stating the purpose and scope. However, it lacks details on the return format (e.g., structure of the list) or any limitations, which could be helpful despite the annotations covering safety.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description adds semantic context by implying no parameters are needed for this discovery operation, aligning with the schema, but does not compensate for any gaps since there are none.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Returns the list') and resource ('live agent-ready data packs available on DaedalMap'), distinguishing it from siblings like get_earthquake_events or query_dataset by focusing on catalog discovery rather than specific data retrieval or queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage ('Free discovery' implies it's for exploring available data packs), but does not explicitly state when not to use it or name alternatives among siblings, such as get_pack for specific pack details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_earthquake_events (Get Earthquake Events): A
Read-only

Paid x402 tool. Queries earthquakes_events. Use event_count for aggregate counts or event metrics for raw event rows.

Parameters (JSON Schema)
- sort (optional): Optional sort instructions for row-returning queries.
- limit (optional): Maximum number of rows to return. Use small limits for top-N queries such as largest event in a range.
- output (optional): Optional output controls such as response format hints.
- filters (required): Structured filters including time ranges, region_ids, and compare clauses.
- metrics (required): Metric ids to return, such as 'event_count' or event attributes like 'magnitude'.
- request_id (optional): Optional caller-supplied request id for tracing and idempotency.
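To make the query contract concrete, here is a minimal Python sketch of two payloads. The top-level keys (filters, metrics, limit, request_id) come from the parameter list above; the inner shape of filters (time, region_ids) is inferred from the parameter descriptions and should be treated as an assumption, not the server's documented wire format.

```python
# Hypothetical get_earthquake_events payloads. Top-level keys follow the
# parameter table; the inner shape of "filters" is an assumption.

# Aggregate query: count events in a time window.
count_payload = {
    "filters": {
        "time": {"start": "2004-01-01", "end": "2005-01-01"},  # assumed range shape
        "region_ids": ["JP"],                                   # assumed region code
    },
    "metrics": ["event_count"],          # aggregate count, per the description
    "request_id": "demo-quake-count-1",  # optional tracing/idempotency id
}

# Row query: raw event attributes with a small limit, as the limit
# description suggests for top-N lookups.
rows_payload = {
    "filters": {"time": {"start": "2004-01-01", "end": "2005-01-01"}},
    "metrics": ["magnitude"],
    "limit": 5,
}
```

Note that filters and metrics are the only required fields; everything else is optional per the schema.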
Behavior: 3/5

The annotation 'readOnlyHint: true' already indicates this is a safe read operation. The description adds value by mentioning it's a 'Paid x402 tool,' which implies potential cost or access restrictions, and clarifies the query nature. However, it lacks details on rate limits, authentication needs, or response behavior beyond the metrics hint.

Conciseness: 5/5

The description is extremely concise with just two sentences, front-loaded with key information (paid tool and query purpose), and every word earns its place without redundancy or fluff.

Completeness: 3/5

Given the tool's complexity (6 parameters, nested objects) and lack of output schema, the description is somewhat incomplete. It doesn't explain the return format, error handling, or how results are structured, relying heavily on the input schema. However, the annotations cover safety, and the purpose is clear, making it minimally adequate.

Parameters: 3/5

With 100% schema description coverage, the input schema already documents all 6 parameters thoroughly. The description adds minimal semantic value by referencing 'event_count' and 'event metrics' in the context of the 'metrics' parameter, but doesn't explain other parameters like 'filters' or 'output' beyond what the schema provides.

Purpose: 4/5

The description clearly states the tool 'queries earthquakes_events' and distinguishes it from sibling tools like 'get_tsunami_events' and 'get_volcanic_activity' by specifying the earthquake domain. However, it doesn't explicitly mention what specific data is returned beyond referencing metrics, making it slightly less specific than a perfect 5.

Usage Guidelines: 4/5

The description provides explicit guidance on when to use this tool vs. alternatives: 'Use event_count for aggregate counts or event metrics for raw event rows.' This helps differentiate it from potential sibling tools or internal options, though it doesn't name specific sibling tools or provide when-not-to-use scenarios.

get_fx_rates (Get FX Rates): B
Read-only

Free tool. Queries the currency pack using filters.region_ids plus filters.time.granularity to return daily, weekly, or monthly FX data.

Parameters (JSON Schema)
- sort (optional): Optional sort instructions for row-returning queries.
- limit (optional): Maximum number of rows to return for the requested granularity and time span.
- output (optional): Optional output controls such as response format hints.
- filters (required): Structured filters including region_ids with loc_id country codes, time range, and granularity.
- metrics (optional): Optional metric ids. Defaults to 'local_per_usd' for FX rate queries.
- request_id (optional): Optional caller-supplied request id for tracing and idempotency.
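A minimal sketch of an FX request, assuming the key paths named in the tool description (filters.region_ids and filters.time.granularity); the exact nesting and date formats are guesses, not documented behavior.

```python
# Hypothetical get_fx_rates payload: monthly FX rates for two countries.
# Key paths (filters.region_ids, filters.time.granularity) follow the tool
# description; the precise nesting and value formats are assumptions.
fx_payload = {
    "filters": {
        "region_ids": ["JP", "GB"],  # loc_id country codes, per the schema note
        "time": {
            "start": "2023-01-01",
            "end": "2023-12-31",
            "granularity": "monthly",  # daily | weekly | monthly, per the description
        },
    },
    # "metrics" is omitted: the schema says it defaults to 'local_per_usd'.
}
```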
Behavior: 3/5

Annotations declare readOnlyHint=true, which the description doesn't contradict. The description adds context about being a 'Free tool' and specifies data granularity (daily, weekly, monthly), but doesn't disclose rate limits, authentication needs, or response format details beyond what annotations provide. It adds some value but not rich behavioral context.

Conciseness: 4/5

The description is a single, efficient sentence that front-loads key information ('Free tool', 'queries the currency pack', 'return FX data'). It could be slightly more structured but wastes no words, earning its place with clear intent.

Completeness: 3/5

Given the tool's moderate complexity (6 parameters, nested objects) and annotations covering read-only safety, the description is adequate but has gaps. No output schema exists, and the description doesn't explain return values or error handling. It's complete enough for basic use but lacks depth for full contextual understanding.

Parameters: 3/5

Schema description coverage is 100%, so the schema already documents all 6 parameters. The description mentions filters.region_ids and filters.time.granularity, adding minimal meaning beyond the schema's 'Structured filters including region_ids with loc_id country codes, time range, and granularity.' Baseline 3 is appropriate as the schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool 'queries the currency pack' to 'return daily, weekly, or monthly FX data', specifying the verb (queries) and resource (currency pack/FX data). It distinguishes from siblings like get_catalog or get_earthquake_events by focusing on FX rates, but doesn't explicitly differentiate from get_pack which might be similar.

Usage Guidelines: 3/5

The description implies usage for retrieving FX data with filters, but provides no explicit guidance on when to use this tool versus alternatives like get_pack or query_dataset. It mentions 'Free tool' which hints at cost considerations, but lacks clear when/when-not scenarios or prerequisites.

get_pack (Get Pack): A
Read-only

Free discovery. Returns detailed metadata, coverage, metrics, and first-query guidance for one pack.

Parameters (JSON Schema)
- pack_id (required): Pack identifier such as 'currency', 'earthquakes', 'volcanoes', or 'tsunamis'.
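Since pack_id is the only input, a small guard helper is enough to illustrate usage; the set of valid ids below is taken from the schema's examples and may not be exhaustive.

```python
# Pack ids listed as examples in the pack_id parameter description; the real
# catalog (from get_catalog) is the authoritative list, so treat this set as
# illustrative only.
EXAMPLE_PACK_IDS = {"currency", "earthquakes", "volcanoes", "tsunamis"}

def make_get_pack_args(pack_id: str) -> dict:
    """Build the argument object for a hypothetical get_pack call."""
    # Guard against typos before spending a call.
    if pack_id not in EXAMPLE_PACK_IDS:
        raise ValueError(f"unknown pack_id: {pack_id}")
    return {"pack_id": pack_id}
```

In a typical discovery flow, an agent would call get_catalog first (no arguments) and then feed one of the returned pack ids into get_pack.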
Behavior: 3/5

Annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds context by specifying the return content (metadata, coverage, metrics, guidance) and noting 'Free discovery,' which implies no cost or restrictions. It doesn't disclose additional behavioral traits like rate limits or authentication needs, but with annotations covering safety, this is adequate.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads key information ('Free discovery') and clearly states the tool's function. Every word earns its place, with no redundancy or unnecessary elaboration.

Completeness: 4/5

Given the tool's low complexity (1 parameter, read-only, no output schema), the description is reasonably complete. It specifies what the tool returns, though it could benefit from more detail on output structure or examples. With annotations covering safety, it provides adequate context for agent use.

Parameters: 3/5

Schema description coverage is 100%, with the parameter 'pack_id' fully documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, such as examples or usage tips. Baseline 3 is appropriate when the schema handles parameter documentation.

Purpose: 4/5

The description clearly states the tool's purpose: 'Returns detailed metadata, coverage, metrics, and first-query guidance for one pack.' It specifies the verb ('Returns') and resource ('detailed metadata... for one pack'), though it doesn't explicitly differentiate from sibling tools like 'get_catalog' or 'query_dataset'.

Usage Guidelines: 3/5

The description implies usage context with 'Free discovery' and 'first-query guidance,' suggesting this is for initial exploration of a pack. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_catalog' (for listing packs) or 'query_dataset' (for querying data).

get_tsunami_events (Get Tsunami Events): B
Read-only

Paid x402 tool. Queries tsunamis_events for tsunami source events and related metrics.

ParametersJSON Schema
NameRequiredDescriptionDefault
sortNoOptional sort instructions for row-returning queries.
limitNoMaximum number of rows to return. Use small limits for largest-wave or latest-event queries.
outputNoOptional output controls such as response format hints.
filtersYesStructured filters including time ranges, region_ids, and compare clauses.
metricsYesMetric ids to return, such as 'event_count', 'max_water_height_m', or event attributes.
request_idNoOptional caller-supplied request id for tracing and idempotency.
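The limit description hints at largest-wave queries, which can be sketched as a sorted, limited row query. Only the top-level keys come from the schema; the sort object's shape is an assumption.

```python
# Hypothetical get_tsunami_events payload: the single largest recorded wave
# in a twenty-year window. The sort structure is assumed, not documented.
largest_wave = {
    "filters": {"time": {"start": "2000-01-01", "end": "2020-12-31"}},  # assumed shape
    "metrics": ["max_water_height_m"],                  # metric id named in the schema
    "sort": [{"by": "max_water_height_m", "dir": "desc"}],  # assumed sort shape
    "limit": 1,                                         # small limit, per the description
}
```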
Behavior: 3/5

The description adds some behavioral context beyond the readOnlyHint annotation. It specifies this queries for 'tsunami source events and related metrics', which gives domain context. However, it doesn't mention rate limits, authentication requirements, pagination behavior, or response format details. With annotations covering the read-only aspect, the description adds moderate but incomplete behavioral transparency.

Conciseness: 4/5

The description is appropriately concise at two sentences. The first sentence establishes the paid nature and core function, while the second specifies the target data and purpose. There's no wasted verbiage, though the structure could be slightly improved by front-loading the core functionality more clearly.

Completeness: 3/5

For a query tool with readOnlyHint annotation and comprehensive schema coverage, the description provides adequate but minimal context. It identifies the domain (tsunami events) and mentions it's a paid tool, but lacks information about typical use cases, response structure, or how this differs from sibling query tools. Without an output schema, some description of return values would be helpful.

Parameters: 3/5

With 100% schema description coverage, the schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions 'metrics' and 'filters' in a general sense but provides no additional syntax, format, or usage guidance. This meets the baseline for high schema coverage.

Purpose: 4/5

The description clearly states the tool 'queries tsunamis_events for tsunami source events and related metrics', providing a specific verb ('queries') and resource ('tsunamis_events'). It distinguishes from some siblings like 'get_fx_rates' or 'get_pack' by specifying the tsunami domain, but doesn't explicitly differentiate from similar query tools like 'get_earthquake_events' or 'query_dataset'.

Usage Guidelines: 2/5

The description provides minimal usage guidance. It mentions this is a 'Paid x402 tool' which implies cost considerations, but gives no explicit guidance on when to use this tool versus alternatives like 'get_earthquake_events' or 'query_dataset'. There's no mention of prerequisites, typical use cases, or when-not-to-use scenarios.

get_volcanic_activity (Get Volcanic Activity): B
Read-only

Free tool. Queries volcanoes_events for eruption records and volcanic activity metrics.

Parameters (JSON Schema)
- sort (optional): Optional sort instructions for row-returning queries.
- limit (optional): Maximum number of rows to return. Use small limits for top-N eruption lookups.
- output (optional): Optional output controls such as response format hints.
- filters (required): Structured filters including time ranges, region_ids, and compare clauses.
- metrics (required): Metric ids to return, such as 'event_count', 'VEI', or eruption attributes.
- request_id (optional): Optional caller-supplied request id for tracing and idempotency.
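The same pattern covers the top-N eruption lookups mentioned in the limit description; as before, only the top-level keys come from the schema and the sort shape is assumed.

```python
# Hypothetical get_volcanic_activity payload: five largest eruptions by VEI
# in the twentieth century. Filter and sort shapes are assumptions.
top_eruptions = {
    "filters": {"time": {"start": "1900-01-01", "end": "2000-01-01"}},  # assumed shape
    "metrics": ["VEI"],                      # metric id named in the schema
    "sort": [{"by": "VEI", "dir": "desc"}],  # assumed sort shape
    "limit": 5,                              # top-N lookup, per the limit description
}
```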
Behavior: 3/5

The annotation 'readOnlyHint: true' already indicates this is a safe read operation. The description adds some context with 'Free tool' (implying no cost) and mentions querying for 'eruption records and volcanic activity metrics', which clarifies the data scope. However, it lacks details on rate limits, authentication needs, or response behavior (e.g., pagination), which would be valuable beyond the annotation.

Conciseness: 4/5

The description is concise and front-loaded with the core purpose in two sentences. 'Free tool' is a useful upfront qualifier, and the second sentence clearly states the action and target. There's no wasted text, though it could be slightly more structured (e.g., bullet points for key features) for a perfect 5.

Completeness: 3/5

Given the tool's complexity (6 parameters, nested objects) and lack of output schema, the description is adequate but incomplete. It covers the basic purpose and data scope, but doesn't address output format, error handling, or usage nuances. With annotations providing safety info, it's minimally viable but leaves gaps for the agent to navigate.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description doesn't add any parameter-specific details beyond what's in the schema (e.g., it doesn't explain 'metrics' or 'filters' further). This meets the baseline of 3, as the schema carries the burden, but no extra value is provided.

Purpose: 4/5

The description clearly states the tool's purpose: 'Queries volcanoes_events for eruption records and volcanic activity metrics.' It specifies the verb ('queries'), resource ('volcanoes_events'), and scope ('eruption records and volcanic activity metrics'). However, it doesn't explicitly differentiate from sibling tools like 'get_earthquake_events' or 'query_dataset', which would require a 5.

Usage Guidelines: 2/5

The description provides minimal usage guidance. It only mentions 'Free tool' but doesn't indicate when to use this tool versus alternatives like 'get_earthquake_events' for seismic data or 'query_dataset' for general queries. There's no explicit when/when-not guidance or named alternatives, leaving the agent to infer context from tool names alone.

query_dataset (Query Dataset): A
Read-only

Generic structured query for direct source_id or pack_id access using the same contract as POST /api/v1/query/dataset. Currency and volcanoes are free; earthquakes and tsunamis are paid via x402.

Parameters (JSON Schema)
- sort (optional): Optional sort instructions for row-returning queries.
- limit (optional): Maximum number of rows to return for the requested source or pack.
- output (optional): Optional output controls such as response format hints.
- filters (optional): Structured filters including time, region_ids, and compare clauses.
- metrics (optional): Metric ids to return. Use event_count for aggregate counts when supported.
- pack_id (optional): Pack id such as 'currency', 'earthquakes', 'volcanoes', or 'tsunamis'.
- source_id (optional): Concrete source id such as 'earthquakes_events' or 'volcanoes_events'.
- request_id (optional): Optional caller-supplied request id for tracing and idempotency.
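Because the description says this tool shares its contract with POST /api/v1/query/dataset, the same payload should serialize directly to an HTTP body. In the sketch below the filter nesting is assumed, and volcanoes_events is one of the schema's example source ids (a free pack, per the description).

```python
import json

# Hypothetical query_dataset payload: direct source access to volcanoes_events.
# Top-level keys come from the parameter table; the inner "filters" shape is
# an assumption, not documented wire format.
payload = {
    "source_id": "volcanoes_events",   # concrete source id from the schema examples
    "filters": {"time": {"start": "1980-01-01", "end": "1990-01-01"}},  # assumed shape
    "metrics": ["event_count"],        # aggregate count, as the metrics note suggests
}

# The same JSON body would go to the HTTP endpoint named in the description.
body = json.dumps(payload)
```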
Behavior: 3/5

Annotations provide readOnlyHint=true, indicating safe read operations. The description adds valuable context beyond this: it specifies the API contract (POST /api/v1/query/dataset), mentions pricing details (free vs. paid packs via x402), and hints at data types (currency, volcanoes, earthquakes, tsunamis). This compensates for the lack of output schema and enriches behavioral understanding, though it doesn't cover all aspects like rate limits or error handling.

Conciseness: 4/5

The description is concise with two sentences that front-load key information: the query purpose and API contract, followed by pricing details. Every sentence adds value without redundancy. It could be slightly more structured by separating usage notes, but it efficiently communicates essential points without waste.

Completeness: 4/5

Given the tool's complexity (8 parameters, nested objects, no output schema) and annotations (readOnlyHint only), the description provides good contextual completeness. It covers the API contract, pricing, and data examples, which helps compensate for the lack of output schema. However, it doesn't fully address all behavioral aspects like response format or error cases, keeping it from a perfect score.

Parameters: 3/5

Schema description coverage is 100%, so parameters are well-documented in the schema itself. The description adds minimal parameter semantics by mentioning 'direct source_id or pack_id access' and examples like 'currency' and 'volcanoes', which loosely relate to pack_id. However, it doesn't provide significant additional meaning beyond what the schema already covers, maintaining the baseline of 3 for high schema coverage.

Purpose: 4/5

The description clearly states the tool performs a 'generic structured query for direct source_id or pack_id access', which specifies the verb (query) and resource (dataset). It distinguishes from siblings by mentioning direct access via IDs rather than specific endpoints like get_earthquake_events. However, it doesn't explicitly contrast with all siblings, keeping it at 4 instead of 5.

Usage Guidelines: 3/5

The description implies when to use this tool by mentioning 'direct source_id or pack_id access' and the API endpoint, suggesting it's for flexible querying rather than predefined endpoints like siblings. It also hints at pricing differences for certain data packs. However, it lacks explicit guidance on when to choose this over alternatives like get_catalog or specific event tools, making it implied rather than explicit.
