
OpenLandMap MCP Server

by tharlestsa

find_overlapping_datasets

Identify datasets with overlapping time periods for correlation analysis in multi-variable studies. Input a collection ID and year range to find matching datasets.

Instructions

Find datasets with temporal overlap for correlation analysis.

Given a collection and a year range, finds all other collections that have data within that period. Useful for multi-variable studies.

Args:
  collection_id: Reference collection identifier.
  start_year: Start year (inclusive).
  end_year: End year (inclusive).

Returns: List of OverlapInfo dicts with overlapping period details.

Example: find_overlapping_datasets("organic.carbon_usda.6a1c", 2010, 2020)
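
A minimal sketch of how a client or agent might invoke this tool, assuming a standard MCP tools/call request; the argument types (string collection ID, integer years) are inferred from the example above rather than confirmed by the published schema.

import json

# Hypothetical MCP "tools/call" payload for find_overlapping_datasets.
# The argument values mirror the docstring example; the types are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "find_overlapping_datasets",
        "arguments": {
            "collection_id": "organic.carbon_usda.6a1c",
            "start_year": 2010,
            "end_year": 2020,
        },
    },
}

print(json.dumps(request, indent=2))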

Input Schema

Name            Required    Description    Default
collection_id   Yes
start_year      Yes
end_year        Yes
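
The JSON Schema view is not reproduced on this page; a plausible reconstruction as a Python dict, assuming string and integer types inferred from the docstring (the actual schema may differ and, per the review below, carries no property descriptions):

# Assumed input schema; property types are inferred, not confirmed.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "collection_id": {"type": "string"},
        "start_year": {"type": "integer"},
        "end_year": {"type": "integer"},
    },
    "required": ["collection_id", "start_year", "end_year"],
}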

Output Schema

Name       Required    Description    Default
result     Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool 'finds all other collections that have data within that period' but does not disclose key behavioral details: whether the operation is read-only, potential rate limits, authentication requirements, or how invalid inputs are handled. The description adds useful context about temporal overlap, yet falls short of the disclosure needed for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose in the first sentence. Every section (purpose, usage context, parameters, returns, example) earns its place by adding specific value without redundancy. The structure is logical and efficient, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (three parameters, no annotations, but an output schema is present), the description is reasonably complete. It covers purpose, usage context, parameter semantics, return values, and provides an example. Because the output schema exists, the description doesn't need to explain return values in detail. However, it could be more complete by addressing behavioral aspects such as error conditions or performance characteristics.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It successfully adds meaning beyond the bare schema by explaining each parameter's purpose: 'collection_id: Reference collection identifier', 'start_year: Start year (inclusive)', 'end_year: End year (inclusive)'. The description also clarifies the temporal relationship ('within that period') and provides an example with concrete values, though it doesn't explain parameter constraints or formats beyond basic types.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb and resource ('Find datasets with temporal overlap for correlation analysis' against 'collections'). It distinguishes itself from sibling tools such as 'compare_collections' or 'find_related_collections' by focusing specifically on temporal overlap rather than general comparison or relationship discovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('Useful for multi-variable studies') and implies appropriate usage through its account of what the tool does. However, it doesn't explicitly state when NOT to use it or name specific alternatives from the sibling list, such as 'compare_collections' or 'find_related_collections', which might serve similar but distinct purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tharlestsa/openlandmap_mcp'
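
The same lookup can be scripted; a minimal Python sketch using the requests library, with the endpoint copied verbatim from the curl command above (assuming the API returns JSON):

import requests

# Fetch this server's directory entry from the Glama MCP API.
response = requests.get(
    "https://glama.ai/api/mcp/v1/servers/tharlestsa/openlandmap_mcp",
    timeout=30,
)
response.raise_for_status()
print(response.json())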

If you have feedback or need assistance with the MCP directory API, please join our Discord server.