US Government Open Data MCP

usgs_water_sites

Search USGS water monitoring sites by state, county, or hydrologic unit to access stream, groundwater, lake, and spring data for analysis and research.

Instructions

Search for USGS water monitoring sites by state, county, or hydrologic unit. Site types: ST=stream, GW=groundwater, LK=lake, SP=spring.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| state_cd | No | Two-letter state code: 'CA', 'TX' | |
| county_cd | No | County FIPS code | |
| site_type | No | Site type: ST (stream), GW (groundwater), LK (lake), SP (spring) | |
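
To make the schema concrete, here is a hypothetical invocation. The envelope follows the standard MCP JSON-RPC tools/call shape; the argument values are illustrative only, not defaults:

```python
# Hypothetical MCP tools/call payload for usgs_water_sites, written as a
# Python dict. Envelope fields follow the MCP JSON-RPC convention; the
# argument values are examples chosen for this sketch.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "usgs_water_sites",
        "arguments": {
            "state_cd": "CA",        # two-letter state code
            "site_type": "ST",       # ST = stream
            # "county_cd": "06037",  # optional county FIPS (Los Angeles County, CA)
        },
    },
}
```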
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It says what the tool does but doesn't reveal important behavioral traits: whether the operation is read-only, what the response format looks like, or whether there are rate limits, authentication requirements, or pagination. For a search tool with no annotation coverage, these are significant gaps in behavioral transparency.
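
One low-cost fix, sketched below: the MCP specification defines optional tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that could declare these traits explicitly. The values here are assumptions about how a read-only search over public data presumably behaves, not anything this server actually declares:

```python
# Annotations the server could attach to usgs_water_sites. The field names
# come from the MCP ToolAnnotations spec; the values are assumptions about
# a read-only search over public USGS data.
annotations = {
    "readOnlyHint": True,      # searching does not modify any state
    "destructiveHint": False,  # no destructive updates are possible
    "idempotentHint": True,    # repeating a query has no additional effect
    "openWorldHint": True,     # the tool talks to an external service
}
```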

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with only two sentences that directly communicate essential information. The first sentence states the purpose and main search parameters, while the second provides the site type codes. There's zero wasted language, and the most important information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no annotations and no output schema, the description is incomplete for a search tool. It doesn't explain what the tool returns (a list of sites? site details? metadata?), how results are structured, whether result counts are capped, or how to use the search parameters effectively. For a tool with three parameters and no structured output information, the description should provide more context about expected behavior and results.
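
The server doesn't document its backend, but a site search with these parameters maps naturally onto the public USGS Water Services site endpoint. As a sketch of what the tool presumably does, and of the RDB-format rows an agent would otherwise have to interpret blind (assuming that mapping holds):

```python
# Sketch of an equivalent direct query, assuming usgs_water_sites wraps the
# public USGS Water Services site endpoint. Parameter names below are the
# USGS API's (stateCd, siteType), not the tool's (state_cd, site_type).
import requests

resp = requests.get(
    "https://waterservices.usgs.gov/nwis/site/",
    params={"format": "rdb", "stateCd": "CA", "siteType": "ST"},
    timeout=30,
)
resp.raise_for_status()

# RDB output is tab-delimited text: '#'-prefixed comment lines, then a
# header row, then one row per monitoring site (agency code, site number,
# station name, coordinates, ...).
rows = [line for line in resp.text.splitlines() if not line.startswith("#")]
print(rows[0])  # column headers, e.g. agency_cd, site_no, station_nm, ...
print(rows[2])  # first data row (rows[1] is RDB's field-width line)
```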

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters in full. The description adds minimal value beyond it: the site type codes (ST, GW, LK, SP) merely restate the enum, and 'hydrologic unit' is offered as a search criterion even though no such parameter exists in the schema. This meets the baseline expectation when schema coverage is high.
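
If 'hydrologic unit' is meant to be a usable criterion, the schema needs a matching property. A hypothetical addition is sketched below, with the name 'huc' borrowed from the USGS Water Services query parameter; the wording is an assumption, not the server's:

```python
# Hypothetical JSON Schema property that would back the description's
# "hydrologic unit" claim; "huc" mirrors the USGS query parameter name.
huc_property = {
    "huc": {
        "type": "string",
        "description": (
            "Hydrologic unit code, from a 2-digit region ('01') "
            "down to an 8-digit subbasin ('01010001')"
        ),
    }
}
```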

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose, 'Search for USGS water monitoring sites', names the search criteria (state, county, hydrologic unit), and lists the site type codes. It uses a specific verb ('Search') and identifies the resource ('USGS water monitoring sites'). However, it doesn't explicitly differentiate itself from sibling tools, which appear to be unrelated data sources rather than alternative water site search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions search parameters but doesn't indicate prerequisites, limitations, or relationships with other tools. Given the extensive list of sibling tools covering various data domains, the description fails to help an agent determine when this specific water monitoring tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
