
Server Details

USGS Water MCP — wraps USGS National Water Information System (NWIS) REST services (free, no auth)

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-usgswater
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.3/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_current retrieves instantaneous data for a specific site, get_daily provides aggregated daily data over a date range, and search_sites finds sites in a state. There is no overlap or ambiguity in functionality, making tool selection straightforward for an agent.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_current, get_daily, search_sites), using snake_case and clear verbs that describe the action. This predictability enhances usability and reduces confusion.

Tool Count: 4/5

With 3 tools, the server is well-scoped for accessing USGS water data, covering key operations like retrieving current and historical data and searching for sites. It is slightly lean but reasonable, as additional tools might not be necessary for basic functionality.

Completeness: 4/5

The tool set covers core workflows for USGS water data: searching for sites and fetching both real-time and historical streamflow data. Minor gaps exist, such as no tools for updating or deleting data (though likely not needed) or accessing other water parameters, but agents can effectively work with the provided operations.

Available Tools

3 tools
get_current: B

Get current instantaneous streamflow (discharge, cfs) and gage height (ft) for a USGS monitoring site.

Parameters (JSON Schema)

site_id (required): USGS site number (e.g., "01646500" for Potomac River at Little Falls, MD)
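
Since the server wraps the public NWIS REST services, a get_current call presumably resolves to the Instantaneous Values endpoint. Below is a minimal Python sketch of that underlying request, assuming the `requests` library; the exact query the server builds, including the parameter codes (00060 for discharge in cfs, 00065 for gage height in ft), is an assumption based on the tool's description.

```python
import requests

# Sketch of the NWIS Instantaneous Values request that get_current
# presumably wraps; the exact query the server builds is an assumption.
NWIS_IV = "https://waterservices.usgs.gov/nwis/iv/"

def get_current(site_id: str) -> dict:
    """Fetch the most recent instantaneous readings for one site."""
    resp = requests.get(NWIS_IV, params={
        "format": "json",
        "sites": site_id,
        "parameterCd": "00060,00065",  # discharge (cfs) + gage height (ft)
    }, timeout=30)
    resp.raise_for_status()
    readings = {}
    for series in resp.json()["value"]["timeSeries"]:
        name = series["variable"]["variableName"]
        latest = series["values"][0]["value"][-1]  # newest data point
        readings[name] = (latest["value"], latest["dateTime"])
    return readings

print(get_current("01646500"))  # Potomac River at Little Falls, MD
```

With no date parameters, the IV service returns only the most recent reading per parameter, which matches the tool's "current instantaneous" framing.
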
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states what data is retrieved but doesn't disclose behavioral traits like rate limits, authentication needs, response format, error conditions, or whether this is a read-only operation. The description doesn't contradict annotations (none exist), but it lacks essential operational context for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality ('Get current instantaneous streamflow...'). Every word earns its place by specifying data types, units, and scope without redundancy. It's appropriately sized for a simple tool with one parameter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no nested objects) and 100% schema coverage, the description is minimally adequate. However, with no annotations and no output schema, it should ideally provide more behavioral context (e.g., response structure, error handling). The description covers the basic purpose but leaves operational details unspecified, which could hinder an agent's ability to use it correctly without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'site_id' well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, such as format examples or constraints. With schema coverage this high, the baseline score of 3 is appropriate: the description doesn't compensate for anything, but it also doesn't need to, given the schema's completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get current instantaneous') and resources ('streamflow (discharge, cfs) and gage height (ft) for a USGS monitoring site'). It implicitly distinguishes itself from the sibling tools by specifying 'current instantaneous' data rather than daily aggregates (get_daily) or site searches (search_sites), but it never draws that contrast explicitly in the text.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'current instantaneous' data, suggesting this tool is for real-time measurements rather than historical or search functions. However, it doesn't provide explicit guidance on when to use this versus alternatives like get_daily for daily averages or search_sites for finding sites. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_daily: A

Get daily mean streamflow values for a USGS site over a date range. Dates must be in YYYY-MM-DD format.

Parameters (JSON Schema)

site_id (required): USGS site number
start (required): Start date in YYYY-MM-DD format
end (required): End date in YYYY-MM-DD format
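
The date-range retrieval presumably maps onto the NWIS Daily Values service. A hedged sketch follows; the statistic code (00003 for daily mean) matches the 'daily mean' wording in the description, but the server's actual query is an assumption.

```python
import requests

# Sketch of the NWIS Daily Values query get_daily presumably wraps.
# statCd 00003 selects the daily mean; the exact request is an assumption.
NWIS_DV = "https://waterservices.usgs.gov/nwis/dv/"

def get_daily(site_id: str, start: str, end: str) -> list[tuple[str, str]]:
    """Return (date, mean discharge in cfs) pairs for the range."""
    resp = requests.get(NWIS_DV, params={
        "format": "json",
        "sites": site_id,
        "startDT": start,        # YYYY-MM-DD, as the description requires
        "endDT": end,
        "parameterCd": "00060",  # discharge
        "statCd": "00003",       # daily mean
    }, timeout=30)
    resp.raise_for_status()
    series = resp.json()["value"]["timeSeries"]
    if not series:
        return []  # unknown site or no data in range
    points = series[0]["values"][0]["value"]
    return [(p["dateTime"][:10], p["value"]) for p in points]

print(get_daily("01646500", "2024-01-01", "2024-01-07"))
```
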
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the date format requirement (YYYY-MM-DD), which is useful behavioral context, but doesn't mention other important traits like rate limits, authentication needs, error handling, or what happens with invalid dates or sites. The description doesn't contradict any annotations (none exist).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences that are front-loaded with the core purpose followed by an important constraint. Every word earns its place with no redundant information or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only data retrieval tool with no annotations and no output schema, the description provides adequate but minimal context. It covers the purpose and date format requirement, but doesn't explain what the return values look like (e.g., data structure, units) or address potential limitations. Given the tool's relative simplicity, this is minimally viable but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all three parameters. The description adds the date format constraint (YYYY-MM-DD) which provides additional semantic context beyond the schema's parameter descriptions. This meets the baseline expectation when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get daily mean streamflow values'), resource ('for a USGS site'), and scope ('over a date range'). It distinguishes from sibling tools like 'get_current' (which likely provides current data) and 'search_sites' (which searches for sites rather than retrieving data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving historical daily mean streamflow data, but doesn't explicitly state when to use this tool versus alternatives like 'get_current' (for real-time data) or 'search_sites' (for finding sites). No explicit when-not-to-use guidance or prerequisites are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_sites: C

Find active USGS stream-gage sites in a US state that have real-time instantaneous data.

Parameters (JSON Schema)

state (required): Two-letter US state abbreviation (e.g., "VA", "CA", "TX")
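
State-level discovery presumably maps onto the NWIS Site service, which returns tab-delimited RDB text rather than JSON. Here is a sketch under that assumption; the filter combination (active stream sites with instantaneous data) is inferred from the tool's description.

```python
import requests

# Sketch of the NWIS Site service query search_sites presumably wraps.
# The filters (siteType=ST, hasDataTypeCd=iv) are inferred from the
# description: active stream-gage sites with real-time data.
NWIS_SITE = "https://waterservices.usgs.gov/nwis/site/"

def search_sites(state: str) -> list[tuple[str, str]]:
    """Return (site_no, station_name) pairs for active stream gages."""
    resp = requests.get(NWIS_SITE, params={
        "format": "rdb",          # the Site service has no JSON output
        "stateCd": state,         # two-letter abbreviation, e.g. "VA"
        "siteType": "ST",         # streams only
        "siteStatus": "active",
        "hasDataTypeCd": "iv",    # has instantaneous (real-time) data
    }, timeout=30)
    resp.raise_for_status()
    rows = [r for r in resp.text.splitlines() if r and not r.startswith("#")]
    header = rows[0].split("\t")
    no, name = header.index("site_no"), header.index("station_nm")
    sites = []
    for row in rows[2:]:          # rows[1] is the RDB column-format line
        cols = row.split("\t")
        sites.append((cols[no], cols[name]))
    return sites

for site in search_sites("VA")[:5]:
    print(site)
```
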
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'active' sites and 'real-time instantaneous data,' which hints at read-only behavior, but doesn't clarify permissions, rate limits, or response format. This leaves significant gaps for a tool that likely queries a database.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose without unnecessary words. It is front-loaded with key information, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., list of sites, error handling) or behavioral aspects like data freshness or limitations, which are crucial for a search tool with real-time data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the 'state' parameter. The description adds no additional parameter semantics beyond what the schema provides, such as format details or constraints, so it meets the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find active USGS stream-gage sites') and resources ('in a US state that have real-time instantaneous data'). It distinguishes the type of sites (active with real-time data) but doesn't explicitly differentiate from sibling tools like 'get_current' or 'get_daily', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling tools ('get_current' and 'get_daily'). It implies usage for finding sites with real-time data but offers no explicit when-to-use or when-not-to-use instructions and no alternative recommendations, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
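
Taken together, the three tools support the discover-then-read workflow the coherence score describes. A hypothetical end-to-end sketch, reusing the helper functions from the sketches above:

```python
# Chain the three operations: find active VA gages, read the first one's
# current values, then pull a week of daily means (helpers sketched above).
sites = search_sites("VA")
site_no, station_name = sites[0]
print(station_name, get_current(site_no))
print(get_daily(site_no, "2024-01-01", "2024-01-07"))
```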
