USGS Earthquakes
Server Details
Real-time earthquake events from the US Geological Survey
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.4/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose: get_earthquake_detail retrieves specific event details by ID, get_earthquakes_near searches by geographic location with radius parameters, and get_recent_earthquakes fetches global recent events sorted by time. There is no overlap or ambiguity between these three functions.
All tool names follow a consistent verb_noun pattern with 'get_' prefix and descriptive suffixes (detail, earthquakes_near, recent_earthquakes). The naming is uniform, predictable, and clearly indicates the action and target resource.
Three tools are reasonable for a focused earthquake data server, covering key use cases: detail lookup, location-based search, and global monitoring. However, it feels slightly thin as it lacks tools for broader queries (e.g., by date range or magnitude range without location) or additional data like tsunami alerts, which might be expected in this domain.
The tool set covers essential earthquake data retrieval: specific event details, local searches, and global recent events. Minor gaps exist, such as no tools for historical data beyond recent days, filtering by region without coordinates, or accessing related data like tectonic plate information, but agents can work around these with the provided tools.
Available Tools
3 tools

get_earthquake_detail
Get full details of a specific earthquake event by its USGS event ID.
Returns comprehensive information including magnitude, location, depth,
felt reports, tsunami status, and tectonic summary when available.
Args:
event_id: The USGS event ID (e.g. 'us7000m1xh'). Get IDs from search results.

| Name | Required | Description | Default |
|---|---|---|---|
| event_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
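The connector's implementation isn't shown on this page, but since the data comes from the US Geological Survey, a detail lookup like this one presumably maps onto the public USGS FDSN Event API. The sketch below is an illustration of the equivalent query, not the connector's actual code; the endpoint and `eventid` parameter belong to that public API.

```python
from urllib.parse import urlencode

# Public USGS FDSN Event service (assumed backend for this tool).
USGS_EVENT_API = "https://earthquake.usgs.gov/fdsnws/event/1/query"

def detail_url(event_id: str) -> str:
    # Compose a GeoJSON detail query for a single USGS event ID,
    # matching the ID format the tool documents (e.g. 'us7000m1xh').
    return f"{USGS_EVENT_API}?{urlencode({'format': 'geojson', 'eventid': event_id})}"

print(detail_url("us7000m1xh"))
```

Fetching that URL returns a GeoJSON feature whose `properties` carry the magnitude, depth, felt reports, and tsunami fields the description mentions.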
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes comprehensive return data fields and availability caveats ('when available'), though omits traits like idempotency or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Compact, front-loaded purpose statement followed by return value details and Args section; no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter lookup tool; mentions key return fields even though output schema exists, reinforcing agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema coverage by providing concrete example ('us7000m1xh') and provenance hint ('Get IDs from search results').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb (Get full details) and resource (earthquake event by USGS ID), distinguishing it from sibling list/search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies workflow ('Get IDs from search results') but lacks explicit when/when-not guidance comparing to get_recent_earthquakes or get_earthquakes_near.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_earthquakes_near
Get earthquakes near a specific geographic location.
Searches for earthquakes within a radius of the given coordinates.
Useful for assessing local seismic risk or investigating felt reports.
Args:
latitude: Latitude of the center point (e.g. 37.7749 for San Francisco).
longitude: Longitude of the center point (e.g. -122.4194 for San Francisco).
radius_km: Search radius in kilometers (default 100, max 20001).
min_magnitude: Minimum magnitude to include (default 1.0).
days: Number of days to look back (default 30, max 365).

| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| latitude | Yes | | |
| longitude | Yes | | |
| radius_km | No | | |
| min_magnitude | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
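A radius search with these parameters has a direct counterpart in the public USGS FDSN Event API (`latitude`, `longitude`, `maxradiuskm`, `minmagnitude`, `starttime`). As a hedged sketch of what such a query looks like, assuming the connector wraps that API and using the tool's documented defaults:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

USGS_EVENT_API = "https://earthquake.usgs.gov/fdsnws/event/1/query"

def near_url(latitude: float, longitude: float,
             radius_km: float = 100, min_magnitude: float = 1.0,
             days: int = 30) -> str:
    # Translate the tool's look-back window into the API's ISO start time.
    start = datetime.now(timezone.utc) - timedelta(days=days)
    params = {
        "format": "geojson",
        "latitude": latitude,
        "longitude": longitude,
        "maxradiuskm": radius_km,      # tool caps this at 20001 km
        "minmagnitude": min_magnitude,
        "starttime": start.strftime("%Y-%m-%dT%H:%M:%S"),
        "orderby": "time",             # newest first
    }
    return f"{USGS_EVENT_API}?{urlencode(params)}"

# The San Francisco example from the Args section:
print(near_url(37.7749, -122.4194))
```

Coordinates are decimal degrees (WGS84), which matches the examples given in the tool's Args section.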
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description compensates by disclosing constraints beyond schema (max 20,001 km radius, max 365 days) and defaults, though omits rate limits or auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose followed by mechanics, use cases, and Args section; every sentence adds value beyond structured data, though 'Args:' format is slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given output schema exists and parameters are simple scalars, the description adequately covers inputs; could briefly mention coordinate system (WGS84 implied) but omissions are minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage: provides concrete examples (37.7749 for San Francisco), units (kilometers), and constraint boundaries not present in structured schema fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb-resource pair ('Get earthquakes near a specific geographic location') clearly distinguishes this location-based search from siblings 'get_recent_earthquakes' (time-based) and 'get_earthquake_detail' (specific event retrieval).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit use cases ('assessing local seismic risk or investigating felt reports') but lacks explicit 'when not to use' guidance or comparison to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_recent_earthquakes
Get recent earthquakes worldwide above a minimum magnitude.
Returns earthquake events from the USGS catalog sorted by time (newest first).
Useful for monitoring seismic activity globally.
Args:
min_magnitude: Minimum magnitude to include (default 2.5). Use 4.5+ for significant quakes only.
days: Number of days to look back (default 7, max 30).
limit: Maximum number of earthquakes to return (default 50, max 500).

| Name | Required | Description | Default |
|---|---|---|---|
| days | No | | |
| limit | No | | |
| min_magnitude | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
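The global monitoring tool, too, maps naturally onto the public USGS FDSN Event API; a worldwide query simply omits the location filters. This is an illustrative sketch under that assumption, using the documented defaults (magnitude 2.5, 7 days, 50 results):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

USGS_EVENT_API = "https://earthquake.usgs.gov/fdsnws/event/1/query"

def recent_url(min_magnitude: float = 2.5, days: int = 7,
               limit: int = 50) -> str:
    # Worldwide query: no latitude/longitude/radius, just time and
    # magnitude filters, sorted newest first as the tool describes.
    start = datetime.now(timezone.utc) - timedelta(days=days)
    params = {
        "format": "geojson",
        "minmagnitude": min_magnitude,
        "starttime": start.strftime("%Y-%m-%dT%H:%M:%S"),
        "limit": limit,                # tool caps this at 500
        "orderby": "time",
    }
    return f"{USGS_EVENT_API}?{urlencode(params)}"

# The description's suggested threshold for significant quakes only:
print(recent_url(min_magnitude=4.5))
```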
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses USGS data source, sorting order (newest first), and constraint limits (max 30 days, 500 results) not present in annotations or schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded, followed by behavior, use case, and Args section; no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Thorough coverage given output schema exists; minor gap in explicit sibling differentiation, though 'worldwide' provides implicit contrast to 'near'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Comprehensive compensation for 0% schema description coverage by explaining all 3 parameters, their defaults, and usage guidance (e.g., 4.5+ threshold).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Explicitly states it retrieves recent earthquakes worldwide above a minimum magnitude, distinguishing from sibling 'near' (location-based) and 'detail' (single event) tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides use case ('monitoring seismic activity globally') and magnitude guidance (4.5+ for significant quakes), though could explicitly contrast with location-specific sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
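Before publishing, it can help to sanity-check the file locally. A minimal sketch using only the Python standard library (the embedded JSON mirrors the example above; the email is a placeholder):

```python
import json

# Local copy of the file you plan to serve at /.well-known/glama.json.
sample = """
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
"""

doc = json.loads(sample)  # raises ValueError if the JSON is malformed
assert doc["$schema"].startswith("https://glama.ai/")
assert all("email" in m for m in doc["maintainers"])
print("glama.json structure looks valid")
```

Remember that the maintainer email must match your Glama account email, or verification will fail even if the JSON is well formed.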
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!