Server Details
Science MCP — free science data APIs
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-science |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.1/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose targeting different scientific data sources: air quality, astronomy, earthquakes, and ISS location. There is no overlap in functionality, making tool selection straightforward for an agent.
All tools follow a consistent 'get_noun' pattern (e.g., get_air_quality, get_apod), using snake_case uniformly. This predictable naming scheme enhances readability and usability.
With only 4 tools, the server feels thin for a broad domain like 'science', which could encompass many more data sources or operations. However, the tools are well-scoped individually, avoiding bloat.
The toolset is severely incomplete for a science server, as it only provides read-only access to four disparate data sources without any create, update, delete, or analysis operations. There are significant gaps in covering scientific workflows or domains beyond these specific APIs.
Available Tools
4 tools

get_air_quality (B)
Get air quality measurements near a location from OpenAQ
| Name | Required | Description | Default |
|---|---|---|---|
| latitude | Yes | Latitude | |
| longitude | Yes | Longitude | |
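For illustration, a call to this tool over MCP would be a standard JSON-RPC tools/call request; the coordinate values below (roughly central London) are hypothetical placeholders, not part of the server's documentation:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_air_quality",
    "arguments": { "latitude": 51.5074, "longitude": -0.1278 }
  }
}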
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe important behavioral aspects: what 'near a location' means (radius, precision), what measurements are returned (PM2.5, AQI, etc.), whether there are rate limits, authentication requirements, or error conditions. The description is functional but lacks operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core purpose and includes all essential elements: action, resource, location context, and data source. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool with no annotations and no output schema, the description provides adequate basic context about what the tool does. However, it lacks important operational details that would be helpful for an AI agent: what format/units the measurements are in, what 'near' means, typical response structure, or any limitations. The description is minimally complete but could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (latitude, longitude) with basic descriptions. The description adds context that these parameters define 'a location' for air quality measurements, but doesn't provide additional semantic details beyond what the schema states. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('air quality measurements') with source attribution ('from OpenAQ') and location context ('near a location'). It doesn't differentiate from siblings, which are unrelated (APOD, earthquakes, ISS location), but that's not needed here since they serve completely different domains.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. The description mentions 'from OpenAQ' which implies this is the data source, but there's no discussion of when to choose this over other air quality APIs or tools, nor any prerequisites or constraints for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_apod (B)
Get NASA Astronomy Picture of the Day
| Name | Required | Description | Default |
|---|---|---|---|
| date | No | Date in YYYY-MM-DD format | today |
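As a sketch, the optional date argument can be omitted to default to today or supplied explicitly (standard MCP tools/call shape assumed; the date shown is an arbitrary example):

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_apod",
    "arguments": { "date": "2024-01-15" }
  }
}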
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't cover key traits like rate limits, authentication needs, error handling, or what happens if an invalid date is provided. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It's appropriately sized for a simple tool, earning a high score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is minimally adequate but lacks completeness. It doesn't explain return values or potential errors, which would be helpful since there's no output schema, leaving gaps in understanding the tool's full behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'date' parameter documented as 'Date in YYYY-MM-DD format (default: today)'. The description doesn't add any meaning beyond this, so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('NASA Astronomy Picture of the Day'), making it immediately understandable. However, it doesn't differentiate from sibling tools (e.g., get_air_quality, get_earthquakes), which are distinct but not directly comparable, so a 4 is appropriate rather than a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as whether it's for current or historical images, or if there are limitations like date ranges. It lacks explicit context or exclusions, leaving usage implied at best.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_earthquakes (C)
Get recent earthquakes from USGS
| Name | Required | Description | Default |
|---|---|---|---|
| days | No | Look back N days (1-30) | 1 |
| min_magnitude | No | Minimum magnitude | 4.0 |
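A minimal example request, assuming the standard MCP JSON-RPC tools/call shape; both arguments are optional, and the values shown (7 days, magnitude 5.0) are illustrative rather than recommended settings:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_earthquakes",
    "arguments": { "days": 7, "min_magnitude": 5.0 }
  }
}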
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. 'Get recent earthquakes' implies a read-only operation, but the description doesn't explicitly state this. It also doesn't mention rate limits, authentication requirements, data freshness, or what format/scope the data returns. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral questions unanswered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core functionality without any wasted words. It's appropriately sized for a simple data retrieval tool and front-loads the essential information. Every word earns its place in this minimal description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is insufficiently complete. It doesn't explain what data is returned, in what format, or any limitations of the USGS data source. Without annotations or output schema, the agent has no information about the response structure or behavioral constraints beyond the basic purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with both parameters clearly documented in the schema itself. The description doesn't add any parameter information beyond what's already in the schema. According to the scoring rules, when schema_description_coverage is high (>80%), the baseline is 3 even with no param info in the description, which applies here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('recent earthquakes from USGS'), making the purpose immediately understandable. It doesn't distinguish from sibling tools (which are unrelated geospatial/astronomy APIs), but that's not necessary since they serve completely different domains. The description is specific enough for the agent to understand this is a data retrieval tool for earthquake information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While the sibling tools are unrelated (air quality, astronomy picture, ISS location), there's no mention of whether this is the primary earthquake data source, if there are other earthquake tools, or any prerequisites for use. The agent must infer usage purely from the tool name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_iss_location (B)
Get the current location of the International Space Station
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
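Because the tool takes no parameters, a call is just the tool name with an empty arguments object (assuming the standard MCP JSON-RPC tools/call shape):

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_iss_location",
    "arguments": {}
  }
}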
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states what the tool does but doesn't describe how it behaves: Is it real-time or cached data? What's the update frequency? Are there rate limits? Does it require authentication? For a tool with zero annotation coverage, this is a significant gap in behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that states exactly what the tool does with zero wasted words. It's appropriately sized for a simple zero-parameter tool and is perfectly front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple zero-parameter tool with no output schema, the description adequately covers the basic purpose. However, without annotations or output schema, it lacks important behavioral context about data freshness, reliability, and format. The description is complete enough for basic usage but leaves gaps that could affect agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters with 100% schema description coverage, so the schema already fully documents the input requirements. The description appropriately doesn't waste space discussing parameters that don't exist. A baseline of 4 is appropriate for zero-parameter tools where the description focuses on purpose rather than input semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('current location of the International Space Station'). It distinguishes from siblings by focusing on ISS location rather than air quality, astronomy pictures, or earthquakes. However, it doesn't explicitly differentiate from hypothetical similar tools like 'get_iss_crew' or 'get_iss_speed', so it's not a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, timing considerations, or comparisons to sibling tools. The agent must infer usage from the name and description alone without explicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.