
Server Details

Meteors MCP — NASA fireball, near-Earth asteroid, and close approach data

Status: Healthy
Last Tested: (not shown)
Transport: Streamable HTTP
URL: (not shown)
Repository: pipeworx-io/mcp-meteors
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools are clearly distinct: two for NEO data (a date-range feed and close approaches), one for fireballs, one for free-form questions, one for tool discovery, and three for memory management. However, 'ask_pipeworx' and 'discover_tools' overlap somewhat in purpose (both help find information), though 'discover_tools' is explicitly for searching the tool catalog first.

Naming Consistency: 3/5

Names use a mix of verbs: 'ask_', 'discover_', 'get_', 'forget', 'recall', 'remember'. While 'get_' is used consistently for the data-retrieval tools, the memory tools use bare, varied verbs, so there is no consistent pattern across the set.

Tool Count: 4/5

8 tools is a reasonable number for a server that combines a general query tool, a few specific NASA data endpoints, and memory management. It feels well-scoped without being too thin or bloated.

Completeness: 3/5

For the declared purpose (Meteors), the NASA tools cover NEOs and fireballs adequately but lack other astronomical data. The memory tools provide basic CRUD. The 'ask_pipeworx' tool suggests a broader scope, but no tools for other data sources are exposed directly, leaving gaps.

Available Tools

8 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language
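To make the single-parameter contract concrete, here is roughly what the MCP `tools/call` request for this tool looks like on the wire. A minimal sketch: the question is one of the examples above, and the request `id` is arbitrary.

```python
import json

# Sketch of an MCP tools/call request for ask_pipeworx. The envelope
# is standard JSON-RPC 2.0; only the "question" argument is required
# by the schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}

print(json.dumps(request, indent=2))
```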
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool selects the best data source and fills arguments automatically, which is important behavioral context. It implies a question-answering behavior but does not detail limitations, error handling, or whether it can handle complex multi-step requests. Since no annotations are provided, the description carries the full burden, and it provides moderate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise: a few short sentences plus examples. It is front-loaded with the core purpose, immediately followed by how the tool works and examples. Every sentence adds value, and there is no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool (single parameter, no output schema), the description is fairly complete. It explains what the tool does, how it works, and provides examples. It could be improved by mentioning what types of questions are out of scope or how it handles ambiguous queries, but for the complexity level, it is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema: it explains that the 'question' parameter should be a natural language request, and provides examples. The schema already has 100% coverage with a description for the parameter, but the description enriches it with usage context and examples, earning a score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it takes a plain English question and returns an answer from the best available data source. It distinguishes itself from siblings by emphasizing natural language queries, which is unique among the listed sibling tools that are more structured (e.g., get_fireballs, get_neo_feed).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: when you want to ask a question in plain English without needing to browse tools or learn schemas. It implicitly excludes structured data queries (which should use sibling tools) and gives examples of appropriate usage. However, it does not explicitly state when NOT to use it or mention alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
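A minimal end-to-end sketch, assuming the official MCP Python SDK (the `mcp` package) and its Streamable HTTP transport; the server URL is a placeholder, since the listing above does not display one.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; substitute the real endpoint

async def main() -> None:
    # Open the Streamable HTTP transport, then an MCP session over it.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Search the catalog; limit defaults to 20 and is capped at 50.
            result = await session.call_tool(
                "discover_tools",
                {"query": "find trade data between countries", "limit": 5},
            )
            for block in result.content:
                print(block)

asyncio.run(main())
```

The later sketches on this page assume a `session` opened and initialized exactly like this.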
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states the tool returns 'the most relevant tools with names and descriptions,' which is transparent. However, it doesn't disclose any limitations like whether the search is semantic or keyword-based, or if it handles all 500+ tools equally. Minor gap but still good for a search tool.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first states the purpose, the second gives usage guidance. Efficient and front-loaded. It could be slightly tighter (the capitalized 'FIRST' is emphatic rather than informative), but it is still clear.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description explains return value: 'Returns the most relevant tools with names and descriptions.' It also provides context for when to use (500+ tools). For a simple search tool, this is mostly complete. Could mention if results are ranked, but not critical.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for both parameters, and the tool description adds little beyond it. For 'query', the schema already gives examples and notes the natural-language format; 'limit' is well described. The baseline of 3 is appropriate since the schema does the heavy lifting.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the action (search), the resource (tool catalog), and the input method (natural language description). It also distinguishes itself by instructing to call it 'FIRST' when 500+ tools are available, implying it's a discovery tool that complements the other specific tools like get_neo_feed.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear usage context and implies alternatives (other tools) are for specific tasks after discovery.

forget: A

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description correctly identifies this as a delete operation. However, it lacks details such as whether the deletion is permanent, whether confirmation is required, and what side effects it may have.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that directly conveys the purpose without any unnecessary words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 required parameter, no output schema), the description is adequately complete. It could optionally note that the operation is irreversible, but the current level suffices.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions 'by key', which adds context to the 'key' parameter beyond the schema's 'Memory key to delete'. Since schema coverage is 100%, this is a helpful addition.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Delete' and the resource 'stored memory', and the qualifier 'by key' distinguishes it from sibling tools like 'remember' and 'recall'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for deleting a specific memory by key, but does not explicitly contrast with alternatives like 'remember' or 'recall', nor does it mention when to use this tool over others.

get_close_approaches: A

Find near-Earth asteroids making close approaches within 0.05 AU. Returns object name, approach date, miss distance, velocity, and diameter to identify potentially hazardous objects.

Parameters (JSON Schema):
- limit (optional): Maximum number of close approach records to return (default 10, max 50).
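Given an initialized session like the one in the discover_tools sketch above, calling the NASA data tools is one line each. `fetch_rocks` below is a hypothetical helper, and the limits shown simply exercise the documented caps for this tool and for get_fireballs.

```python
from mcp import ClientSession

async def fetch_rocks(session: ClientSession) -> None:
    # Up to 25 close-approach records; the schema defaults to 10
    # and caps the limit at 50.
    approaches = await session.call_tool("get_close_approaches", {"limit": 25})
    # get_fireballs takes the same single optional parameter,
    # with a higher cap of 100.
    fireballs = await session.call_tool("get_fireballs", {"limit": 50})
    for block in approaches.content + fireballs.content:
        print(block)
```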
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return fields (name, date, miss distance, velocity, diameter), which tells the agent what output to expect, and the 0.05 AU threshold it states is a key behavioral detail. However, it does not mention side effects, rate limits, or whether results are sorted or filtered beyond that distance criterion.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with clear, front-loaded purpose. First sentence states the core functionality, second lists outputs. No filler, but could be slightly more efficient by combining them. Appropriate length for a simple tool with one optional parameter.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description compensates by listing return fields. The tool has only one optional parameter with full schema coverage, so the description is largely sufficient. However, it lacks details on pagination, sorting, or the timeframe of events (past/future), which could be important for the agent to use correctly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no meaning beyond the schema for the 'limit' parameter: it does not explain how the limit affects results (e.g., ordering) or mention any implicit parameters, so the score stays at baseline.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Find', the resource (near-Earth asteroids making close approaches), and a specific criterion (within 0.05 AU). It also lists the returned data fields, making the purpose unambiguous, and distinguishes the tool from siblings like get_fireballs (a different celestial phenomenon) and get_neo_feed (a different NEO dataset).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving close approach data but does not explicitly state when to use this tool versus alternatives. No exclusion criteria or context about when not to use it are provided. The 'within 0.05 AU' constraint offers some guidance but is not compared with other tools.

get_fireballs: A

Track recent meteor impacts and atmospheric explosions detected by US government sensors. Returns impact energy, radiated energy, velocity, altitude, and geographic location.

Parameters (JSON Schema):
- limit (optional): Maximum number of fireball events to return (default 10, max 100).
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the burden. It correctly discloses that the tool returns data about fireball events with specific fields (energy, velocity, altitude, location). However, it does not mention any behavioral traits such as read-only nature, data freshness, or potential empty results.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, clearly stating the purpose and the data fields. Every sentence is informative with no wasted words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional parameter, no output schema needed), the description is adequate. It explains what data is returned, which is sufficient for an agent to decide to invoke this tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the description adds value by mentioning the types of data returned, but it does not elaborate on the 'limit' parameter beyond what the schema already provides (default 10, max 100).

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Track') and resource ('recent meteor impacts and atmospheric explosions'), and clearly distinguishes itself from sibling tools like 'get_close_approaches' and 'get_neo_feed' by focusing on fireballs detected by US government sensors.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving fireball events but provides no explicit guidance on when to use this tool versus alternatives like 'get_close_approaches'. No exclusion criteria or prerequisites are mentioned.

get_neo_feed: A

Get near-Earth objects passing Earth during a date range (e.g., "2024-01-01" to "2024-12-31"). Returns asteroid names, sizes, velocities, miss distances, and hazard status.

Parameters (JSON Schema):
- end_date (required): End date in YYYY-MM-DD format. Maximum 7-day range from start_date (e.g. "2025-01-07").
- start_date (required): Start date in YYYY-MM-DD format (e.g. "2025-01-01").
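The 7-day cap on the date range is the one constraint an agent is likely to trip over. A small sketch, using a hypothetical `neo_feed_args` helper (not part of the server) that builds the YYYY-MM-DD strings the schema requires:

```python
from datetime import date, timedelta

def neo_feed_args(start: date, days: int = 7) -> dict:
    # Build get_neo_feed arguments, enforcing the documented 7-day
    # maximum range (inclusive of the start date).
    if not 1 <= days <= 7:
        raise ValueError("get_neo_feed accepts at most a 7-day range")
    end = start + timedelta(days=days - 1)
    return {
        "start_date": start.isoformat(),  # e.g. "2025-01-01"
        "end_date": end.isoformat(),      # e.g. "2025-01-07"
    }

# Inside an initialized session:
# result = await session.call_tool("get_neo_feed", neo_feed_args(date(2025, 1, 1)))
```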
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses what data is returned (names, sizes, velocities, and hazard status) but does not mention the underlying API, rate limits, authentication needs, or whether the operation is read-only. Without annotations, this is adequate but not comprehensive.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no waste. The first states the purpose and illustrates the date-range input; the second lists key return fields. Every sentence earns its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 2 required parameters, no output schema, and no annotations. The description covers purpose and return fields, which is sufficient for a simple date-range query. However, without output schema, the agent lacks details on return structure (e.g., pagination, max results). Adequate but minimal.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters with format and constraints. The description adds no extra meaning beyond what the schema provides, so baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource (near-Earth objects) and clearly distinguishes this tool from siblings by scoping it to a date-range feed and listing return fields. It avoids tautology and clarifies scope (objects passing Earth within a date range).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly states when to use (for NEO data in date range) but provides no explicit exclusions or alternatives. Sibling tools like get_close_approaches exist but are not mentioned as alternatives, leaving the agent to infer from names.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It states the tool can retrieve by key or list all, which is clear. It does not note side effects or access restrictions, but since this is a read operation the behavioral disclosure is adequate, if not exceptional.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, concise and front-loaded: the first defines the action, the second explains when to use it. It could be merged into one sentence, but it is efficient overall.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and one optional parameter, the description covers the tool's functionality well. It explains the dual behavior (retrieve vs list) and provides usage context. Could mention return format, but not necessary for a simple retrieval tool.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and description adds meaningful context: explains that omitting key lists all memories. This goes beyond the schema's description of 'key' as a string.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Retrieve' and resource 'stored memory', and distinguishes between retrieving a specific key vs listing all memories. Sibling tools include 'remember' (store) and 'forget' (delete), so this tool's purpose is well differentiated.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says to use the tool for retrieving context saved earlier, which gives when-to-use guidance, but it offers no explicit when-not-to-use cases or alternatives. The sibling tools do provide implicit differentiation, however.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)
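Since remember, recall, and forget form a store/read/delete cycle, a single round trip shows how they compose. `memory_round_trip` is a hypothetical helper, and the key/value pair is illustrative:

```python
from mcp import ClientSession

async def memory_round_trip(session: ClientSession) -> None:
    # Store a value under a key; per the description, anonymous
    # sessions keep it for 24 hours.
    await session.call_tool("remember", {"key": "target_ticker", "value": "AAPL"})

    # Passing the key retrieves one entry; omitting it lists all keys.
    stored = await session.call_tool("recall", {"key": "target_ticker"})
    print(stored.content)

    # Delete the entry once it is no longer needed.
    await session.call_tool("forget", {"key": "target_ticker"})
```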
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral transparency. It discloses persistence behavior (authenticated vs. anonymous) and the purpose of the memory, which is sufficient for a simple key-value store.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences that are front-loaded with the core action and include essential usage context. No wasted words.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple key-value store with no output schema and fully described parameters, the description is complete. It covers purpose, usage, and persistence behavior.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds value by explaining the purpose of storing values (intermediate findings, preferences) and the nature of the key (e.g., 'subject_property'), providing context beyond the schema's example values.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('store'), the resource ('key-value pair in your session memory'), and the scope ('session memory'), which is distinct from sibling tools like 'forget' and 'recall'.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('save intermediate findings, user preferences, or context across tool calls') and distinguishes between authenticated and anonymous sessions, but does not mention when not to use it or alternatives.
