Server Details
SpaceX MCP — wraps SpaceX API v4 (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-spacex
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 6 of 6 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose targeting different SpaceX resources: crew members, latest launch, next launch, past launches, rockets, and Starlink satellites. There is no overlap in functionality, making it easy for an agent to select the right tool for each query.
All tools follow a consistent verb_noun pattern with 'get_' prefix (e.g., get_crew, get_latest_launch). This uniformity makes the tool set predictable and easy to navigate without any naming deviations or mixed conventions.
With 6 tools, the server is well-scoped for providing SpaceX data, covering key areas like launches, rockets, crew, and satellites. Each tool earns its place without feeling too sparse or bloated, fitting typical expectations for a focused API.
The tool set covers core read operations for SpaceX's public data, including launches, rockets, crew, and satellites. However, there are minor gaps such as no update/delete tools (expected for read-only data) and missing specific queries like historical launch details or rocket specifications, but agents can work around these with the provided tools.
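The six tools line up naturally with read-only endpoints of the public SpaceX API v4 at api.spacexdata.com. The sketch below pairs each tool with the endpoint it presumably wraps; the endpoint paths come from the public r/spacex API, but the pairing is inferred from the tool names and descriptions, not confirmed from this server's source code.

```python
# Hypothetical mapping of this server's MCP tools to SpaceX API v4
# endpoints. The paths exist in the public r/spacex API; which endpoint
# each tool actually calls is an assumption.
BASE_URL = "https://api.spacexdata.com/v4"

TOOL_ENDPOINTS = {
    "get_crew": "/crew",
    "get_latest_launch": "/launches/latest",
    "get_next_launch": "/launches/next",
    "get_past_launches": "/launches/past",
    "get_rockets": "/rockets",
    "get_starlink": "/starlink",
}

def endpoint_for(tool: str) -> str:
    """Return the full URL a given tool would presumably request."""
    return BASE_URL + TOOL_ENDPOINTS[tool]
```

Because every endpoint is a plain GET on public data, the mapping also explains why the server needs no credentials.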
Available Tools
6 tools

get_crew (Grade A)
List SpaceX crew members. Returns name, agency, status, wikipedia link, and image URL for each crew member.
No parameters.
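The description promises five fields per crew member. A raw record from the public v4 crew endpoint carries more than that (e.g. associated launch IDs), so a thin wrapper would likely project it down to the advertised fields. This is a sketch, not the server's actual code; the field names follow the public API's crew schema, and the sample values are placeholders.

```python
def summarize_crew_member(record: dict) -> dict:
    """Reduce a raw SpaceX v4 crew record to the fields get_crew
    advertises: name, agency, status, wikipedia link, and image URL."""
    return {
        "name": record.get("name"),
        "agency": record.get("agency"),
        "status": record.get("status"),
        "wikipedia": record.get("wikipedia"),
        "image": record.get("image"),
    }

# Sample record with placeholder values, shaped like the v4 crew schema.
sample = {
    "name": "Robert Behnken",
    "agency": "NASA",
    "status": "active",
    "wikipedia": "https://en.wikipedia.org/wiki/Robert_L._Behnken",
    "image": "https://example.com/behnken.png",  # placeholder URL
    "launches": ["5eb87d46ffd86e000604b388"],    # dropped by the summary
}
print(summarize_crew_member(sample)["agency"])  # NASA
```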
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the return fields (name, agency, etc.), which is helpful, but lacks details on behavioral traits such as rate limits, error handling, pagination, or data freshness. For a read operation with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently lists return fields in a single, well-structured sentence. There is zero waste, and every part of the description adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It explains what is returned but lacks context on data scope (e.g., all historical crew or current), limitations, or error cases, which could be important for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, earning a baseline score of 4 for not adding unnecessary information beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List') and resource ('SpaceX crew members'), distinguishing it from sibling tools like get_rockets or get_starlink. It provides a complete picture of what the tool does without being tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when information about SpaceX crew members is needed, but it does not explicitly state when to use this tool versus alternatives or provide any exclusions. There is no guidance on prerequisites or context for selection among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_latest_launch (Grade A)
Get the most recent SpaceX launch. Returns launch name, date, success status, details, rocket id, and media links (webcast, article, wikipedia).
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return data structure (launch name, date, etc.) and media links, which is useful behavioral context. However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently states the action, resource, and return values. Every element earns its place with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no annotations or output schema, the description provides complete context on what it does and returns. It could be slightly improved by mentioning data freshness or source, but it's largely adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately focuses on output semantics without redundant parameter info, earning a baseline 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('most recent SpaceX launch'), and distinguishes it from siblings like 'get_next_launch' and 'get_past_launches' by specifying 'most recent'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'most recent' launch, which helps differentiate it from 'get_next_launch' (future) and 'get_past_launches' (historical). However, it lacks explicit when-not-to-use guidance or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_next_launch (Grade A)
Get the next upcoming SpaceX launch. Returns launch name, date, details, and rocket id.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return values (launch name, date, details, rocket id), which adds useful context beyond basic functionality. However, it lacks details on error handling, rate limits, or data freshness, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, scope, and return values without any wasted words. It is appropriately sized and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is mostly complete, covering purpose and return values. However, without an output schema, it could benefit from more detail on the format or structure of the returned data, slightly reducing completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the lack of inputs. The description does not add parameter information, but this is acceptable as there are no parameters to explain, warranting a baseline score above the minimum.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get'), resource ('next upcoming SpaceX launch'), and scope ('next upcoming'), distinguishing it from siblings like get_latest_launch or get_past_launches. It explicitly identifies what makes this tool unique in the context of sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'next upcoming' launch, which helps differentiate it from get_latest_launch (which might refer to the most recent completed launch) and get_past_launches. However, it does not explicitly state when not to use this tool or name alternatives, keeping it from a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_past_launches (Grade A)
Get recent past SpaceX launches sorted by date descending. Returns name, date, success status, and details for each launch.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of launches to return | 10 |
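The description pins down three behaviors: past launches only, sorted by date descending, capped at a `limit` that defaults to 10. The public v4 API exposes a Mongo-style query endpoint (POST /v4/launches/query) whose request body can express exactly that. Whether this server uses the query endpoint or the simpler /launches/past route is an assumption; the sketch only shows how the documented behavior maps onto a query body.

```python
def past_launches_query(limit: int = 10) -> dict:
    """Build a request body for POST /v4/launches/query mirroring
    get_past_launches: completed launches, newest first, capped at
    `limit` (default 10, per the tool's parameter table)."""
    return {
        "query": {"upcoming": False},
        "options": {
            "sort": {"date_utc": "desc"},
            "limit": limit,
            # The fields the tool's description says it returns.
            "select": ["name", "date_utc", "success", "details"],
        },
    }
```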
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that results are sorted by date descending and includes default behavior (limit defaults to 10), adding useful context. However, it doesn't cover potential rate limits, error handling, or authentication needs, leaving gaps for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes key details like sorting and return fields. There is zero waste, and every part earns its place by adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one optional parameter and no output schema, the description is mostly complete, covering purpose, behavior, and return format. However, it lacks details on pagination or error cases, which could be useful given the absence of annotations and output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'limit' parameter with its default. The description adds no additional parameter details beyond what the schema provides, such as range constraints or examples, but doesn't need to compensate heavily given the high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get recent past SpaceX launches') and resources ('SpaceX launches'), distinguishing it from siblings like get_latest_launch or get_next_launch by specifying 'past' launches. It also details the return format, making the scope explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving past launches, which differentiates it from siblings like get_next_launch (future) or get_rockets (different resource). However, it lacks explicit guidance on when not to use it or direct alternatives, such as comparing to get_latest_launch for only the most recent launch.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_rockets (Grade A)
List all SpaceX rockets. Returns name, type, active status, stages, boosters, cost per launch, success rate, first flight date, and description.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return fields but doesn't cover important aspects like whether this is a read-only operation, potential rate limits, authentication needs, or error handling. For a tool with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('List all SpaceX rockets') followed by specific return details. Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It explains what data is returned but doesn't address behavioral aspects like read-only nature or potential constraints. For a basic list tool, it meets minimum viable standards but could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and it doesn't need to compensate for any schema gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all SpaceX rockets') and resource ('SpaceX rockets'), distinguishing it from sibling tools like get_crew or get_latest_launch. It provides a precise verb+resource combination that leaves no ambiguity about what the tool does.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives like get_past_launches or get_starlink. It simply states what the tool does without mentioning any context, prerequisites, or exclusions, leaving the agent to infer usage based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_starlink (Grade C)
Get Starlink satellite info sorted by most recently launched. Returns spaceTrack data including object name, launch date, and decay date.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of satellites to return | 20 |
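As with get_past_launches, the documented behavior (sorted by most recent launch, `limit` defaulting to 20, spaceTrack fields in the result) can be expressed as a body for the v4 query endpoint, here POST /v4/starlink/query. The sort key and field names follow the spaceTrack block in the public API's Starlink schema; that the server actually issues this query is an assumption.

```python
def starlink_query(limit: int = 20) -> dict:
    """Build a request body for POST /v4/starlink/query approximating
    get_starlink: newest launches first, capped at `limit` (default 20,
    per the tool's parameter table)."""
    return {
        "query": {},
        "options": {
            "sort": {"spaceTrack.LAUNCH_DATE": "desc"},
            "limit": limit,
            # The spaceTrack fields the tool's description names.
            "select": [
                "spaceTrack.OBJECT_NAME",
                "spaceTrack.LAUNCH_DATE",
                "spaceTrack.DECAY_DATE",
            ],
        },
    }
```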
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns sorted data and specifies the data fields (object name, launch date, decay date), but it doesn't cover important aspects like whether it's read-only, potential rate limits, error handling, or data freshness. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in the first sentence. Both sentences add value: the first defines the action and sorting, the second specifies the data returned. There's no wasted text, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter, no output schema, no annotations), the description is minimally adequate. It covers what the tool does and what data it returns, but it lacks context on usage versus siblings, behavioral details, and output structure. For a simple read operation, this is borderline complete but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'limit' parameter clearly documented in the schema. The description doesn't add any parameter-specific information beyond what the schema provides, such as default behavior or constraints. Since schema coverage is high, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving Starlink satellite information sorted by most recently launched. It specifies the resource (Starlink satellites) and the action (get info), though it doesn't explicitly differentiate from sibling tools like 'get_past_launches' or 'get_latest_launch' beyond mentioning Starlink specifically.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions sorting by most recently launched and returning spaceTrack data, but it doesn't clarify if this is for Starlink only, how it differs from 'get_past_launches' or 'get_latest_launch', or any prerequisites. This leaves usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
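Before publishing, it can be worth sanity-checking the file locally: it must parse as JSON, declare at least one maintainer, and include the email tied to your Glama account. The check below mirrors only the structure documented above; Glama's server-side verification may check more, and the payload email is the placeholder from the example.

```python
import json

def validate_glama_json(text: str, account_email: str) -> bool:
    """Loosely validate a /.well-known/glama.json payload: parseable
    JSON with a maintainers list containing the given account email."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

payload = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(validate_glama_json(payload, "your-email@example.com"))  # True
```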
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.