Server Details
Launches MCP — wraps the Launch Library 2 API (ll.thespacedevs.com; free, no authentication required)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-launches |
| GitHub Stars | 0 |
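Because the transport is Streamable HTTP, any MCP client can discover the server's tools with a standard JSON-RPC tools/list request. A minimal sketch of the exchange, with the response trimmed to tool names (the full result also carries each tool's description and input schema):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      { "name": "get_launch" },
      { "name": "get_past_launches" },
      { "name": "get_upcoming_launches" },
      { "name": "search_launches" }
    ]
  }
}
```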
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging – every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control – enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics – see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 4 of 4 tools scored.
The tools are mostly distinct with clear purposes: get_launch retrieves detailed info for a specific launch, get_past_launches and get_upcoming_launches handle time-based categories, and search_launches provides keyword-based filtering. get_past_launches and get_upcoming_launches share similar structures and could be confused, but their names make the temporal distinction clear.
All tool names follow a consistent verb_noun pattern with snake_case, using 'get_' for retrieval and 'search_' for filtering. This uniformity makes the set predictable and easy to understand, with no deviations in naming conventions.
With 4 tools, the count is reasonable for a launches-focused server, covering the key operations: retrieval, time-based listing, and search. The set is slightly lean but functional; it omits update and delete tools, which are likely unnecessary for read-only launch data.
The tool set provides good read-only coverage for accessing launch data, including specific, past, upcoming, and searched launches. However, it lacks any create, update, or delete operations, which could be a gap if the domain expected full lifecycle management, though this may be intentional for a data query service.
Available Tools
4 tools
get_launch (Grade A)
Get full details for a specific launch by its Launch Library 2 ID. Returns name, net time, status, pad, rocket, mission, orbit info, video URLs, and mission patches.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Launch Library 2 launch UUID (e.g. "a6ce038e-4d89-4265-b47f-1c6ee5863f84") | |
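An MCP client invokes the tool with a standard tools/call request. A minimal sketch, reusing the example UUID from the table above:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_launch",
    "arguments": { "id": "a6ce038e-4d89-4265-b47f-1c6ee5863f84" }
  }
}
```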
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return data (name, net time, etc.), which adds value beyond the input schema. However, it lacks details on error handling, rate limits, authentication needs, or whether it's a read-only operation, leaving gaps in behavioral context for the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by a concise list of returned data. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete. It explains what the tool does and what data it returns, but without an output schema, it could benefit from more detail on the return format or error cases. The lack of annotations means the description should cover more behavioral aspects, which it partially does but not fully.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the input schema already fully documents the 'id' parameter. The description adds no meaning or context beyond what the schema provides, such as examples of valid IDs or usage notes, so it earns the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details') and resource ('a specific launch by its Launch Library 2 ID'), and distinguishes it from siblings by focusing on individual launch details rather than lists or searches. It explicitly mentions the data returned, which helps differentiate its purpose from the sibling tools that handle multiple launches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'by its Launch Library 2 ID', suggesting this tool is for retrieving details when you have a specific launch ID. However, it does not explicitly state when to use this tool versus alternatives like get_past_launches or search_launches, nor does it provide exclusions or prerequisites, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
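Several of the gaps flagged above (no read-only disclosure, no guidance on when to prefer a sibling tool) could be closed in the tool definition itself. A hypothetical revision, assuming the server adopted the MCP spec's standard tool annotations; the annotation values and the added usage sentence are illustrative, not taken from the actual server:

```json
{
  "name": "get_launch",
  "description": "Get full details for a specific launch by its Launch Library 2 ID. Read-only. Use this when you already have a launch UUID; use search_launches to find one by keyword. Returns name, net time, status, pad, rocket, mission, orbit info, video URLs, and mission patches.",
  "annotations": {
    "readOnlyHint": true,
    "openWorldHint": true
  }
}
```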
get_past_launches (Grade C)
Get past rocket launches from Launch Library 2. Returns name, net launch time, status, launch pad name and location, rocket name, and mission description.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of launches to return | 10 |
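As with get_launch, the tool is invoked through tools/call; a minimal sketch requesting five past launches instead of the default ten:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_past_launches",
    "arguments": { "limit": 5 }
  }
}
```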
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what data is returned (name, net launch time, etc.) but lacks behavioral context such as rate limits, authentication needs, pagination behavior, error conditions, or whether this is a read-only operation. With no annotations present, there is nothing for the description to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: one states the action and source, the other lists returned fields. The core purpose is efficiently front-loaded; integrating the return details more smoothly would be a minor improvement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with 1 optional parameter and no output schema, the description is adequate but has gaps. It specifies the data source and return fields, but lacks context on limitations, errors, or sibling tool differentiation. Without annotations or output schema, more behavioral transparency would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'limit' parameter along with its default. The description adds no parameter-specific information beyond what is in the schema. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get past rocket launches from Launch Library 2' with specific verb+resource. It distinguishes from 'get_upcoming_launches' by specifying 'past' launches, but doesn't explicitly differentiate from 'get_launch' (singular) or 'search_launches'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description doesn't mention when to choose this over 'get_launch' (singular), 'get_upcoming_launches', or 'search_launches'. Only implicit context from 'past' in the name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_upcoming_launches (Grade C)
Get upcoming rocket launches from Launch Library 2. Returns name, net launch time, status, launch pad name and location, rocket name, and mission description.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of launches to return | 10 |
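Because limit is optional, a client can pass an empty arguments object and rely on the default of 10; a minimal sketch:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_upcoming_launches",
    "arguments": {}
  }
}
```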
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It mentions the return fields (name, net launch time, etc.) but lacks details on permissions, rate limits, pagination, or error handling. For a read operation with external data, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences that front-load the purpose and key return data. It avoids redundancy but could be slightly more structured by separating usage context from output details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and return fields but misses behavioral aspects like data freshness, source reliability, or error cases. It's adequate for a simple read tool but lacks depth for robust agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'limit' documented in the schema. The description adds no parameter-specific information beyond implying a default behavior for upcoming launches, so it meets the baseline of 3 without compensating for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get upcoming rocket launches') and the data source ('from Launch Library 2'), distinguishing it from sibling tools like get_past_launches. It specifies the verb (get) and resource (upcoming rocket launches), though it doesn't explicitly contrast with search_launches in scope or method.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like get_past_launches or search_launches. The description implies it's for upcoming launches but doesn't specify contexts, exclusions, or prerequisites, leaving usage decisions ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_launches (Grade B)
Search launches by keyword (rocket name, mission name, agency, etc). Returns matching launches with name, net launch time, status, pad, rocket, and mission description.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results to return (default 10) | |
| query | Yes | Search keyword (e.g. "Falcon 9", "Artemis", "ISS") | |
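A minimal sketch of a keyword search through tools/call, using one of the example queries from the table and capping the result count at five:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "search_launches",
    "arguments": { "query": "Falcon 9", "limit": 5 }
  }
}
```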
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return format (matching launches with specific fields) but lacks critical behavioral details: it doesn't mention pagination, rate limits, authentication needs, error handling, or whether results are sorted. For a search tool, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the purpose and search scope, the second specifies the return format. Every word earns its place with no redundancy or fluff, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with two parameters), no annotations, and no output schema, the description is partially complete. It covers the basic purpose and return fields but omits behavioral aspects like result limits, sorting, or error cases. It's adequate for minimal use but lacks depth for robust agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters fully. The description adds minimal value beyond the schema: it mentions searchable fields (rocket name, mission name, agency) which helps interpret 'query', but doesn't provide additional syntax or format details. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search launches by keyword' with specific searchable fields (rocket name, mission name, agency, etc.). It distinguishes from sibling tools by focusing on keyword search rather than retrieving specific launches (get_launch) or time-based lists (get_past_launches, get_upcoming_launches). However, it doesn't explicitly name these alternatives for full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'Search launches by keyword' and the examples, suggesting it's for finding launches matching search terms rather than retrieving by ID or time. However, it lacks explicit guidance on when to use this versus siblings like get_past_launches for chronological listings or get_launch for specific IDs, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.