sunrisesunset
Server Details
Sunrise-Sunset MCP — wraps the sunrisesunset.io API (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-sunrisesunset
- GitHub Stars: 0
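For background on the upstream dependency, here is a minimal sketch of calling the sunrisesunset.io API directly; the endpoint path, query parameters, and response shape are assumptions based on that service's public documentation rather than anything stated in this listing.

```python
import requests

# Assumed upstream endpoint and parameters; the MCP server's get_times and
# get_times_date tools wrap a request like this on your behalf.
resp = requests.get(
    "https://api.sunrisesunset.io/json",
    params={"lat": 40.7128, "lng": -74.0060, "date": "2024-06-21"},
    timeout=10,
)
resp.raise_for_status()
data = resp.json()
# Assumed response shape: {"results": {"sunrise": ..., "sunset": ..., ...}, "status": "OK"}
print(data["results"]["sunrise"], data["results"]["sunset"])
```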
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 7 of 7 tools scored. Lowest: 2.9/5.
The tools have distinct primary purposes, but there is some overlap in functionality. For example, 'ask_pipeworx' and 'discover_tools' both help users find information or tools, which could cause confusion about when to use each. The memory tools ('remember', 'recall', 'forget') are clearly distinct from the time-related tools ('get_times', 'get_times_date'), but the server's overall focus is split between general querying and specific time data, leading to moderate ambiguity.
Most tools follow a consistent verb-based naming pattern (e.g., 'ask_pipeworx', 'discover_tools', 'get_times', 'get_times_date', 'remember', 'recall', 'forget'), which is clear and readable. However, 'ask_pipeworx' deviates slightly by including a brand name, while others use generic verbs. This minor inconsistency keeps the score from being perfect, but the overall naming is mostly predictable and easy to understand.
With 7 tools, the count is reasonable and well-scoped for a server that combines general-purpose querying with specific time data and memory management. It's not too heavy or too thin, though the dual focus might make it feel slightly broad. Each tool appears to serve a distinct role, and the number is appropriate for the intended functionality without being overwhelming or insufficient.
The server covers multiple domains: time data, memory storage, and general querying. For the time-related tools, there is good coverage with 'get_times' and 'get_times_date', but other astronomical or location-based features might be missing. The memory tools provide basic CRUD operations, but the querying tools ('ask_pipeworx', 'discover_tools') are more open-ended, making it hard to assess full coverage. Overall, there are minor gaps, but agents can likely work around them.
Available Tools
7 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
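For illustration, a sketch of the JSON-RPC `tools/call` payload an MCP client might send for this tool; the question text is borrowed from the examples above, and transport details (session setup, headers) are omitted.

```python
# Illustrative MCP tools/call framing for ask_pipeworx; only the "question"
# argument is defined by the schema above.
ask_pipeworx_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {"question": "What is the US trade deficit with China?"},
    },
}
```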
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which provides useful context about the tool's automated behavior. However, it doesn't mention potential limitations like rate limits, authentication needs, or what happens with ambiguous questions, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description front-loads the core functionality in the first sentence, follows with supporting details about how it works, and concludes with concrete examples. Every sentence earns its place by adding distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and 100% schema coverage but no annotations or output schema, the description is reasonably complete about what the tool does and how to use it. However, it lacks information about return values, error handling, or any constraints on question types, which would be helpful given the absence of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the schema already documents the single 'question' parameter. The description adds value by emphasizing it should be 'in plain English' and 'natural language,' and provides concrete examples that illustrate appropriate parameter usage beyond what the schema states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer from data source'), and distinguishes from siblings by emphasizing natural language input without needing to browse tools or learn schemas. The examples further clarify the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: for asking questions in natural language when you don't want to browse tools or learn schemas. It doesn't explicitly state when NOT to use it or name specific alternatives among siblings, but the 'no need to browse tools' phrasing implies this is the preferred option for natural-language queries over manual tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
| limit | No | Maximum number of tools to return (default 20, max 50) | |
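Example arguments for this tool, with the query value taken from the schema's own examples and the limit chosen arbitrarily:

```python
# Illustrative discover_tools arguments; values are placeholders.
discover_tools_args = {
    "query": "find trade data between countries",  # required: natural-language task description
    "limit": 10,                                    # optional: default 20, max 50
}
```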
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it performs a search based on natural language queries, returns relevant tools with names and descriptions, and has a specific use case (large tool catalogs). However, it doesn't mention potential limitations like rate limits, authentication requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve distinct purposes: the first explains what the tool does, the second provides usage guidance. There's no wasted language, and the most important information (the search functionality) is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search functionality with 2 parameters) and no output schema, the description provides good context about what the tool does and when to use it. However, without annotations or output schema, it could benefit from more information about return format, error handling, or performance characteristics to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description mentions 'describing what you need' which aligns with the query parameter but doesn't add meaningful semantic information beyond what the schema provides. The baseline score of 3 reflects adequate but not enhanced parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search the Pipeworx tool catalog'), the resource ('tool catalog'), and the method ('by describing what you need'). It distinguishes this tool from its siblings (get_times, get_times_date) by focusing on tool discovery rather than time-related data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear context about the appropriate scenario and distinguishes it from alternatives by positioning it as an initial discovery step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
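Example arguments; the key name is a placeholder borrowed from the remember tool's schema examples and assumes such a memory was stored earlier.

```python
# Illustrative forget arguments; deletes whatever is stored under this key.
forget_args = {"key": "subject_property"}
```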
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Delete' implies a destructive mutation, the description doesn't specify whether deletion is permanent or reversible, whether it requires specific permissions, or whether it has other side effects. It also doesn't describe return values or what happens on success or failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core functionality without any wasted words. It's appropriately sized for a simple deletion tool and front-loads the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with no annotations and no output schema, the description is insufficient. It doesn't address critical behavioral aspects like permanence, authorization requirements, error handling, or return values. The description should provide more context given the tool's destructive nature and lack of structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'key' already documented as 'Memory key to delete' in the schema. The description adds no additional parameter context beyond what the schema provides, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the target resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' provides clear functional distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'recall' (which likely retrieves memories) or 'remember' (which likely stores memories). There's no mention of prerequisites, error conditions, or appropriate contexts for deletion operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_times (Grade: B)
Get today's sunrise, sunset, dawn, dusk, solar noon, and golden hour times for a location.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | Yes | Latitude of the location (e.g., 40.7128) | |
| lng | Yes | Longitude of the location (e.g., -74.0060) | |
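Example arguments using the coordinates shown in the schema above:

```python
# Illustrative get_times arguments; returns today's times for these coordinates.
get_times_args = {"lat": 40.7128, "lng": -74.0060}
```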
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what data is returned but doesn't mention potential limitations (e.g., accuracy, availability for extreme locations), error conditions, or response format. While it implies a read-only operation, it lacks details about rate limits, authentication needs, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. Every word earns its place by specifying the timeframe, exact data points returned, and required resource. No redundant or unnecessary information is included.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read-only tool with 2 fully documented parameters but no output schema, the description adequately covers what data is returned. However, it lacks details about the return format (e.g., structured object vs. text), units, or timezone handling, which would be helpful given the absence of output schema and annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (lat, lng) with examples. The description adds no additional parameter information beyond what's in the schema, maintaining the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving specific astronomical times (sunrise, sunset, dawn, dusk, solar noon, golden hour) for a location. It specifies 'today's' timeframe and the resource (location), but doesn't explicitly differentiate from its sibling 'get_times_date' beyond the implied temporal scope difference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (getting today's times for a location) but doesn't explicitly state when to use this tool versus 'get_times_date' or provide any exclusion criteria. The temporal scope 'today's' hints at the distinction, but no direct comparison or alternative guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_times_date (Grade: A)
Get sunrise, sunset, dawn, dusk, solar noon, and golden hour times for a specific date at a location.
| Name | Required | Description | Default |
|---|---|---|---|
| lat | Yes | Latitude of the location (e.g., 40.7128) | |
| lng | Yes | Longitude of the location (e.g., -74.0060) | |
| date | Yes | Date in YYYY-MM-DD format (e.g., "2024-06-21") | |
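Example arguments combining the schema's coordinate and date examples:

```python
# Illustrative get_times_date arguments for a specific date at the example coordinates.
get_times_date_args = {"lat": 40.7128, "lng": -74.0060, "date": "2024-06-21"}
```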
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes what the tool returns but doesn't disclose behavioral traits like error conditions, rate limits, authentication needs, or whether it's a read-only operation. The description is functional but lacks operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key information (what it gets) and includes all necessary details (specific times, date, location). There's zero waste, and every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 required parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the purpose well but lacks details on behavioral aspects and output format, which are important for an agent to use it correctly without annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the three parameters (lat, lng, date). The description adds no parameter semantics, such as format examples or constraints, beyond what's in the schema. A baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get') and the exact resources returned (sunrise, sunset, dawn, dusk, solar noon, golden hour times) for a specific date and location. It distinguishes itself from the sibling tool 'get_times' by targeting a caller-specified date rather than today's times.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for a specific date at a location,' but doesn't explicitly state when to use this tool versus the sibling 'get_times' or provide any exclusions or prerequisites. The guidance is present but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
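Example arguments for both usage modes described above; the key name is a placeholder.

```python
# Illustrative recall arguments.
recall_one_args = {"key": "subject_property"}  # retrieve a single stored memory
recall_all_args = {}                           # omit "key" to list all stored keys
```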
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's dual behavior (retrieve by key or list all), mentions persistence across sessions, and clarifies the optional parameter behavior. However, it doesn't address potential edge cases like what happens with invalid keys or empty storage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the core functionality with parameter guidance, and the second provides context about when to use it. No wasted words, front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with good schema coverage but no output schema, the description provides strong context about functionality and usage. It could be more complete by describing the return format (e.g., what a 'memory' contains) or error conditions, but it adequately covers the core use cases given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds significant value by explaining the semantic meaning of omitting the key parameter ('omit to list all keys') and connecting the parameter to the tool's dual functionality, going beyond what the schema alone provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes this from sibling tools like 'remember' (which stores) and 'forget' (which removes), making the retrieval/list function explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('retrieve context you saved earlier') and includes a clear alternative usage pattern ('omit key to list all keys'). It also implicitly distinguishes it from siblings by focusing on retrieval rather than storage or deletion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
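Example arguments with placeholder values modeled on the schema examples above:

```python
# Illustrative remember arguments; any text can be stored as the value.
remember_args = {
    "key": "user_preference",
    "value": "prefers metric units and 24-hour times",
}
```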
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a storage operation (implied mutation), specifies persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. However, it doesn't cover potential errors, rate limits, or exact response format, leaving minor gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two sentences that efficiently convey purpose, usage, and behavioral details without waste. Each sentence adds distinct value: the first defines the tool's function, and the second clarifies persistence rules, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is mostly complete. It covers purpose, usage context, and key behavioral aspects like persistence. However, it lacks details on return values or error handling, which would be helpful since there's no output schema, leaving a minor gap in full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with both parameters ('key' and 'value') well-documented in the schema. The description adds minimal semantic value beyond the schema, only implying general use cases ('findings, addresses, preferences, notes') without specifying parameter constraints or interactions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('store') and resource ('key-value pair in your session memory'), distinguishing it from sibling tools like 'forget' (which likely removes) and 'recall' (which likely retrieves). It explicitly mentions what gets stored ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), but does not explicitly state when not to use it or name alternatives. For example, it doesn't compare usage with 'recall' for retrieval or 'forget' for deletion, leaving some ambiguity in sibling tool differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.