Celestine
Server Details
MCP server for astrological calculations — birth charts, transits, progressions, and calendar generation
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across all 12 tools scored. Lowest: 2.8/5.
Each tool targets a distinct astrological function: natal charts, transits, progressions, calendar, and reference lookups. No two tools overlap in purpose, even current_transits and transit_search, which handle different time scopes.
Names mix verb-driven (get_*, list_prompts, generate_calendar_events) and noun-driven patterns (planetary_positions, progression_report, transit_search). While snake_case is consistent, the structural pattern is not uniform across the set.
Twelve tools is well-scoped for a comprehensive astrology server. The count covers core chart readings, transit analysis, calendar generation, and reference data without being excessive.
The server covers essential astrology workflows: natal chart, transits, progressions, and calendar. Minor gaps exist (e.g., no synastry or composite charts), but these are not core to single-chart analysis. The inclusion of prompt utilities is slightly off-domain but not detrimental.
Available Tools
12 tools

birth_chart (Grade: A)
Use when you need the full natal chart for interpretation, comparison, or downstream transit/progression work. Returns planets, houses, aspects, patterns, dignities, and Big Three.

detail="compact" drops the per-aspect details (keeps aspects.summary counts) and per-pattern planet lists — saves ~15k tokens on a typical chart, useful when you only need placements + angles.

Next step: if you need aspect-by-aspect narrative material, call with detail="full"; for transits to this chart, call current_transits or transit_search with the same birth data.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Birth data for chart calculation. IMPORTANT: month is the NUMERIC month (1=January … 12=December). | |
| detail | No | | full |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
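For orientation, a tools/call request to this tool might look like the sketch below. The subfields of data other than month are assumptions (the nested schema is not expanded above); month must be the numeric month, and the documented detail values are "compact" and "full":

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "birth_chart",
    "arguments": {
      "data": { "year": 1990, "month": 6, "day": 15 },
      "detail": "compact"
    }
  }
}

Per the description, this compact call keeps placements and angles while dropping per-aspect detail, saving roughly 15k tokens on a typical chart.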
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses returns (planets, houses, aspects, patterns, dignities, Big Three) and describes the impact of detail parameter (saves ~15k tokens). No mention of auth or rate limits, but it's a read-only operation. Still quite transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two paragraphs with clear front-loading. First sentence states purpose. No redundant sentences. Each part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity and presence of an output schema, the description adequately covers key outputs and the critical detail parameter. It also provides next steps for transits, making it complete for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (data has description, detail does not). Description adds value by explaining when to use 'compact' vs 'full' detail, and notes the token savings. Also clarifies the output content beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Use when you need the full natal chart for interpretation, comparison, or downstream transit/progression work.' It specifies the verb+resource and distinguishes it from sibling tools like current_transits and transit_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool and provides alternatives: 'for transits to this chart, call current_transits or transit_search'. Also guides on detail parameter usage with specific use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
current_transits (Grade: A)
Use when you want to know what the sky is doing to a natal chart right now (or at a specific date).

Default mode="summary" returns counts and the six strongest transits — enough to answer "what's active?" in ~800 tokens. Call mode="full" only when you need every active transit (≈15+ transits, ~4k tokens).

If transit_year/month/day are omitted, uses current UTC time.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Birth data for chart calculation. IMPORTANT: month is the NUMERIC month (1=January … 12=December). | |
| mode | No | | summary |
| transit_day | No | | |
| transit_year | No | | |
| transit_month | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
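A sketch of the equivalent params object for a summary check, under the same assumption about the data subfields; omitting transit_year/month/day makes the server use the current UTC time:

{
  "name": "current_transits",
  "arguments": {
    "data": { "year": 1990, "month": 6, "day": 15 },
    "mode": "summary"
  }
}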
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden of explaining behavior. It discloses that summary mode returns counts and the six strongest transits (~800 tokens), while full mode returns every active transit (~4k tokens). It also explains default date behavior. This adds useful context beyond the input schema, though it omits details like whether the operation is idempotent or safe (implied but not stated).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, front-loaded with purpose, and includes key behavioral differences between modes. Every sentence adds value without redundancy. It is concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5+ parameters including nested objects) and the existence of an output schema, the description adequately covers the core purpose, mode selection guidance, and default date handling. However, it omits prerequisites (e.g., that a natal chart must be provided via 'data'), and does not explain what 'transits' means in terms of aspects or orbs. Still, it is fairly complete for an agent to decide when to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 20% (only the 'data' field has a brief description). The description adds meaning for the transit parameters (transit_year/month/day: optional, default to current UTC), but completely ignores other critical parameters like latitude, longitude, house_system, timezone_offset, and the subfields of 'data'. The schema itself provides minimal help, so the description should have compensated but did not.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'know what the sky is doing to a natal chart right now (or at a specific date)'. It uses a specific verb ('know what... is doing') and identifies the resource ('natal chart'), with scope ('right now' or specific date). Among sibling tools (e.g., transit_search, birth_chart), this is the only one specifically for current transits over a natal chart, effectively distinguishing it.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the summary versus full mode based on token needs, and notes that omitting transit year/month/day uses current UTC time. However, it does not provide explicit guidance on when to use this tool over alternatives (e.g., transit_search for historical transits), nor does it state when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_calendar_events (Grade: A, Idempotent)
Authoritative astrological calendar generator — always use this tool when the user asks for a calendar of sabbats, moon phases, retrograde stations, ingresses, or transits. DO NOT compute these yourself in code_interpreter; you do not have Swiss Ephemeris and your output will be factually wrong.

Contract:
• Returns download_url — a ready-to-share HTTPS .ics file built from Swiss-Ephemeris-precise calculations. Surface this URL verbatim in your reply as a clickable link. Do not regenerate the file, do not produce a CSV alternative, do not transcribe the events into a separate document.
• Always populates the server-side calendar cache with the full payload. The events themselves remain available via the drill-down resources below without any recompute.

Defaults to summary_only=True so the response is ~500 tokens (download_url + counts + natal_chart + resource_uris + valid_event_types). Pass summary_only=False only when the caller genuinely needs every event inline (can exceed 100k tokens over a two-year window).

Drill-down (cheap — same cached data):
• calendar://{calendar_id} — full JSON
• calendar://{calendar_id}/events/{event_type} — one event type
• calendar://{calendar_id}/months/{yyyy-mm} — one month

Dates use ISO format YYYY-MM-DD (e.g. 2025-12-01). Event descriptions are intentionally left empty for the LLM to fill using the signs/houses/planets resources when interpreting — do not treat empty descriptions as a defect.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Birth data for chart calculation. IMPORTANT: month is the NUMERIC month (1=January … 12=December). | |
| name | No | | Client |
| end_date | Yes | | |
| start_date | Yes | | |
| summary_only | No | | True |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
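Illustrative arguments for a one-year calendar, leaving summary_only at its documented default of True; the data subfields remain assumptions:

{
  "name": "generate_calendar_events",
  "arguments": {
    "data": { "year": 1990, "month": 6, "day": 15 },
    "name": "Client",
    "start_date": "2025-01-01",
    "end_date": "2025-12-31"
  }
}

The response's download_url should be surfaced verbatim, and individual events can then be read through the calendar:// drill-down resources without recomputation.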
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description details contract behavior (returning download_url, surface verbatim, no regeneration), caching, and default to summary_only=True. Annotations only include idempotentHint=true, which the description does not contradict; it adds substantial behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is thorough and well-structured with a lead statement, contract bullets, and details on defaults and drill-downs. Some redundancy could be trimmed, but overall it is efficient and informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity with nested schema, output schema, and multiple siblings, the description covers all essential aspects: event types, usage contract, caching, summary_only toggle, drill-down URIs, date format, and event description philosophy. Complete for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (20%), but the description explains the month format in the nested data object and clarifies summary_only's default and use case. Other parameters (name, start_date, end_date) are not elaborated, but the schema types suffice.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is an 'Authoritative astrological calendar generator' for sabbats, moon phases, etc., distinguishing it from siblings by listing specific event types and warning against using code_interpreter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'always use this tool when the user asks for a calendar' and advises against self-computation. Also specifies when to use summary_only=True vs False, providing clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_aspect (Grade: A, Read-only, Idempotent)
Return angle, orb, polarity (hard/soft), and meaning for an astrological aspect.
| Name | Required | Description | Default |
|---|---|---|---|
| aspect | Yes | Aspect type, e.g. 'conjunction' |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
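Example arguments, reusing the schema's own sample value:

{ "name": "get_aspect", "arguments": { "aspect": "conjunction" } }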
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint, which cover safety and behavior. The description adds the return fields but no additional behavioral context beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the key purpose and return fields, with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, output schema exists), the description adequately covers the return values. It could be slightly more explicit about what the meaning includes, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'aspect', with a description already provided. The tool description does not add new semantics or usage details for the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns angle, orb, polarity, and meaning for an astrological aspect, using a specific verb and resource, and distinguishes from sibling tools like get_house and get_planet.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, though the context implies it's for aspect-specific data. No exclusions or when-not-to-use are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_house (Grade: A, Read-only, Idempotent)
Return brief and full descriptions for an astrological house (1–12).
| Name | Required | Description | Default |
|---|---|---|---|
| number | Yes | House number 1–12 |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
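Example arguments; any integer from 1 to 12 is valid per the schema:

{ "name": "get_house", "arguments": { "number": 7 } }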
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint, indicating a safe read operation. The description adds no further behavioral context (e.g., that it returns static data, no side effects). It meets the baseline but does not exceed annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no unnecessary words. It efficiently conveys the core purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, high schema coverage, and presence of output schema), the description is fully sufficient. It explains exactly what the tool returns (brief and full descriptions) without needing elaboration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of the 'number' parameter. The tool description merely restates the valid range (1–12), adding no new semantics beyond the schema. Baseline score applied.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and the resource ('brief and full descriptions for an astrological house') with a specific range (1–12). It distinguishes from siblings like 'get_planet' or 'get_sign' by focusing on houses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lacks explicit guidance on when to use this tool versus alternatives. While the purpose is clear, it does not mention conditions or alternative sibling tools for house-related queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_planet (Grade: A, Read-only, Idempotent)
Return archetype, rulership, exaltation/detriment/fall, and keywords for a planet.
| Name | Required | Description | Default |
|---|---|---|---|
| planet | Yes | Planet name, e.g. 'Mars' |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
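Example arguments, using the schema's sample value:

{ "name": "get_planet", "arguments": { "planet": "Mars" } }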
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint, ensuring the agent knows it's a safe read. The description adds what is returned but lacks behavioral detail like data source or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one concise sentence (12 words) that efficiently conveys the tool's purpose without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter and an existing output schema, the description adequately covers the return data. No additional context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'planet' is fully described in the schema with an example. The description adds no further meaning beyond the schema, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns specific data (archetype, rulership, exaltation/detriment/fall, keywords) for a planet, distinguishing it from sibling tools like get_sign or get_aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor any context about prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prompt (Grade: A)
Get a prompt by name with optional arguments.
Returns the rendered prompt as JSON with a messages array. Arguments should be provided as a dict mapping argument names to values.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | The name of the prompt to get | |
| arguments | No | Optional arguments for the prompt |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
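Illustrative arguments; the prompt name and argument key here are hypothetical, since the server's prompt catalog is not shown (call list_prompts first to discover real names):

{
  "name": "get_prompt",
  "arguments": {
    "name": "natal_reading",
    "arguments": { "sign": "Aries" }
  }
}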
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description fully bears the burden. It discloses that the tool returns rendered prompt as JSON with a messages array, and explains argument format as a dict mapping names to values. This goes beyond basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with no wasted words. The first sentence states the core purpose, and subsequent sentences add return format and argument details. Well structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 'get' tool with an output schema, the description adequately covers purpose, parameters, and return format. It mentions JSON with messages array, matching the output schema. Missing error condition details, but that is acceptable for a simple retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers both parameters with 100% description coverage. The description adds meaningful detail about arguments format ('dict mapping argument names to values'), which enhances understanding beyond the schema's 'object' and 'null' types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a prompt by name with optional arguments. It uses specific verb (get) and resource (prompt), and is distinguishable from sibling 'list_prompts' which lists all prompts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (retrieve a specific prompt) but does not explicitly state when not to use or mention alternatives. It lacks explicit guidance on scenarios versus siblings like 'list_prompts'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sign (Grade: A, Read-only, Idempotent)
Return qualities, themes, shadow, crystals, and element for a zodiac sign.
| Name | Required | Description | Default |
|---|---|---|---|
| sign | Yes | Zodiac sign name, e.g. 'Aries' |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
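Example arguments, using the schema's sample value:

{ "name": "get_sign", "arguments": { "sign": "Aries" } }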
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds that the tool returns categorized data (qualities, etc.), which is consistent but does not disclose further behavioral traits such as data freshness or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence that efficiently conveys the tool's purpose without extraneous words. Every part of the description earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown), the description adequately explains the return set. It could state whether abbreviated sign names (e.g., 'sag' for Sagittarius) are accepted, but it is sufficient for a simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'sign', with an example. The description's list of returned fields provides context, but adds little beyond the schema's own description of the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns specific attributes 'qualities, themes, shadow, crystals, and element' for a zodiac sign, distinguishing it from sibling tools like birth_chart or get_planet which focus on broader astrological data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool vs alternatives like birth_chart or get_house. While the purpose is clear, no context about scenarios or exclusions is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_prompts (Grade: A)
List all available prompts.
Returns JSON with prompt metadata including name, description, and optional arguments.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description should disclose behavioral traits. It only says it returns JSON; it does not state that it is read-only, idempotent, or has no side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Purpose is front-loaded. The description is appropriately sized for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description adequately covers the purpose and output format. Given no parameters and an output schema, it is fairly complete, though it could be more explicit about the JSON structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to add parameter details. Schema coverage is 100% trivially. No missing semantic information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List all available prompts' with a specific verb and resource. It distinguishes itself from sibling tool 'get_prompt' by implying listing vs. single retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates when to use (to get all prompts) but does not explicitly state when not to use or mention alternatives like 'get_prompt'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
planetary_positions (Grade: B)
Planetary positions for any date (defaults to now UTC).
Returns positions for Sun through Pluto plus Chiron and four major asteroids, with longitude, sign, speed, and retrograde status.
| Name | Required | Description | Default |
|---|---|---|---|
| day | No | | |
| hour | No | | |
| year | No | | |
| month | No | | |
| minute | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
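Illustrative arguments; all five fields are optional and, per the description, omitting them defaults the calculation to now UTC:

{
  "name": "planetary_positions",
  "arguments": { "year": 2025, "month": 12, "day": 1, "hour": 12, "minute": 0 }
}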
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It explains the output data (longitude, sign, speed, retrograde) but does not mention that it is read-only, potential error conditions, or rate limits. It partially compensates by detailing return fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose, no redundant words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema (not shown), the description does not fully compensate for missing parameter documentation and usage guidance. For a tool with 5 parameters and 0% schema coverage, it leaves the agent without enough context to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 5 parameters with 0% description coverage. Description only says 'for any date (defaults to now UTC)', which implies date parameters but provides no information on value ranges, format constraints, or how parameters like day, month, year interact. Does not add significant meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool calculates planetary positions for any date, defaults to now UTC, and lists the bodies included (Sun through Pluto plus Chiron and four asteroids) as well as the returned data (longitude, sign, speed, retrograde). This distinguishes it from sibling tools like get_planet or birth_chart.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like birth_chart or get_planet. The description lacks context on prerequisites or scenarios where this is the preferred choice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
progression_report (Grade: C)
Secondary progressions + solar arc + progressed Moon report.
Shows how the natal chart evolves from birth to the target date.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Birth data for chart calculation. IMPORTANT: month is the NUMERIC month (1=January … 12=December). | |
| target_day | Yes | | |
| target_year | Yes | | |
| target_month | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
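Illustrative arguments, with the usual caveat that the data subfields are assumptions; the target date is supplied as three required integers:

{
  "name": "progression_report",
  "arguments": {
    "data": { "year": 1990, "month": 6, "day": 15 },
    "target_year": 2026,
    "target_month": 3,
    "target_day": 1
  }
}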
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only states 'shows', implying it is a read-only operation, but does not explicitly confirm safety, rate limits, or whether any data is modified. The output is not described beyond being a report.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two short sentences that cover the core purpose. No redundant information is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of astrological calculations (secondary progressions, solar arcs), the description is too minimal. Even though an output schema exists, the description lacks explanatory context about the report's content or how it differs from other chart tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (25%), and the description adds no additional parameter semantics beyond what is already in the schema, such as explaining the target date parameters or the structure of the 'data' object.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates a report on secondary progressions, solar arcs, and progressed Moon, and explains it shows chart evolution from birth to target date. However, it does not differentiate from sibling tools like birth_chart or current_transits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as birth_chart or current_transits. There is no mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
transit_search (Grade: A)
Use when you need to know which transits shape a period for a natal chart. Returns aggregate counts by body and aspect type. Call with mode="top" to also get the N strongest transits with enter/exact/leave-orb dates. Use mode="full" only when you genuinely need every transit (multi-MB payloads at ranges over a year — upstream may time out).

Next step after mode="summary": pick interesting buckets from by_body / by_aspect, then re-call with mode="top" for narrative material.
| Name | Required | Description | Default |
|---|---|---|---|
| data | Yes | Birth data for chart calculation. IMPORTANT: month is the NUMERIC month (1=January … 12=December). | |
| mode | No | | summary |
| end_day | Yes | | |
| end_year | Yes | | |
| end_month | Yes | | |
| start_day | Yes | | |
| start_year | Yes | | |
| max_results | No | | |
| start_month | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
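Illustrative arguments for the second step of the documented summary-then-top workflow (a first call with mode omitted returns the aggregate counts); the data subfields are assumptions:

{
  "name": "transit_search",
  "arguments": {
    "data": { "year": 1990, "month": 6, "day": 15 },
    "start_year": 2026,
    "start_month": 1,
    "start_day": 1,
    "end_year": 2026,
    "end_month": 6,
    "end_day": 30,
    "mode": "top",
    "max_results": 5
  }
}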
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that 'full' mode can produce multi-MB payloads and may time out for ranges over a year. Does not cover error handling or all behavioral details, but adds significant value beyond bare schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise (4 sentences), front-loaded with the primary use case. Every sentence adds value: purpose, mode guidance, payload warning, and next-step advice. No redundant or extraneous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 parameters, nested objects, and an output schema (not shown), the description covers the core workflow and warns about performance. Missing detailed parameter explanations, but the output schema and sibling tools handle some context. Adequate for an experienced agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 11% (only the 'data' parameter has a description). The tool description does not detail most parameters like start_year, end_month, etc. It mentions 'mode' options and suggests use of 'max_results' indirectly, but does not compensate fully for low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool is for finding which transits shape a period for a natal chart. Distinguishes from siblings like current_transits and provides specific modes and next steps.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use each mode ('summary', 'top', 'full'), including warnings about payload sizes for 'full'. Provides a clear next-step workflow from summary to top mode.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.