ephemeris_electional

Read-only

Identify the best times for events using electional astrology. Scans a date range hour by hour, evaluating planetary dignity, aspects, lunar phase, and void-of-course Moon penalties, then clusters the top continuous windows.

Instructions

Find optimal planetary timing windows (electional astrology). Scans a date range to find the best times for an event based on essential dignity, aspect quality, sect, and void-of-course moon penalties. Evaluates every hour and clusters the best continuous windows.

CREDIT COST: 5 credits per call (heavy calculation).

EXAMPLE: Find the best time to launch a business in early March 2026. start_date='2026-03-01', end_date='2026-03-10', latitude=40.7128, longitude=-74.0060, avoid_voc=true, lunar_phase='waxing'
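The example above can be expressed as a tool-call payload. The sketch below assumes the standard MCP JSON-RPC tools/call envelope; only the argument values come from this page, the surrounding fields are illustrative.

```python
# Sketch of a JSON-RPC tools/call payload for the example above.
# Envelope fields follow the usual MCP convention (assumed); the
# arguments are the ones given in the EXAMPLE line.
import json

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ephemeris_electional",
        "arguments": {
            "start_date": "2026-03-01",
            "end_date": "2026-03-10",
            "latitude": 40.7128,
            "longitude": -74.0060,
            "avoid_voc": True,
            "lunar_phase": "waxing",
        },
    },
}

print(json.dumps(payload, indent=2))
```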

Input Schema

Name              Required  Description
start_date        Yes       ISO 8601 start date or datetime for the search window (e.g., 2026-03-01).
end_date          Yes       ISO 8601 end date or datetime for the search window.
latitude          Yes       Latitude of location in decimal degrees (positive = North).
longitude         Yes       Longitude of location in decimal degrees (positive = East).
max_results       No        Maximum number of top windows to return (default 5).
avoid_retrograde  No        Comma-separated list of planets to avoid when retrograde (e.g., 'mercury,venus').
lunar_phase       No        Filter windows by lunar phase. Defaults to 'any'.
avoid_voc         No        If true, strictly ignores any moments where the Moon is void of course.
format            No        Output format. 'llm' = compact token-efficient output (available on all tiers).
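Since each call costs 5 credits, a client could sanity-check arguments against this schema before calling. A minimal sketch follows; the validation rules are inferred from the table above, not from the server, and the planet list is an assumption for illustration.

```python
from datetime import datetime

# Assumed planet names for avoid_retrograde; the server's accepted
# set is not documented on this page.
VALID_PLANETS = {"mercury", "venus", "mars", "jupiter", "saturn"}


def validate_args(args: dict) -> list[str]:
    """Return a list of problems with an ephemeris_electional call (sketch)."""
    errors = []
    # Required parameters per the schema table.
    for key in ("start_date", "end_date", "latitude", "longitude"):
        if key not in args:
            errors.append(f"missing required parameter: {key}")
    # start_date / end_date accept ISO 8601 dates or datetimes.
    for key in ("start_date", "end_date"):
        value = args.get(key)
        if value is not None:
            try:
                datetime.fromisoformat(value)
            except ValueError:
                errors.append(f"{key} is not ISO 8601: {value!r}")
    # Decimal degrees: positive latitude = North, positive longitude = East.
    lat, lon = args.get("latitude"), args.get("longitude")
    if lat is not None and not -90 <= lat <= 90:
        errors.append("latitude must be within [-90, 90]")
    if lon is not None and not -180 <= lon <= 180:
        errors.append("longitude must be within [-180, 180]")
    # avoid_retrograde is a comma-separated list of planet names.
    for planet in args.get("avoid_retrograde", "").split(","):
        if planet and planet.strip().lower() not in VALID_PLANETS:
            errors.append(f"unknown planet in avoid_retrograde: {planet!r}")
    return errors
```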
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds value by disclosing the heavy calculation cost (5 credits per call) and the evaluation methodology (hourly scanning, clustering, penalties). However, it does not mention rate limits or other operational behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured into three logical sections: purpose, credit cost, and example. It is concise enough to be useful without excessive verbosity, though the example could be shortened without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the algorithm (evaluation per hour, clustering) but does not describe the return format or structure (beyond the format parameter). For a complex tool with 9 parameters and no output schema, more detail on the output would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline score is 3. The description adds no semantic detail beyond the schema for individual parameters; it only exercises a subset of them in the example, which provides context but no new parameter-specific meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'find' and the resource 'optimal planetary timing windows' for electional astrology. It specifies the evaluation criteria (dignity, aspect quality, sect, void-of-course moon) and distinguishes from siblings by mentioning scanning a date range and clustering windows, which is not present in sibling tools like electional_moment_analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides an example but does not explicitly state when to use this tool versus alternative electional tools (e.g., electional_moment_analysis, electional_aspect_search). There is no guidance on prerequisites or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

