
Server Details

Vedic astrology MCP server connecting any AI to real ephemeris calculations — natal charts, Vimshottari Dasha, yogas with BPHS citations, matchmaking with Rajju/Vedha vetoes, panchanga, KP system, Lal Kitab, and numerology. OAuth 2.1. Free sandbox tier.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.5/5 across 103 of 103 tools scored. Lowest: 3.3/5.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes reinforced by detailed 'DO NOT CONFUSE WITH' sections. However, the sheer number of tools (103), including several similar ones for dasha, numerology, and tarot, could still lead an agent to select the wrong tool without careful reading.

Naming Consistency: 4/5

All tools begin with 'asterwise_' and follow a verb_noun pattern (get_, check_, draw_). Verbs are not uniform across all tools (e.g., 'analysis' used once), but the pattern is mostly consistent and predictable.

Tool Count: 2/5

103 tools is excessive for a single MCP server. The scope spans many domains (Vedic astrology, Western astrology, numerology, tarot, crystals, dreams, etc.), making it feel like multiple services bundled. This overwhelms the agent's selection surface.

Completeness: 4/5

Within each domain, coverage is thorough (e.g., multiple dasha systems, all core numerology numbers, full tarot deck with spreads, crystal database with chart-based recommendations). Minor stubs exist (e.g., number_meaning extended fields) but overall the surface is complete.

Available Tools

103 tools
asterwise_check_mobile_number (A)
Read-only · Idempotent

Digit-strips a mobile string (keeping country code digits), reduces it with the owner's name and birth date, and returns harmonic scoring plus interpretive copy.

SECTION: WHAT THIS TOOL COVERS Accepts formats like bare ten digits or plus-country-code with spaces or hyphens; all non-digits drop out before summation, and country-code digits count toward totals. Compares against owner numerology via upstream rules. Not for vehicle plates (asterwise_check_vehicle_number) or business names.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile — anchor Life Path before judging the line. AFTER: None.

SECTION: INPUT CONTRACT Formatting noise is ignored; only digits contribute. Country code digits are included in the reduction sum.

SECTION: OUTPUT CONTRACT data.input (string — submitted number) data.input_type (string — 'mobile') data.total (int — sum of all retained digits) data.single_digit (int — Pythagorean root, one through nine) data.is_master (bool — true when total reduces to eleven, twenty-two, or thirty-three) data.theme (string) data.favourable_for[] (string array) data.caution (string) data.harmony_score (int — one through ten)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Plus signs and punctuation do not affect digit extraction.

SECTION: DO NOT CONFUSE WITH asterwise_check_vehicle_number — plate digit rules, not SIM numbering. asterwise_get_business_name_analysis — letter Expression scan, not phone roots.
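
The extraction-and-reduction pipeline described above can be sketched as follows. This is a hypothetical re-implementation for illustration only; the actual scoring rules live upstream, and `reduce_number` is an invented name.

```python
import re

def reduce_number(mobile: str) -> tuple[int, int, bool]:
    """Strip non-digits, sum what remains, then reduce the total to a
    Pythagorean root (1-9), flagging master numbers (11/22/33) on the way."""
    digits = re.sub(r"\D", "", mobile)   # '+', spaces, hyphens all drop out
    total = sum(int(d) for d in digits)  # country-code digits count too
    n, is_master = total, False
    while n > 9:
        if n in (11, 22, 33):
            is_master = True
        n = sum(int(d) for d in str(n))
    return total, n, is_master

# "+91 98765 43210" keeps twelve digits summing to 55, which reduces 55 -> 10 -> 1
total, single_digit, is_master = reduce_number("+91 98765 43210")
```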

Parameters (JSON Schema)
date (required)
name (required)
mobile_number (required)
response_format (required)

Output Schema
result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond this: it explains that formatting noise is ignored, only digits contribute, country code digits are included, and it's classified as FAST_LOOKUP. However, it doesn't detail rate limits or authentication needs, which would elevate it to a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), front-loading key information. Every sentence adds value—no fluff or repetition—making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (numerology analysis with multiple inputs), the description is highly complete. It covers purpose, usage, input handling, output structure (detailed in OUTPUT CONTRACT), error handling, and sibling differentiation. With annotations and an output schema present, the description provides all necessary context without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate. It explains that mobile_number accepts various formats (bare ten digits, plus-country-code with spaces/hyphens), name and date are used for owner numerology comparison, and response_format controls output serialization. This adds meaningful semantics, though it doesn't specify date format or name handling details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it digit-strips a mobile number, reduces it with owner's name and birth date, and returns harmonic scoring plus interpretive copy. It specifically distinguishes from sibling tools like asterwise_check_vehicle_number and asterwise_get_business_name_analysis, making the scope explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives (e.g., 'Not for vehicle plates' and references to specific sibling tools). It also recommends a prerequisite tool (asterwise_get_numerology_profile) and clarifies there's no follow-up tool, offering comprehensive usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_check_sade_sati (A)
Read-only · Idempotent

Evaluates Saturn's seven-and-a-half-year Moon-sign cycle phases against natal data for the current day and returns intensity, upcoming cycles, and historical rows.

SECTION: WHAT THIS TOOL COVERS Sade Sati mechanics: natal Moon sign in English, three reference signs (rising/peak/setting), active flag, current phase label, intensity score/label, next occurrence metadata, all_periods[] timeline with nested phase objects, mitigation booleans, classical note. "Today" is implicit — no date parameter. Signs are English, not Sanskrit. It is not general Gochar (asterwise_get_gochar) or dasha overlay (asterwise_get_dasha_transits).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — confirm Moon sign context. AFTER: asterwise_get_gochar — broader transit canvas if needed.

SECTION: INPUT CONTRACT No explicit query date — API pins to current day. BirthData global contract applies.

SECTION: OUTPUT CONTRACT data.natal_moon_sign (string — English, e.g. 'Libra') data.natal_moon_sign_index (int — 0–11) data.sade_sati_signs: rising (string) peak (string) setting (string) data.is_currently_active (bool) data.current_phase (string — 'rising', 'peak', 'setting', or null) data.current_phase_description (string or null) data.intensity_score (int — 0–10) data.intensity_label (string — 'low', 'moderate', or 'high') data.next_sade_sati: starts (string — YYYY-MM-DD) phase (string) years_away (float) data.all_periods[] — each: sade_sati_number (int) overall_start (string — YYYY-MM-DD) overall_end (string — YYYY-MM-DD) duration_years (float) phases: rising — { name, description, start, end, saturn_sign, intensity } peak — same shape setting — same shape data.mitigated_by_own_sign (bool) data.mitigated_by_exaltation (bool) data.classical_note (string)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — English sign names throughout — do not expect Sanskrit.

SECTION: DO NOT CONFUSE WITH asterwise_get_gochar — nine-planet daily scan including sade_sati_active flag but less Sade Sati detail than this tool. asterwise_get_transits — ingress/station feed, not Moon-focused Saturn phase model.
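
A sketch of how the three reference signs could be derived from the natal Moon sign index, using the conventional definition (Saturn transiting the sign before the Moon sign, the Moon sign itself, and the sign after it). The function name and sign list are illustrative, not the server's actual code:

```python
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def sade_sati_signs(natal_moon_sign_index: int) -> dict:
    """Map a 0-11 Moon sign index to the rising/peak/setting reference signs."""
    return {
        "rising": SIGNS[(natal_moon_sign_index - 1) % 12],   # sign before the Moon
        "peak": SIGNS[natal_moon_sign_index],                # the Moon sign itself
        "setting": SIGNS[(natal_moon_sign_index + 1) % 12],  # sign after the Moon
    }

# A Libra Moon (index 6) yields rising=Virgo, peak=Libra, setting=Scorpio
```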

Parameters (JSON Schema)
birth (required): Birth data for a single person.
response_format (required)

Output Schema
result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, destructiveHint=false, etc., covering safety. The description adds valuable context beyond annotations: it specifies that 'Today' is implicit with no date parameter, signs are in English not Sanskrit, and includes compute class (MEDIUM_COMPUTE) and error contracts. It doesn't contradict annotations, but could mention rate limits or caching behavior for a more complete 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), front-loading key information. Every sentence adds value—no redundancy or fluff. It efficiently covers purpose, usage, parameters, output, and distinctions in a compact format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations), rich output schema, and annotations, the description is complete. It explains the tool's scope, workflow, input/output contracts, compute class, errors, and sibling distinctions. The output schema exists, so the description doesn't need to detail return values, and it adequately supplements the structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (birth object has descriptions, response_format lacks description). The description compensates by detailing the 'BirthData global contract' and specifying that 'response_format=json serialises... response_format=markdown renders...' This adds meaning beyond the schema, especially for response_format. However, it doesn't fully explain all parameter nuances (e.g., ayanamsa defaults), keeping it from a 5.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Evaluates Saturn's seven-and-a-half-year Moon-sign cycle phases against natal data for the current day and returns intensity, upcoming cycles, and historical rows.' This is a specific verb ('evaluates') with clear resources (Saturn's cycle phases, natal data) and outputs. It distinguishes from siblings by naming 'asterwise_get_gochar' and 'asterwise_get_dasha_transits' as different tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives: 'It is not general Gochar (asterwise_get_gochar) or dasha overlay (asterwise_get_dasha_transits).' It also recommends workflow steps: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart — confirm Moon sign context. AFTER: asterwise_get_gochar — broader transit canvas if needed.' This clearly defines context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_check_vehicle_number (A)
Read-only · Idempotent

Strips non-digits from a vehicle registration token, reduces the numeric run with owner name and birth date, and returns the same harmony schema as mobile analysis.

SECTION: WHAT THIS TOOL COVERS Handles Indian pattern plates (e.g. MH01AB1234) and international variants; only digits feed totals. Produces vehicle-specific input_type. Does not analyse phone numbers (asterwise_check_mobile_number) or business names.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile — owner baseline. AFTER: None.

SECTION: INPUT CONTRACT Letters and separators are ignored; reduction uses numeric digits only.

SECTION: OUTPUT CONTRACT data.input (string — submitted plate) data.input_type (string — 'vehicle') data.total (int — sum of numeric digits) data.single_digit (int — root) data.is_master (bool) data.theme (string) data.favourable_for[] (string array) data.caution (string) data.harmony_score (int — one through ten)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Alphabetic segments are decorative for numerology here — only digits matter.

SECTION: DO NOT CONFUSE WITH asterwise_check_mobile_number — phone digit rules including country codes. asterwise_get_business_name_analysis — evaluates business Expression, not registration digits.
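
The only behavioural difference from the mobile tool is what gets discarded; a minimal sketch of the extraction step (hypothetical helper name, actual scoring upstream):

```python
import re

def plate_total(vehicle_number: str) -> int:
    """Letters and separators are decorative; only digits feed the total."""
    return sum(int(d) for d in re.sub(r"\D", "", vehicle_number))

# "MH01AB1234" contributes the digits 0, 1, 1, 2, 3, 4 for a total of 11
plate_total("MH01AB1234")
```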

Parameters (JSON Schema)
date (required)
name (required)
vehicle_number (required)
response_format (required)

Output Schema
result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. Annotations indicate read-only, open-world, idempotent, and non-destructive operations, but the description details specific processing rules (e.g., 'Letters and separators are ignored; reduction uses numeric digits only'), error handling (e.g., 'INVALID_PARAMS (local — caught before upstream call): None'), and performance class ('FAST_LOOKUP'). It does not contradict annotations, which already cover safety aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with clear sections (e.g., 'WHAT THIS TOOL COVERS', 'WORKFLOW'), which aids readability, but it is verbose with redundant details. For example, the 'OUTPUT CONTRACT' and 'RESPONSE FORMAT' sections could be more concise, and some information (like 'Edge cases') repeats earlier points. It is front-loaded with core functionality but includes excessive elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (numerology analysis with multiple inputs and outputs), the description is highly complete. It covers purpose, usage guidelines, input processing rules, detailed output fields, response formats, error handling, performance, and sibling differentiation. With an output schema present, it appropriately focuses on behavioral context rather than repeating return values, making it sufficient for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates by explaining parameter semantics. It clarifies that 'vehicle_number' handles 'Indian pattern plates (e.g. MH01AB1234) and international variants' and that 'name' and 'date' are used with the vehicle number for 'reduction'. It also details 'response_format' options ('json' vs 'markdown') and their effects, adding meaning not in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Strips non-digits from a vehicle registration token, reduces the numeric run with owner name and birth date, and returns the same harmony schema as mobile analysis.' It specifies the verb (strips, reduces, returns), resource (vehicle registration), and distinguishes it from sibling tools like 'asterwise_check_mobile_number' and 'asterwise_get_business_name_analysis' in the 'DO NOT CONFUSE WITH' section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'Does not analyse phone numbers (asterwise_check_mobile_number) or business names' and includes a 'DO NOT CONFUSE WITH' section listing specific sibling tools. It also recommends a 'BEFORE' tool (asterwise_get_numerology_profile) for owner baseline, offering clear workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_draw_tarot_cards (A)
Read-only

Draws N unique random cards from the 78-card deck using cryptographic randomness (Python secrets.SystemRandom). Every call is independent — there is no session state.

SECTION: WHAT THIS TOOL COVERS Random card selection for open readings, single-card daily pulls, or custom spread layouts. Uniqueness is guaranteed within a single draw — the same card cannot appear twice in one draw. The active_meaning field is pre-computed per orientation so callers do not need to branch on is_reversed.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None — interpret drawn cards using their active_meaning and active_keywords fields.

SECTION: INPUT CONTRACT count (int 1–78, default 1) — Number of unique cards to draw. Example: 1 (daily pull), 3 (simple reading), 10 (Celtic Cross), 78 (full deck shuffle). Values outside 1–78 are rejected locally with MCP INVALID_PARAMS. allow_reversed (bool, default false) — When true, each drawn card independently has a 50% chance of reversal (cryptographically random, not seeded).

SECTION: OUTPUT CONTRACT data.cards[] — array of count objects, each: card — full card object (same shape as asterwise_get_tarot_card) is_reversed (bool) active_meaning (string — orientation-appropriate interpretation) active_keywords[] (string array) position (null — no position for free draws; use spread endpoints for positional reads) position_meaning (null) data.count (int — echoed) data.allow_reversed (bool — echoed)

SECTION: RESPONSE FORMAT response_format=json — structured draw result. response_format=markdown — human-readable card report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP — cryptographic randomness, no ephemeris.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): — count < 1 or count > 78 → MCP INVALID_PARAMS immediately. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_card_of_the_day — deterministic daily card, same for all callers. asterwise_get_tarot_three_card_spread — positional read with named positions and meanings. asterwise_get_tarot_celtic_cross — 10-card positional spread.
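
Since the description names Python's secrets.SystemRandom, the draw can be sketched like this. The deck contents and result shape are placeholders; only the sampling and reversal logic reflect the stated behaviour:

```python
import secrets

DECK = [f"card_{i}" for i in range(78)]  # stand-in for the 78-card deck

def draw_cards(count: int = 1, allow_reversed: bool = False) -> list[dict]:
    if not 1 <= count <= 78:
        raise ValueError("count must be 1-78")  # mirrors the local INVALID_PARAMS check
    rng = secrets.SystemRandom()                # cryptographic randomness, not seeded
    return [{"card": card,
             "is_reversed": allow_reversed and rng.random() < 0.5}
            for card in rng.sample(DECK, count)]  # sample() guarantees uniqueness
```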

Parameters (JSON Schema)
count (optional)
allow_reversed (optional)
response_format (required)

Output Schema
result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds significant behavioral context beyond annotations: explains cryptographic randomness, fresh draws per call, and reversal probability. No contradiction with readOnlyHint annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections, concise sentences each adding value. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a draw tool: covers input parameters, output structure, and random behavior. Output schema is described in detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description adds meaning for 'count' and 'allow_reversed' (ranges, defaults, behavior) beyond schema. Does not mention 'response_format' parameter, but schema covers it. Good compensation for 0% schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Draw N random unique cards' from a specific deck using cryptographic randomness, distinguishing it from sibling tools like get_tarot_card or get_tarot_cards.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides detailed input contract but no explicit guidance on when or when not to use this tool compared to other tarot-based siblings. Implicit context is present but not explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_angel_number (A)
Read-only · Idempotent

Look up the meaning of a specific angel number by its sequence. Supported: 000, the repeating triples 111–999, 911, 1010, 1111, 1122, 1212, 1234, and the repeating quadruples 2222–9999.

SECTION: WHAT THIS TOOL COVERS Returns the theme, primary message, actionable guidance, and associated life areas for a specific angel number sequence. Each sequence carries distinct meaning in modern numerological tradition. 111 = manifestation portal. 444 = angelic protection. 999 = cycle completion. 1111 = awakening gateway. 555 = transformation in progress. Pass the number as a string exactly as it appears (e.g. '444' not 444).

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT number: string — the angel number sequence to look up. Examples: '111', '444', '1111', '911'.

SECTION: OUTPUT CONTRACT data.number (string) data.theme (string) data.message (string) data.guidance (string) data.areas[] (string array)

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unsupported number → 404, surfaces as MCP INTERNAL_ERROR. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_angel_number_today — today's collective daily angel number. asterwise_get_angel_number_personal — personal angel number from birth date. asterwise_get_number_meaning — Pythagorean numerology meaning for 1–33; different tradition.
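
The supported set is small enough to pre-check client-side before calling, avoiding the 404-as-INTERNAL_ERROR round trip. A hypothetical validator built from the list above:

```python
SPECIALS = {"000", "911", "1010", "1122", "1212", "1234"}

def is_supported(seq: str) -> bool:
    """True for '000', the special sequences, repeating triples '111'-'999',
    and repeating quadruples '1111'-'9999'."""
    if seq in SPECIALS:
        return True
    return len(seq) in (3, 4) and len(set(seq)) == 1 and seq != "0" * len(seq)

# is_supported("444") and is_supported("1111") hold; is_supported("1235") does not
```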

Parameters (JSON Schema)
number (required)
response_format (required)

Output Schema
result (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description adds significant behavioral detail beyond annotations: specifies return fields (theme, message, guidance, areas), error handling, input format requirements, and even notes the numerological tradition. Annotations already indicate readOnly, idempotent, non-destructive, and description aligns and complements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-structured into labeled sections and front-loaded with purpose. Some sections (e.g., COMPUTE CLASS, WORKFLOW) are minimally useful, adding slight verbosity, but overall efficient for the information content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Tool has 2 parameters, rich output schema, and annotations; description covers everything needed: supported numbers, output structure, error contracts, and disambiguation. Agent can fully understand when and how to invoke.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, so description compensates fully: explains 'number' parameter with examples and case-sensitivity, and explains 'response_format' enum values with clarity on output differences.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool looks up meaning of a specific angel number by sequence, lists supported numbers (e.g., 111, 444), and distinguishes from sibling tools in a dedicated section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description includes a 'DO NOT CONFUSE WITH' section that explicitly contrasts three sibling tools, explaining when to use each. Workflow sections indicate standalone operation, but lacks explicit 'when not to use' guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_angel_number_personal (A)
Read-only · Idempotent

Computes a personal angel number from a birth date using the Pythagorean Life Path as the base. Life Path 1-9 maps to the triple sequence (LP 4 → 444). Master numbers 11, 22, 33 map to 1111, 2222, 3333 respectively.

SECTION: WHAT THIS TOOL COVERS The personal angel number is the individual's primary energetic signature in angel number tradition. Derived using the digit-fusing Life Path method (same as asterwise_get_numerology_profile): all digits of the birth date are summed and reduced to a single digit or master number, then mapped to the corresponding triple or quadruple sequence. Returns the Life Path number, the angel sequence, and the full angel number interpretation.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile — confirm Life Path before calling. AFTER: None.

SECTION: INPUT CONTRACT date: Birth date in YYYY-MM-DD format. Example: '1994-03-31' name (optional): Person's name for personalisation.

SECTION: OUTPUT CONTRACT data.birth_date (string) data.life_path (int — 1-9 or master 11/22/33) data.angel_number (string — e.g. '333' for LP 3) data.number (string) data.theme (string) data.message (string) data.guidance (string) data.areas[] (string array) data.name (string or null — if provided)

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP — pure digit math, no ephemeris.

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Invalid date format → 422. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_angel_number_today — collective daily number from today's date, not birth date. asterwise_get_numerology_profile — full Pythagorean profile; this tool extracts only the Life Path → angel sequence mapping.

Parameters (JSON Schema)
Name | Required | Description | Default
date | Yes
name | No
response_format | Yes

Output Schema
Name | Required | Description
result | Yes
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds useful behavioral context: it is a FAST_LOOKUP (pure digit math, no ephemeris), provides error contracts (INVALID_PARAMS, INTERNAL_ERROR), and details output fields. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, OUTPUT CONTRACT, etc.). It is somewhat lengthy but each section serves a purpose. Could be slightly more concise, but it is well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the description covers all necessary aspects: purpose, workflow, input/output contracts with field details, error handling, compute class, and differentiation from siblings. The output schema is described in detail, so no gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description carries the full burden. It explains the 'date' parameter format with an example, the optional 'name' for personalization, and the 'response_format' enum with descriptions of both options. This adds substantial meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes a personal angel number from a birth date using Pythagorean Life Path. It identifies the specific resource (birth date) and method (digit-fusing Life Path), and explicitly distinguishes from sibling tools like asterwise_get_angel_number_today and asterwise_get_numerology_profile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: it recommends calling asterwise_get_numerology_profile before, and includes a 'DO NOT CONFUSE WITH' section listing alternative tools with clear differentiation. This helps the agent decide when to use this tool versus siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_angel_number_today (A)
Read-only · Idempotent

Returns today's angel number computed from the current date. All digits of the date are summed and reduced to a single digit (1-9), then the triple sequence of that digit is returned (e.g. digit 9 → angel number 999).

SECTION: WHAT THIS TOOL COVERS Angel numbers are repeated digit sequences interpreted as synchronistic messages in modern spiritual practice. The daily angel number is the same for all callers on the same date — it is a collective daily energy, not personal. Returns the angel number sequence, its theme, primary message, actionable guidance, and associated life areas. Life Path 3 → 333 (creative expression). Today's digit is derived from the date's digit sum.
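The daily computation is the same digit reduction, but without master numbers: per the description it always lands on a single digit 1-9. A minimal sketch (the function name is hypothetical):

```python
from datetime import date

def daily_angel_number(d: date) -> str:
    """Sum every digit of the ISO date, reduce strictly to 1-9,
    then repeat that digit three times (9 -> '999')."""
    total = sum(int(c) for c in d.isoformat() if c.isdigit())
    while total > 9:
        total = sum(int(c) for c in str(total))
    return str(total) * 3
```

For example, 2024-12-31 sums to 15, reduces to 6, and yields '666'; every caller on that date gets the same collective number.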

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_angel_number_personal — for a personalised angel number from birth date.

SECTION: INPUT CONTRACT No required parameters — today's date is used automatically.

SECTION: OUTPUT CONTRACT data.date (string — YYYY-MM-DD) data.daily_digit (int — reduced digit 1-9) data.angel_number (string — e.g. '999') data.number (string — same as angel_number) data.theme (string) data.message (string) data.guidance (string) data.areas[] (string array)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders a human-readable report. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP — pure math, no ephemeris.

SECTION: ERROR CONTRACT INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_angel_number — lookup for a specific number sequence by value. asterwise_get_angel_number_personal — personalised number from birth date Life Path.

Parameters (JSON Schema)
Name | Required | Description | Default
response_format | Yes

Output Schema
Name | Required | Description
result | Yes
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds value by detailing the pure math computation, the FAST_LOOKUP compute class, and that no external data is needed, which goes beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and front-loaded with purpose. However, it is somewhat lengthy with sections like 'COMPUTE CLASS' and 'ERROR CONTRACT', which are useful but could be more concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers all necessary aspects: input contract (no parameters required beyond response_format), output contract with field names and types, error contract, workflow, and compute class. It is fully self-contained and complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter (response_format) has an enum, and the description explains the difference between markdown and json, including that both return identical data and json is indented. Since schema coverage is 0%, the description fully compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns today's angel number computed from the current date, explains the computation method, and distinguishes from siblings asterwise_get_angel_number and asterwise_get_angel_number_personal.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'DO NOT CONFUSE WITH' section explicitly naming alternatives and their differences, plus a workflow section stating it is standalone and suggesting asterwise_get_angel_number_personal for a personalized query.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_ashtakavarga (A)
Read-only · Idempotent

Computes full Ashtakavarga bindu matrices, trikona and ekadhipatya reductions, and sarva totals from BirthData for transit support analysis.

SECTION: WHAT THIS TOOL COVERS BPHS Ashtakavarga: bhinna tables per contributing body (including Lagna), reduced variants, sarva and sarva_reduced arrays, after_trikona/after_ekadhipatya aggregates, birth_time_unknown flag. Threshold lore: twenty-eight or more sarva bindus support transits; below twenty-five implies friction. Not Shadbala totals (asterwise_get_chart_strength), though that bundle duplicates this data when needed.
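The 28/25 threshold lore can be expressed as a tiny classifier (a sketch using the figures quoted in the description; the 'neutral' middle band and the label names are my assumptions):

```python
def transit_support(sarva_bindus: int) -> str:
    """Classify a rashi's sarva bindu count: 28+ supports transits,
    below 25 implies friction, in between is left neutral."""
    if sarva_bindus >= 28:
        return "supportive"
    if sarva_bindus < 25:
        return "friction"
    return "neutral"
```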

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — confirm chart before AVK study. AFTER: asterwise_get_gochar — uses AVK scores in transit rows.

SECTION: INPUT CONTRACT BirthData only.

SECTION: OUTPUT CONTRACT data.bhinna — keys Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Lagna → each twelve-element int array (rashi index 0=Mesha .. 11=Meena), bindu 0..8 data.bhinna_after_trikona — seven planets (no Lagna), same array shape data.bhinna_after_ekadhipatya — same after further reduction data.sarva[] — twelve ints data.sarva_reduced[] — twelve ints data.after_trikona[] — twelve ints data.after_ekadhipatya[] — twelve ints data.birth_time_unknown (bool)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Rahu/Ketu are not classical bhinna contributors per tradition.

SECTION: DO NOT CONFUSE WITH asterwise_get_chart_strength — primary payload is Shadbala/Vimshopaka, though it embeds AVK too. asterwise_get_gochar — applies AVK scores to transits rather than exposing raw matrices.

Parameters (JSON Schema)
Name | Required | Description | Default
birth | Yes | Birth data for a single person.
response_format | Yes

Output Schema
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond annotations: it discloses computational complexity (MEDIUM_COMPUTE), explains that Rahu/Ketu are not classical contributors per tradition, and provides threshold lore (28+ bindus supports transits, below 25 implies friction). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), but could be more concise. Some sections like OUTPUT CONTRACT and RESPONSE FORMAT contain detailed technical specifications that might be better in schema documentation. However, the information is well-organized and front-loaded with core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and detailed output schema (implied by OUTPUT CONTRACT section), the description provides excellent contextual completeness. It covers purpose, workflow, input/output contracts, compute class, error handling, edge cases, and sibling tool differentiation. The output schema existence means the description doesn't need to explain return values in detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (the birth parameter has a detailed description, response_format lacks one). The description adds minimal parameter semantics beyond the schema: it states 'BirthData only' and mentions response_format options but doesn't explain parameter meanings or usage. With moderate schema coverage, baseline 3 is appropriate as the description doesn't significantly compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes Ashtakavarga bindu matrices, trikona and ekadhipatya reductions, and sarva totals from BirthData for transit support analysis. It specifies the exact astrological calculations performed and distinguishes from siblings like asterwise_get_chart_strength and asterwise_get_gochar by explaining what each sibling focuses on.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance: BEFORE recommends asterwise_get_natal_chart to confirm chart, AFTER mentions asterwise_get_gochar uses AVK scores. It also clearly distinguishes when NOT to use this tool versus asterwise_get_chart_strength and asterwise_get_gochar in the 'DO NOT CONFUSE WITH' section.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_ashtottari_dasha (A)
Read-only · Idempotent

Computes the 108-year Ashtottari Dasha tree with configurable depth (levels 1–5) and returns periods under data.periods.root with DD/MM/YYYY dates.

SECTION: WHAT THIS TOOL COVERS Provides the Ashtottari planetary sequence (eight grahas, Ketu excluded) with the same levels semantics as Vimshottari: deeper levels nest sub-periods in sub[]. Classical texts restrict use when Rahu is in Kendra/Trikona from Lagna lord (not in Lagna); this tool always returns a full timeline with no niyama_met flag — apply rules externally. It is not Vimshottari, Yogini, or Char Dasha.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — chart context before choosing between Ashtottari and Vimshottari. AFTER: asterwise_get_dasha — optional Vimshottari comparison.

SECTION: INPUT CONTRACT levels: same as asterwise_get_dasha (1–5), enforced locally before the API call. Periods use data.periods.root[], not data.periods[]. Dates in periods are DD/MM/YYYY.

SECTION: OUTPUT CONTRACT data.periods.root[] — Mahadasha objects: planet (string) start_jd (float) end_jd (float) start_date (string — DD/MM/YYYY) end_date (string — DD/MM/YYYY) sub[] — Antardasha objects, same shape, nested per levels data.birth_time_unknown (bool)
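A periods tree of this shape can be flattened for display with a short recursive walk (a sketch against the field names in the output contract; the sample planets and dates below are made up):

```python
def flatten_periods(periods, depth=0):
    """Walk a Mahadasha list recursively, yielding one row per period
    at every nesting level found in sub[]."""
    for p in periods:
        yield depth, p["planet"], p["start_date"], p["end_date"]
        yield from flatten_periods(p.get("sub", []), depth + 1)

# Hypothetical sample shaped like data.periods.root[] (dates invented).
tree = [{
    "planet": "Sun", "start_date": "01/01/2020", "end_date": "01/01/2026",
    "sub": [{"planet": "Moon", "start_date": "01/01/2020",
             "end_date": "16/10/2020"}],
}]
rows = list(flatten_periods(tree))
```

Each row carries its nesting depth, so a caller can indent Antardashas under their Mahadasha regardless of the levels requested.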

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (timing scales with levels similarly to asterwise_get_dasha)

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — levels < 1 or levels > 5 → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): None — remaining validation is upstream.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — No classical applicability check in the response; full timeline is always returned.

SECTION: DO NOT CONFUSE WITH asterwise_get_dasha — standard 120-year Vimshottari with data.periods[], not Ashtottari or data.periods.root[]. asterwise_get_yogini_dasha — 36-year Yogini cycle with yogini names on each row.

Parameters (JSON Schema)
Name | Required | Description | Default
birth | Yes | Birth data for a single person.
levels | No
response_format | Yes

Output Schema
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description adds that 'this tool always returns a full timeline with no niyama_met flag,' mentions 'MEDIUM_COMPUTE' timing, and details error contracts including 'INVALID_PARAMS' for levels out of range and 'INTERNAL_ERROR' for upstream failures. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with clear sections but is verbose with repetitive details. Sentences like 'Dates in periods are DD/MM/YYYY' and similar formatting notes appear multiple times. While well-organized, it could be more concise by eliminating redundancy while maintaining clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and detailed output schema, the description is highly complete. It covers purpose, usage guidelines, behavioral traits, input/output contracts, error handling, and sibling distinctions. The output schema exists, so the description appropriately focuses on contextual rather than structural details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at 33%, the description adds some parameter context but doesn't fully compensate. It explains that 'levels' has 'same semantics as asterwise_get_dasha (1-5)' and that 'Periods use data.periods.root[], not data.periods[]', but doesn't elaborate on 'birth' or 'response_format' parameters beyond what the schema provides. The baseline is appropriate given partial coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Computes the 108-year Ashtottari Dasha tree' with specific details about configurable depth (levels 1-5) and output structure. It explicitly distinguishes this from sibling tools like 'asterwise_get_dasha' (Vimshottari), 'asterwise_get_yogini_dasha', and 'asterwise_get_char_dasha' by stating 'It is not Vimshottari, Yogini, or Char Dasha.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'Classical texts restrict use when Rahu is in Kendra/Trikona from Lagna lord (not in Lagna)' and recommends 'apply rules externally.' It also includes a 'BEFORE' section recommending 'asterwise_get_natal_chart' for context and an 'AFTER' section suggesting 'asterwise_get_dasha' for comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_ayanamsha (A)
Read-only · Idempotent

Returns ayanamsha values for all four supported systems (Lahiri, Raman, KP, Tropical) for a given date.

SECTION: WHAT THIS TOOL COVERS Ayanamsha is the angular difference between the tropical and sidereal zodiacs due to the precession of the equinoxes (~50.3" per year). Returns each system's value in decimal degrees and DMS (degrees/minutes/seconds) format, with a description of each system's tradition. Lahiri (Chitrapaksha) is the Indian government standard. KP ayanamsha is used for Krishnamurti Paddhati calculations. Computed using Swiss Ephemeris DE431.

SECTION: WORKFLOW BEFORE: None — standalone reference. AFTER: None.

SECTION: INPUT CONTRACT date (optional): Date in YYYY-MM-DD format. Defaults to today.

SECTION: OUTPUT CONTRACT data.date (string) data.ayanamsha{}: lahiri, raman, kp, tropical — each: value_decimal (float), degrees (int), minutes (int), seconds (float), dms (string, e.g. '24° 13′ 29.82″'), description (string) data.note (string — recommendation to use Lahiri for Jyotish)
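The decimal-to-DMS split in the output contract follows from ordinary arithmetic (a sketch; the service's exact rounding behaviour is an assumption):

```python
def to_dms(value_decimal: float) -> tuple[int, int, float]:
    """Split decimal degrees into whole degrees, whole minutes,
    and seconds rounded to two decimals."""
    degrees = int(value_decimal)
    rem = (value_decimal - degrees) * 60
    minutes = int(rem)
    seconds = round((rem - minutes) * 60, 2)
    return degrees, minutes, seconds
```

For instance, 24.22495 decimal degrees splits into 24 degrees, 13 minutes, 29.82 seconds, matching the dms example above.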

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — applies Lahiri ayanamsha automatically to natal positions. asterwise_get_western_natal — uses tropical zodiac (ayanamsha = 0).

Parameters (JSON Schema)
Name | Required | Description | Default
date | No
response_format | Yes

Output Schema
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and idempotentHint. The description adds context: compute class (FAST_LOOKUP), error contract (INTERNAL_ERROR), and output format details. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections, front-loads the purpose, and avoids unnecessary repetition. It is slightly verbose but each section adds relevant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers purpose, usage, output structure (including DMS format), error handling, and differentiation from similar tools. The missing parameter semantics for 'response_format' are a minor gap, but overall the tool is well-documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning the description adds little to parameter understanding. It only mentions the 'date' parameter with format and default, but omits 'response_format' entirely. The schema has an enum for 'response_format' but the description does not explain it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The first sentence clearly states the tool returns ayanamsha values for four specific systems for a given date. The 'DO NOT CONFUSE WITH' section explicitly differentiates it from sibling tools that apply ayanamsha automatically, making purpose distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a 'DO NOT CONFUSE WITH' section naming two specific sibling alternatives and explaining their difference. The 'WORKFLOW' section states it is a standalone reference with no before/after steps, clarifying when to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_balance_number (B)
Read-only · Idempotent

Calculates the Balance number from the first letter of each name part, using Pythagorean values. A three-part name yields three initials summed and reduced. The Balance number describes how a person handles emotional crises and unresolved inner conflict.

SECTION: WHAT THIS TOOL COVERS The Balance number is consulted specifically in times of stress. It does not describe everyday personality but rather the instinctive crisis-management style. A person with Balance 1 instinctively becomes self-reliant under pressure; Balance 2 seeks partnership; Balance 8 attempts to assert control.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT name — Full legal name as used at birth. The first letter of each space-separated part contributes one value. Example: 'Arjun Mehta' → A(1) + M(4) = 5 Example: 'James Earl Carter' → J(1) + E(5) + C(3) = 9
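The initial-letter rule and the two worked examples above can be checked with a short Pythagorean reduction (an illustrative sketch, not the server's implementation; the function names are mine):

```python
def pythagorean(letter: str) -> int:
    """Pythagorean letter value: A=1 .. I=9, then the cycle
    repeats (J=1, S=1, Z=8)."""
    return (ord(letter.upper()) - ord("A")) % 9 + 1

def balance_number(name: str) -> int:
    """Sum the Pythagorean value of each name part's initial and
    reduce, keeping master numbers 11 and 22."""
    total = sum(pythagorean(part[0]) for part in name.split())
    while total > 9 and total not in (11, 22):
        total = sum(int(c) for c in str(total))
    return total
```

This reproduces both input-contract examples: 'Arjun Mehta' gives A(1) + M(4) = 5, and 'James Earl Carter' gives J(1) + E(5) + C(3) = 9.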

SECTION: OUTPUT CONTRACT data.number (int — Balance number 1–9, or master 11/22) data.is_master_number (bool)

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: upstream failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_expression_number — uses all letters, not just initials. asterwise_get_karmic_lessons — identifies absent digits across all letters.

Parameters (JSON Schema)
Name | Required | Description | Default
name | Yes
response_format | Yes

Output Schema
Name | Required | Description
result | Yes
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds that the Balance number 'describes how a person handles emotional crises and unresolved inner conflict,' which is minor interpretive context but does not significantly expand on behavioral traits beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences plus a short output contract note. It avoids unnecessary details and is front-loaded with the core purpose, though the output contract section could be integrated more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with two parameters and an output schema (implied by 'OUTPUT CONTRACT'), the description covers the basic purpose and output fields but lacks parameter details, usage context, and error scenarios. It is minimally complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, meaning the schema has no parameter descriptions. The tool description does not add any meaning about the 'name' parameter (e.g., full name format, which parts) or 'response_format' beyond the enum values in the schema, failing to compensate for the lack of schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the Balance number from the first letter of each name part, indicating how a person handles stress and unresolved issues. This specific verb and resource distinguishes it from numerous sibling tools that calculate other numerology numbers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide guidance on when to use this tool versus alternatives like other numerology tools. No explicit context or exclusions are mentioned, leaving the agent to infer usage based on the function's name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_biorhythm (A)
Read-only

Computes physical (23-day), emotional (28-day), and intellectual (33-day) biorhythm cycles for a birth date.

SECTION: WHAT THIS TOOL COVERS Returns cycle values (-1.0 to +1.0), percentage, phase label (High/Rising/Falling/Low), and critical day flags for each cycle. Critical days occur when a cycle crosses the zero line — these represent instability and vulnerability to poor judgment or accidents. Supports single-day snapshot (days=1) and multi-day range (up to 90 days). Formula: sin(2π × t / cycle_length) where t = days since birth. Also returns composite_score (average of three cycles) and has_critical_day flag. Biorhythm is a Western concept — Vedic equivalents are Tarabala and Chandrabala (use asterwise_get_nakshatra_prediction for the Vedic equivalent).
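The stated formula can be evaluated directly (a minimal sketch of the per-cycle value only; phase labels, percentages, and critical-day detection are left out):

```python
import math
from datetime import date

# Cycle lengths in days, as given in the description.
CYCLES = {"physical": 23, "emotional": 28, "intellectual": 33}

def biorhythm(birth: date, target: date) -> dict:
    """Evaluate sin(2*pi*t/length) per cycle, where t is the number
    of days since birth; each value lies in [-1.0, +1.0]."""
    t = (target - birth).days
    return {name: math.sin(2 * math.pi * t / length)
            for name, length in CYCLES.items()}
```

On the birth date itself (t = 0) every cycle sits at the zero crossing, i.e. a critical day by the definition above.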

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_nakshatra_prediction — for the Vedic personalized daily prediction.

SECTION: INPUT CONTRACT birth_date (required): Date of birth in YYYY-MM-DD format. target_date (optional): Date to compute for. Defaults to today. days (optional int 1-90): Number of consecutive days. Default 1.

SECTION: OUTPUT CONTRACT (single day) data.birth_date, data.target_date, data.days_since_birth (int) data.cycles{}: physical, emotional, intellectual — each: value (float -1.0 to +1.0), percentage (float), phase (string), is_critical (bool), cycle_length_days (int), description (string) data.critical_today[] (string array of cycle names that are critical) data.has_critical_day (bool) data.composite_score (float — average of three cycles) data.note (string — explains Vedic equivalents)

SECTION: OUTPUT CONTRACT (date range, days > 1) data.birth_date, data.start_date, data.end_date, data.days data.daily[] — array of day objects each with date, cycles{}, critical[], composite_score

SECTION: COMPUTE CLASS FAST_LOOKUP — pure math, no ephemeris.

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): — birth_date after target_date → INTERNAL_ERROR

INTERNAL_ERROR: — Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_nakshatra_prediction — Vedic Tarabala/Chandrabala daily prediction. asterwise_get_panchanga — Vedic daily panchanga elements, not biorhythm cycles.

Parameters (JSON Schema)
Name             Required  Description  Default
days             No
birth_date       Yes
target_date      No
response_format  Yes

Output Schema (JSON Schema)
Name    Required  Description
result  Yes

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, but the description adds extensive behavioral context: formula, critical day explanation, output structure (single and multi-day), error contracts, and compute class. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and headings, making it easy to parse. However, it is relatively verbose; some redundancy exists (e.g., repeating cycle details in multiple sections). Still, every section adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple cycles, date ranges, output structure, error handling, sibling differentiation), the description is remarkably complete. It covers all essential aspects despite the output schema existing, and even includes compute class and error contracts.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully explain parameters. It covers birth_date (format), target_date (default), and days (range/default) in the 'INPUT CONTRACT' section. However, the response_format parameter (enum: markdown/json) is not mentioned in the description, leaving a gap in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it computes three biorhythm cycles (physical, emotional, intellectual) for a birth date. It uses specific verbs and resources, and includes a 'DO NOT CONFUSE WITH' section that explicitly distinguishes it from sibling tools like asterwise_get_nakshatra_prediction and asterwise_get_panchanga.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit BEFORE/AFTER workflow guidance and a dedicated 'DO NOT CONFUSE WITH' section naming two sibling tools with their purposes. This makes it clear when to use this tool versus alternatives, such as for Western biorhythm vs. Vedic Tarabala.

asterwise_get_business_name_analysis (A)
Read-only, Idempotent

Reduces a business name to Expression and root digits against a founder birth date and returns thematic suitability lists plus a harmony score.

SECTION: WHAT THIS TOOL COVERS Allows numerals and punctuation in the brand string — non-letters drop before letter-value reduction. Returns business-facing favourable domains, caution copy, and scoring. Not personal spelling optimisation (asterwise_get_name_correction) nor vehicle or phone checks.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_name_correction — if the entity is a person, not a brand.

SECTION: INPUT CONTRACT Special characters and digits are acceptable; reduction strips non-letters per upstream rules. No local validation on name or date.

SECTION: OUTPUT CONTRACT data.input (string — business name as submitted) data.input_type (string — 'business_name') data.expression_number (int — compound before reduction) data.single_digit (int — reduced root; equals expression_number for master totals eleven, twenty-two, thirty-three) data.is_master (bool) data.theme (string) data.favourable_for[] (string array — suitable domains) data.caution (string) data.harmony_score (int — one through ten)
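The reduction behaviour described above (non-letters dropped, compound total, master totals 11/22/33 left unreduced) can be sketched as follows. This is illustrative only: the upstream letter chart is not documented here, so the sketch assumes the standard Pythagorean mapping (A=1 through I=9, then repeating).

```python
def expression_number(business_name: str) -> tuple[int, int]:
    """Return (compound, root). Non-letters drop before reduction, and master
    totals 11, 22, 33 are kept unreduced, matching the documented contract."""
    letters = [c for c in business_name.upper() if c.isalpha()]
    # Assumed Pythagorean values: A=1 ... I=9, J=1 ... R=9, S=1 ... Z=8.
    compound = sum((ord(c) - ord("A")) % 9 + 1 for c in letters)
    root = compound
    while root > 9 and root not in (11, 22, 33):
        root = sum(int(d) for d in str(root))
    return compound, root
```

Because digits and punctuation simply vanish before reduction, a brand string like "Acme 2.0" scores identically to "Acme" under this scheme.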

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Brand strings with emojis or digits still flow through upstream stripping rules.

SECTION: DO NOT CONFUSE WITH asterwise_get_name_correction — personal spelling alternatives, not corporate Expression scoring. asterwise_check_mobile_number — numeric line analysis, not brand letters.

Parameters (JSON Schema)
Name             Required  Description  Default
date             Yes
business_name    Yes
response_format  Yes

Output Schema (JSON Schema)
Name    Required  Description
result  Yes

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description details computational characteristics (FAST_LOOKUP class), input handling (strips non-letters per upstream rules, accepts special characters/digits), error contracts (upstream validation, INTERNAL_ERROR for API failures), and edge cases (emojis/digits flow through stripping). It doesn't contradict the annotations, which already cover safety aspects.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, it could be slightly more concise—some sections like OUTPUT CONTRACT list all fields in detail, which might be redundant given the output schema exists. However, every section adds value, and the information is front-loaded with the core purpose first.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (numerology analysis with multiple outputs) and the presence of annotations and an output schema, the description is exceptionally complete. It covers purpose, usage guidelines, behavioral details, parameter semantics, output structure, error handling, and sibling differentiation. The output schema likely documents return values, so the description appropriately focuses on explaining the analysis logic and context rather than repeating schema information.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all three parameters' semantics. It clarifies that 'business_name' accepts numerals/punctuation with non-letters stripped, 'date' is a founder birth date, and 'response_format' controls output presentation (json for programmatic parsing, markdown for human-readable reports). The OUTPUT CONTRACT section further details what each parameter influences in the results.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Reduces a business name to Expression and root digits against a founder birth date and returns thematic suitability lists plus a harmony score.' It specifies the exact operation (reduction against a date), distinguishes it from sibling tools (asterwise_get_name_correction for personal spelling, asterwise_check_mobile_number for numeric analysis), and identifies the specific resource (business name).

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. The 'DO NOT CONFUSE WITH' section names specific sibling tools and explains their different purposes (personal spelling vs. corporate Expression scoring, numeric line analysis vs. brand letters). It also includes workflow sections indicating this is standalone and when to use asterwise_get_name_correction afterward for personal entities.

asterwise_get_chaldean_numerology (A)
Read-only, Idempotent

Reduces a name and birth date through the Chaldean letter-value system and returns name, birth, and combined compound analyses with themes and keywords.

SECTION: WHAT THIS TOOL COVERS Chaldean assigns letters values one through eight (nine treated as sacred/unassigned in tradition). Response includes data.system 'chaldean', echoed full_name and birth_date, plus three parallel number objects (name, birth, compound) each with raw compound, reduced root, theme, keywords, interpretation. It does not output Pythagorean Life Path blocks (asterwise_get_numerology_profile) or Lo Shu grids.
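The letter-value scheme described above can be sketched with the commonly published Chaldean chart (letters take values 1 through 8; no letter maps to 9). The server's exact table is not shown here, so treat this as an illustration, not the upstream implementation.

```python
# Commonly published Chaldean chart: 1=AIJQY, 2=BKR, 3=CGLS, 4=DMT,
# 5=EHNX, 6=UVW, 7=OZ, 8=FP. No letter carries the value 9.
CHALDEAN_VALUES = {
    letter: value
    for value, letters in {
        1: "AIJQY", 2: "BKR", 3: "CGLS", 4: "DMT",
        5: "EHNX", 6: "UVW", 7: "OZ", 8: "FP",
    }.items()
    for letter in letters
}

def chaldean_name_number(full_name: str) -> dict:
    """Return the raw compound total and its reduced single-digit root."""
    raw = sum(CHALDEAN_VALUES.get(c, 0) for c in full_name.upper() if c.isalpha())
    reduced = raw
    while reduced > 9:
        reduced = sum(int(d) for d in str(reduced))
    return {"raw": raw, "reduced": reduced}
```

Note that the same name will generally reduce differently under the Pythagorean chart used by asterwise_get_numerology_profile, which is why the edge-case warning below says never to merge the two systems blindly.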

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_numerology_profile — compare against Pythagorean cores if needed.

SECTION: INPUT CONTRACT name and date forwarded as-is; no local validation.

SECTION: OUTPUT CONTRACT data.system (string — 'chaldean') data.full_name (string) data.birth_date (string) data.name_number: raw (int — compound) reduced (int — root) theme (string) keywords[] (string array) interpretation (string) data.birth_number — same shape as data.name_number data.compound_number — same shape; raw combines name and birth

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Chaldean and Pythagorean numbers disagree by design — never merge blindly.

SECTION: DO NOT CONFUSE WITH asterwise_get_numerology_profile — Pythagorean Life Path / Expression stack, not Chaldean compounds. asterwise_get_lo_shu_grid — digit placement magic square, not Chaldean name reduction.

Parameters (JSON Schema)
Name             Required  Description  Default
date             Yes
name             Yes
response_format  Yes

Output Schema (JSON Schema)
Name    Required  Description
result  Yes

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable behavioral context: it specifies the compute class (MEDIUM_COMPUTE), explains that validation is upstream with no local validation, details error handling (INVALID_PARAMS, INTERNAL_ERROR), and warns about edge cases (Chaldean/Pythagorean disagreement). This goes significantly beyond what annotations provide.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), but it's quite lengthy with some redundancy (e.g., OUTPUT CONTRACT and RESPONSE FORMAT overlap). While informative, it could be more concise by combining related sections or trimming repetitive details.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (numerology calculation with multiple outputs) and rich annotations/output schema, the description is exceptionally complete. It covers purpose, differentiation, workflow, input/output contracts, response formats, compute class, error handling, and edge cases. With an output schema present, it appropriately focuses on behavioral aspects rather than re-describing return values.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate. It explains that 'name and date forwarded as-is; no local validation' and details the 'response_format' parameter's effect (json vs markdown output modes). While it doesn't specify exact formats for name/date, it provides meaningful context about parameter handling that the schema lacks.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'reduces a name and birth date through the Chaldean letter-value system' and 'returns name, birth, and combined compound analyses with themes and keywords.' It specifically distinguishes from sibling tools like 'asterwise_get_numerology_profile' (Pythagorean) and 'asterwise_get_lo_shu_grid' (digit placement), providing explicit differentiation.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'DO NOT CONFUSE WITH' section explicitly names alternatives (asterwise_get_numerology_profile, asterwise_get_lo_shu_grid) and clarifies when not to use this tool. The 'WORKFLOW' section provides before/after context, stating it's standalone and can be followed by asterwise_get_numerology_profile for comparison.

asterwise_get_char_dasha (A)
Read-only, Idempotent

Computes Jaimini Char Dasha from birth data and returns sign lords as period rulers with ISO-dated Maha and Antar sequences plus karaka mappings.

SECTION: WHAT THIS TOOL COVERS Uses the Jaimini system where rashis (not grahas) rule time periods, including Atmakaraka, eight karakas, current Maha/Antar labels in Sanskrit signs, and a period array with nested antardashas. It does not return Vimshottari (asterwise_get_dasha), Yogini, or Ashtottari timelines.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — contextualises the chart before interpreting sign-based lords. AFTER: asterwise_get_dasha — optional Vimshottari cross-check for the same native.

SECTION: INPUT CONTRACT Period start_date and end_date in data.periods[] are YYYY-MM-DD (ISO), unlike asterwise_get_dasha which uses DD/MM/YYYY in its tree. All other parameters follow the BirthData global contract.
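Because Char Dasha periods use ISO dates while asterwise_get_dasha emits DD/MM/YYYY, a client comparing the two timelines needs to normalise before sorting or diffing. A minimal sketch of such a client-side helper (the helper name is hypothetical, not part of the server):

```python
from datetime import date, datetime

def parse_period_date(raw: str) -> date:
    """Accept both period-date formats seen across the dasha tools:
    YYYY-MM-DD (Char Dasha, ISO) and DD/MM/YYYY (Vimshottari tree)."""
    for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
        try:
            return datetime.strptime(raw, fmt).date()
        except ValueError:
            continue  # try the next known format
    raise ValueError(f"unrecognised period date: {raw!r}")
```

With both timelines normalised to datetime.date, period boundaries from the two systems can be compared directly.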

SECTION: OUTPUT CONTRACT data.atmakaraka (string — planet name) data.start_rashi (string — Sanskrit rashi name) data.start_rashi_index (int) data.karakas{} — object mapping Jaimini karaka keys to planet names data.current_mahadasha (string — Sanskrit rashi name, e.g. 'Mithuna') data.current_antardasha (string — Sanskrit rashi name) data.periods[] — Mahadasha objects: rashi (string) rashi_index (int) years (int) start_date (string — YYYY-MM-DD ISO) end_date (string — YYYY-MM-DD ISO) antardashas[] — same shape as one level (no further nesting)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Two planets at the same degree: classical tie-break rules apply upstream; no separate error.

SECTION: DO NOT CONFUSE WITH asterwise_get_dasha — Vimshottari planet lords with DD/MM/YYYY in periods[], not sign-based Jaimini Char Dasha. asterwise_get_yogini_dasha — eight Yoginis and data.periods.root[], not Jaimini signs.

Parameters (JSON Schema)
Name             Required  Description                     Default
birth            Yes       Birth data for a single person.
response_format  Yes

Output Schema (JSON Schema)
Name    Required  Description
result  Yes

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this: it specifies compute class (MEDIUM_COMPUTE), error contracts (INVALID_PARAMS, INTERNAL_ERROR), edge cases (tie-break rules), and output format options (json vs markdown), enhancing behavioral understanding without contradicting annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), making it easy to navigate. While comprehensive, some sections (like OUTPUT CONTRACT listing all fields) are detailed but necessary for clarity, keeping it front-loaded and efficient without wasted sentences.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with nested outputs), the description is highly complete. It covers purpose, usage guidelines, behavioral details (compute class, errors), input/output contracts, and sibling distinctions. With annotations providing safety info and an output schema implied by the OUTPUT CONTRACT section, no significant gaps remain for effective agent use.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with detailed descriptions for birth parameters but none for response_format. The description compensates by explaining the response_format parameter's effect in the 'RESPONSE FORMAT' section, clarifying that it controls serialization without altering data. However, it doesn't add semantics for birth parameters beyond the schema, keeping the score from reaching 5.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a specific verb ('Computes') and resource ('Jaimini Char Dasha from birth data'), clearly stating what the tool does. It explicitly distinguishes from siblings like asterwise_get_dasha (Vimshottari) and asterwise_get_yogini_dasha, making the purpose distinct and well-defined.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit guidance on when to use this tool vs alternatives in the 'DO NOT CONFUSE WITH' section, naming specific sibling tools and their differences. It also provides workflow recommendations ('BEFORE' and 'AFTER' sections) and clarifies input/output contracts, offering comprehensive usage context.

asterwise_get_chart_strength (A)
Read-only, Idempotent

Aggregates Shadbala, Bhavbala, Vimshopaka (with per-varga contributions), embedded sixteen vargas, Ashtakavarga, karaka maps, and graha yuddha pairs from BirthData.

SECTION: WHAT THIS TOOL COVERS BPHS strength chapter outputs: six-planet Shadbala breakdowns with sthana/kala detail objects, twelve-house Bhavbala totals, Vimshopaka scores with threshold bands and per-D1..D60 fractions, full divisional chart mirror of asterwise_get_divisional_chart, Ashtakavarga mirror of asterwise_get_ashtakavarga, karaka_to_planet / planet_to_karaka, and graha_yuddha.war_pairs. It does not label named yogas (asterwise_get_yogas) or doshas (asterwise_get_doshas).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — contextualises houses before reading bala tables. AFTER: asterwise_get_yogas — optional configuration pass after strength review.

SECTION: INPUT CONTRACT BirthData only; no extra toggles.

SECTION: OUTPUT CONTRACT data.shadbala — keyed by Sun..Saturn (excludes Rahu/Ketu): planet (string) sthana_bala, dig_bala, dig_bala_virupas, dig_bala_rupas, kala_bala, cheshta_bala, naisargika_bala, drik_bala, yuddhabala_adjustment, total (float), ratio (float), required_minimum (float), is_purna_bala (bool) sthana_bala_details — { uchcha, saptavargaja, ojhayugma, kendradi, drekkana, total } kala_bala_details — { nathonnatha, paksha, tribhaga, vara, hora, ayana, total } data.bhavbala — keys '1'..'12': bhavadhipati_bala, bhava_dig_bala, bhava_drik_bala, total (float) data.vimshopaka_bala — keyed by planet including Rahu/Ketu: vimshopaka_score (float, max 20) threshold (string — 'zero_capacity', 'moderate', 'good', or 'extremely_auspicious') per_varga — keyed by D1..D60: { contribution (float), fraction (float) } data.divisional_charts — same nested schema as asterwise_get_divisional_chart data.ashtakavarga — same schema as asterwise_get_ashtakavarga data.karakas — { karaka_to_planet{}, planet_to_karaka{} } data.graha_yuddha — { war_pairs[] } Threshold guide: 15+ extremely_auspicious, 10–15 good, 5–10 moderate, below 5 zero_capacity.
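The threshold guide at the end of the contract maps a Vimshopaka score to a band label. A client-side classifier might look like the sketch below; the exact tie-breaking at the 5, 10, and 15 boundaries is an assumption, since "10–15 good" and "15+ extremely_auspicious" overlap at 15.

```python
def vimshopaka_band(score: float) -> str:
    """Map a Vimshopaka score (max 20) to the documented threshold labels.
    Boundary handling at exactly 5, 10, and 15 is assumed, not documented."""
    if score >= 15:
        return "extremely_auspicious"
    if score >= 10:
        return "good"
    if score >= 5:
        return "moderate"
    return "zero_capacity"
```

This mirrors the threshold string returned per planet in data.vimshopaka_bala, so a consumer can re-derive or sanity-check the band from the raw score.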

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~800ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData only; Pydantic handles field bounds.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Large payload due to embedded vargas and Ashtakavarga duplicates.

SECTION: DO NOT CONFUSE WITH asterwise_get_yogas — boolean yoga catalogue, not numeric bala. asterwise_get_ashtakavarga — standalone AVK when strength bundle is not needed.

Parameters (JSON Schema)
Name             Required  Description                     Default
birth            Yes       Birth data for a single person.
response_format  Yes

Output Schema (JSON Schema)
Name    Required  Description
result  Yes

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, etc., but the description adds valuable context: it discloses compute class ('MEDIUM_COMPUTE (~800ms)'), error handling details (INVALID_PARAMS vs INTERNAL_ERROR), and edge cases ('Large payload due to embedded vargas and Ashtakavarga duplicates'). This goes beyond annotations without contradicting them.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), making it easy to scan. Every sentence adds value—no redundant information. It's appropriately sized for a complex tool, with front-loaded purpose and detailed sections for depth.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich output schema (detailed in description), and annotations, the description is highly complete. It covers purpose, usage, behavior, output structure, errors, and sibling distinctions. The output schema is fully described in the description, so no additional return value explanation is needed.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, but the description adds minimal parameter semantics beyond the schema. It states 'BirthData only; no extra toggles' and mentions the response_format parameter's effect on output serialization. However, it doesn't explain the birth parameter's components or provide additional context for the two parameters, so it only partially compensates for the schema coverage gap.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'aggregates' multiple strength metrics from BirthData, listing specific components like Shadbala, Bhavbala, Vimshopaka, etc. It clearly distinguishes from siblings by specifying what it does NOT cover (yogas or doshas), making the purpose specific and differentiated.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit workflow guidance: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart' and 'AFTER: asterwise_get_yogas — optional configuration pass after strength review.' It also clearly distinguishes from alternatives like asterwise_get_yogas and asterwise_get_ashtakavarga, providing clear when-to-use and when-not-to-use instructions.

asterwise_get_choghadiya (A)
Read-only, Idempotent

Splits a solar day into sixteen Choghadiya segments from sunrise/sunset at a location and labels each slot's quality, ruler, and local clock bounds.

SECTION: WHAT THIS TOOL COVERS Computes eight day and eight night Choghadiya periods with type (auspicious, highly auspicious, inauspicious), ruling planet, suitability text, and is_current flags. Boundaries follow actual sunrise/sunset for the timezone, so DST is implicit. It does not rank multi-day windows for named activities (asterwise_get_muhurta) or return full Panchanga limbs (asterwise_get_panchanga).
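The segmentation above follows from the sunrise/sunset bounds: the day half splits into eight equal slots, as does the night half. The sketch below shows only the time arithmetic; the slot names, qualities, and ruling planets follow a weekday-dependent sequence that this illustration does not reproduce.

```python
from datetime import datetime, timedelta

def choghadiya_bounds(sunrise: datetime, sunset: datetime,
                      next_sunrise: datetime) -> dict:
    """Eight equal day slots (sunrise to sunset) and eight night slots
    (sunset to the next sunrise). Names/rulers are intentionally omitted."""
    def eight_slots(start: datetime, end: datetime):
        step = (end - start) / 8
        return [(start + i * step, start + (i + 1) * step) for i in range(8)]
    return {
        "day": eight_slots(sunrise, sunset),
        "night": eight_slots(sunset, next_sunrise),
    }
```

Because the slots are fractions of the actual daylight and night spans, their clock lengths drift with the seasons rather than staying fixed at 90 minutes.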

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_rahu_kaal — optional inauspicious band overlay for the same date.

SECTION: INPUT CONTRACT LocationInput enforces YYYY-MM-DD date and lat/lon ranges locally. All parameters are defined in the tool schema.

SECTION: OUTPUT CONTRACT data.date (string) data.sunrise (string — HH:MM local) data.sunset (string — HH:MM local) data.day_choghadiya[] — eight objects: period (int — 1–8) name (string) type (string — 'auspicious', 'highly auspicious', or 'inauspicious') ruling_planet (string) suitable_for (string) start (string — HH:MM local) end (string — HH:MM local) is_current (bool) data.night_choghadiya[] — eight objects with the same fields

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid LocationInput date or coordinates → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Slots track sunrise/sunset, not fixed civil slices.

SECTION: DO NOT CONFUSE WITH asterwise_get_hora — twenty-four planetary horas, not sixteen Choghadiya. asterwise_get_muhurta — scored windows across a date range for named activities.

Parameters (JSON Schema):
  location (required): For tools that need location but not birth time.

Output Schema (JSON Schema):
  result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description adds that boundaries follow actual sunrise/sunset with DST implicit, slots track sunrise/sunset not fixed civil slices, and it's classified as FAST_LOOKUP. It also details error contracts (INVALID_PARAMS, INTERNAL_ERROR) and edge cases, providing comprehensive behavioral transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections like OUTPUT CONTRACT and RESPONSE FORMAT are detailed but necessary for clarity. It avoids redundancy and is front-loaded with the core purpose, though it could be slightly more concise in the output details given the presence of an output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and output schema, the description is highly complete. It covers purpose, usage guidelines, behavioral traits, input/output contracts, error handling, and sibling distinctions. The output schema exists, so the description doesn't need to explain return values in detail, but it still provides useful context on data structure and response formats, ensuring the agent has all necessary information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description mentions that LocationInput enforces YYYY-MM-DD date and lat/lon ranges locally, and all parameters are defined in the tool schema. It adds minimal semantic context beyond the schema, such as noting DST is implicit due to timezone handling, but doesn't elaborate on parameter usage or constraints beyond what's already documented in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: it splits a solar day into sixteen Choghadiya segments from sunrise/sunset at a location and labels each slot's quality, ruler, and local clock bounds. It specifically distinguishes this tool from sibling tools like asterwise_get_muhurta (multi-day windows for named activities) and asterwise_get_panchanga (full Panchanga limbs), providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states that this tool is standalone (no prerequisites) and mentions asterwise_get_rahu_kaal as an optional follow-up. It also clearly distinguishes it from asterwise_get_hora (planetary horas) and asterwise_get_muhurta (scored windows for activities), giving clear alternatives and exclusions.


asterwise_get_compatibility (Grade: A)
Read-only, Idempotent

Scores North Indian Ashtakoota (36-point Guna Milan) for two charts and returns koota breakdown, dosha flags, classical vetoes, mangal cross-check, and narrative guidance.

SECTION: WHAT THIS TOOL COVERS Implements BPHS-style Ashtakoota with eight weighted kootas (Varna through Nadi), dosha booleans and cancellations, Rajju/Vedha veto objects, supplementary Mahendra and Stree Deergha checks, mangal compatibility, and a structured narrative. It is not Dashakoot (asterwise_get_dashakoot), Tamil Porutham (asterwise_get_porutham), or twelve-koota Thirumana (asterwise_get_thirumana_porutham).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart per person — understand charts before interpreting scores. AFTER: asterwise_get_papasamyam — optional malefic balance overlay.

SECTION: INPUT CONTRACT Two BirthData objects follow the global contract (unknown midnight time accepted without flag). All scoring is computed upstream from those payloads.

SECTION: OUTPUT CONTRACT
  data.total_score (float — out of 36)
  data.breakdown{} — keys Varna, Vashya, Tara, Yoni, GrahaMaitri, Gana, Bhakoot, Nadi — each value float score
  data.compatibility_level (string — e.g. 'Average', 'Good')
  data.doshas{}: varna_dosha (bool), bhakoot_dosha (bool), nadi_dosha (bool), bhakoot_dosha_type (string or null)
  data.dosha_cancellations{}: varna_dosha_cancelled (bool), bhakoot_dosha_cancelled (bool), nadi_dosha_cancelled (bool)
  data.analysis: major_doshas (string or object per upstream), cancelled_doshas (string or object per upstream), recommendation (string)
  data.classical_vetoes: has_veto (bool), vedha — { present (bool), description (string) }, rajju — { present (bool), description (string), rajju_type (string — 'Siro', 'Kantha', 'Udara', 'Kati', or 'Pada') }, veto_note (string)
  data.mangal_compatibility: person_a_manglik (bool), person_a_severity (string or value per upstream), person_b_manglik (bool), person_b_severity (string or value per upstream), match_status (string), description (string)
  data.supplementary_checks: mahendra — { is_auspicious (bool), distance (int), description (string) }, stree_deergha — { distance (int), quality (string), is_favorable (bool), description (string) }
  data.compatibility_narrative: overall (string), strengths[] (string array), concerns[] — each { veto_type, severity, description, nakshatra_pair or body_part }, recommendation (string)
  data.birth_time_unknown (bool)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — malformed birth payloads surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Rajju and Vedha conditions may be present alongside high scores — read classical_vetoes before conclusions.
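The veto-before-score rule above can be sketched as a small client-side guard. The payload and helper name here are invented; only the field names follow the documented output contract.

```python
# Sketch: veto-aware reading of a compatibility result. A high
# total_score must not be reported as a clean match while
# classical_vetoes.has_veto is true.
def summarise_match(data):
    score = data["total_score"]
    vetoes = data["classical_vetoes"]
    if vetoes["has_veto"]:
        reasons = [k for k in ("rajju", "vedha") if vetoes[k]["present"]]
        return f"{score}/36 but classical veto present ({', '.join(reasons)})"
    return f"{score}/36 ({data['compatibility_level']})"

sample = {
    "total_score": 29.5,
    "compatibility_level": "Good",
    "classical_vetoes": {
        "has_veto": True,
        "rajju": {"present": True, "description": "Same Rajju",
                  "rajju_type": "Kantha"},
        "vedha": {"present": False, "description": ""},
        "veto_note": "Rajju dosha overrides the numeric score.",
    },
}
print(summarise_match(sample))
# 29.5/36 but classical veto present (rajju)
```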

SECTION: DO NOT CONFUSE WITH asterwise_get_dashakoot — ten-point South Indian extension, not 36-point Ashtakoota. asterwise_get_porutham — Tamil ten-porutham pass/fail grid, different schema.

Parameters (JSON Schema):
  person1 (required): Birth data for a single person.
  person2 (required): Birth data for a single person.
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond this: it specifies the compute class (MEDIUM_COMPUTE), explains that unknown midnight time is accepted, describes the two output formats (json/markdown), and details error handling including edge cases about vetoes. No contradiction with annotations exists.

Conciseness: 3/5

The description is well-structured with clear sections but is quite lengthy. While each section serves a purpose, some redundancy exists (e.g., output details appear in both 'OUTPUT CONTRACT' and 'RESPONSE FORMAT'). The front-loaded purpose statement is excellent, but subsequent sections could be more concise while maintaining clarity.

Completeness: 5/5

Given the tool's complexity (astrological compatibility scoring with multiple components), rich output schema, and comprehensive annotations, the description provides exceptional completeness. It covers purpose, differentiation, workflow, input/output contracts, response formats, compute class, error handling, and specific warnings. With an output schema present, it appropriately doesn't need to explain return values in detail.

Parameters: 4/5

With 67% schema description coverage (2 of 3 parameters well-documented in schema), the description adds meaningful context. The 'INPUT CONTRACT' section explains that two BirthData objects follow a global contract and that unknown midnight time is accepted. While it doesn't detail individual parameter semantics beyond what the schema provides, it clarifies the overall input structure and constraints.

Purpose: 5/5

The description immediately states the specific verb ('Scores') and resource ('North Indian Ashtakoota (36-point Guna Milan) for two charts') with clear output details. It explicitly distinguishes from three named sibling tools (asterwise_get_dashakoot, asterwise_get_porutham, asterwise_get_thirumana_porutham) in the 'WHAT THIS TOOL COVERS' section, providing precise differentiation.

Usage Guidelines: 5/5

The 'WORKFLOW' section provides explicit before/after recommendations (asterwise_get_natal_chart before, asterwise_get_papasamyam after). The 'DO NOT CONFUSE WITH' section names three specific alternatives with explanations of why they're different. The 'Edge cases' note warns about vetoes invalidating high scores.

asterwise_get_crystal (Grade: A)
Read-only, Idempotent

Look up a specific crystal by slug or name (case-insensitive). Returns full detail including dual Vedic/Western planetary assignments, all healing properties, and any safety cautions.

SECTION: WHAT THIS TOOL COVERS Returns one crystal entry from the 50-crystal database. Accepts URL-safe slugs (e.g. 'blue-sapphire', 'rose-quartz') or display names (e.g. 'Blue Sapphire', 'Rose Quartz'). The caution field carries critical safety information — Blue Sapphire and Hessonite Garnet carry CRITICAL cautions about Jyotish use without qualified practitioner assessment. Malachite has a CRITICAL toxicity caution. Always surface the caution field to end users.
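Since the caution field must always reach end users, a client can add a minimal guard when rendering. The crystal dict and helper name below are invented; per the output contract, caution is null when there is no warning.

```python
# Minimal sketch: always surface the caution field before presenting a
# crystal to an end user. The payload here is invented for illustration.
def render_crystal(crystal):
    lines = [crystal["name"]]
    if crystal.get("caution"):  # None / null means no warning
        lines.append(f"CAUTION: {crystal['caution']}")
    return "\n".join(lines)

card = render_crystal({
    "name": "Malachite",
    "caution": "Toxic in raw form; use polished stones only.",
})
print(card)
```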

SECTION: WORKFLOW BEFORE: None — standalone or after asterwise_get_gemstone_recommendations. AFTER: None.

SECTION: INPUT CONTRACT name: Crystal slug or display name. Examples: 'amethyst', 'blue-sapphire', 'Cat's Eye Chrysoberyl'

SECTION: OUTPUT CONTRACT Same shape as each crystal in asterwise_get_crystals — full single crystal object.

SECTION: RESPONSE FORMAT response_format=json — single crystal object. response_format=markdown — formatted detail card. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unknown crystal name → 404, surfaces as MCP INTERNAL_ERROR. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_crystals — full 50-crystal catalogue. asterwise_get_crystal_by_planet — all crystals for a Vedic planet. asterwise_get_gemstone_recommendations — natal chart-based gem recommendations (Ratna Shastra), different from this database.

Parameters (JSON Schema):
  name (required)
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 5/5

Annotations declare readOnlyHint, idempotentHint, openWorldHint, and non-destructive. The description adds critical behavioral details: the caution field carries safety warnings for specific crystals, and the error contract documents 404 for unknown names, which goes beyond what annotations provide.

Conciseness: 4/5

The description is well-structured with clear sections and front-loaded purpose. While it is somewhat lengthy, all content is relevant and aids understanding. Minor deduction for including seemingly extraneous metadata like 'COMPUTE CLASS: FAST_LOOKUP'.

Completeness: 5/5

Given the tool's simplicity (2 params) and rich annotations, the description covers everything: input contract, output contract, error contract, workflow, and differentiation from siblings. It is fully sufficient for correct invocation.

Parameters: 5/5

Input schema has 0% description coverage, but the description fully explains both parameters: 'name' with slug/display name examples and 'response_format' with markdown/json options. This compensates completely for the lack of schema descriptions.

Purpose: 5/5

The description clearly states the tool looks up a specific crystal by slug or name, returning full details including planetary assignments, healing properties, and cautions. It explicitly distinguishes from sibling tools like asterwise_get_crystals and asterwise_get_crystal_by_planet.

Usage Guidelines: 5/5

The description provides a 'DO NOT CONFUSE WITH' section listing alternatives, a workflow section stating it can be standalone or after gemstone recommendations, and explicit before/after context.

asterwise_get_crystal_by_planet (Grade: A)
Read-only, Idempotent

Returns all crystals associated with a specific Vedic planet. Results are sorted with primary Navaratna gems first, then Uparatna substitutes. Only classically verified Vedic assignments are returned — crystals with no classical text support are excluded.

SECTION: WHAT THIS TOOL COVERS Filters the crystal database by vedic_planet field. Only returns crystals where vedic_correspondence is 'navaratna' or 'uparatna' — none_classical crystals are not returned here because they have no actual Vedic planetary assignment. Useful for Jyotish practitioners recommending remedial gems. Navaratna gems appear first. Valid planets: Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — identify the planet needing remediation. AFTER: asterwise_get_gemstone_recommendations — for chart-specific gem safety assessment.

SECTION: INPUT CONTRACT planet: One of Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu.

SECTION: OUTPUT CONTRACT data.total (int) data.crystals[] — same shape as asterwise_get_crystals, sorted Navaratna first.

SECTION: RESPONSE FORMAT response_format=json — filtered crystal array. response_format=markdown — formatted list. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unknown planet → 404. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_gemstone_recommendations — natal chart Ratna Shastra recommendation with contraindications; use for actual gem prescription, not just listing. asterwise_get_crystals — all 50 crystals including Western-only ones.

Parameters (JSON Schema):
  planet (required)
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 5/5

Annotations already declare readOnlyHint, idempotentHint true. The description adds details on sorting, filtering criteria, exclusion of none_classical crystals, error contracts, compute class (FAST_LOOKUP), and response format behavior. No contradiction with annotations.

Conciseness: 4/5

Well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), front-loaded with core purpose. Slightly verbose but each section adds value. No superfluous content.

Completeness: 5/5

Covers all aspects: input constraints, output shape (data.total, data.crystals), sorting order, error handling, workflow integration, and differentiation from siblings. Output schema exists so return format details are adequate.

Parameters: 5/5

Schema description coverage is 0%, but the 'INPUT CONTRACT' section fully explains the planet parameter (valid values) and response_format (effects on output). This compensates completely for the missing schema descriptions.

Purpose: 5/5

The description clearly states it returns crystals associated with a specific Vedic planet, with explicit sorting and filtering. It distinguishes itself from siblings like asterwise_get_crystals and asterwise_get_gemstone_recommendations.

Usage Guidelines: 5/5

Provides explicit usage context ('Useful for Jyotish practitioners'), workflow recommendations (before/after tools), and a dedicated 'DO NOT CONFUSE WITH' section naming alternative tools and their purposes.

asterwise_get_crystal_recommendations (Grade: A)
Read-only, Idempotent

Recommends crystals based on zodiac sign, chakra, or intention keyword. At least one filter is required; crystals matching the most criteria are ranked first.

SECTION: WHAT THIS TOOL COVERS Scoring: zodiac match scores 3, chakra match scores 2, keyword match scores 1. Crystals matching multiple filters rank highest. Returns up to limit results (default 5, max 20). Valid chakras: Root, Sacral, Solar Plexus, Heart, Throat, Third Eye, Crown. Valid zodiac signs: English Western zodiac names (Aries, Taurus, etc.). Intention keyword is matched against each crystal's keywords[] list (partial match). Not a Jyotish prescription — does not account for natal chart or planetary periods. For chart-based gem prescription use asterwise_get_gemstone_recommendations.
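The documented weights can be sketched in a few lines. The crystal entries and function name below are invented; the weights (zodiac +3, chakra +2, keyword partial match +1) and descending sort are taken from the description above.

```python
# Sketch of the documented ranking: zodiac match +3, chakra match +2,
# intention keyword (partial match against keywords[]) +1, results
# sorted by score descending. Crystal entries here are invented.
def score(crystal, zodiac=None, chakra=None, intention=None):
    s = 0
    if zodiac and zodiac in crystal["zodiac_signs"]:
        s += 3
    if chakra and chakra in crystal["chakras"]:
        s += 2
    if intention and any(intention.lower() in k.lower()
                         for k in crystal["keywords"]):
        s += 1
    return s

crystals = [
    {"name": "Rose Quartz", "zodiac_signs": ["Taurus", "Libra"],
     "chakras": ["Heart"], "keywords": ["love", "compassion"]},
    {"name": "Amethyst", "zodiac_signs": ["Pisces"],
     "chakras": ["Third Eye", "Crown"], "keywords": ["calm", "intuition"]},
]
ranked = sorted(crystals,
                key=lambda c: score(c, zodiac="Taurus", chakra="Heart",
                                    intention="love"),
                reverse=True)
print([c["name"] for c in ranked])  # ['Rose Quartz', 'Amethyst']
```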

SECTION: WORKFLOW BEFORE: None — standalone for consumer apps. AFTER: asterwise_get_crystal — get full detail on any recommended crystal.

SECTION: INPUT CONTRACT At least one of: zodiac_sign, chakra, intention must be provided. zodiac_sign (optional): English zodiac sign, e.g. 'Taurus', 'Scorpio'. chakra (optional): One of Root, Sacral, Solar Plexus, Heart, Throat, Third Eye, Crown. intention (optional): Keyword string, e.g. 'protection', 'abundance', 'love'. limit (optional int, default 5, max 20): Maximum results to return.

SECTION: OUTPUT CONTRACT data.total (int — number returned) data.filters_applied{} — the filters used data.crystals[] — matched crystals sorted by score descending

SECTION: RESPONSE FORMAT response_format=json — recommendation object. response_format=markdown — formatted recommendations. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): No filters provided → 422. Invalid chakra name → 422. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_crystal_by_planet — Vedic planet filter only. asterwise_get_gemstone_recommendations — natal chart Ratna Shastra prescription with contraindications.

Parameters (JSON Schema):
  limit (optional)
  chakra (optional)
  intention (optional)
  zodiac_sign (optional)
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 5/5

Adds scoring logic, limit defaults, exact valid values, partial match behavior, error contracts, and compute class. All annotations (readOnlyHint, etc.) are consistent and description enriches beyond them.

Conciseness: 4/5

Well-organized into sections; all content is useful but slightly lengthy. However, each sentence earns its place given the complexity.

Completeness: 5/5

Covers purpose, filters, scoring, output format, errors, workflow, and differentiation. No missing information for agent to use correctly.

Parameters: 5/5

Despite 0% schema description coverage, the description fully explains each parameter with values, constraints, and inter-dependencies (e.g., at least one filter required).

Purpose: 5/5

Clearly states 'Recommends crystals based on zodiac sign, chakra, or intention keyword.' and explicitly distinguishes from siblings like asterwise_get_crystal_by_planet and asterwise_get_gemstone_recommendations.

Usage Guidelines: 5/5

States requirement of at least one filter, provides workflow with before/after tools, and includes 'DO NOT CONFUSE WITH' section that explicitly tells when to use alternatives.

asterwise_get_crystal_recommendations_natal (Grade: A)
Read-only, Idempotent

Recommends crystals from a Vedic natal chart using classical Ratna Shastra house lordship rules per BPHS and Mani Mala. This is the only API that derives crystal recommendations from a computed natal chart — not from zodiac sign or chakra preference.

SECTION: WHAT THIS TOOL COVERS Classical rules applied (BPHS + Mani Mala, cross-verified against two sources):

  1. INCLUSION RULE — A planet is recommended only if it lords at least one Trikona house (1st, 5th, or 9th). If a planet lords both a Trikona and a Dusthana (6th, 8th, or 12th), the Trikona lordship prevails and the planet is still recommended. This is the BPHS dual-lordship rule — violating it produces wrong exclusions.

  2. EXCLUSION RULE — A planet that does not lord any Trikona house is contraindicated. This includes pure Dusthana lords, pure Kendra lords (4th, 7th, 10th), and pure neutral lords.

  3. SCORING — Crystals are scored by which Trikona lordship their planet holds: Lagna lord (1st) Navaratna +5, Uparatna +4 — primary Life Stone Yogakaraka Navaratna +5, Uparatna +4 — lords both a non-1st Kendra AND a non-1st Trikona simultaneously 9th lord Navaratna +4, Uparatna +3 — Fortune Stone (Bhagyesh) 5th lord Navaratna +3, Uparatna +2 — Lucky Stone (Panchamesh)

  4. YOGAKARAKA — A planet that lords both a non-1st Kendra (4th, 7th, or 10th) AND a non-1st Trikona (5th or 9th) is the supreme benefic for that Lagna. Example: Mars for Cancer Lagna (lords 5th and 10th).

  5. DANGEROUS COMBINATIONS — Pairs of recommended crystals from enemy planet camps are flagged in warnings[] per Mani Mala: Saturn+Sun, Saturn+Mars, Jupiter+Venus, Moon+Rahu, Moon+Ketu.

  6. CLASSICAL VEDIC ONLY — Crystals with vedic_correspondence='none_classical' (Labradorite, Amazonite, Black Obsidian, etc.) are never returned. No classical text assigns these stones to planets.
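Rules 1, 2, and 4 above can be sketched as simple set checks. The function names and the house lists are illustrative; the Cancer-Lagna Mars case follows the example given in rule 4 (Mars lords the 5th and 10th).

```python
# Sketch of the inclusion, exclusion, and Yogakaraka rules: a planet
# qualifies only if it lords a Trikona house (1, 5, 9), even alongside a
# Dusthana lordship (BPHS dual-lordship rule); a Yogakaraka lords both a
# non-1st Kendra (4, 7, 10) and a non-1st Trikona (5, 9).
TRIKONA = {1, 5, 9}
KENDRA_NON1 = {4, 7, 10}
TRIKONA_NON1 = {5, 9}

def is_recommended(houses):
    """Inclusion rule: any Trikona lordship qualifies the planet."""
    return bool(set(houses) & TRIKONA)

def is_yogakaraka(houses):
    """Supreme benefic: non-1st Kendra AND non-1st Trikona lordship."""
    h = set(houses)
    return bool(h & KENDRA_NON1) and bool(h & TRIKONA_NON1)

# Cancer Lagna: Mars lords the 5th (Scorpio) and 10th (Aries).
cancer_mars = [5, 10]
print(is_recommended(cancer_mars))  # True
print(is_yogakaraka(cancer_mars))   # True

# A pure Kendra lord (e.g. houses 4 and 7 only) is contraindicated.
print(is_recommended([4, 7]))       # False
```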

SECTION: WORKFLOW BEFORE: None — this tool internally computes the natal chart. No separate natal chart call required. AFTER: asterwise_get_crystal — get full detail (hardness, origins, affirmation, full caution text) on any recommended crystal by slug. AFTER: asterwise_get_remedies — broader Parashari remedial programme alongside gem recommendations.

SECTION: INPUT CONTRACT Standard BirthData (date, time, lat, lon, timezone, ayanamsa). Defaults to Lahiri ayanamsa. time (required): Ascendant (Lagna) is time-sensitive. Inaccurate birth time changes the Lagna → changes all house lords → changes recommendations entirely.

SECTION: OUTPUT CONTRACT
  data.natal_context{} — chart factors used for recommendations:
    lagna_sign (string — Ascendant sign in English, e.g. 'Libra')
    lagna_lord (string — classical lord of the 1st house per BPHS)
    fifth_sign (string — 5th house sign in English)
    fifth_lord (string — lord of the 5th house, Panchamesh)
    ninth_sign (string — 9th house sign in English)
    ninth_lord (string — lord of the 9th house, Bhagyesh)
    yogakaraka (string or null — Yogakaraka planet name if one exists for this Lagna; null if none)
    contraindicated_lords (string array — planets that do not lord any Trikona; their gems are contraindicated)
    ayanamsa (string — ayanamsa used, e.g. 'lahiri')
  data.total (int — number of crystals returned, up to 5)
  data.crystals[] — recommended crystals sorted by match_score descending:
    slug, name, colors[], hardness_mohs (float), chakras[], element, zodiac_signs[]
    vedic_planet (string — the planet this crystal corresponds to)
    vedic_correspondence (string — always 'navaratna' or 'uparatna'; none_classical never appears)
    western_planet (string or null)
    keywords[], healing_physical, healing_emotional, healing_spiritual, description
    origins[], affirmation, caution (string or null — always surface this to end users)
    match_score (int — classical Ratna Shastra score; higher = stronger classical basis)
    match_reasons (string array — which house lordship triggered this recommendation)
    warnings (string array — dangerous combination warnings per Mani Mala; may be empty)
  data.classical_note (string — methodology and classical source note)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE — full natal chart computation + crystal scoring pass.

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — BirthData Pydantic violations → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — Dates before 1800 or after 2100 → MCP INTERNAL_ERROR

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — An empty crystals[] is valid when no classically verified Vedic gem corresponds to the Trikona lords of this specific chart. — Blue Sapphire (Saturn) and Hessonite (Rahu) carry CRITICAL cautions in their caution field — always surface this to end users before advising wear. — Rahu and Ketu do not own signs in the Parashari system — they never appear as lagna_lord, fifth_lord, or ninth_lord.

SECTION: DO NOT CONFUSE WITH asterwise_get_gemstone_recommendations — also a chart-based gem endpoint but uses a different engine (Atmakaraka + role-based prescription vs house lordship scoring); returns gem names not crystal database entries; does not include match_score or match_reasons. asterwise_get_crystal_recommendations — recommends crystals by zodiac sign, chakra, or intention keyword (no natal chart computation; Western metaphysical matching, not classical Jyotish). asterwise_get_crystal_by_planet — lists all crystals for a Vedic planet without house context — use this for reference, not prescription.

Parameters (JSON Schema)
  birth (required) — Birth data for a single person.
  response_format (required)

Output Schema
  result (required)
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds extensive behavioral context: classical inclusion/exclusion rules, scoring, dangerous combinations, output contract details, error contract with edge cases (e.g., empty crystals[], critical cautions for Blue Sapphire and Hessonite), and compute class. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections and headings, but is very lengthy and includes detailed classical rules (e.g., inclusion/exclusion, scoring) that may be more than necessary for tool selection. While thorough, it is not concise, as many sentences could be trimmed without losing essential guidance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of Vedic astrology, the description is exceptionally complete. It covers input contract, output contract (including all fields), error contract with edge cases, workflow before/after, and differentiation from siblings. The output schema is present, so return values are covered. All critical aspects for correct usage are addressed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning beyond the input schema: it states 'Standard BirthData (date, time, lat, lon, timezone, ayanamsa). Defaults to Lahiri ayanamsa' and emphasizes that 'time (required): Ascendant (Lagna) is time-sensitive'. It also explains response_format options. Although schema descriptions cover 50%, the description compensates with context about time sensitivity and defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it recommends crystals from a Vedic natal chart using classical Ratna Shastra rules per BPHS and Mani Mala. It explicitly distinguishes itself from siblings: 'not from zodiac sign or chakra preference' and includes a 'DO NOT CONFUSE WITH' section that names alternative tools and their differences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a dedicated 'WORKFLOW' section explaining BEFORE and AFTER steps, and a 'DO NOT CONFUSE WITH' section that explicitly tells when to use this tool vs alternatives (e.g., asterwise_get_gemstone_recommendations returns gem names not crystal database entries). It also includes guidance on when not to use (e.g., inaccurate birth time changes results).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_crystals (Grade A)
Read-only · Idempotent

Returns all 50 crystals in the database sorted alphabetically. Each entry includes chakra associations, elemental correspondences, Vedic and Western planetary assignments, physical/emotional/spiritual healing properties, geographic origins, affirmations, and safety cautions.

SECTION: WHAT THIS TOOL COVERS Dual-tradition crystal database distinguishing classical Vedic assignments from modern Western metaphysical ones. vedic_correspondence field is always one of: 'navaratna' (primary classical gem per BPHS — one of the nine planetary gems), 'uparatna' (classical substitute gem), or 'none_classical' (no Vedic text assigns this stone — Western tradition only). The nine Navaratna: Ruby (Sun), Pearl (Moon), Red Coral (Mars), Emerald (Mercury), Yellow Sapphire (Jupiter), Diamond / White Sapphire (Venus), Blue Sapphire (Saturn), Hessonite Garnet (Rahu), Cat's Eye Chrysoberyl (Ketu). Crystals like Labradorite and Amazonite are marked none_classical — they were unknown in ancient India. Does not compute natal chart recommendations (asterwise_get_crystal_recommendations).

SECTION: WORKFLOW BEFORE: None — standalone catalogue. AFTER: asterwise_get_crystal_by_planet — filter by Vedic planet for remedial use.

SECTION: INPUT CONTRACT No required parameters.

SECTION: OUTPUT CONTRACT
data.total (int — 50)
data.crystals[] — 50 objects each:
  slug, name, colors[], hardness_mohs (float)
  chakras[] (string array)
  element (string)
  zodiac_signs[] (string array)
  vedic_planet (string or null)
  vedic_correspondence (string — 'navaratna'|'uparatna'|'none_classical')
  western_planet (string or null)
  keywords[] (string array)
  healing_physical, healing_emotional, healing_spiritual (strings)
  description (string)
  origins[] (string array)
  affirmation (string)
  caution (string or null)
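A small sketch of splitting this catalogue by its vedic_correspondence field; the sample rows are invented (the real response carries 50 entries), but the field names come from the contract above:

```python
from collections import defaultdict

def group_by_correspondence(data):
    """Bucket crystal slugs by their vedic_correspondence value."""
    groups = defaultdict(list)
    for crystal in data["crystals"]:
        groups[crystal["vedic_correspondence"]].append(crystal["slug"])
    return dict(groups)

# Invented sample rows shaped per the output contract.
sample = {"total": 3, "crystals": [
    {"slug": "ruby", "vedic_correspondence": "navaratna"},
    {"slug": "red-garnet", "vedic_correspondence": "uparatna"},
    {"slug": "labradorite", "vedic_correspondence": "none_classical"},
]}
```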

SECTION: RESPONSE FORMAT response_format=json — full 50-crystal array. response_format=markdown — formatted catalogue. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP — static database, no ephemeris.

SECTION: ERROR CONTRACT INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_crystal — single crystal detail by name. asterwise_get_crystal_by_planet — filter by Vedic planetary correspondence. asterwise_get_crystal_recommendations — recommendations by zodiac/chakra/intention.

Parameters (JSON Schema)
  response_format (required)

Output Schema
  result (required)
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description exceeds annotation coverage with details like 'FAST_LOOKUP — static database, no ephemeris', 'INTERNAL_ERROR' error contract, and explicit output contract. Annotations already provide readOnlyHint/idempotentHint/destructiveHint, and the description adds meaningful behavioral context without contradiction.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the description is lengthy, it is well-structured with clear section headers (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, OUTPUT CONTRACT, etc.). Some redundancy exists (e.g., repeating the 50-crystal count multiple times), but every section adds unique value.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description remains complete by covering input, output, error handling, compute class, and sibling differentiation. It does not rely on the output schema to explain return values but still provides comprehensive context for accurate tool selection and usage.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter 'response_format' is fully explained: 'response_format=json — full 50-crystal array. response_format=markdown — formatted catalogue. Both return identical data.' This adds semantic meaning beyond the schema's enum listing, compensating for the 0% schema description coverage.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns all 50 crystals sorted alphabetically with detailed fields. Explicitly distinguishes from sibling tools like asterwise_get_crystal, asterwise_get_crystal_by_planet, and asterwise_get_crystal_recommendations in the 'DO NOT CONFUSE WITH' section.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance with BEFORE (none) and AFTER (asterwise_get_crystal_by_planet) sections. Includes a dedicated 'DO NOT CONFUSE WITH' section listing alternatives and their purposes, making it clear when to use this tool versus others.

asterwise_get_dasha (Grade A)
Read-only · Idempotent

Computes Vimshottari Dasha from birth data and returns hierarchical period trees plus current Maha/Antar interpretation blocks.

SECTION: WHAT THIS TOOL COVERS Computes the classical Parashari Vimshottari timeline from the Moon's birth nakshatra: Mahadasha and nested sub-periods up to the depth set by levels, with Julian and calendar boundaries and optional modern summaries. It returns data.periods[] and data.interpretation for the active periods. It does not compute Jaimini Char Dasha, Yogini Dasha, Ashtottari, or transit correlations; use the dedicated tools for those systems.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — establishes chart and Moon context before interpreting Dasha lords. AFTER: asterwise_get_dasha_transits — correlates active Dasha lords with transits for the same birth data.

SECTION: INPUT CONTRACT levels (int, default 3, max 5): tree depth — 1 = Mahadasha only; 2 adds Antardasha; 3 Pratyantar; 4 Sookshma; 5 Prana (much larger payload). Response dates in periods[] use DD/MM/YYYY, not ISO. BirthData fields follow global contract (date YYYY-MM-DD, time HH:MM; time='00:00' is accepted without flag — lagna-sensitive timing may be wrong if birth time is unknown).

SECTION: OUTPUT CONTRACT
data.periods[] — array of Mahadasha objects:
  planet (string)
  start_jd (float)
  end_jd (float)
  start_date (string — DD/MM/YYYY, not ISO)
  end_date (string — DD/MM/YYYY)
  modern_summary (string or null)
  sub[] — array of Antardasha objects with the same shape; sub=null at deepest level
data.interpretation.current_mahadasha:
  planet (string)
  start_date (string)
  end_date (string)
  duration_years (float)
  modern_summary (string or null)
  favorable_conditions[] (string array)
  favorable_results[] (string array)
  unfavorable_conditions[] (string array)
  unfavorable_results[] (string array)
  timing_note (string)
data.interpretation.current_antardasha — same fields as current_mahadasha plus mahadasha_planet (string)
data.birth_time_unknown (bool)
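Because sub[] nests recursively with the same shape, the active lord chain at a given Julian day can be recovered with one recursive walk. A sketch in Python; the spans below are invented, but the field names follow the contract:

```python
def active_chain(periods, jd):
    """Planet chain (Mahadasha down to deepest sub-period) covering Julian day jd."""
    for period in periods:
        if period["start_jd"] <= jd < period["end_jd"]:
            # sub is null at the deepest level, so recurse only when present
            tail = active_chain(period["sub"], jd) if period.get("sub") else []
            return [period["planet"]] + tail
    return []

# Invented two-level sample (levels=2 response shape).
periods = [
    {"planet": "Venus", "start_jd": 2450000.0, "end_jd": 2457300.0, "sub": [
        {"planet": "Venus", "start_jd": 2450000.0, "end_jd": 2451200.0, "sub": None},
        {"planet": "Sun", "start_jd": 2451200.0, "end_jd": 2451560.0, "sub": None},
    ]},
    {"planet": "Sun", "start_jd": 2457300.0, "end_jd": 2459490.0, "sub": None},
]
```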

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~100ms at levels=1, ~1500ms at levels=5)

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — levels < 1 or levels > 5 → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — BirthData validation is upstream beyond Pydantic field constraints.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Period start_date/end_date strings are DD/MM/YYYY; do not parse as ISO.
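The DD/MM/YYYY gotcha in one line of Python: parse with an explicit strptime format rather than date.fromisoformat.

```python
from datetime import date, datetime

def parse_period_date(s):
    # Dasha boundaries arrive as DD/MM/YYYY, not ISO 8601.
    return datetime.strptime(s, "%d/%m/%Y").date()
```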

SECTION: DO NOT CONFUSE WITH asterwise_get_char_dasha — Jaimini sign-based periods with ISO dates on periods[], not planet-based Vimshottari. asterwise_get_yogini_dasha — 36-year eight-Yogini cycle with data.periods.root[], not Vimshottari. asterwise_get_ashtottari_dasha — 108-year alternative tree with data.periods.root[] and same levels semantics as this tool.

Parameters (JSON Schema)
  birth (required) — Birth data for a single person.
  levels (optional)
  response_format (required)

Output Schema
  result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond what annotations provide. While annotations indicate readOnly, non-destructive, idempotent operations, the description adds: compute class ('MEDIUM_COMPUTE (~100ms at levels=1, ~1500ms at levels=5)'), error contract details, date format specifics ('DD/MM/YYYY, not ISO'), and edge case handling. It doesn't contradict annotations and provides rich operational context.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.) that make information easy to locate. Every sentence serves a specific purpose—no wasted words. The information is front-loaded with the core purpose, followed by detailed sections for those needing deeper understanding.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with hierarchical outputs), the description provides comprehensive context. It covers purpose, workflow integration, input semantics, output structure (though an output schema exists), performance characteristics, error handling, and differentiation from similar tools. The combination of detailed description and existing output schema makes this complete for agent understanding.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 33% schema description coverage, the description compensates substantially. It explains the 'levels' parameter in detail (depth meaning, default, max, payload implications), clarifies BirthData field semantics (date/time formats, '00:00' handling, lagna-sensitive timing implications), and explains the 'response_format' parameter's effect on output presentation. This adds crucial meaning beyond the minimal schema descriptions.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Computes Vimshottari Dasha from birth data and returns hierarchical period trees plus current Maha/Antar interpretation blocks.' This provides a specific verb ('computes'), resource ('Vimshottari Dasha'), and output details. It clearly distinguishes from sibling tools like asterwise_get_char_dasha, asterwise_get_yogini_dasha, and asterwise_get_ashtottari_dasha by specifying what it does NOT compute.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit workflow guidance: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart — establishes chart and Moon context before interpreting Dasha lords' and 'AFTER: asterwise_get_dasha_transits — correlates active Dasha lords with transits for the same birth data.' It also clearly states when NOT to use this tool in the 'WHAT THIS TOOL COVERS' section, naming specific alternative tools for other Dasha systems.

asterwise_get_dashakoot (Grade A)
Read-only · Idempotent

Computes the ten-koota Dashakoot grid for two charts, converts it to a ten-point score with percentage, and exposes boolean dosha flags plus supplementary diagnostics.

SECTION: WHAT THIS TOOL COVERS South Indian Kannada/Telugu style matching: each koota scores 0.0 or 1.0 with max_per_koota 1.0, includes Nadi/Bhakoot/Rajju/Vedha blocks, Mahendra/Stree Deergha supplements, and Rajju/Vedha detail objects. It is not Ashtakoota 36-point (asterwise_get_compatibility) or Tamil porutham pass grid (asterwise_get_porutham).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart for each native — context before regional scoring. AFTER: asterwise_get_papasamyam — optional malefic differential.

SECTION: INPUT CONTRACT Two BirthData objects per global contract.

SECTION: OUTPUT CONTRACT
data.total_score (float — out of 10)
data.max_score (float — 10.0)
data.percentage (float)
data.compatibility_level (string)
data.breakdown{} — keys Dina, Gana, Mahendra, StreeDeergha, Yoni, Rasi, RasiAdhipati, Vashya, Rajju, Vedha — each float 0.0 or 1.0
data.max_per_koota{} — each value 1.0
data.doshas{}:
  nadi_dosha (bool)
  nadi_cancelled (bool)
  bhakoot_dosha (bool)
  bhakoot_cancelled (bool)
  rajju_dosha (bool)
  rajju_group_boy (string or value per upstream)
  rajju_group_girl (string or value per upstream)
  vedha_dosha (bool)
  vedha_pair (string or value per upstream)
data.supplementary:
  mahendra (object)
  stree_deergha (object)
  rajju — { boy_rajju, girl_rajju, same_rajju (bool), is_dosha (bool), description (string) }
  vedha — { has_vedha (bool), boy_nakshatra (int), girl_nakshatra (int), description (string) }
  dina_inclusive_count (int)
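A sketch of vetting the headline score against the veto flags: collect doshas that are set and not cancelled before trusting total_score. Flag names come from data.doshas{} above; the sample values are invented.

```python
def unresolved_vetoes(doshas):
    """Dosha flags that are active and not cancelled; a non-empty result means
    the headline total_score should not be taken at face value."""
    vetoes = []
    if doshas["rajju_dosha"]:
        vetoes.append("rajju")
    if doshas["vedha_dosha"]:
        vetoes.append("vedha")
    if doshas["nadi_dosha"] and not doshas["nadi_cancelled"]:
        vetoes.append("nadi")
    if doshas["bhakoot_dosha"] and not doshas["bhakoot_cancelled"]:
        vetoes.append("bhakoot")
    return vetoes

# Invented sample: nadi dosha present but cancelled, rajju dosha standing.
sample_doshas = {
    "nadi_dosha": True, "nadi_cancelled": True,
    "bhakoot_dosha": False, "bhakoot_cancelled": False,
    "rajju_dosha": True, "vedha_dosha": False,
}
```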

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Read rajju/vedha supplementary objects before trusting the headline score.

SECTION: DO NOT CONFUSE WITH asterwise_get_compatibility — North Indian 36-point Ashtakoota with different breakdown keys. asterwise_get_porutham — Tamil ten-porutham passed counts, not Dashakoot floats.

Parameters (JSON Schema)
  person1 (required) — Birth data for a single person.
  person2 (required) — Birth data for a single person.
  response_format (required)

Output Schema
  result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies the 'COMPUTE CLASS' as 'MEDIUM_COMPUTE' (indicating resource intensity), details error contracts (INVALID_PARAMS, INTERNAL_ERROR), and advises to 'Read rajju/vedha supplementary objects before trusting the headline score' for edge cases. No contradiction with annotations.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, OUTPUT CONTRACT, etc.), making it easy to scan. It is appropriately sized for a complex tool, though some sections (like OUTPUT CONTRACT) are detailed but necessary. Every sentence adds value, with no redundant information.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (computational astrology matching), rich output schema detailed in 'OUTPUT CONTRACT', and annotations covering safety, the description is highly complete. It explains regional style, scoring methodology, workflow recommendations, error handling, compute class, and distinctions from siblings. The output schema eliminates need to describe return values in the description, and the description adds all necessary contextual information.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67%, with detailed descriptions for person1/person2 fields (lat, lon, date, time, ayanamsa, timezone, person_name) but no description for response_format. The description compensates by explaining in 'RESPONSE FORMAT' that 'response_format=json serialises... response_format=markdown renders...', adding semantic meaning not in the schema. However, it doesn't fully document all parameters beyond what the schema provides.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes a 'ten-koota Dashakoot grid for two charts' and converts it to a 'ten-point score with percentage', distinguishing it from sibling tools like 'asterwise_get_compatibility' (North Indian 36-point) and 'asterwise_get_porutham' (Tamil pass grid). It specifies the regional style (South Indian Kannada/Telugu) and scoring method (0.0 or 1.0 per koota).

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance in the 'WORKFLOW' section: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart for each native' and 'AFTER: asterwise_get_papasamyam — optional malefic differential.' It also includes a 'DO NOT CONFUSE WITH' section naming specific sibling tools and explaining their differences, giving clear alternatives and exclusions.

asterwise_get_dasha_transits (Grade A)
Read-only · Idempotent

Combines active Vimshottari lords with today's transits and returns scored correlations plus transit longitudes and houses from Moon and Lagna.

SECTION: WHAT THIS TOOL COVERS Builds a snapshot for the current calendar day (no date parameter): active Mahadasha, Antardasha, and Pratyantar; transiting planet positions; pairwise dasha–transit correlations with scores; and a filtered list of stronger correlations. It does not return full Dasha trees (use asterwise_get_dasha), ingress calendars (asterwise_get_transits), or standalone Gochar without Dasha context (asterwise_get_gochar).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — same birth data should be understood before interpreting houses and lords. AFTER: asterwise_get_gochar — optional broader transit snapshot without dasha scoring.

SECTION: INPUT CONTRACT No date field — "today" is fixed by the API. All parameters are otherwise defined in the tool schema. BirthData follows the global contract (unknown birth time: time='00:00' accepted without detection).

SECTION: OUTPUT CONTRACT
data.target_date (string — YYYY-MM-DD, today)
data.active_dasha:
  start_date (string)
  end_date (string)
  maha — { planet (string), start_date (string), end_date (string) }
  antar — { planet (string), start_date (string), end_date (string) }
  pratyantar — { planet (string), start_date (string), end_date (string) }
data.transit_positions{} — keyed by planet name:
  rashi_index (int)
  rashi (string)
  is_retrograde (bool)
  house_from_moon (int)
  house_from_lagna (int)
data.correlations[] — each object:
  dasha_level (string)
  dasha_lord (string)
  transit_planet (string)
  aspect_type (string)
  score (int — 1=mild, 2=moderate, 3=high)
  natal_rashi (string)
  transit_rashi (string)
  is_retrograde (bool)
  significance (string)
data.periods_of_significance[] — same shape as correlations[] filtered to score ≥ 2
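periods_of_significance[] can be reproduced client-side by filtering correlations[] on score, which is handy when a different threshold is wanted. A sketch with invented rows (only a few fields shown for brevity):

```python
def strong_correlations(correlations, threshold=2):
    """Keep correlations at or above threshold (mirrors periods_of_significance)."""
    return [c for c in correlations if c["score"] >= threshold]

# Invented correlation rows shaped per the contract, trimmed to three fields.
rows = [
    {"dasha_lord": "Venus", "transit_planet": "Saturn", "score": 1},
    {"dasha_lord": "Venus", "transit_planet": "Jupiter", "score": 2},
    {"dasha_lord": "Sun", "transit_planet": "Mars", "score": 3},
]
```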

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Target date is always "today"; past/future analysis is not supported by this tool.

SECTION: DO NOT CONFUSE WITH asterwise_get_gochar — full nine-planet Gochar with AVK and vedha fields, without dasha–transit correlation scores. asterwise_get_transits — ingress and station lists over a chosen range, not today's dasha snapshot. asterwise_get_dasha — full Vimshottari tree without transit overlay.

Parameters (JSON Schema)
  birth (required) — Birth data for a single person.
  response_format (required)

Output Schema
  result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description clarifies that the tool uses a fixed 'today' date (no date parameter), mentions compute class (MEDIUM_COMPUTE), and details error handling (INVALID_PARAMS, INTERNAL_ERROR). It also explains the response format options (json vs markdown) and edge cases like past/future analysis not being supported. No contradictions with annotations exist.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), making it easy to scan. Each sentence adds value without redundancy. It efficiently covers purpose, usage, parameters, output, format, compute class, errors, and sibling distinctions in a compact yet comprehensive manner.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with multiple outputs), rich annotations, and detailed output schema, the description is highly complete. It explains the tool's scope, workflow, input constraints (fixed date), output structure (with examples), response formats, compute class, error handling, and explicit sibling distinctions. The output schema is detailed, so the description doesn't need to re-explain return values, focusing instead on contextual guidance.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema description coverage at 50% (only 'birth' object has descriptions, 'response_format' lacks one), the description compensates well. The 'INPUT CONTRACT' section clarifies that 'today' is fixed by the API and all parameters are defined in the schema, and it provides context for BirthData (e.g., 'unknown birth time: time='00:00' accepted without detection'). While it doesn't detail individual parameters beyond the schema, it adds meaningful usage context that aids understanding.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Combines active Vimshottari lords with today's transits and returns scored correlations plus transit longitudes and houses from Moon and Lagna.' It specifies the exact verb ('combines'), resources ('active Vimshottari lords', 'today's transits'), and output ('scored correlations', 'transit longitudes and houses'). It explicitly distinguishes from siblings like asterwise_get_dasha, asterwise_get_transits, and asterwise_get_gochar in the 'DO NOT CONFUSE WITH' section.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. The 'WHAT THIS TOOL COVERS' section states what it does and what it doesn't do (e.g., 'no date parameter', 'does not return full Dasha trees'). The 'WORKFLOW' section recommends using asterwise_get_natal_chart before this tool and asterwise_get_gochar after. The 'DO NOT CONFUSE WITH' section explicitly names alternative tools and their different purposes.

asterwise_get_divisional_chart (Grade A)
Read-only · Idempotent

Computes divisional (varga) chart positions from BirthData; pass chart_type for one varga, or omit chart_type for all sixteen.

SECTION: WHAT THIS TOOL COVERS When chart_type is provided, returns only that one divisional chart. When chart_type is omitted, returns all sixteen standard Parashari divisional charts (D1, D2, D3, D4, D7, D9, D10, D12, D16, D20, D24, D27, D30, D40, D45, D60). D30 omits Sun and Moon per convention. Does not return Shadbala (asterwise_get_chart_strength) or radix-only graha drishti (asterwise_get_natal_chart).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — anchor D1 before reading higher vargas. AFTER: None.

SECTION: INPUT CONTRACT chart_type enum is enforced locally (Pydantic). BirthData follows the global contract.

SECTION: OUTPUT CONTRACT When chart_type is provided: data — object with a single key (the requested chart, e.g. 'D9'): planet_name (Sun..Ketu) → { sign (string), sign_num (int), degree (float) } When chart_type is omitted: data — object with keys D1, D2, D3, D4, D7, D9, D10, D12, D16, D20, D24, D27, D30, D40, D45, D60 — each: planet_name → { sign, sign_num, degree } (D30 excludes Sun and Moon entries.)
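The contract above can be consumed with a few lines of client code. A minimal sketch, assuming the response has already been parsed into a dict and only illustrating the documented shape (the sample planet placements below are invented):

```python
# Hypothetical `data` payload for a chart_type='D9' call, following the
# output contract above (sign/sign_num/degree per planet; values invented).
data = {
    "D9": {
        "Sun":  {"sign": "Leo",    "sign_num": 5,  "degree": 12.34},
        "Moon": {"sign": "Pisces", "sign_num": 12, "degree": 3.21},
    }
}

# When chart_type was provided there is exactly one key in `data`.
(chart_name, placements), = data.items()

for planet, pos in placements.items():
    print(f"{chart_name} {planet}: {pos['sign']} {pos['degree']:.2f}")
```

When chart_type is omitted, the same inner loop runs once per varga key (D1 through D60) instead of unpacking a single entry.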

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid chart_type enum → MCP INVALID_PARAMS (via Pydantic)

INVALID_PARAMS (upstream): — None — unknown ayanamsa for a varga surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — radix chart with houses and drishti, not the full varga dictionary. asterwise_get_chart_strength — embeds vargas inside strength metrics, different primary payload.

Parameters (JSON Schema)
- birth (required): Birth data for a single person.
- chart_type (optional)
- response_format (required)

Output Schema (JSON Schema)
- result (required)

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable behavioral context beyond the annotations: it explains that D30 omits Sun and Moon per convention, notes that an unknown ayanamsa surfaces upstream as MCP INTERNAL_ERROR, clarifies that only the requested chart is returned when chart_type is provided (all sixteen when it is omitted), and spells out the error contract. No contradiction with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.) and every sentence adds value. It's appropriately sized for a complex tool with many behavioral nuances, with no redundant or wasted content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (computes 16 charts), rich annotations, and presence of an output schema, the description provides excellent completeness. It covers purpose, workflow, input/output contracts, error handling, sibling differentiation, and behavioral nuances. The output schema exists, so the description appropriately focuses on explaining the structure and usage rather than repeating return value details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 33% schema description coverage (the birth parameter has a rich description, but chart_type and response_format lack one), the description compensates well. It explains that chart_type selects a single varga (or all sixteen when omitted) and that response_format offers json for programmatic parsing or markdown for human-readable reports. While it doesn't detail every parameter, it adds meaningful context beyond the sparse schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool 'Computes divisional (varga) chart positions from BirthData', returning either one varga or all sixteen, with specific output details (D1-D60 blocks keyed by planet). It explicitly distinguishes itself from siblings like asterwise_get_natal_chart (radix chart only) and asterwise_get_chart_strength (strength metrics), providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart — anchor D1 before reading higher vargas' and 'AFTER: None.' It also includes a 'DO NOT CONFUSE WITH' section naming specific alternatives with their purposes, giving clear when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_doshas (A)
Read-only, Idempotent

Scores twelve fixed dosha buckets from birth data and returns presence flags, typed detail objects, optional summaries, and remedy lines per dosha.

SECTION: WHAT THIS TOOL COVERS Maps mangal_dosha, shani_dosha, shrapit_dosha, grahan_dosha, kala_sarpa, guru_chandal, kemadruma_dosha, paap_kartari, pitru_dosha, gandmool_dosha, nadi_dosha, and gulika — each with present, types[], details{}, interpretation_summary, keywords, remedies[]. It does not replace compatibility Nadi scoring (asterwise_get_compatibility) or Shadbala (asterwise_get_chart_strength). Cancellation arrays inside details (e.g. mangal_dosha) must be read before treating a dosha as final.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — chart familiarity before dosha interpretation. AFTER: asterwise_get_remedies — classical remedial suggestions after dosha review.

SECTION: INPUT CONTRACT BirthData follows the global contract. Unknown birth time at midnight is accepted silently.

SECTION: OUTPUT CONTRACT data.doshas — object with exactly twelve keys: mangal_dosha, shani_dosha, shrapit_dosha, grahan_dosha, kala_sarpa, guru_chandal, kemadruma_dosha, paap_kartari, pitru_dosha, gandmool_dosha, nadi_dosha, gulika Each value: present (bool) types[] (string array) details{} (object — mangal_dosha includes mars_house, from_moon, from_venus, d9_present, severity, cancellations[]; gulika includes longitude, sign_index, sign_name, house, dosha_sensitive; other keys vary) interpretation_summary (string or null) keywords (array or null) remedies[] (string array or null) data.birth_time_unknown (bool)
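The cancellation caveat above can be enforced mechanically on the client side. A hedged sketch: `doshas` below is a hypothetical fragment of the data.doshas payload, and `is_effective` is an illustrative helper, not part of the server API:

```python
# Hypothetical fragment of data.doshas, following the contract above.
doshas = {
    "mangal_dosha": {
        "present": True,
        "types": ["from_lagna"],
        "details": {"mars_house": 7, "severity": "mild",
                    "cancellations": ["Mars in own sign"]},
    },
    "kala_sarpa": {"present": False, "types": [], "details": {}},
}

def is_effective(entry: dict) -> bool:
    """Treat a dosha as final only when present AND uncancelled."""
    cancellations = entry.get("details", {}).get("cancellations", [])
    return bool(entry.get("present")) and not cancellations

effective = [name for name, entry in doshas.items() if is_effective(entry)]
# Here mangal_dosha is present but cancelled, so `effective` is empty.
```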

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — Dates before 1800 or after 2100 → MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Many doshas include cancellation lists inside details — read before concluding severity.

SECTION: DO NOT CONFUSE WITH asterwise_get_chart_strength — Shadbala/Vimshopaka power metrics, not dosha booleans. asterwise_get_compatibility — pair scoring including nadi_dosha flags, not the twelve natal dosha buckets.

Parameters (JSON Schema)
- birth (required): Birth data for a single person.
- response_format (required)

Output Schema (JSON Schema)
- result (required)

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context beyond this: it specifies the tool covers 'MEDIUM_COMPUTE' class, details error contracts (e.g., date range limits, cancellation arrays), and explains that 'Unknown birth time at midnight is accepted silently.' This enriches behavioral understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), each containing focused, necessary information. Every sentence serves a purpose, such as clarifying scope, guiding usage, or detailing output, with no redundant or verbose content, making it highly efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and the presence of an output schema, the description is highly complete. It covers purpose, usage guidelines, behavioral details (compute class, errors), input/output contracts, and distinctions from siblings. The output schema existence means the description doesn't need to explain return values in depth, and it effectively supplements structured data with necessary context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with a detailed description for the birth parameter but none for response_format. The description adds minimal parameter semantics beyond the schema: it mentions 'BirthData follows the global contract' and that response_format affects output serialization. However, it doesn't fully compensate for the missing schema description of response_format, keeping the score at baseline given the partial coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Scores twelve fixed dosha buckets from birth data and returns presence flags, typed detail objects, optional summaries, and remedy lines per dosha.' It lists all twelve specific dosha names and clearly distinguishes this tool from sibling tools like asterwise_get_compatibility and asterwise_get_chart_strength, making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'SECTION: WORKFLOW' with 'BEFORE' and 'AFTER' recommendations (asterwise_get_natal_chart and asterwise_get_remedies), and 'SECTION: DO NOT CONFUSE WITH' that names specific sibling tools and explains when not to use this tool (e.g., for compatibility or Shadbala metrics). This provides clear, actionable guidance on when to use this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_dream_symbol (A)
Read-only, Idempotent

Lookup a specific dream symbol by slug or name (case-insensitive). Returns full dual-tradition interpretation including Jungian archetype, Vedic Swapna Shastra meaning with auspiciousness, context variants, and related symbols.

SECTION: WHAT THIS TOOL COVERS Single symbol lookup with complete detail. Use for dream journaling apps, AI-powered dream interpretation (the themes[] field is designed for synthesis), and cross-tradition comparison. Notable tradition conflicts: Snake (Western=transformation; Vedic=partial — white snake=auspicious, black chasing=inauspicious). Elephant=auspicious both traditions (Ganesha). Crow=inauspicious both traditions (Yama's messenger). Wedding=conflict (West=union; Vedic=inauspicious per Charaka Samhita).

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT name: Symbol slug or display name. Examples: 'snake', 'eagle', 'childhood-home', 'lotus', 'black-dog'

SECTION: OUTPUT CONTRACT Same shape as each symbol in asterwise_get_dream_symbols — full single symbol object.

SECTION: RESPONSE FORMAT response_format=json — single symbol object. response_format=markdown — formatted interpretation card. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unknown symbol → 404, surfaces as MCP INTERNAL_ERROR. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_dream_symbols — full database listing with optional category filter.

Parameters (JSON Schema)
- name (required)
- response_format (required)

Output Schema (JSON Schema)
- result (required)

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, etc. The description adds extensive behavioral context: the error contract (an unknown symbol returns a 404 upstream, surfaced as MCP INTERNAL_ERROR), the FAST_LOOKUP compute class, output format details, and the fact that it returns a full dual-tradition interpretation. No contradiction with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.) and is front-loaded with the core purpose. Some sections, such as WORKFLOW, are minimal but not wasteful; the whole runs slightly long yet remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (dual-tradition, error handling, output shape), the description covers all necessary aspects: input contract, output contract, response format, workflow, error contract, and differentiation. Output schema exists, so return values need not be detailed. Complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description fully compensates. It explains 'name' as symbol slug or display name with examples, and describes 'response_format' enumeration and its implications in the 'RESPONSE FORMAT' section. Adds meaning beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool does a lookup of a specific dream symbol by slug or name, returning full dual-tradition interpretation. It distinguishes from sibling 'asterwise_get_dream_symbols' which lists all symbols.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'DO NOT CONFUSE WITH' section explicitly naming the alternative tool, and a 'WHAT THIS TOOL COVERS' section listing use cases. However, it does not explicitly state when to avoid using this tool beyond the sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_dream_symbols (A)
Read-only, Idempotent

Returns dream symbols from the database with dual-tradition interpretation: Jungian/Western psychological analysis and classical Vedic Swapna Shastra meaning. 500 symbols across 8 categories. Optionally filter by category.

SECTION: WHAT THIS TOOL COVERS Each symbol includes: Jungian meaning and archetype (Shadow, Self, Anima, Animus, Great Mother, Wise Old Man, Hero, Trickster, Persona), Vedic Swapna Shastra meaning with Shubha/Ashubha (auspicious/inauspicious) classification, source text (Agni Purana, Charaka Samhita, Atharva Veda, or traditional folk Swapna Shastra), traditions_agree field flagging where East and West conflict, emotional tone, 2-3 context variants, and related symbol slugs. The traditions_agree='conflict' entries are significant — e.g. Owl (West=wisdom; Vedic=inauspicious, death omen per Agni Purana), Wedding (West=union; Vedic=inauspicious, Charaka Samhita warns illness), Gold (West=the Self; Vedic=financial loss warning per Charaka Samhita). Valid categories: animals, nature, people, places, objects, actions, body, abstract.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_dream_symbol — get full detail for a specific symbol.

SECTION: INPUT CONTRACT category (optional): One of animals, nature, people, places, objects, actions, body, abstract. Omit for all 500 symbols.

SECTION: OUTPUT CONTRACT data.total (int) data.category_filter (string or null) data.symbols[] — each: slug (string) name (string) category (string) jungian_meaning (string) jungian_archetype (string) vedic_meaning (string) vedic_auspicious (bool or null — null = mixed/context-dependent) vedic_source (string) traditions_agree (string — 'agree'|'conflict'|'partial') emotional_tone (string) themes[] (string array — for AI synthesis) context_variants[] — { context (string), meaning (string) } related_symbols[] (string array of slugs)
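Because traditions_agree and the nullable vedic_auspicious field carry the most interpretive weight, a client will often partition on them first. A minimal sketch over a hypothetical slice of data.symbols[] (entries abbreviated; real records carry all the fields listed above):

```python
# Hypothetical, abbreviated entries following the output contract above.
symbols = [
    {"slug": "owl",      "traditions_agree": "conflict", "vedic_auspicious": False},
    {"slug": "elephant", "traditions_agree": "agree",    "vedic_auspicious": True},
    {"slug": "snake",    "traditions_agree": "partial",  "vedic_auspicious": None},
]

# Surface East/West disagreements and context-dependent readings.
conflicts = [s["slug"] for s in symbols if s["traditions_agree"] == "conflict"]
mixed = [s["slug"] for s in symbols if s["vedic_auspicious"] is None]
```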

SECTION: RESPONSE FORMAT response_format=json — symbol array. response_format=markdown — formatted catalogue. Both return identical data.

SECTION: COMPUTE CLASS FAST_LOOKUP — static database.

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Invalid category → 422. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_dream_symbol — single symbol detail by name.

Parameters (JSON Schema)
- category (optional)
- response_format (required)

Output Schema (JSON Schema)
- result (required)

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond the annotations: it reveals that the tool performs a FAST_LOOKUP on a static database, details error contracts (INVALID_PARAMS, INTERNAL_ERROR), and explains the response format options (JSON vs markdown). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections and front-loaded with purpose. While thorough and clear, it is slightly longer than necessary; however, every sentence adds value, and the structure aids readability. The slight verbosity prevents a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers all aspects: purpose, parameters, output contract with field details, error contracts, workflow, response format, and distinction from sibling. Given the presence of an output schema and the tool's complexity, the description is fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fully explains both parameters: the category parameter lists all valid values (animals, nature, people, places, objects, actions, body, abstract) and notes it is optional, and the response_format parameter is described as choosing between JSON and markdown.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns dream symbols with dual-tradition interpretation, specifies 500 symbols across 8 categories, and distinguishes from the sibling tool asterwise_get_dream_symbol for single symbol details. The verb is specific and resource clearly defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'WORKFLOW' section stating it is standalone and recommends using asterwise_get_dream_symbol afterward for detail, and a 'DO NOT CONFUSE WITH' section explicitly differentiating from the sibling. It also explains optional filtering by category, providing clear when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_expression_number (A)
Read-only, Idempotent

Calculates the Expression (Destiny) number from the full name using Pythagorean letter values. Reduces each name part separately before summing — this is the Goodwin/Balliett per-part method which preserves the vibrational weight of compound numbers within each name segment. Master numbers 11, 22, 33 are preserved and not further reduced.

SECTION: WHAT THIS TOOL COVERS The Expression number describes the natural talents, abilities, and shortcomings a person brought into this life — what they are capable of becoming. It differs from the Soul Urge (inner motivation) and Personality (outer mask). Uses all letters A–Z mapped to Pythagorean values 1–9. Not Chaldean (asterwise_get_chaldean_numerology).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_soul_urge_number — complete the core trinity (Expression, Soul Urge, Personality).

SECTION: INPUT CONTRACT name — Full legal name as used at birth. Include all name parts separated by spaces. Example: 'Arjun Mehta', 'Sofia Rossi', 'James Carter' Format: string, any case (uppercase/lowercase both accepted) Constraint: at least one alphabetic character required

SECTION: OUTPUT CONTRACT data.number (int — Expression number; may be 11, 22, or 33 for master numbers) data.is_master_number (bool — true when number is 11, 22, or 33) data.karmic_debt_number (int or null — the karmic debt root if present; null otherwise)
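The per-part method stated above (Pythagorean A-Z values, each name part reduced separately, masters 11/22/33 preserved) is concrete enough to sketch. The following is an illustrative re-implementation of those stated rules, not the server's actual code:

```python
def pythagorean_value(letter: str) -> int:
    """Pythagorean mapping: A=1..I=9, J=1..R=9, S=1..Z=8."""
    return (ord(letter.upper()) - ord("A")) % 9 + 1

def reduce_number(n: int) -> int:
    """Reduce to a single digit, preserving master numbers 11, 22, 33."""
    while n > 9 and n not in (11, 22, 33):
        n = sum(int(d) for d in str(n))
    return n

def expression_number(full_name: str) -> int:
    """Reduce each name part separately, then reduce the sum of parts."""
    part_values = [
        reduce_number(sum(pythagorean_value(c) for c in part if c.isalpha()))
        for part in full_name.split()
    ]
    return reduce_number(sum(part_values))

# 'Arjun Mehta' (an example from the input contract): Arjun -> 19 -> 1,
# Mehta -> 20 -> 2, total 3.
```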

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — validation is upstream. INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_soul_urge_number — vowels only, not all letters. asterwise_get_personality_number — consonants only, not all letters. asterwise_get_numerology_profile — returns Expression plus all other core numbers, pinnacles, challenges, and lucky numbers in one call. asterwise_get_chaldean_numerology — different letter-value system (Chaldean 1–8).

Parameters (JSON Schema)
- name (required)
- response_format (required)

Output Schema (JSON Schema)
- result (required)

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint and idempotentHint. The description adds method details, master-number handling, and an output contract, extending transparency beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured into sections (OUTPUT CONTRACT, DO NOT CONFUSE WITH), each of which adds value without redundancy, though it could be trimmed further.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description still summarizes the output contract and covers the method and master numbers. Adequate for a focused calculation tool; no critical context is missing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must carry parameter semantics on its own. The opening summary mentions only the full name; the format details for name live in the INPUT CONTRACT section rather than the schema, and response_format is described only generically.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Calculates' and the resource 'Expression number', specifies the method (the Goodwin/Balliett per-part reduction), and distinguishes siblings in a 'DO NOT CONFUSE WITH' section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a sibling differentiation section and a brief workflow pointer, but lacks explicit 'when to use' guidance beyond those. Adequate but minimal contextual advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_festival_calendar (A)
Read-only, Idempotent

Computes all major Hindu festival dates for a given year and location. Returns 20 pan-Hindu festivals including solar sankrantis (Makar Sankranti, Vaisakhi) and tithi-based festivals (Diwali, Holi, Dussehra, Janmashtami, Ganesh Chaturthi, Ram Navami, and 12 others).

SECTION: WHAT THIS TOOL COVERS All dates are astronomically computed — no hardcoded calendar dates. Solar festivals (Makar Sankranti, Vaisakhi) use exact Swiss Ephemeris Sun ingress into Lahiri sidereal signs. Tithi festivals (all others) use Sun-Moon elongation at local sunrise: elongation = (moon_lon - sun_lon) % 360, tithi_index = int(elongation / 12), indices 0-14 = Shukla Paksha tithi 1-15, indices 15-29 = Krishna Paksha tithi 1-15. Location is required for sunrise-based tithi determination — the same astronomical event may fall on different calendar dates at different locations.
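The tithi arithmetic quoted above maps directly to code. A minimal sketch of just that formula (longitudes in sidereal degrees at local sunrise; everything else about the festival scan is omitted):

```python
def tithi(sun_lon: float, moon_lon: float) -> tuple:
    """Sun-Moon elongation -> (paksha, tithi 1-15), per the rule above."""
    elongation = (moon_lon - sun_lon) % 360   # degrees, 0..360
    index = int(elongation / 12)              # 30 tithis of 12 degrees each
    if index < 15:
        return ("Shukla", index + 1)          # waxing fortnight, indices 0-14
    return ("Krishna", index - 14)            # waning fortnight, indices 15-29
```

For example, an elongation of 90 degrees falls in Shukla Paksha tithi 8.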

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_panchanga — drill into full Panchanga detail for any specific festival date.

SECTION: INPUT CONTRACT year: integer 1900-2100. Either location (city name) OR latitude + longitude + timezone must be provided.

SECTION: OUTPUT CONTRACT data.year (int) data.timezone (string — IANA timezone used) data.total (int — number of festivals found) data.festivals[] — chronologically sorted: name (string — festival name) date (string — YYYY-MM-DD) type (string — 'solar' or 'tithi') description (string — classical basis, e.g. which tithi of which lunar month) significance (string — cultural and religious significance)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS SLOW_COMPUTE — scans all 365 days of the year per tithi festival (up to 20 date scans).

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: — Sunrise-based tithi may differ by one day from printed almanac calendars (which use midnight or fixed-time rules). — Rare years where a tithi is skipped may cause a festival to not be found (returns total < 20).

SECTION: DO NOT CONFUSE WITH asterwise_get_panchanga_calendar — full Panchanga for every day of a month; not festival-specific. asterwise_get_muhurta — finds auspicious windows for activities; not a festival calendar.

Parameters (JSON Schema)
- year (required)
- latitude (optional)
- location (optional)
- timezone (optional)
- longitude (optional)
- response_format (required)

Output Schema (JSON Schema)
- result (required)

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description details the calculation methodology (Swiss Ephemeris, sunrise-based tithi determination) and edge cases (tithi skipping, almanac differences). Annotations already indicate read-only, idempotent, non-destructive, and open-world behavior; the description adds significant context beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT, WORKFLOW, INPUT, OUTPUT, etc.). While somewhat lengthy, each section adds necessary information. It could be slightly more concise (e.g., the tithi calculation formula might be simplified), but it is effective overall.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers all relevant aspects: input validation, output contract with fields, error scenarios, compute class, and relationships to sibling tools. Output contract details compensate for the absence of explicit output schema text. Completely adequate for a complex tool with 6 parameters and no schema descriptions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description details input contract: year range (1900-2100), requirement for location or lat+lon+timezone, and explanation of response_format options (json vs markdown). This fully compensates for missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it computes major Hindu festival dates for a given year and location, listing specific festivals and distinguishing between solar and tithi-based types. The 'DO NOT CONFUSE WITH' section explicitly names sibling tools (asterwise_get_panchanga_calendar and asterwise_get_muhurta) and their different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow with 'BEFORE: None — standalone' and 'AFTER: asterwise_get_panchanga' for further detail. The 'DO NOT CONFUSE WITH' section gives clear alternatives and when to use them. Error contract and compute class (SLOW_COMPUTE) also guide usage expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_gemstone_recommendations (A)
Read-only · Idempotent

Computes Ratna-style gemstone picks and cautions from the natal chart and returns primary, role-based stones, secondary options, contraindications, and a safety note.

SECTION: WHAT THIS TOOL COVERS Surfaces data.primary, yogakaraka/fifth/ninth/atmakaraka gems, secondary, contraindicated[], and note. Gemstone may be an empty string in primary when dual lordship withholds a recommendation. It does not emit mantra or fasting guidance (asterwise_get_remedies) or Lal Kitab totkas (asterwise_get_lal_kitab_remedies). The same mineral may appear in secondary and contraindicated with different reasons — read reason fields.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — confirm chart before wearing advice. AFTER: asterwise_get_remedies — broader remedial programme if needed.

SECTION: INPUT CONTRACT BirthData follows the global contract.

SECTION: OUTPUT CONTRACT data.primary: planet (string) reason (string) gemstone (string — may be empty when withheld for dual lordship) substitute_gemstone (string) metal (string) colour (string) note (string) caution (string) data.yogakaraka_gem — { planet (string), gemstone (string) } data.fifth_lord_gem — { planet (string), gemstone (string) } data.ninth_lord_gem — { planet (string), gemstone (string) } data.atmakaraka_gem — { planet (string), gemstone (string) } data.secondary — { planet (string), gemstone (string) } data.contraindicated[] — { planet (string), gemstone (string), reason (string) } data.note (string — safety disclaimer)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.
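The warning that one mineral can appear in both secondary and contraindicated is easy to miss when parsing the JSON mode. A minimal client-side sketch of cross-checking the two lists, using a hypothetical response fragment shaped like the OUTPUT CONTRACT above (values are illustrative only):

```python
# Hypothetical response fragment; field names follow the documented
# OUTPUT CONTRACT, the values are made up for illustration.
response = {
    "data": {
        "secondary": {"planet": "Venus", "gemstone": "Diamond"},
        "contraindicated": [
            {"planet": "Venus", "gemstone": "Diamond",
             "reason": "conflicting role for this lagna (illustrative reason)"},
        ],
    },
}

data = response["data"]
secondary_gem = data["secondary"]["gemstone"]

# The same mineral may appear on both lists with different reasons,
# so read the reason field instead of recommending on membership alone.
conflicts = [row for row in data["contraindicated"]
             if row["gemstone"] == secondary_gem]
for row in conflicts:
    print(f"{row['gemstone']} is also contraindicated: {row['reason']}")
```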

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — Dates before 1800 or after 2100 → MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Gemstone duplication across secondary and contraindicated is intentional when roles conflict.

SECTION: DO NOT CONFUSE WITH asterwise_get_remedies — mantras, fasting, charity rows, not a gem matrix. asterwise_get_lal_kitab_remedies — Lal Kitab actions, not classical Ratna picks.

Parameters (JSON Schema):
  birth (required): Birth data for a single person.
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 4/5

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable behavioral context beyond annotations: it explains that gemstone may be empty in primary due to dual lordship, warns about intentional gemstone duplication across secondary and contraindicated, and specifies the compute class as MEDIUM_COMPUTE. No contradiction with annotations.

Conciseness: 3/5

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), but it's verbose with redundant details like full output contract and error specifications that could be in structured fields. Some sentences (e.g., Edge cases about duplication) are necessary, but overall it could be more concise.

Completeness: 5/5

Given the tool's complexity, rich annotations, and detailed output schema, the description is highly complete. It covers purpose, usage guidelines, behavioral nuances, input/output contracts, error handling, and sibling distinctions. The output schema exists, so the description appropriately focuses on contextual information rather than repeating return values.

Parameters: 3/5

Schema description coverage is 50%, with detailed descriptions for birth parameters but none for response_format. The description adds minimal parameter semantics beyond the schema, mainly referencing the global contract for BirthData and explaining the two response_format options. It doesn't fully compensate for the schema gap, but the baseline is appropriate given the schema does substantial documentation.

Purpose: 5/5

The description clearly states the tool computes Ratna-style gemstone recommendations from a natal chart and specifies the exact output structure (primary, role-based stones, secondary options, contraindications, safety note). It distinguishes from siblings by explicitly naming asterwise_get_remedies and asterwise_get_lal_kitab_remedies as different tools.

Usage Guidelines: 5/5

The description provides explicit workflow guidance: BEFORE recommends asterwise_get_natal_chart to confirm the chart, and AFTER suggests asterwise_get_remedies for broader remedial programs. It also clearly states what the tool does NOT cover (mantra, fasting, Lal Kitab totkas), distinguishing it from alternatives.

asterwise_get_ghat_chakra (A)
Read-only · Idempotent

Returns the four Ghatak (inauspicious) timing parameters for a native based on their Janma Rasi (natal Moon sign).

SECTION: WHAT THIS TOOL COVERS Ghat Chakra identifies the four parameters that are persistently inauspicious for a native based on their Janma Rasi:

  1. Ghatak Masa — the lunar month to avoid for major events

  2. Ghatak Tithi — the lunar day group (Nanda/Bhadra/Jaya/Rikta/Purna)

  3. Ghatak Vara — the weekday to avoid for major events

  4. Ghatak Nakshatra — the transit Moon nakshatra to avoid

When transit periods align with these parameters, the native should avoid starting new ventures, surgery, travel, or auspicious ceremonies. When multiple Ghatak parameters coincide simultaneously, the period is considered extremely inauspicious. Source: Muhurta Chintamani Ch.1 (Shubhashubha Prakarana); Phaladeepika Ch.26 (Gocharaphala).
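The Nanda/Bhadra/Jaya/Rikta/Purna grouping referenced under Ghatak Tithi follows a fixed five-tithi cycle within each paksha. A sketch of the standard classification rule (the tool computes the native's actual Ghatak group upstream; this only illustrates the cycle):

```python
# The five tithi groups repeat in order through the 15 tithis of a paksha:
# Nanda (1, 6, 11), Bhadra (2, 7, 12), Jaya (3, 8, 13),
# Rikta (4, 9, 14), Purna (5, 10, 15).
GROUPS = ["Nanda", "Bhadra", "Jaya", "Rikta", "Purna"]

def tithi_group(tithi: int) -> str:
    """Return the group name for a tithi numbered 1-15 within a paksha."""
    if not 1 <= tithi <= 15:
        raise ValueError("tithi must be between 1 and 15")
    return GROUPS[(tithi - 1) % 5]

print(tithi_group(4))   # Rikta
print(tithi_group(11))  # Nanda
```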

SECTION: WORKFLOW BEFORE: None — birth data computes everything needed. AFTER: asterwise_get_nakshatra_prediction — for today's personalized daily auspiciousness score.

SECTION: INPUT CONTRACT birth — BirthData (date, time, lat, lon, timezone). Moon sign is computed from birth data.

SECTION: OUTPUT CONTRACT data.janma_rasi (string — Sanskrit Moon sign name) data.janma_rasi_index (int 0-11) data.ghatak_parameters{}: masa{}: name (string), description (string) tithi{}: group (string), tithi_numbers[] (int array), description (string) vara{}: day (string), description (string) nakshatra{}: name (string), description (string) data.guidance (string) data.avoidance_guidance[] (string array) data.classical_source (string)

SECTION: COMPUTE CLASS MEDIUM_COMPUTE — natal chart for Moon sign.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): BirthData Pydantic violations → MCP INVALID_PARAMS INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_nakshatra_prediction — personalised daily Tarabala score, not static Ghatak parameters. asterwise_get_panchanga — daily panchanga elements, not Ghat Chakra lookup.

Parameters (JSON Schema):
  birth (required): Birth data for a single person.
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 4/5

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false. The description adds compute class, input/output contracts, error contract, and classical sources, which provide additional context beyond annotations.

Conciseness: 4/5

The description is well-structured with clear sections (WHAT, WORKFLOW, INPUT/OUTPUT CONTRACT, etc.). It is somewhat verbose but each section adds value, justifying its length.

Completeness: 5/5

For a complex tool with nested input and output, the description covers input contract, output contract (including fields), error contract, compute class, workflow, and sources. It is thorough and leaves no gaps.

Parameters: 3/5

Schema description coverage is high (most fields have descriptions). The description's input contract section restates the BirthData structure but adds little new meaning beyond the schema.

Purpose: 5/5

The description clearly states the tool returns four Ghatak timing parameters based on Janma Rasi, and distinguishes it from sibling tools like asterwise_get_nakshatra_prediction and asterwise_get_panchanga in the 'DO NOT CONFUSE WITH' section.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use (avoiding new ventures when Ghatak parameters align), workflow (before nothing, after nakshatra prediction), and explicitly names siblings to avoid confusion.

asterwise_get_gochar (A)
Read-only · Idempotent

Computes Gochar against the natal Moon and Lagna and returns per-planet transit longitudes, houses, AVK scores, vedha flags, and a roll-up summary.

SECTION: WHAT THIS TOOL COVERS Produces a transit snapshot: natal Moon and ascendant signs, nine graha transit rows with nakshatra/pada, houses from Moon and Lagna, favourability flags, optional Ashtakavarga bindu (null for Rahu/Ketu), vedha state, interpretation strings, themes, and quality labels, plus summary counts and sade sati / chandra ashtama flags. It does not list ingress events over a range (asterwise_get_transits), correlate with Vimshottari (asterwise_get_dasha_transits), or return Panchanga elements.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — anchors what "natal" means for the same birth record. AFTER: asterwise_get_dasha_transits — adds dasha-lord correlation for today.

SECTION: INPUT CONTRACT target_date (string, optional — YYYY-MM-DD): date to compute transits for; defaults to today if omitted. BirthData follows the global contract (time='00:00' accepted without unknown-time detection).

SECTION: OUTPUT CONTRACT data.natal: moon_sign (string) moon_sign_index (int) ascendant_sign (string) ascendant_sign_index (int) data.target_date (string — YYYY-MM-DD, today) data.transits[] — nine objects (Sun through Ketu): planet (string) transit_sign (string) transit_sign_index (int) transit_degree (float) is_retrograde (bool) nakshatra (string) nakshatra_pada (int) house_from_moon (int — 1–12) house_from_lagna (int — 1–12) is_favorable_from_moon (bool) is_favorable_from_lagna (bool) bindu_override (bool) vedha_active (bool) vedha_blocking_planet (string or null) ashtakavarga_score (int or null — null for Rahu and Ketu) interpretation (string) themes[] (string array) quality (string — 'favorable' or 'unfavorable') data.summary: favorable_count (int) unfavorable_count (int) vedha_blocked_count (int) overall_score (int) sade_sati_active (bool) sade_sati_phase (string or null) sade_sati_interpretation (object or null — populated when sade_sati_active is true; contains phase-specific prose for rising, peak, or setting phase of Sade Sati) chandra_ashtama_active (bool)
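Because a transit row can be favourable by house position yet still blocked by vedha, clients should read both flags together rather than trusting is_favorable_from_moon alone. A sketch against a hypothetical data.transits fragment built from the field names documented above:

```python
# Hypothetical transits fragment; field names follow the documented
# OUTPUT CONTRACT, the values are made up for illustration.
transits = [
    {"planet": "Jupiter", "house_from_moon": 11,
     "is_favorable_from_moon": True, "vedha_active": False},
    {"planet": "Saturn", "house_from_moon": 12,
     "is_favorable_from_moon": False, "vedha_active": False},
    {"planet": "Mercury", "house_from_moon": 4,
     "is_favorable_from_moon": True, "vedha_active": True},
]

# A favourable placement that is vedha-blocked should not count
# as effectively favourable, so filter on both flags.
effective = [t["planet"] for t in transits
             if t["is_favorable_from_moon"] and not t["vedha_active"]]
print(effective)  # ['Jupiter']
```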

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — target_date must match YYYY-MM-DD when provided (Pydantic pattern on the tool schema) → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — ashtakavarga_score is null for Rahu and Ketu.

SECTION: DO NOT CONFUSE WITH asterwise_get_transits — ingress and station tables for a chosen date window, not a single-day Gochar snapshot. asterwise_get_dasha_transits — scores how transits meet active dasha lords, not the full nine-planet Gochar row set.

Parameters (JSON Schema):
  birth (required): Birth data for a single person.
  target_date (optional)
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 4/5

Annotations cover read-only, open-world, idempotent, and non-destructive hints, so the bar is lower. The description adds valuable context beyond annotations: it specifies compute class ('MEDIUM_COMPUTE'), details error contracts (e.g., 'INVALID_PARAMS', 'INTERNAL_ERROR'), and mentions edge cases (e.g., 'ashtakavarga_score is null for Rahu and Ketu'). However, it doesn't explicitly address rate limits or authentication needs, though annotations imply safety.

Conciseness: 5/5

The description is well-structured with clear sections (e.g., 'WHAT THIS TOOL COVERS', 'WORKFLOW'), each sentence adds value without redundancy. It is front-loaded with the core purpose, and the structured format enhances readability and efficiency for an AI agent.

Completeness: 5/5

Given the tool's complexity, rich output schema, and annotations, the description is highly complete. It covers purpose, usage guidelines, input/output contracts, error handling, and sibling differentiation. The output schema exists, so the description doesn't need to explain return values in detail, and it provides all necessary contextual information for effective tool use.

Parameters: 4/5

Schema description coverage is low at 33%, but the description compensates well. 'SECTION: INPUT CONTRACT' explains the optional target_date parameter with format and default behavior, and clarifies BirthData follows a global contract. While it doesn't detail all schema properties like ayanamsa or timezone, it adds meaningful context for key inputs, raising it above the baseline.

Purpose: 5/5

The description starts with a clear verb ('Computes') and specifies the exact resource ('Gochar against the natal Moon and Lagna'), then lists the comprehensive return data. It explicitly distinguishes from siblings like asterwise_get_transits and asterwise_get_dasha_transits, making the purpose specific and differentiated.

Usage Guidelines: 5/5

The 'SECTION: WHAT THIS TOOL COVERS' explicitly states what it does not do (e.g., 'does not list ingress events over a range'), and 'SECTION: DO NOT CONFUSE WITH' names alternatives. 'SECTION: WORKFLOW' provides clear before/after recommendations (e.g., 'BEFORE: RECOMMENDED — asterwise_get_natal_chart'), giving explicit guidance on when to use this tool versus others.

asterwise_get_hora (A)
Read-only · Idempotent

Builds the twenty-four planetary Horas between successive sunrises for a location and date, and tags each hour with its ruler, quality text, and whether it is current.

SECTION: WHAT THIS TOOL COVERS Classical hora muhurta: sequence restarts from the weekday lord, spans day and night until next sunrise, and exposes start/end in local time. It is not a natal divisional chart, not Choghadiya (asterwise_get_choghadiya), and not full Panchanga (asterwise_get_panchanga).
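The "sequence restarts from the weekday lord" rule is the classical Chaldean-order succession. A sketch of how the twenty-four rulers could be derived under that rule (the tool computes actual start/end times from sunrise; this only illustrates the ruler sequence):

```python
# Chaldean order governs hora succession; the day's first hora
# belongs to the weekday lord, then each hour advances one step.
CHALDEAN = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]
WEEKDAY_LORD = {"Sunday": "Sun", "Monday": "Moon", "Tuesday": "Mars",
                "Wednesday": "Mercury", "Thursday": "Jupiter",
                "Friday": "Venus", "Saturday": "Saturn"}

def hora_rulers(weekday: str) -> list[str]:
    """Twenty-four hora rulers from sunrise, starting with the weekday lord."""
    start = CHALDEAN.index(WEEKDAY_LORD[weekday])
    return [CHALDEAN[(start + i) % 7] for i in range(24)]

rulers = hora_rulers("Sunday")
print(rulers[0])  # Sun — the first hora belongs to the day lord
print(rulers[1])  # Venus — the next planet in Chaldean order
```

A useful sanity check on this rule: twenty-four steps after Sunday's first hora lands on the Moon, Monday's day lord, so the weekday sequence itself falls out of the Chaldean cycle.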

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_choghadiya — alternative same-day slot system.

SECTION: INPUT CONTRACT LocationInput date/coordinate rules apply locally (YYYY-MM-DD, bounded lat/lon).

SECTION: OUTPUT CONTRACT data.date (string) data.sunrise (string — HH:MM) data.next_sunrise (string — HH:MM) data.horas[] — twenty-four objects: hora (int — 1–24) ruling_planet (string) start (string — HH:MM local) end (string — HH:MM local) quality (string — suitable activities description) is_current (bool)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid LocationInput fields → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Horas bridge midnight until the next sunrise reference.

SECTION: DO NOT CONFUSE WITH asterwise_get_choghadiya — sixteen Choghadiya segments, not twenty-four Horas. asterwise_get_natal_chart — natal analysis, not hourly muhurta tables.

Parameters (JSON Schema):
  location (required): For tools that need location but not birth time.

Output Schema (JSON Schema):
  result (required)
Behavior: 4/5

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable behavioral context beyond annotations: it explains the hora sequence logic ('sequence restarts from the weekday lord, spans day and night until next sunrise'), mentions edge cases ('Horas bridge midnight until the next sunrise reference'), and specifies the compute class ('FAST_LOOKUP'). No contradiction with annotations exists.

Conciseness: 4/5

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.) that make it easy to scan. While comprehensive, every section serves a purpose: distinguishing from siblings, explaining workflow, documenting contracts, and preventing confusion. Some sections could be more concise, but overall it's efficiently organized.

Completeness: 5/5

Given the tool has comprehensive annotations, 100% schema coverage, and an output schema (implied by the detailed OUTPUT CONTRACT section), the description is complete. It covers purpose, differentiation from siblings, behavioral context, input/output contracts, error handling, and edge cases. The existence of an output schema means the description doesn't need to explain return values in detail.

Parameters: 3/5

Schema description coverage is 100%, so the schema already fully documents all parameters. The description adds minimal parameter semantics beyond the schema: it mentions 'LocationInput date/coordinate rules apply locally (YYYY-MM-DD, bounded lat/lon)', which restates what's in the schema. The baseline of 3 is appropriate when the schema does the heavy lifting.

Purpose: 5/5

The description clearly states the tool 'Builds the twenty-four planetary Horas between successive sunrises for a location date and tags each hour with ruler, quality text, and whether it is current.' This is a specific verb ('Builds') + resource ('twenty-four planetary Horas') with precise scope. It explicitly distinguishes from siblings like asterwise_get_choghadiya and asterwise_get_natal_chart in the 'DO NOT CONFUSE WITH' section.

Usage Guidelines: 5/5

The description provides explicit guidance on when to use this tool versus alternatives. The 'SECTION: WHAT THIS TOOL COVERS' states it's 'not Choghadiya (asterwise_get_choghadiya), and not full Panchanga (asterwise_get_panchanga).' The 'SECTION: DO NOT CONFUSE WITH' section explicitly names two sibling tools and explains the differences. The 'SECTION: WORKFLOW' clarifies it's standalone with no prerequisites.

asterwise_get_horoscope (A)
Read-only · Idempotent

Fetches an AI-synthesised Moon-sign horoscope for a chosen horizon and returns structured guidance fields plus metadata about the model and period.

SECTION: WHAT THIS TOOL COVERS Calls the upstream horoscope service for a lunar sign (English or Sanskrit input accepted; response normalises moon_sign to lowercase English) and a period of daily, weekly, monthly, or yearly. It returns narrative and checklist-style content for life areas, remedy, and timing flavour text. It does not compute a personal natal chart, divisional charts, or dasha — only sign-level transit-flavoured copy tied to the requested horizon.

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_natal_chart — if the user needs a personalised chart beyond sign-general copy.

SECTION: INPUT CONTRACT period is constrained to the tool schema enum (daily, weekly, monthly, yearly). moon_sign accepts Sanskrit (Tula, Vrischika, Karka, Simha, Kanya, Dhanu, Makara, Kumbha, Meena, Mesha, Vrishabha, Mithuna) or English (Libra, Scorpio, Cancer, Leo, Virgo, Sagittarius, Capricorn, Aquarius, Pisces, Aries, Taurus, Gemini); resolution is upstream. response_format selects JSON vs markdown rendering only.
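For client-side validation before calling the tool, the sign pairs listed above can be mirrored locally. An illustrative sketch (actual resolution happens upstream; this table only restates the documented Sanskrit/English pairs):

```python
# Sanskrit -> lowercase-English sign map, mirroring the pairs listed
# in the input contract; normalisation itself is performed upstream.
SANSKRIT_TO_ENGLISH = {
    "Mesha": "aries", "Vrishabha": "taurus", "Mithuna": "gemini",
    "Karka": "cancer", "Simha": "leo", "Kanya": "virgo",
    "Tula": "libra", "Vrischika": "scorpio", "Dhanu": "sagittarius",
    "Makara": "capricorn", "Kumbha": "aquarius", "Meena": "pisces",
}

def normalise_moon_sign(sign: str) -> str:
    """Accept Sanskrit or English input; return lowercase English."""
    return SANSKRIT_TO_ENGLISH.get(sign.title(), sign).lower()

print(normalise_moon_sign("Tula"))   # libra
print(normalise_moon_sign("Libra"))  # libra
```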

SECTION: OUTPUT CONTRACT data.content: do[] (string array) body (string) love (string) avoid[] (string array) money (string) career (string) remedy (string) headline (string) narrative (string) open_loop (string) data.model_used (string — AI model version label) data.generated_at (string — ISO UTC) data.period_key (string — YYYY-MM-DD for daily; identifier for other horizons) data.horizon (string — 'daily', 'weekly', 'monthly', or 'yearly') data.moon_sign (string — lowercase English, e.g. 'libra')

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid period enum or other Pydantic field violations on the tool schema → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — Unknown or unsupported moon_sign → MCP INTERNAL_ERROR at the tool layer (upstream rejection).

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Sign-level content only; not a substitute for birth-chart analysis.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — full personalised sidereal chart from birth data, not Moon-sign editorial copy. asterwise_get_gochar — nine-planet transit snapshot vs natal chart for today, not AI horoscope prose.

Parameters (JSON Schema):
  period (required)
  moon_sign (required)
  response_format (required)

Output Schema (JSON Schema):
  result (required)
Behavior: 5/5

The description adds substantial behavioral context beyond the annotations. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description provides critical details: it's a 'FAST_LOOKUP' compute class, explains the upstream service call, describes error handling (INVALID_PARAMS and INTERNAL_ERROR scenarios), clarifies that it only provides 'sign-level content' not birth-chart analysis, and details the response format options (JSON vs markdown) and their identical underlying data.

Conciseness: 4/5

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.) that make information easy to find. While comprehensive, it's appropriately sized for a tool with complex behavior and 0% schema coverage. Some sections could be slightly more concise, but every sentence adds value beyond structured fields.

Completeness: 5/5

Given the tool's complexity (3 parameters with 0% schema coverage, multiple sibling tools, and rich output), the description is exceptionally complete. It covers purpose, usage guidelines, input semantics, output structure, response formats, compute class, error handling, and sibling differentiation. With an output schema present, it appropriately focuses on explaining the meaning and use of output fields rather than just listing them.

Parameters: 5/5

With 0% schema description coverage, the description fully compensates by providing comprehensive parameter semantics. 'SECTION: INPUT CONTRACT' details: period is constrained to the enum values, moon_sign accepts both Sanskrit and English names with normalization to lowercase English, and response_format selects JSON vs markdown rendering. It also explains resolution is upstream and what each format provides, adding crucial context not in the bare schema.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'fetches an AI-synthesised Moon-sign horoscope' with specific details about what it returns (structured guidance fields plus metadata). It explicitly distinguishes this from sibling tools like 'asterwise_get_natal_chart' and 'asterwise_get_gochar' by explaining this provides 'sign-level transit-flavoured copy' rather than personalized charts or planetary transit snapshots.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance in multiple sections: 'SECTION: WHAT THIS TOOL COVERS' explains what it does and doesn't do (no natal chart, divisional charts, or dasha), 'SECTION: WORKFLOW' specifies when to use this tool (standalone) and when to use 'asterwise_get_natal_chart' instead, and 'SECTION: DO NOT CONFUSE WITH' clearly differentiates from two specific sibling tools with their purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_karmic_lessons — Grade: A
Read-only · Idempotent

Identifies karmic lessons by scanning all letter values in the full name and finding which digits 1–9 are absent. Each missing digit represents an area where experience is thin and development is needed in this lifetime.

SECTION: WHAT THIS TOOL COVERS Every letter in the name maps to a digit 1–9. Digits that never appear indicate karmic lessons — life areas the soul has not yet fully explored. A person missing the digit 4 (associated with system-building, discipline, endurance) will find practical organisation a recurring challenge. Missing multiple digits intensifies the developmental pressure in those areas. Digit zero is not used — only 1 through 9.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_numerology_profile — see karmic lessons alongside all core numbers.

SECTION: INPUT CONTRACT name — Full legal name as used at birth. Example: 'Arjun Mehta' — scan all 10 letters for their Pythagorean digit values. Letter values: A=1, R=9, J=1, U=3, N=5, M=4, E=5, H=8, T=2, A=1. Digits present: {1,2,3,4,5,8,9} → Missing: {6,7}

SECTION: OUTPUT CONTRACT data.karmic_lessons (int array — missing digits 1–9, sorted ascending; empty if none missing) data.has_karmic_lessons (bool — false when all nine digits are present)
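The scan and output fields described above can be sketched in a few lines. This is a minimal illustration, assuming the standard Pythagorean letter mapping (alphabet position modulo 9); the helper names are invented, not the server's implementation.

```python
def pythagorean_digit(letter: str) -> int:
    """Map A..Z to 1..9 by alphabet position modulo 9 (so I=9, R=9, Z=8)."""
    return (ord(letter.upper()) - ord("A")) % 9 + 1

def karmic_lessons(name: str) -> dict:
    """Mirror the OUTPUT CONTRACT: missing digits 1-9, sorted ascending."""
    present = {pythagorean_digit(c) for c in name if c.isalpha()}
    missing = sorted(set(range(1, 10)) - present)
    return {"karmic_lessons": missing, "has_karmic_lessons": bool(missing)}

print(karmic_lessons("Arjun Mehta"))
# → {'karmic_lessons': [6, 7], 'has_karmic_lessons': True}
```

This reproduces the worked example from the input contract: 'Arjun Mehta' covers digits {1,2,3,4,5,8,9}, leaving {6,7} as karmic lessons.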

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: upstream failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_balance_number — uses only first letters (initials), not all letters. asterwise_get_expression_number — reduces all letters to a single number; does not scan for absent digits.

Parameters (JSON Schema)
name — required
response_format — required

Output Schema (JSON Schema)
result — required
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds value by explaining the algorithmic logic (missing digits) and providing an output contract. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loading the purpose in the first sentence, and uses a clear output contract section. No unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the tool's logic and output structure thoroughly, enabling an AI to understand expected inputs and outputs. Lacks details on error handling or edge cases, but this is adequate given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not explicitly describe the 'name' or 'response_format' parameters. With 0% schema description coverage, the description only indirectly clarifies usage via the output contract. It adds minimal value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool identifies karmic lessons by finding absent digits 1-9 from name's letter values, specifying the exact resource and action. It distinguishes itself from sibling tools like get_expression_number or get_soul_urge_number.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no explicit instruction on when to use this tool versus other asterwise tools, such as when to prefer other numerology tools. However, the purpose is clear enough for an AI to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_kp_chart — Grade: A
Read-only · Idempotent

Builds a KP natal chart with sub-lords on grahas and twelve cusps from BirthData using the KP ayanamsa in the response.

SECTION: WHAT THIS TOOL COVERS Krishnamurti Paddhati charting: lagna row, planet rows with nakshatra_index and sub_lord, house_cusps with matching lords. Recommend BirthData.ayanamsa='kp' for coherent physics — any enum value is accepted locally and forwarded. Accurate birth time matters; midnight placeholder yields unreliable sub-lords for event timing. Not BPHS radix (asterwise_get_natal_chart) nor prashna (asterwise_get_prashna_chart).

SECTION: WORKFLOW BEFORE: RECOMMENDED — cross-check birth record before trusting sub-lords. AFTER: asterwise_get_kp_significators — house-level significator chains.

SECTION: INPUT CONTRACT ayanamsa choice is not forced locally — mismatched settings still post to upstream. time='00:00' is accepted without warning.

SECTION: OUTPUT CONTRACT data.ayanamsa (string — 'kp') data.lagna: rashi (string) rashi_index (int) longitude (float) nakshatra_lord (string) sub_lord (string) data.planets{} — Sun..Ketu: longitude (float) rashi_index (int) rashi (string) degree (float) is_retrograde (bool) house (int) nakshatra_index (int — 0–26) nakshatra_lord (string) sub_lord (string) data.house_cusps{} — keys '1'..'12': longitude (float) rashi_index (int) rashi (string) nakshatra_lord (string) sub_lord (string)
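A minimal sketch of walking this nested output when response_format='json'. Field names follow the contract above; the sample payload is invented and trimmed to one planet and one cusp.

```python
# Invented sample payload, shaped per the OUTPUT CONTRACT (trimmed).
payload = {
    "data": {
        "ayanamsa": "kp",
        "lagna": {"rashi": "Mesha", "rashi_index": 0, "longitude": 12.5,
                  "nakshatra_lord": "Ketu", "sub_lord": "Venus"},
        "planets": {
            "Moon": {"longitude": 101.3, "rashi_index": 3, "rashi": "Karka",
                     "degree": 11.3, "is_retrograde": False, "house": 4,
                     "nakshatra_index": 8, "nakshatra_lord": "Saturn",
                     "sub_lord": "Mercury"},
        },
        "house_cusps": {
            "1": {"longitude": 12.5, "rashi_index": 0, "rashi": "Mesha",
                  "nakshatra_lord": "Ketu", "sub_lord": "Venus"},
        },
    },
}

# Cusp keys are strings '1'..'12'; planet keys are names Sun..Ketu.
cusp_sub_lords = {int(k): v["sub_lord"]
                  for k, v in payload["data"]["house_cusps"].items()}
retrogrades = [p for p, row in payload["data"]["planets"].items()
               if row["is_retrograde"]]
print(cusp_sub_lords[1], retrogrades)  # → Venus []
```

Note the key shapes: house_cusps is keyed by string digits, so callers doing arithmetic on house numbers should convert explicitly.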

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Sub-lord chains degrade when true birth time unknown.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — Parashari bundle without KP sub-lords. asterwise_get_kp_ruling_planets — live moment rulers, not natal cusps.

Parameters (JSON Schema)
birth — required — Birth data for a single person.
response_format — required

Output Schema (JSON Schema)
result — required
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations: it explains that accurate birth time matters for reliable sub-lords, describes how midnight placeholders affect results, mentions that mismatched ayanamsa settings are forwarded upstream without local enforcement, and notes that sub-lord chains degrade with unknown birth times. While annotations cover safety (readOnly, non-destructive), the description provides practical implementation details without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.) that make it easy to scan. While comprehensive, every section adds value: distinguishing from siblings, providing workflow guidance, clarifying input/output behavior, and warning about edge cases. Some redundancy exists (output details appear in both OUTPUT CONTRACT and RESPONSE FORMAT), but overall it's efficiently organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with specific KP methodology), rich annotations, and detailed output schema, the description provides excellent contextual completeness. It covers the tool's purpose, when to use it, input considerations, output structure, error handling, sibling differentiation, and practical warnings. The presence of an output schema means the description doesn't need to explain return values in detail, allowing it to focus on higher-level context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 50% schema description coverage (birth object has detailed descriptions but response_format lacks description), the description compensates well. It explains the purpose of the ayanamsa parameter ('kp' recommended for coherent physics), clarifies that time='00:00' is accepted without warning, and provides context about birth data requirements. The 'INPUT CONTRACT' section adds semantic understanding beyond the basic schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Builds a KP natal chart with sub-lords on grahas and twelve cusps from BirthData using the KP ayanamsa', specifying the exact astrological system (Krishnamurti Paddhati), what it calculates (sub-lords for planets and cusps), and the required input (BirthData). It explicitly distinguishes from siblings asterwise_get_natal_chart and asterwise_get_prashna_chart in the 'DO NOT CONFUSE WITH' section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: it recommends using this for KP charting (not BPHS radix or prashna), suggests cross-checking birth records before trusting sub-lords, and mentions asterwise_get_kp_significators as a follow-up tool. It also warns about time='00:00' yielding unreliable sub-lords and recommends ayanamsa='kp' for coherent physics.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_kp_ruling_planets — Grade: A
Read-only · Idempotent

Computes KP ruling planets for the instantaneous chart at lat/lon with no birth data and returns day lord, Moon/Ascendant lord chains, and a deduplicated ruling_planets list.

SECTION: WHAT THIS TOOL COVERS Current-moment KP snapshot: target_utc, day_lord, Moon and Ascendant tuples with sign/nakshatra/sub lords, ruling_planets[] unique names. Not natal positions (asterwise_get_kp_chart) and not house significators (asterwise_get_kp_significators). Coordinate sanity checking happens upstream — the lat/lon floats are not validated locally beyond whatever FastMCP passes through.

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_kp_chart — if natal confirmation is needed afterwards.

SECTION: INPUT CONTRACT lat and lon only; no date parameter — "now" is implicit on the server clock.

SECTION: OUTPUT CONTRACT data.ayanamsa (string — 'kp') data.target_utc (string — ISO UTC) data.day_lord (string — planet name) data.moon: longitude (float) rashi (string) sign_lord (string) nakshatra_lord (string) sub_lord (string) data.ascendant — same fields as data.moon data.ruling_planets[] (string array — unique names, deduplicated)
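The deduplicated ruling_planets list above can be derived from the day lord plus the two lord chains. This is an illustrative sketch, assuming first-seen order is preserved; the function name and sample values are invented.

```python
def ruling_planets(day_lord: str, moon: dict, ascendant: dict) -> list[str]:
    """Collect unique planet names from the day lord and both lord chains,
    preserving first-seen order (dict.fromkeys deduplicates in order)."""
    chain = [day_lord]
    for body in (moon, ascendant):
        chain += [body["sign_lord"], body["nakshatra_lord"], body["sub_lord"]]
    return list(dict.fromkeys(chain))

# Invented sample tuples shaped like data.moon / data.ascendant above.
moon = {"sign_lord": "Mars", "nakshatra_lord": "Saturn", "sub_lord": "Venus"}
asc = {"sign_lord": "Venus", "nakshatra_lord": "Moon", "sub_lord": "Mars"}
print(ruling_planets("Sun", moon, asc))
# → ['Sun', 'Mars', 'Saturn', 'Venus', 'Moon']
```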

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — lat/lon not range-checked locally.

INVALID_PARAMS (upstream): — None — coordinate errors surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Represents instantaneous sky — differs from natal stored charts.

SECTION: DO NOT CONFUSE WITH asterwise_get_kp_chart — needs BirthData and returns full natal KP cusps. asterwise_get_prashna_chart — horary keyword workflow, not ruling-planet snapshot.

Parameters (JSON Schema)
lat — required
lon — required
response_format — required

Output Schema (JSON Schema)
result — required
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context beyond this: it specifies the tool uses server clock for 'now', notes coordinate validation is upstream, describes edge cases (instantaneous vs. natal), and details error handling (e.g., INTERNAL_ERROR for upstream failures). This enriches behavioral understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), making it easy to scan. Each sentence adds value, such as distinguishing from siblings or explaining output fields, with no redundant information. The front-loaded summary immediately states the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological computations), the description is highly complete. It covers purpose, usage, input semantics, output structure (detailed in OUTPUT CONTRACT), error handling, and sibling differentiation. With annotations covering safety and an output schema likely detailing return values, the description fills all necessary contextual gaps without over-explaining structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains 'lat and lon only; no date parameter — "now" is implicit on the server clock', clarifying the temporal aspect. For 'response_format', it details the differences between 'json' and 'markdown' modes. However, it doesn't specify expected formats or ranges for lat/lon beyond noting they're 'not range-checked locally', leaving some ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with a clear verb ('Computes') and resource ('KP ruling planets'), specifying it's for an instantaneous chart with lat/lon and no birth data. It explicitly distinguishes from siblings like 'asterwise_get_kp_chart' (needs birth data) and 'asterwise_get_kp_significators' (house significators), making the purpose specific and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'WHAT THIS TOOL COVERS' section details what it does and does not cover, while 'DO NOT CONFUSE WITH' explicitly lists alternative tools and when to use them (e.g., 'asterwise_get_kp_chart — needs BirthData'). The 'WORKFLOW' section provides clear before/after guidance, making usage context explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_kp_significators — Grade: A
Read-only · Idempotent

Computes KP significator chains for all houses or one optional house from BirthData and returns house tables plus planet-tier reverse indexes.

SECTION: WHAT THIS TOOL COVERS For each house string key '1'..'12', lists sign_lord, occupants, nakshatra lord chains, and unions; planet_significators{} exposes tiered house lists per graha. Optional house_number filters upstream when provided. Out-of-range house integers are not validated locally — upstream handles errors as MCP INTERNAL_ERROR. Requires same birth tuple as asterwise_get_kp_chart for coherent analysis.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_kp_chart — establish cusps before significators. AFTER: None.

SECTION: INPUT CONTRACT house_number optional int; omit for all twelve. Values outside 1..12 are validated upstream only.

SECTION: OUTPUT CONTRACT data.ayanamsa (string — 'kp') data.significators{} — keys '1'..'12': house (int) sign_lord (string) occupants[] (string array) nak_of_occupants[] (string array) nak_of_lord[] (string array) all_significators[] (string array) data.planet_significators{} — per planet: tier1_houses[] (int array) tier2_houses[] (int array) tier3_houses[] (int array) tier4_houses[] (int array) all_significators[] (int array)
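The planet_significators{} reverse index above can be built by inverting the per-house tables. The sketch below is illustrative only: the tier-to-field mapping shown (occupant-nakshatra, occupant, lord-nakshatra, sign lord) is an assumption about the server's ordering, not a documented rule.

```python
from collections import defaultdict

def invert_significators(significators: dict) -> dict:
    """Invert house tables into per-planet tier lists (assumed tier order)."""
    tiers = ("nak_of_occupants", "occupants", "nak_of_lord", "sign_lord")
    out = defaultdict(lambda: {f"tier{i}_houses": [] for i in range(1, 5)})
    for key, row in significators.items():
        house = int(key)  # house keys are strings '1'..'12'
        for i, field in enumerate(tiers, start=1):
            value = row[field]
            planets = value if isinstance(value, list) else [value]
            for planet in planets:
                out[planet][f"tier{i}_houses"].append(house)
    return dict(out)

# Invented single-house sample shaped like data.significators above.
sample = {"1": {"sign_lord": "Mars", "occupants": ["Sun"],
                "nak_of_occupants": ["Ketu"], "nak_of_lord": ["Venus"]}}
print(invert_significators(sample)["Sun"]["tier2_houses"])  # → [1]
```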

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — house_number not bounds-checked here.

INVALID_PARAMS (upstream): — None — invalid house indices surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Filtering to one house still returns planet_significators{} for context.

SECTION: DO NOT CONFUSE WITH asterwise_get_kp_chart — cusps and sub-lords, not tiered significator unions. asterwise_get_natal_chart — Parashari drishti matrices differ from KP significator tiers.

Parameters (JSON Schema)
birth — required — Birth data for a single person.
house_number — optional
response_format — required

Output Schema (JSON Schema)
result — required
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond this: it specifies that 'Out-of-range house integers are not validated locally — upstream handles errors as MCP INTERNAL_ERROR' and 'Filtering to one house still returns planet_significators{} for context.' It also mentions 'MEDIUM_COMPUTE' class and error handling details, which are not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to scan. Each sentence adds value, such as explaining output structure, error handling, and sibling tool distinctions, with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the description is highly complete. It covers purpose, usage guidelines, input/output contracts, error handling, and sibling distinctions. With annotations providing safety and idempotency hints, and an output schema detailing the return structure, the description fills in necessary contextual gaps effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 33%, which is low. The description adds some parameter context: 'house_number optional int; omit for all twelve. Values outside 1..12 are validated upstream only' and mentions 'Requires same birth tuple as asterwise_get_kp_chart for coherent analysis.' However, it doesn't fully compensate for the low schema coverage, as it doesn't explain the 'birth' object details or 'response_format' beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'computes KP significator chains for all houses or one optional house from BirthData' and 'returns house tables plus planet-tier reverse indexes.' It distinguishes from siblings by explicitly naming 'asterwise_get_kp_chart' and 'asterwise_get_natal_chart' and explaining how they differ in function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: 'BEFORE: RECOMMENDED — asterwise_get_kp_chart — establish cusps before significators' and 'DO NOT CONFUSE WITH' sections that clarify when to use this tool versus alternatives. It also specifies that it 'Requires same birth tuple as asterwise_get_kp_chart for coherent analysis.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_lal_kitab_chart — Grade: A
Read-only · Idempotent

Produces the Lal Kitab house and planet schema plus Rin (debt) flags from BirthData using Lal Kitab placement rules distinct from Parashari BPHS.

SECTION: WHAT THIS TOOL COVERS Returns data.system 'lal_kitab', ayanamsa, planets{} with lk_house and pucca/kachcha flags, twelve houses{} with occupants and significations, and rin_analysis with boolean debts, active_rins[], and rin_remedies[] rows. Do not merge these houses with asterwise_get_natal_chart Bhava Chalit without explicit user intent — frameworks differ.

SECTION: WORKFLOW BEFORE: None — standalone for Lal Kitab queries. AFTER: asterwise_get_lal_kitab_remedies — practical totkas aligned to this chart.

SECTION: INPUT CONTRACT BirthData global contract; mixing interpretive systems in prose is a caller concern, not validated here.

SECTION: OUTPUT CONTRACT data.system (string — 'lal_kitab') data.ayanamsa (string) data.planets{} — Sun..Ketu: longitude (float) rashi_index (int) rashi (string) lk_house (int — 1–12) house_lord (string) is_retrograde (bool) pucca_ghar (bool) kachcha_ghar (bool) uchcha (bool) neecha (bool) pucca_house (int) kachcha_house (int) data.houses{} — keys '1'..'12': house (int) rashi_index (int) rashi (string) lord (string) occupants[] (string array) signification (string) has_benefic (bool) has_malefic (bool) data.rin_analysis: pitru_rin, matru_rin, bhai_rin, stri_rin, dev_rin (bool) active_rins[] (string array) rin_remedies[] — { rin (string), planet (string), totka (string) }
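The relationship between the boolean debt flags and active_rins[] above can be sketched as a simple collapse. The sample values are invented; only the field names come from the contract.

```python
# The five Rin flags named in rin_analysis, per the OUTPUT CONTRACT above.
RIN_FLAGS = ("pitru_rin", "matru_rin", "bhai_rin", "stri_rin", "dev_rin")

def active_rins(rin_analysis: dict) -> list[str]:
    """List the Rin names whose boolean flag is set, in contract order."""
    return [flag for flag in RIN_FLAGS if rin_analysis.get(flag)]

sample = {"pitru_rin": True, "matru_rin": False, "bhai_rin": False,
          "stri_rin": True, "dev_rin": False}
print(active_rins(sample))  # → ['pitru_rin', 'stri_rin']
```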

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Lal Kitab houses are not interchangeable with BPHS cusps.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — Parashari radix, not Lal Kitab lk_house logic. asterwise_get_lal_kitab_remedies — remedy list without full chart geometry.

Parameters (JSON Schema)
birth — required — Birth data for a single person.
response_format — required

Output Schema (JSON Schema)
result — required
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While annotations (readOnlyHint=true, destructiveHint=false) indicate a safe read operation, the description provides additional details: it specifies the compute class (MEDIUM_COMPUTE), describes the output format options (json vs markdown), explains error handling (INVALID_PARAMS, INTERNAL_ERROR), and warns about edge cases ('Lal Kitab houses are not interchangeable with BPHS cusps'). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to scan. Every sentence serves a specific purpose—explaining scope, distinguishing from siblings, describing workflow, or clarifying contracts—with no redundant information. The information is front-loaded with the core purpose in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with multiple output components), the description provides comprehensive context. It details the exact output structure, specifies compute requirements, explains error handling, and distinguishes from related tools. With annotations covering safety aspects and an output schema presumably detailing the response structure, the description fills all necessary gaps without duplicating structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 2 parameters (birth and response_format) and 50% schema description coverage, the description adds meaningful context. It states 'BirthData global contract; mixing interpretive systems in prose is a caller concern, not validated here', clarifying that parameter validation follows a standard pattern. While it doesn't detail individual parameter fields, it provides high-level guidance about the input contract that complements the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Produces the Lal Kitab house and planet schema plus Rin (debt) flags from BirthData', specifying the exact verb ('produces'), resource ('Lal Kitab house and planet schema'), and output components. It explicitly distinguishes this from sibling tools like 'asterwise_get_natal_chart' (Parashari BPHS) and 'asterwise_get_lal_kitab_remedies' (remedy list only), providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit guidance on when to use this tool versus alternatives. It states 'Do not merge these houses with asterwise_get_natal_chart Bhava Chalit without explicit user intent — frameworks differ' and distinguishes it from 'asterwise_get_lal_kitab_remedies'. It also specifies 'BEFORE: None — standalone for Lal Kitab queries' and 'AFTER: asterwise_get_lal_kitab_remedies', providing clear workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_lal_kitab_remedies (A)
Read-only, Idempotent

Lists Lal Kitab style totkas per stressed planet from BirthData with priority tiers and typed action rows (remedy, donation, keep, avoid).

SECTION: WHAT THIS TOOL COVERS Outputs data.system, ayanamsa, and remedies[] entries tying planets to lk context and nested remedies[] instructions. Distinct from Parashari mantra/gem rows (asterwise_get_remedies). Best interpreted alongside asterwise_get_lal_kitab_chart for house context.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_lal_kitab_chart — see chart before applying totkas. AFTER: None.

SECTION: INPUT CONTRACT BirthData only.

SECTION: OUTPUT CONTRACT data.system (string — 'lal_kitab') data.ayanamsa (string) data.remedies[] — each: planet (string) lk_house (int) rashi (string) pucca_ghar (bool) kachcha_ghar (bool) uchcha (bool) neecha (bool) priority (string — 'high', 'medium', or 'low') remedies[] — { type (string — 'remedy', 'donation', 'keep', or 'avoid'), action (string) }
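The nested remedies shape described above can be illustrated with a minimal payload sketch. All values here are hypothetical examples, not actual upstream output:

```json
{
  "data": {
    "system": "lal_kitab",
    "ayanamsa": "lahiri",
    "remedies": [
      {
        "planet": "Saturn",
        "lk_house": 8,
        "rashi": "Makara",
        "pucca_ghar": false,
        "kachcha_ghar": true,
        "uchcha": false,
        "neecha": false,
        "priority": "high",
        "remedies": [
          { "type": "donation", "action": "Donate mustard oil on Saturdays." },
          { "type": "avoid", "action": "Avoid acquiring iron items on Saturdays." }
        ]
      }
    ]
  }
}
```

Note the two levels of `remedies[]`: the outer array is keyed by planet, the inner array holds the typed action rows.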

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Empty remedies[] possible when no graha needs attention — still success if upstream returns so.

SECTION: DO NOT CONFUSE WITH asterwise_get_remedies — BPHS mantra/gem prescriptions, not Lal Kitab totkas. asterwise_get_gemstone_recommendations — classical Ratna focus, not household remedies.

Parameters (JSON Schema)
  birth (required): Birth data for a single person.
  response_format (required)

Output Schema (JSON Schema)
  result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this: it specifies the compute class (MEDIUM_COMPUTE), details error contracts (INVALID_PARAMS, INTERNAL_ERROR), and explains edge cases like empty remedies[] arrays. It does not contradict the annotations, which indicate a safe, non-destructive read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), making it easy to scan. Each sentence adds value, such as distinguishing from siblings or explaining output formats, with no redundant information. It is appropriately sized for the tool's complexity, providing necessary details without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (with nested output structures) and the presence of annotations and an output schema, the description is highly complete. It covers purpose, usage guidelines, behavioral context (compute class, errors), parameter details, and output structure. The output schema is referenced in the 'OUTPUT CONTRACT' section, so the description need not explain return values in depth, focusing instead on interpretation and workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with the 'birth' object well-documented but 'response_format' lacking description. The description compensates by detailing the 'response_format' parameter in the 'RESPONSE FORMAT' section, explaining that 'json' serializes for programmatic use and 'markdown' renders human-readable reports, with both returning identical data. This adds meaningful semantics beyond the schema's enum definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Lists Lal Kitab style totkas per stressed planet from BirthData with priority tiers and typed action rows.' It specifies the exact resource (Lal Kitab totkas), the input source (BirthData), and the output structure. It explicitly distinguishes from sibling tools like 'asterwise_get_remedies' and 'asterwise_get_gemstone_recommendations' by noting they cover different remedy systems.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'Distinct from Parashari mantra/gem rows (asterwise_get_remedies)' and 'Best interpreted alongside asterwise_get_lal_kitab_chart for house context.' It also includes a 'WORKFLOW' section recommending to use 'asterwise_get_lal_kitab_chart' before applying totkas and a 'DO NOT CONFUSE WITH' section listing specific sibling tools to avoid confusion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_lo_shu_grid (A)
Read-only, Idempotent

Derives a Lo Shu three-by-three frequency grid from birth-date digits and annotates planes, missing or repeated digits, and per-digit traits.

SECTION: WHAT THIS TOOL COVERS Chinese Lo Shu analysis: counts how often each digit one through nine appears in the date string, lays counts into the classical magic-square positions, and adds plane_analysis plus number_analysis entries keyed by digit strings '1'..'9'. Zero digits are ignored for placement. It does not compute Pythagorean Life Path (asterwise_get_numerology_profile) or Chaldean compounds.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT date string only; validated upstream.

SECTION: OUTPUT CONTRACT data.birth_date (string) data.grid — three-by-three nested int array (row-major): row positions map to numbers [4,9,2], [3,5,7], [8,1,6] respectively; cell value = count of that digit in the date (0 if absent) data.present_numbers[] (int array) data.missing_numbers[] (int array) data.repeated_numbers[] (int array — digits appearing at least twice) data.plane_analysis: thought_plane — { numbers[] (int array), description (string), complete (bool) } will_plane — same shape action_plane — same shape golden_yod — same shape silver_yod — same shape data.number_analysis{} — keys '1' through '9' (string keys): count (int) plane (string) trait (string) status (string — 'missing', 'present', or 'strong') note (string)
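The digit-count logic described above is mechanical enough to sketch in a few lines of Python. This is an illustrative re-implementation under the stated rules (zeros ignored, classical magic-square layout), not the server's actual code, and the function name is made up:

```python
# Sketch of the Lo Shu frequency grid: count digits 1-9 in the date
# string and lay the counts into the classical magic-square positions.
from collections import Counter

# Classical Lo Shu layout — every row, column, and diagonal sums to 15.
LO_SHU_LAYOUT = [[4, 9, 2], [3, 5, 7], [8, 1, 6]]

def lo_shu_grid(date: str) -> list[list[int]]:
    """Return the 3x3 grid of digit counts for a YYYY-MM-DD date.

    Zeros are skipped, matching the tool's edge-case note; a missing
    digit simply yields a count of 0 in its cell.
    """
    counts = Counter(ch for ch in date if ch.isdigit() and ch != "0")
    return [[counts[str(n)] for n in row] for row in LO_SHU_LAYOUT]
```

For the example date 1985-11-12, the digit 1 appears four times, so the cell holding position 1 (bottom row, middle) carries a count of 4, while absent digits such as 4 and 7 stay at 0.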

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Zeros in ISO dates are skipped — only digits one through nine populate the grid.

SECTION: DO NOT CONFUSE WITH asterwise_get_numerology_profile — letter-based Western numbers, not digit-frequency Lo Shu. asterwise_get_name_correction — spelling harmonics, not birth-date grids.

Parameters (JSON Schema)
  date (required)
  response_format (required)

Output Schema (JSON Schema)
  result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context beyond this: it specifies the tool ignores zero digits, details the compute class (MEDIUM_COMPUTE), and outlines error handling (upstream validation, INTERNAL_ERROR cases). However, it doesn't describe rate limits or auth needs, keeping it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), each sentence adds value, and it's front-loaded with the core purpose. There's no redundant information, and the structured format enhances readability without unnecessary verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich output schema detailed in the description, and comprehensive annotations, the description is complete. It fully explains the output structure (grid, planes, number analysis), error handling, compute class, and distinctions from siblings. The presence of an output schema means return values are adequately documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well. It explains 'date string only; validated upstream' and details the response_format options (json vs markdown) and their implications. While it doesn't specify date format examples, it clarifies that validation occurs upstream and both formats return identical data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Derives a Lo Shu three-by-three frequency grid from birth-date digits' and provides detailed coverage of what it analyzes (digit frequencies, plane analysis, number traits). It clearly distinguishes from sibling tools like 'asterwise_get_numerology_profile' and 'asterwise_get_name_correction' by specifying what it does NOT compute.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'SECTION: DO NOT CONFUSE WITH' explicitly names alternative tools and explains when NOT to use this tool (e.g., for Pythagorean Life Path or Chaldean compounds). The 'SECTION: WHAT THIS TOOL COVERS' further clarifies scope, and 'SECTION: WORKFLOW' indicates it's standalone with no prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_lucky_numbers (A)
Read-only, Idempotent

Fetches condensed lucky-number guidance for a name and birth date including primary and secondary picks, power number, interpretation, and a date_specific flag.

SECTION: WHAT THIS TOOL COVERS Thin numerology endpoint mirroring lucky_numbers[] from the full profile but omitting pinnacles, challenges, and long interpretations. data.date_specific is always false (profile-derived). Use when payload size matters. It is not the full profile (asterwise_get_numerology_profile) nor Lo Shu counts (asterwise_get_lo_shu_grid).

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_numerology_profile — if deeper context is required.

SECTION: INPUT CONTRACT name and date forwarded upstream without local checks.

SECTION: OUTPUT CONTRACT data.lucky_numbers[] (int array — primary and secondary values) data.power_number (int — Life Path anchor) data.date_specific (bool — always false; derived from profile) data.interpretation (string)
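The condensed payload above is small enough to show in full. The values below are hypothetical placeholders, not real upstream output:

```json
{
  "data": {
    "lucky_numbers": [3, 7, 12],
    "power_number": 3,
    "date_specific": false,
    "interpretation": "Numbers resonating with the Life Path anchor."
  }
}
```

Per the contract, `date_specific` is always false because the set is derived from the profile rather than from a target date.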

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Duplicates asterwise_get_numerology_profile lucky list — choose this tool for smaller JSON only.

SECTION: DO NOT CONFUSE WITH asterwise_get_numerology_profile — full multi-section profile, not lucky-number-only payload. asterwise_get_number_meaning — dictionary entry for one integer, not personalised lucky sets.

Parameters (JSON Schema)
  date (required)
  name (required)
  response_format (required)

Output Schema (JSON Schema)
  result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations declare readOnlyHint=true, idempotentHint=true, etc., the description specifies that this is a 'FAST_LOOKUP' compute class, that all validation is upstream, that upstream failures surface as MCP INTERNAL_ERROR, and that the tool mirrors lucky_numbers[] from the full profile. This provides practical implementation details that help the agent understand performance characteristics and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, OUTPUT CONTRACT, etc.). Each sentence earns its place by providing specific, actionable information. The information is front-loaded with the core purpose, followed by detailed sections that can be referenced as needed. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and the presence of an output schema, the description provides comprehensive context. It explains what the tool does, when to use it, input behavior, output structure (even though an output schema exists), compute characteristics, error handling, and differentiation from siblings. The description successfully bridges the gap between structured fields and practical usage, making it complete for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description carries the full burden of explaining parameters. It clearly states that 'name and date forwarded upstream without local checks' and provides detailed information about the response_format parameter, explaining that both 'json' and 'markdown' options return identical data but in different formats for different use cases. This adds meaningful context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'fetches condensed lucky-number guidance for a name and birth date' with specific details about what's included (primary/secondary picks, power number, interpretation, date_specific flag). It explicitly distinguishes this from sibling tools asterwise_get_numerology_profile and asterwise_get_lo_shu_grid, making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Use when payload size matters') and when not to use it ('It is not the full profile... nor Lo Shu counts'). It names specific alternatives (asterwise_get_numerology_profile for deeper context, asterwise_get_number_meaning for dictionary entries) and includes a 'DO NOT CONFUSE WITH' section that clearly delineates boundaries with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_maturity_number (A)
Read-only, Idempotent

Calculates the Maturity number as the sum of Life Path and Expression numbers, reduced to a single digit or master number. Represents the underlying wish or true desire that becomes conscious around age 35 and fully emerges by midlife.

SECTION: WHAT THIS TOOL COVERS The Maturity number is sometimes called the True Goal — the final attainment that Life Path and Expression together point toward. It becomes increasingly dominant after age 35 and is the most important number for understanding the second half of life.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile — confirm Life Path and Expression before interpreting their sum. AFTER: None.

SECTION: INPUT CONTRACT name — Full legal name as used at birth. Example: 'Arjun Mehta' date — Birth date in YYYY-MM-DD format. Example: '1985-11-12'

SECTION: OUTPUT CONTRACT data.maturity_number (int — LP + Expression, reduced; master numbers preserved) data.life_path_number (int — component) data.expression_number (int — component) data.is_master_number (bool — true if maturity is 11, 22, or 33)
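The reduction rule described above (sum the components, reduce to a single digit, preserve master numbers) can be sketched in Python. This is an illustrative approximation, not the server's implementation; real numerology libraries may handle master numbers mid-reduction differently:

```python
# Sketch of the Maturity-number reduction: Life Path + Expression,
# reduced digit-by-digit unless the running total is a master number.
MASTER_NUMBERS = {11, 22, 33}

def reduce_number(n: int) -> int:
    """Reduce n to a single digit, stopping early at 11, 22, or 33."""
    while n > 9 and n not in MASTER_NUMBERS:
        n = sum(int(d) for d in str(n))
    return n

def maturity_number(life_path: int, expression: int) -> int:
    """Combine the two component numbers into the Maturity number."""
    return reduce_number(life_path + expression)
```

For example, a Life Path of 7 and an Expression of 4 sum to 11, which is preserved as a master number, so `is_master_number` would be true in the output.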

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: upstream failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_numerology_profile — returns maturity_number as part of the full profile. asterwise_get_personal_cycles — temporal cycles that change annually, not a fixed number.

Parameters (JSON Schema)
  date (required)
  name (required)
  response_format (required)

Output Schema (JSON Schema)
  result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds value by stating the calculation combines Life Path and Expression numbers, and that the Maturity number represents a desire around age 35. It also lists the output contract, helping the agent understand return fields. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences plus an output contract. It is front-loaded with the action and purpose, and the output contract is clearly separated. No unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (3 params, output schema, many siblings), the description covers purpose, output, and prerequisites adequately. However, it lacks details on parameter formats, edge cases, or example usage. The annotations compensate partially, but the description could be more complete in guiding an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, placing the burden on the description to explain parameters. The description only notes that name and birth date are required, which is already in the schema. It does not clarify name format, date format, or the meaning of response_format enum values, so it adds little beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the Maturity number from Life Path and Expression numbers, and explains its significance as a wish/desire surfacing around age 35. This is specific and distinguishes it from sibling tools like get_expression_number or get_soul_urge_number.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions prerequisites (name and birth date) but does not provide explicit guidance on when to use this tool versus alternatives, nor does it specify when not to use it. The context of age 35 is provided, which implicitly suggests its relevance, but no direct comparisons to sibling tools are made.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_muhurta (A)
Read-only, Idempotent

Searches a date span for top-scoring muhurta windows for a named activity using Panchanga, Choghadiya, and classical siddhi flags at a location.

SECTION: WHAT THIS TOOL COVERS Evaluates marriage, travel, griha_pravesh, business, education, medical, and vehicle_purchase (exact spellings upstream). Returns scored windows with tithi, nakshatra-related yoga name (Panchanga yoga, not natal yogas), vara, choghadiya metadata, boolean guards (rahu kaal, abhijit, amrita/sarvartha siddhi), and textual reasons. Unsupported activity strings are rejected upstream. It does not return a full month calendar (asterwise_get_panchanga_calendar) or only Choghadiya rows (asterwise_get_choghadiya).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_panchanga — drill into Panchanga limbs for a chosen winning date.

SECTION: INPUT CONTRACT activity must be one of the supported English slugs above — not validated locally; bad values become MCP INTERNAL_ERROR. from_date/to_date ordering and span rules are enforced upstream. Location coordinates reuse LocationInput validation for lat/lon/date pattern.

SECTION: OUTPUT CONTRACT data.event_type (string) data.from_date (string) data.to_date (string) data.timezone (string) data.ayanamsa (string) data.total_windows_evaluated (int) data.top_windows[] — each: date (string — YYYY-MM-DD) start (string — HH:MM local) end (string — HH:MM local) score (int — 0–100) choghadiya (string) choghadiya_type (string) yoga (string — Panchanga yoga name) vara (string) vara_number (int — 1–7) tithi (string) tithi_number (int — 1–30) is_rahu_kaal (bool) is_abhijit (bool) is_amrita_siddhi (bool) is_sarvartha_siddhi (bool) reason (string)
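A single `top_windows[]` entry from the contract above might look like the following sketch. Every value here is an invented placeholder to show the field shapes, not real output:

```json
{
  "date": "2025-03-14",
  "start": "09:42",
  "end": "11:15",
  "score": 87,
  "choghadiya": "Amrit",
  "choghadiya_type": "auspicious",
  "yoga": "Siddhi",
  "vara": "Friday",
  "vara_number": 6,
  "tithi": "Shukla Panchami",
  "tithi_number": 5,
  "is_rahu_kaal": false,
  "is_abhijit": false,
  "is_amrita_siddhi": true,
  "is_sarvartha_siddhi": false,
  "reason": "Amrit Choghadiya with a siddhi yoga and no Rahu Kaal overlap."
}
```

The `yoga` string here is a Panchanga yoga name, which — as the edge-case note stresses — must not be read as a natal yoga from asterwise_get_yogas.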

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid LocationInput date/lat/lon → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — bad activity, range, or ordering surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Panchanga yoga names here are not asterwise_get_yogas natal yogas.

SECTION: DO NOT CONFUSE WITH asterwise_get_choghadiya — enumerates all Choghadiya for one day without activity scoring across a span. asterwise_get_panchanga — single-day limb detail, not ranked muhurta search.

Parameters (JSON Schema)
  to_date (required)
  activity (required)
  location (required): For tools that need location but not birth time.
  from_date (required)

Output Schema (JSON Schema)
  result (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond annotations. While annotations indicate read-only, non-destructive, idempotent operations, the description details compute class ('MEDIUM_COMPUTE'), error contracts (INVALID_PARAMS, INTERNAL_ERROR), edge cases (yoga name clarification), and specific rejection rules for unsupported activities. This provides crucial operational context not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), each containing only essential information. Every sentence serves a specific purpose with zero redundancy, making it easy to scan and understand despite the complexity of the tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and the presence of an output schema, the description provides complete context: it explains what the tool does, when to use it, input requirements, output structure, error handling, compute characteristics, and sibling distinctions. The output schema existence means the description doesn't need to explain return values, and it focuses appropriately on behavioral and usage aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 25% schema description coverage, the description compensates well by explaining parameter semantics: it specifies that 'activity must be one of the supported English slugs' and lists them, clarifies that 'from_date/to_date ordering and span rules are enforced upstream', and explains location validation. However, it doesn't detail all four parameters' formats beyond what the schema minimally provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Searches a date span for top-scoring muhurta windows for a named activity using Panchanga, Choghadiya, and classical siddhi flags at a location.' This is a specific verb ('searches') with clear resources ('date span', 'muhurta windows', 'activity') and distinguishes from siblings by naming what it does not return ('full month calendar' or 'only Choghadiya rows').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: 'BEFORE: None — this tool is standalone. AFTER: asterwise_get_panchanga — drill into Panchanga limbs for a chosen winning date.' It also clearly distinguishes from sibling tools like 'asterwise_get_choghadiya' and 'asterwise_get_panchanga' with specific 'DO NOT CONFUSE WITH' section.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_nakshatra_details (A)
Read-only, Idempotent

Looks up static metadata for one of twenty-seven nakshatras by exact name and returns interpretive, professional, activity, and body-map reference data.

SECTION: WHAT THIS TOOL COVERS Vedanga/classical reference only — no chart computation. Covers deity, ruler, symbol, gana, nature, classical vs modern prose, profession vectors, life themes, keywords, strengths/challenges, favourable vs unfavourable activities, and body_map. Names are case-sensitive exact matches (Ashwini … Revati list). It does not compute birth nakshatra from BirthData (use asterwise_get_natal_chart).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: None.

SECTION: INPUT CONTRACT nakshatra_name is forwarded raw — no local fuzzy matching or normalisation.

SECTION: OUTPUT CONTRACT data.name (string) data.index (int — 0–26) data.interpretation: source (string) nakshatra_number (int) name (string) sanskrit (string) span (string) symbol (string) deity (string) ruling_planet (string) sign (string) sign_lord (string) gana (string) nature (string) body_part (string) classical_qualities[] (string array) appearance — { classical (string), modern (string) } nature_description — { classical (string), modern (string) } profession — { primary[] (string array), secondary[] (string array), classical_note (string), modern (string) } life_themes — { core, karmic_path, challenge, gift, modern (strings) } keywords[] (string array) strengths[] (string array) challenges[] (string array) data.activities: favorable_activities[] (string array) unfavorable_activities[] (string array) data.body_map: parts[] (string array) sensitivity (string)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — name passes straight through.

INVALID_PARAMS (upstream): — None — unknown names surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Exact spelling required — no fuzzy recovery.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — computes birth nakshatra from time/place, not encyclopaedic copy. asterwise_get_dasha — uses Moon nakshatra for timing, not this lookup table.

Parameters (JSON Schema)
Name | Required | Description | Default
nakshatra_name | Yes
response_format | Yes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds substantial behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description adds important constraints: 'Names are case-sensitive exact matches,' 'no local fuzzy matching or normalisation,' 'FAST_LOOKUP' compute class, and detailed error handling for invalid parameters and edge cases. This provides crucial operational guidance not captured in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections could be more concise (e.g., OUTPUT CONTRACT lists all fields but could summarize categories). Every sentence adds value, but the overall length is substantial for a lookup tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (static metadata lookup with detailed output) and the presence of an output schema, the description is exceptionally complete. It covers purpose, usage guidelines, behavioral constraints, parameter semantics, output format options, compute class, error handling, and sibling differentiation. The OUTPUT CONTRACT section provides a comprehensive field list, though the output schema would handle this structurally.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter semantics. It explains that 'nakshatra_name is forwarded raw — no local fuzzy matching or normalisation' and 'Names are case-sensitive exact matches (Ashwini … Revati list).' For response_format, it explains the difference between json and markdown modes and that both return identical underlying data. This adds essential meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Looks up static metadata for one of twenty-seven nakshatras by exact name and returns interpretive, professional, activity, and body-map reference data.' It specifies the exact resource (nakshatras), action (looks up), and distinguishes from siblings by explicitly stating it's for static metadata lookup rather than computation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states: 'Vedanga/classical reference only — no chart computation' and 'It does not compute birth nakshatra from BirthData (use asterwise_get_natal_chart).' The 'DO NOT CONFUSE WITH' section explicitly names two sibling tools and explains their different purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_nakshatra_prediction (A)
Read-only

Returns a personalised daily prediction using Tarabala and Chandrabala from the Muhurta Chintamani tradition.

SECTION: WHAT THIS TOOL COVERS Computes the individual's daily auspiciousness score by:

  1. TARABALA: Counts from birth nakshatra to today's transit Moon nakshatra (inclusive). The remainder mod 9 gives the Tara (1=Janma, 2=Sampat/Wealth, 3=Vipat/Danger, 4=Kshema/Prosperity, 5=Pratyak/Obstacle, 6=Sadhana/Achievement, 7=Naidhana/Destruction, 8=Mitra/Friend, 9=Ati-Mitra/Great Friend).

  2. CHANDRABALA: Transit Moon's house from natal Moon (favorable in houses 1,3,6,7,10,11).

  3. TRANSIT NAKSHATRA QUALITY: The type of today's Moon nakshatra (Dhruva/Chara/Ugra/Tikshna/Kshipra/Mridu/Mishra) with auspicious and inauspicious activities. Combined daily score out of 4 with label (Excellent/Good/Moderate/Challenging). Source: Muhurta Chintamani; Brihat Samhita Ch.98.
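The two counting rules above follow directly from the indices the tool returns. A hedged sketch of that arithmetic (an illustration of the classical rules as described here, not the server's implementation; function names are ours):

```python
TARA_NAMES = ["Janma", "Sampat", "Vipat", "Kshema", "Pratyak",
              "Sadhana", "Naidhana", "Mitra", "Ati-Mitra"]

def tarabala(birth_nak: int, transit_nak: int) -> tuple[int, str]:
    """Tara from nakshatra indices 0-26 (Ashwini=0 ... Revati=26).
    Inclusive count from birth to transit nakshatra, folded into 1-9."""
    count = (transit_nak - birth_nak) % 27 + 1  # inclusive count, 1..27
    tara = count % 9 or 9                       # remainder mod 9; 0 means 9
    return tara, TARA_NAMES[tara - 1]

FAVORABLE_CHANDRABALA = {1, 3, 6, 7, 10, 11}

def chandrabala(natal_moon_sign: int, transit_moon_sign: int) -> tuple[int, bool]:
    """Transit Moon's house counted inclusively from the natal Moon sign
    (sign indices 0-11); favorable in houses 1, 3, 6, 7, 10, 11."""
    house = (transit_moon_sign - natal_moon_sign) % 12 + 1
    return house, house in FAVORABLE_CHANDRABALA
```

Worked through: a transit Moon in the ninth nakshatra from birth gives count 9, remainder 0, so Tara 9 (Ati-Mitra); a transit Moon in the natal sign gives house 1, which is favorable.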

SECTION: WORKFLOW BEFORE: None — birth data computes everything needed. AFTER: asterwise_get_panchanga — for full daily panchanga context.

SECTION: INPUT CONTRACT birth — BirthData (date, time, lat, lon, timezone). target_date (optional): YYYY-MM-DD. Defaults to today.

SECTION: OUTPUT CONTRACT data.target_date (string) data.birth_nakshatra{}: name (string), index (int 0-26) data.natal_moon_sign_index (int 0-11) data.transit_moon{}: nakshatra (string), nakshatra_index (int), rashi_index (int) data.tarabala{}: tara_number (int 1-9), count_from_birth (int), name (string), meaning (string), is_favorable (bool), interpretation (string), source (string) data.chandrabala{}: moon_house_from_natal (int 1-12), is_favorable (bool), favorable_houses[] (int array) data.daily_score{}: score (int 0-4), max_score (4), label (string) data.transit_nakshatra_quality{}: nakshatra (string), quality_type (string), english (string), auspicious_for[] (string array), inauspicious_for[] (string array), source (string) data.nakshatra_activities{}: favorable[] (string array), unfavorable[] (string array) data.classical_sources (string)

SECTION: COMPUTE CLASS MEDIUM_COMPUTE — natal chart + ephemeris Moon position.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): BirthData Pydantic violations → MCP INVALID_PARAMS INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_nakshatra_details — static nakshatra reference, not personalised prediction. asterwise_get_panchanga — daily panchanga (tithi, yoga, karana), not Tarabala scoring. asterwise_get_biorhythm — Western biorhythm cycles, not classical Vedic prediction.

Parameters (JSON Schema)
Name | Required | Description | Default
birth | Yes | Birth data for a single person.
target_date | No
response_format | Yes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description details the computation steps (Tarabala, Chandrabala, transit nakshatra quality), output fields, and compute class (MEDIUM_COMPUTE). Annotations (readOnlyHint=true) are consistent; no contradictions. The error contract is also specified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured with labeled sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.). It is front-loaded with purpose and then breaks down details. Could be slightly more concise but remains clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (Vedic astrology prediction), the description covers input, output, workflow, exclusions, error handling, and compute class. An output schema exists but the description still fully explains the return structure. Very thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has only 33% description coverage, but the description compensates with a detailed 'INPUT CONTRACT' explaining birth (BirthData), target_date format/default, and response_format enum. It adds meaning beyond the schema's minimal descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns a personalised daily prediction using Tarabala and Chandrabala from the Muhurta Chintamani tradition. It explicitly distinguishes from siblings like asterwise_get_nakshatra_details, asterwise_get_panchanga, and asterwise_get_biorhythm.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'SECTION: DO NOT CONFUSE WITH' naming specific alternatives and a 'WORKFLOW' section stating no prior tool needed and suggesting asterwise_get_panchanga afterwards. It provides clear when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_name_correction (A)
Read-only, Idempotent

Scores the current spelling of a personal name against the birth-date Life Path, suggests alternate spellings with harmony metrics, and echoes recommendation stub fields.

SECTION: WHAT THIS TOOL COVERS Returns data.current_name metrics (expression, soul urge, personality, master and karmic flags, compatibility band, harmony_score 1..5) plus data.alternatives[] with identical shape per suggestion. data.recommendation is currently null (stub). Empty alternatives[] means no better spelling was found — still success. Not for business entities (asterwise_get_business_name_analysis) or mobile/vehicle checks.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile — baseline numbers before renaming advice. AFTER: None.

SECTION: INPUT CONTRACT name and date strings only; upstream validates.

SECTION: OUTPUT CONTRACT data.full_name (string) data.birth_date (string) data.life_path (int) data.current_name: name (string) expression (int) soul_urge (int) personality (int) is_master (bool) karmic_debt (int or null) compatibility (string — 'harmonious', 'neutral', or 'challenging') harmony_score (int — 1 through 5) data.alternatives[] — objects matching data.current_name shape data.recommendation (currently null — stub)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Empty alternatives[] is valid when no improvement exists.

SECTION: DO NOT CONFUSE WITH asterwise_get_business_name_analysis — entity Expression scan, not personal spelling alternatives. asterwise_get_chaldean_numerology — Chaldean compounds, not harmony-scored spelling list.

Parameters (JSON Schema)
Name | Required | Description | Default
date | Yes
name | Yes
response_format | Yes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description explains that empty alternatives[] is valid (success case), clarifies compute class as MEDIUM_COMPUTE, details error handling (upstream validation, INTERNAL_ERROR cases), and specifies that recommendation is currently a null stub. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections like 'RESPONSE FORMAT' could be more concise. Overall, each section adds value without unnecessary repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (numerology analysis with multiple metrics), the description provides complete context. It details output structure (including current_name metrics and alternatives array), explains edge cases (empty alternatives), distinguishes from siblings, and covers workflow recommendations. With annotations and output schema available, the description fills all necessary gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining parameter semantics in the 'INPUT CONTRACT' section. It clarifies that 'name and date strings only; upstream validates' and details the response_format parameter's effect on output presentation (json vs markdown). However, it doesn't specify exact date format or name formatting requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('scores', 'suggests', 'echoes') and resources ('personal name', 'birth-date Life Path', 'alternate spellings'). It explicitly distinguishes from sibling tools like asterwise_get_business_name_analysis and asterwise_get_chaldean_numerology, making the scope unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives, including a 'DO NOT CONFUSE WITH' section naming specific sibling tools. It also includes a 'BEFORE' section recommending asterwise_get_numerology_profile as a prerequisite and clarifies that empty alternatives[] is valid when no improvement exists.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_natal_chart (A)
Read-only, Idempotent

Computes the full sidereal natal chart from BirthData and returns planet rows, houses, aspects, arudhas, upapada, bhava cusps, and avakhada metadata.

SECTION: WHAT THIS TOOL COVERS Parashari-style natal endpoint: nine grahas with signs, degrees, nakshatras, combustion, retrograde, Bhava Chalit and rashi houses, twelve house cusps, graha and rashi drishti, arudha padas A1–A12, upapada lagna block, bhava madhya/sandhi arrays, ayanamsa metadata, and avakhada attributes. When include_interpretation=true, ascendant_sign_interpretation, moon_sign_interpretation, moon_nakshatra_interpretation, and interpretation are populated from interpretation JSON; otherwise they are null. It does not return PDFs, yogas list (asterwise_get_yogas), or dasha trees (asterwise_get_dasha).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: RECOMMENDED — asterwise_get_yogas — layer classical combinations after the base chart exists.

SECTION: INPUT CONTRACT BirthData enforces date YYYY-MM-DD, time HH:MM, lat -90..90, lon -180..180, ayanamsa enum locally (Pydantic). Unknown birth time may be entered as time='00:00' without error; lagna-sensitive results are then unreliable and callers must handle that — the API does not flag it.
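A minimal request sketch satisfying those local guards (field names taken from this contract; the timezone and ayanamsa values are assumed examples, so check the schema for accepted enum members):

```python
# Hypothetical BirthData payload obeying the Pydantic constraints above.
birth = {
    "date": "1990-05-14",     # YYYY-MM-DD
    "time": "14:30",          # HH:MM; '00:00' is accepted for unknown
                              # time, but lagna-sensitive output is then
                              # unreliable and is NOT flagged by the API
    "lat": 28.6139,           # -90..90
    "lon": 77.2090,           # -180..180
    "timezone": "Asia/Kolkata",
    "ayanamsa": "lahiri",     # enum member (assumed spelling)
}
```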

SECTION: OUTPUT CONTRACT data.planets[] — nine objects: planet (string) sign (string) sign_num (int — 0–11) degree (float) nakshatra (string) nakshatra_pada (int — 1–4) is_retrograde (bool) is_combust (bool) is_deep_combust (bool) house (int — Bhava Chalit) rasi_house (int) bhava_chalit_house (int) data.houses[] — twelve objects: house (int) sign (string) sign_num (int) degree (float) data.ascendant (float) data.ascendant_sign (string — Sanskrit name) data.moon_sign (string) data.moon_nakshatra (string) data.ayanamsa_value (float) data.ayanamsa_used (string) data.avakahada: nakshatra, nakshatra_lord, charan (int), rashi, rashi_lord, varna, vashya, yoni, gana, nadi, paya, ascendant, ascendant_lord, sun_sign, sun_sign_lord (strings/ints per upstream) data.graha_drishti — object keyed by planet name; each value object keyed by house strings '1'–'12' with aspect strength int (25, 50, 75, or 100) data.rashi_drishti[] — active sign-to-sign aspect pairs: { from_sign (string), from_sign_num (int 0-11), to_sign (string), to_sign_num (int 0-11) } data.arudha_padas — keys A1–A12 each { sign_index (int), sign_name (string) } data.upapada_lagna: sign_index (int) sign_name (string) upapada_lord (string) second_from_upapada_sign_index (int) second_from_upapada_sign_name (string) planets_in_second_from_upapada[] (string array of planet names) has_benefic_in_second_from_upapada (bool) has_malefic_in_second_from_upapada (bool) data.bhava_madhya[] — twelve objects: { house (int 1-12), sign (string), sign_num (int 0-11), degree (float) } data.bhava_sandhi[] — twelve objects: { house (int 1-12), sign (string), sign_num (int 0-11), degree (float) } data.birth_time_unknown (bool — always false; no detection) data.fallback_method (null) ascendant_sign_interpretation (dict or null — sign interpretation from signs/ascendant.json when include_interpretation=true) moon_sign_interpretation (dict or null — Moon sign interpretation from signs/moon_sign.json when include_interpretation=true) 
moon_nakshatra_interpretation (dict or null — nakshatra interpretation from nakshatras/ files when include_interpretation=true) interpretation (list or null — planet-in-house interpretation list when include_interpretation=true)
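The nested graha_drishti shape (planet, then house keyed as a string, then strength) is easy to mishandle; a small consumption sketch against that contract (the sample data and function name are illustrative, not server output):

```python
def full_graha_drishti(data: dict) -> list[tuple[str, int]]:
    """Return (planet, house) pairs receiving a full-strength (100)
    graha drishti. Houses arrive as string keys '1'-'12' and are cast
    back to int for the caller."""
    hits = []
    for planet, houses in data["graha_drishti"].items():
        for house, strength in houses.items():
            if strength == 100:
                hits.append((planet, int(house)))
    return hits
```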

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — BirthData Pydantic violations (date/time/lat/lon/ayanamsa) → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — calendar years outside supported upstream window surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — time='00:00' accepted; lagna may be wrong if true birth time unknown — not auto-detected. — Interpretation fields are null unless include_interpretation=true on the request.

SECTION: DO NOT CONFUSE WITH asterwise_get_divisional_chart — sixteen vargas only, not the primary radix bundle returned here.

Parameters (JSON Schema)
Name | Required | Description | Default
birth | Yes | Birth data for a single person.
response_format | Yes
include_interpretation | No

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true, destructiveHint=false, etc., but the description adds substantial behavioral context beyond annotations: it explains compute class (MEDIUM_COMPUTE), error contracts (INVALID_PARAMS, INTERNAL_ERROR), edge cases (time='00:00' handling, interpretation field nullability), and output format options (json vs markdown). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), each sentence adds value, and it's front-loaded with the core purpose. Despite being detailed, there's no redundant or verbose content—every part serves to inform tool selection or usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, nested objects, output schema), the description is highly complete: it covers purpose, usage, behavioral traits, input/output contracts, errors, and sibling differentiation. With annotations and output schema present, the description adds necessary context without redundancy, making it fully adequate for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (33%), but the description compensates well: it explains BirthData constraints (date/time/lat/lon/ayanamsa enum), clarifies unknown time handling ('00:00' without error but unreliable lagna), and details include_interpretation effects. However, it doesn't fully document all parameter semantics (e.g., timezone default rationale, person_name usage), keeping it from a perfect score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'computes the full sidereal natal chart from BirthData' and enumerates specific outputs (planet rows, houses, aspects, etc.), distinguishing it from sibling tools like asterwise_get_divisional_chart, asterwise_get_yogas, and asterwise_get_dasha. The verb 'computes' is specific and the resource 'natal chart' is well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('standalone' for base chart) and when not to use it (does not return PDFs, yogas list, or dasha trees). It names specific alternatives (asterwise_get_yogas for layering combinations, asterwise_get_divisional_chart for vargas) and provides workflow guidance ('BEFORE: None', 'AFTER: RECOMMENDED — asterwise_get_yogas').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_number_meaning (A)
Read-only, Idempotent

Returns dictionary-style numerology copy for a single integer, including interpretation, keywords, and stubbed extended fields.

SECTION: WHAT THIS TOOL COVERS Static reference for any whole number from one through thirty-three inclusive (includes master numbers eleven, twenty-two, thirty-three plus every intermediate value). No name or date. data.theme, data.advice, data.opportunities[], and data.challenges[] are stubs (null or empty). Not personalised profiling (asterwise_get_numerology_profile).

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT number must pass local guard: inclusive range one through thirty-three; values outside that band raise MCP INVALID_PARAMS before the HTTP call.

SECTION: OUTPUT CONTRACT data.number (int) data.context (string — 'general') data.interpretation (string) data.keywords[] (string array) data.theme (currently null — stub) data.opportunities[] (currently empty — stub) data.challenges[] (currently empty — stub) data.advice (currently null — stub)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — number less than 1 or greater than 33 → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — further rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Accepts every integer in the inclusive one..thirty-three range, not only master numbers.

SECTION: DO NOT CONFUSE WITH asterwise_get_numerology_profile — computes personal numbers from name and date, not a static dictionary row. asterwise_get_lucky_numbers — personalised lucky list, not reference meanings.

Parameters (JSON Schema)
Name | Required | Description | Default
number | Yes
response_format | Yes

Output Schema

Parameters (JSON Schema)
Name | Required | Description
result | Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, open-world, idempotent, and non-destructive operations, the description details the range constraint (1-33), stub fields, error handling (INVALID_PARAMS for out-of-range), compute class (FAST_LOOKUP), and output format options. This enriches understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), making it easy to navigate. While somewhat lengthy, each section earns its place by providing essential information. The content is front-loaded with the core purpose, and the structure helps organize detailed information efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (range validation, stub fields, output format options) and the presence of annotations and an output schema, the description is remarkably complete. It covers purpose, usage guidelines, input constraints, output structure, error handling, and sibling differentiation. The output schema existence means the description doesn't need to detail return values, and it provides all necessary contextual information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both parameters thoroughly. It specifies that 'number' must be 1-33 inclusive and describes the validation logic. For 'response_format', it explains the difference between 'json' and 'markdown' outputs and that both return identical data. This adds complete meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns dictionary-style numerology copy for a single integer' with specific details about what it includes (interpretation, keywords, stubbed fields) and what it doesn't (no name or date). It explicitly distinguishes from sibling tools like asterwise_get_numerology_profile and asterwise_get_lucky_numbers in the 'DO NOT CONFUSE WITH' section, making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. The 'WHAT THIS TOOL COVERS' section clarifies it's for static reference of numbers 1-33, not personalized profiling. The 'DO NOT CONFUSE WITH' section names specific sibling tools and explains their different purposes, giving clear alternatives and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_numerology_compatibility (A)
Read-only, Idempotent

Compares two people on Pythagorean Life Path numbers derived from their names and birth dates and returns a score, tier label, narrative, strengths, challenges, and advice.

SECTION: WHAT THIS TOOL COVERS Pairwise numerology only — no charts, rashis, or kootas. Outputs discrete compatibility_score 1..10 with textual bands. It does not run asterwise_get_compatibility (Jyotish matchmaking) or regional porutham tools.
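The Life Path derivation these scores rest on can be sketched as follows (one common Pythagorean convention that sums every digit of the date; some schools reduce day, month, and year separately, and the server's exact rule is not stated here):

```python
def life_path(date_str: str) -> int:
    """Pythagorean Life Path from a YYYY-MM-DD string: sum all digits,
    then reduce repeatedly while preserving masters 11, 22, 33."""
    n = sum(int(ch) for ch in date_str if ch.isdigit())
    while n > 9 and n not in (11, 22, 33):
        n = sum(int(d) for d in str(n))
    return n
```

For example, 1990-05-14 sums to 29, which reduces to 11 and is kept as a master number rather than collapsed to 2.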

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile per person — sanity-check Life Paths before comparing. AFTER: None.

SECTION: INPUT CONTRACT Four strings (two names, two dates) are passed through without local guards.

SECTION: OUTPUT CONTRACT
data.life_path_1 (int)
data.life_path_2 (int)
data.compatibility_score (int — 1 through 10)
data.compatibility_level (string — 'Excellent', 'Good', 'Average', or 'Challenging')
data.interpretation (string)
data.strengths[] (string array)
data.challenges[] (string array)
data.advice (string)
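Parsed with the field names above, a response can be summarised in a few lines. The `data` dict below is a hypothetical payload shaped to the contract, not a real server response.

```python
# Hypothetical payload shaped to the OUTPUT CONTRACT fields (not a real response).
data = {
    "life_path_1": 7,
    "life_path_2": 3,
    "compatibility_score": 8,
    "compatibility_level": "Good",
    "interpretation": "Complementary energies.",
    "strengths": ["communication", "curiosity"],
    "challenges": ["restlessness"],
    "advice": "Schedule shared quiet time.",
}

def summarise(data):
    """Render the dyad result as one line from the documented fields."""
    return (
        f"Life Paths {data['life_path_1']} + {data['life_path_2']}: "
        f"{data['compatibility_score']}/10 ({data['compatibility_level']})"
    )

print(summarise(data))  # Life Paths 7 + 3: 8/10 (Good)
```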

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — For Vedic matching, use asterwise_get_compatibility instead.

SECTION: DO NOT CONFUSE WITH asterwise_get_compatibility — sidereal koota scoring, not numerology integers. asterwise_get_numerology_profile — single-person profile, not dyad scoring.

Parameters (JSON Schema)
Name | Required | Description | Default
person1_date | Yes
person1_name | Yes
person2_date | Yes
person2_name | Yes
response_format | Yes

Output Schema

Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context: it specifies the tool only covers 'pairwise numerology' (no charts, rashis, or kootas), details the compute class as MEDIUM_COMPUTE, and explains error handling (e.g., upstream validation, INTERNAL_ERROR cases). This enhances behavioral understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), each sentence adds value without redundancy, and it's front-loaded with the core purpose. Despite being detailed, it avoids fluff and efficiently communicates necessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (numerology comparison with multiple outputs), the description is highly complete: it explains the scope, workflow, input/output contracts, compute class, error handling, and sibling distinctions. With annotations covering safety and an output schema detailing return values, the description fills all contextual gaps effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains that inputs are 'four strings (two names, two dates)' and details the response_format parameter (json vs. markdown modes with identical data). While it doesn't specify exact date/name formats, it clarifies the parameter roles and usage, adding significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'compares two people on Pythagorean Life Path numbers' and specifies it returns a score, tier label, narrative, strengths, challenges, and advice. It explicitly distinguishes from siblings like asterwise_get_compatibility (Jyotish matchmaking) and asterwise_get_numerology_profile (single-person profile), making the purpose specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: it recommends using asterwise_get_numerology_profile per person beforehand for sanity-checking, states it does not run asterwise_get_compatibility or regional porutham tools, and clarifies edge cases (e.g., 'For Vedic matching, use asterwise_get_compatibility instead'). This covers when to use, when not to use, and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_numerology_profile (A)
Read-only · Idempotent

Builds a Pythagorean numerology profile from a legal name and birth date and returns core numbers, cycles, lucky digits, and summary copy.

SECTION: WHAT THIS TOOL COVERS Computes Life Path, Expression, Soul Urge, Personality, Birthday, Pinnacles, Challenges, lucky_numbers[], summary, and key_traits via the upstream profile engine. data.personal_year is currently null — use asterwise_get_personal_year for the dedicated Personal Year endpoint. It is not Chaldean mapping (asterwise_get_chaldean_numerology), Lo Shu grids (asterwise_get_lo_shu_grid), or paired compatibility (asterwise_get_numerology_compatibility).
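The Pythagorean digit reduction underlying these core numbers can be sketched as follows. This is an illustrative reimplementation under common conventions (whole-date digit sum, masters 11/22/33 preserved), not the upstream profile engine, which may reduce date components differently.

```python
MASTERS = (11, 22, 33)  # commonly preserved; the exact upstream set is an assumption

def reduce_number(n):
    """Sum decimal digits repeatedly until a single digit or master number remains."""
    while n > 9 and n not in MASTERS:
        n = sum(int(d) for d in str(n))
    return n

def life_path(birth_date):
    """Life Path as the reduced digit sum of a YYYY-MM-DD birth date."""
    return reduce_number(sum(int(d) for d in birth_date if d.isdigit()))

print(life_path("1985-11-12"))  # 1
```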

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_personal_year — fills Personal Year when needed.

SECTION: INPUT CONTRACT name and date strings are forwarded without extra local validation; malformed payloads fail upstream.

SECTION: OUTPUT CONTRACT
For each of data.life_path, data.expression, data.soul_urge, data.personality, data.birth_day: number (int), reduced_number (int), is_master_number (bool), is_karmic_debt (bool), karmic_debt_number (int or null), interpretation (string), keywords[] (string array)
data.personal_year — currently null — use asterwise_get_personal_year for Personal Year calculation
data.pinnacles[] — four objects: number (int), start_age (int), end_age (int), interpretation (string), focus_areas[] (string array)
data.challenges[] — four objects with the same shape as pinnacles[]
data.lucky_numbers[] (int array)
data.summary (string)
data.key_traits[] (string array)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — personal_year field is null here by design.

SECTION: DO NOT CONFUSE WITH asterwise_get_chaldean_numerology — Chaldean letter values and compound structure, not Pythagorean cores. asterwise_get_lucky_numbers — lightweight lucky list without the full profile payload.

Parameters (JSON Schema)
Name | Required | Description | Default
date | Yes
name | Yes
response_format | Yes

Output Schema

Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover readOnlyHint, openWorldHint, idempotentHint, and destructiveHint. The description adds valuable context beyond annotations: it discloses that personal_year is null by design, explains the compute class (MEDIUM_COMPUTE), details error handling (upstream validation, INTERNAL_ERROR cases), and describes the two response formats. This provides rich behavioral insight without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), making it easy to scan. It's appropriately sized for the tool's complexity, though some sections like OUTPUT CONTRACT are quite detailed. Every section adds value, but it could be slightly more concise in listing all output fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and detailed output schema (implied by OUTPUT CONTRACT section), the description is highly complete. It covers purpose, usage, behavior, input/output contracts, errors, and sibling differentiation. The output schema is thoroughly documented in the description, making it fully sufficient for an agent to understand and use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description adds some parameter context: it notes that name and date are forwarded without local validation and that malformed payloads fail upstream. However, it doesn't explain the semantics of name (legal name), date (format), or response_format beyond the enum values. The schema carries most of the burden, but the description provides limited additional meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Builds a Pythagorean numerology profile from a legal name and birth date' with specific outputs listed. It explicitly distinguishes from siblings like asterwise_get_chaldean_numerology, asterwise_get_lo_shu_grid, and asterwise_get_numerology_compatibility, making the purpose distinct and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives, including a dedicated 'DO NOT CONFUSE WITH' section. It specifies that personal_year is null here and directs to asterwise_get_personal_year for that data, and clarifies it's not for Chaldean mapping, Lo Shu grids, or compatibility analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_panchanga (A)
Read-only · Idempotent

Computes Panchanga elements for one calendar date at a geographic location and returns tithi, vara, nakshatra, yoga, karana, and end times in UTC.

SECTION: WHAT THIS TOOL COVERS Derives classical Panchanga limbs from sidereal astronomy for the given date, latitude, longitude, and timezone — no birth time or natal chart. data.yoga here is the Panchanga Yoga (Sun+Moon nakshatra composite), wholly separate from natal yogas in asterwise_get_yogas. It does not score muhurta windows across ranges (asterwise_get_muhurta), list Choghadiya slices (asterwise_get_choghadiya), or produce monthly calendars (asterwise_get_panchanga_calendar).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_choghadiya — same-day slot quality for the location.

SECTION: INPUT CONTRACT date must be YYYY-MM-DD (Pydantic pattern on LocationInput). lat/lon bounds are validated locally. Upstream rejects calendar dates outside 1900–2100. timezone defaults to Asia/Kolkata when the caller leaves the default in LocationInput.

SECTION: OUTPUT CONTRACT
data.tithi: number (int — 1–30), name (string), paksha (string — 'Shukla' or 'Krishna'), degrees_elapsed (float), degrees_remaining (float), end_time (string — ISO UTC)
data.vara: number (int — 1=Ravivar through 7=Shanivar), name (string), lord (string), end_time (string)
data.nakshatra: index (int — 0–26), name (string), pada (int — 1–4), degrees_elapsed (float), degrees_remaining (float), end_time (string — ISO UTC)
data.yoga: index (int — 0–26), name (string), is_inauspicious (bool), degrees_elapsed (float), end_time (string — ISO UTC)
data.karana: number (int), name (string), degrees_elapsed (float), degrees_remaining (float), end_time (string — ISO UTC)
data.birth_time_unknown (bool — true for location-only calls)
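The classical definitions behind these limbs can be sketched from sidereal Sun and Moon longitudes. This is a textbook illustration of the index arithmetic only, not the server's ephemeris-backed implementation; end times, names, and lords are omitted.

```python
SEGMENT = 360 / 27  # 13 deg 20 min, the arc of one nakshatra or one yoga

def panchanga_indices(sun_long, moon_long):
    """Classical limb indices from sidereal Sun/Moon longitudes in degrees."""
    elongation = (moon_long - sun_long) % 360
    return {
        "tithi": int(elongation // 12) + 1,                    # 1-30; one tithi spans 12 degrees
        "nakshatra": int(moon_long % 360 // SEGMENT),          # 0-26, Moon's asterism
        "yoga": int((sun_long + moon_long) % 360 // SEGMENT),  # 0-26, Sun+Moon composite
        "karana": int(elongation // 6),                        # half-tithi arcs of 6 degrees
    }

print(panchanga_indices(sun_long=10.0, moon_long=100.0))
# {'tithi': 8, 'nakshatra': 7, 'yoga': 8, 'karana': 15}
```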

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — date not YYYY-MM-DD or invalid calendar day → MCP INVALID_PARAMS — lat/lon outside allowed ranges → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — dates outside 1900–2100 surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Panchanga yoga is unrelated to asterwise_get_yogas natal combinations.

SECTION: DO NOT CONFUSE WITH asterwise_get_yogas — natal chart yogas, not Panchanga Sun–Moon yoga. asterwise_get_panchanga_calendar — whole-month daily rows, not a single day.

Parameters (JSON Schema)
Name | Required | Description | Default
location | Yes | For tools that need location but not birth time.

Output Schema

Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond annotations: it specifies date range limits (1900-2100), timezone defaults, error conditions, compute class (FAST_LOOKUP), and clarifies that Panchanga yoga is distinct from natal yogas. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections like OUTPUT CONTRACT are quite detailed; however, every sentence adds value by clarifying scope, usage, or behavior, with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with multiple outputs), the description is highly complete. It covers purpose, differentiation from siblings, input requirements, output structure, error handling, and behavioral notes. With annotations providing safety info and an output schema existing, the description effectively fills all contextual gaps without unnecessary repetition.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters. The description adds meaningful context: it explains the date format requirement (YYYY-MM-DD), lat/lon bounds validation, timezone default (Asia/Kolkata), and the purpose of the response_format parameter (json vs markdown output). This provides practical usage insights beyond the schema's technical definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Computes Panchanga elements for one calendar date at a geographic location' and lists the specific outputs (tithi, vara, nakshatra, yoga, karana, end times). It explicitly distinguishes this from sibling tools like asterwise_get_yogas (natal chart yogas) and asterwise_get_panchanga_calendar (monthly calendars), providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'DO NOT CONFUSE WITH' and 'WHAT THIS TOOL COVERS' sections that detail when to use this tool versus alternatives. It specifies this tool is for single-day Panchanga calculations, not for natal charts, muhurta windows, Choghadiya slices, or monthly calendars, and mentions asterwise_get_choghadiya as a potential follow-up tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_panchanga_calendar (A)
Read-only · Idempotent

Returns one row per civil day for a calendar month at a location with condensed tithi, vara, nakshatra, yoga, karana, and rahu_kaal columns.

SECTION: WHAT THIS TOOL COVERS Month-wide Panchanga suitable for planners; each day includes ending times where applicable and local Rahu Kaal bounds. Year must be 1900–2100 and month 1–12 (Pydantic on PanchangaCalendarInput). It is neither single-day detailed Panchanga (asterwise_get_panchanga) nor muhurta search (asterwise_get_muhurta).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_panchanga — expand any single day at full detail.

SECTION: INPUT CONTRACT year/month/lat/lon validated locally. Timezone handling follows upstream response fields (data.timezone echo).

SECTION: OUTPUT CONTRACT
data.year (int)
data.month (int)
data.timezone (string)
data.ayanamsa (string)
data.days[] — 28–31 objects:
date (string — YYYY-MM-DD)
tithi — { name (string), number (int), paksha (string), end_time (string — ISO UTC) }
vara — { name (string), number (int), lord (string) }
nakshatra — { name (string), index (int), pada (int), end_time (string — ISO UTC) }
yoga — { name (string), index (int), is_inauspicious (bool), end_time (string — ISO UTC) }
karana — { name (string), number (int), end_time (string — ISO UTC) }
rahu_kaal — { start (string — HH:MM local), end (string — HH:MM local) }
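A planner consuming `data.days[]` might scan for inauspicious yogas like this. The two-day payload is hypothetical, shaped to the contract above; names, indices, and times are invented for illustration.

```python
# Hypothetical two-day slice of data.days[], shaped to the contract (not real output).
days = [
    {"date": "2025-03-01",
     "yoga": {"name": "Vyatipata", "index": 16, "is_inauspicious": True,
              "end_time": "2025-03-01T09:12:00Z"},
     "rahu_kaal": {"start": "09:30", "end": "11:00"}},
    {"date": "2025-03-02",
     "yoga": {"name": "Siddhi", "index": 15, "is_inauspicious": False,
              "end_time": "2025-03-02T10:05:00Z"},
     "rahu_kaal": {"start": "16:30", "end": "18:00"}},
]

# Collect dates carrying an inauspicious Panchanga yoga, with their Rahu Kaal bounds.
flagged = [(d["date"], d["rahu_kaal"]["start"], d["rahu_kaal"]["end"])
           for d in days if d["yoga"]["is_inauspicious"]]
print(flagged)  # [('2025-03-01', '09:30', '11:00')]
```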

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — year outside 1900–2100 → MCP INVALID_PARAMS — month outside 1–12 → MCP INVALID_PARAMS — lat/lon out of range → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — further rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Day count follows the civil month (28–31 entries).

SECTION: DO NOT CONFUSE WITH asterwise_get_panchanga — deep single-day Panchanga with degree fields, not a month grid. asterwise_get_muhurta — activity-ranked windows, not a passive calendar.

Parameters (JSON Schema)
Name | Required | Description | Default
calendar | Yes | Monthly Panchanga calendar parameters.

Output Schema

Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context beyond this: it specifies the tool is 'FAST_LOOKUP' (compute class), documents error handling (INVALID_PARAMS and INTERNAL_ERROR contracts), and explains edge cases like day count following civil month. It doesn't contradict annotations and provides useful behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections could be more concise (e.g., the OUTPUT CONTRACT lists all fields in detail, which might be excessive since an output schema exists). However, the information is front-loaded and each section serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with multiple output fields), the description is highly complete. It covers purpose, usage guidelines, input validation, output structure, response formats, compute characteristics, error handling, and sibling differentiation. With annotations providing safety hints and an output schema existing, the description adds all necessary contextual information without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context: it explains that year/month/lat/lon are 'validated locally' and that 'Timezone handling follows upstream response fields.' It also clarifies the purpose of the response_format parameter ('json serialises... use this for programmatic parsing... response_format=markdown renders... human-readable report'). This provides valuable semantic understanding beyond the schema's structural definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Returns one row per civil day for a calendar month at a location with condensed tithi, vara, nakshatra, yoga, karana, and rahu_kaal columns.' This specifies the verb ('Returns'), resource ('Panchanga calendar'), and scope ('month-wide'). It explicitly distinguishes from sibling tools asterwise_get_panchanga (single-day detailed) and asterwise_get_muhurta (activity-ranked windows).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. The 'WHAT THIS TOOL COVERS' section states it's 'Month-wide Panchanga suitable for planners' and clarifies it is 'not single-day detailed Panchanga (asterwise_get_panchanga) nor muhurta search (asterwise_get_muhurta).' The 'WORKFLOW' section also indicates this tool is standalone and can be followed by asterwise_get_panchanga for single-day expansion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_papasamyam (A)
Read-only · Idempotent

Measures malefic stress from Lagna, Moon, and Venus references for each partner, compares totals, and labels compatibility level against a threshold.

SECTION: WHAT THIS TOOL COVERS Papa Samyam balancing for marriage: per-person lagna/moon/venus scores, effective_score, cancellation flag, per-planet breakdown rows, then pairwise score_difference, compatible boolean, qualitative level ('Good', 'Moderate', 'Poor'), and numeric threshold. It does not output Ashtakoota points (asterwise_get_compatibility) or porutham passes (asterwise_get_porutham).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_compatibility — establish baseline match quality. AFTER: None.

SECTION: INPUT CONTRACT Two BirthData objects per global contract.

SECTION: OUTPUT CONTRACT
data.person1 and data.person2 — each:
lagna_score (float)
moon_score (float)
venus_score (float)
total_score (float)
effective_score (float)
cancellation (bool)
breakdown[] — each: planet (string), house_lagna (int), house_moon (int), house_venus (int), weight_lagna (float), weight_moon (float), weight_venus (float), score (float)
data.score_difference (float)
data.compatible (bool)
data.compatibility_level (string — 'Good', 'Moderate', or 'Poor')
data.threshold (float)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Both charts with zero malefics in the scanned houses → score_difference=0 and compatible=true (not an error).

SECTION: DO NOT CONFUSE WITH asterwise_get_doshas — twelve natal dosha buckets for one chart, not pairwise malefic balance. asterwise_get_compatibility — Guna Milan totals and vetoes, not Papa Samyam scoring.

Parameters (JSON Schema)
Name | Required | Description | Default
person1 | Yes | Birth data for a single person.
person2 | Yes | Birth data for a single person.
response_format | Yes

Output Schema

Name | Required | Description
result | Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the COMPUTE_CLASS (MEDIUM_COMPUTE), details the complete output structure, describes edge cases (zero malefics results), and specifies error handling. While annotations cover safety (readOnly, non-destructive), the description provides operational context that helps the agent understand computational requirements and expected behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.) that make information easy to locate. While somewhat lengthy, every section adds value and the information is front-loaded with the core purpose. The structure helps the agent quickly find relevant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the description provides comprehensive context: it details the complete output structure, explains computational class, specifies error handling, clarifies what the tool does NOT do, and provides workflow guidance. With an output schema present, the description appropriately focuses on behavioral and contextual information rather than repeating return value details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 67% schema description coverage, the description compensates well by clarifying the input contract ('Two BirthData objects per global contract') and explaining the response_format parameter's effect on output presentation. While it doesn't detail individual parameter semantics, it provides important context about the overall input structure that helps the agent understand what data is required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: measuring malefic stress from Lagna, Moon, and Venus references for each partner, comparing totals, and labeling compatibility against a threshold. It specifically distinguishes this from sibling tools like asterwise_get_compatibility and asterwise_get_porutham, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit workflow guidance: 'BEFORE: RECOMMENDED — asterwise_get_compatibility — establish baseline match quality.' It also clearly states what this tool does NOT cover (Ashtakoota points, porutham passes) and provides a 'DO NOT CONFUSE WITH' section naming specific alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_personal_cycles (A)
Read-only · Idempotent

Returns the Personal Year, Personal Month, and Personal Day numbers for a given birth date and optional target date. All three cycle numbers are derived from the birth month, birth day, and the target calendar date.

SECTION: WHAT THIS TOOL COVERS Personal cycles are the Pythagorean timing system. The Personal Year (1–9) sets the annual theme. The Personal Month refines it to a 30-day window. The Personal Day gives the daily energy flavour. A Personal Year 1 favours new beginnings; a 9 favours completion and release. Cycles nest: the same number in Year, Month, and Day simultaneously creates a peak intensity day.

Formula:
Personal Year = birth_month_reduced + birth_day_reduced + target_year_reduced
Personal Month = Personal Year + target_month, reduced
Personal Day = Personal Month + target_day, reduced
Master numbers 11 and 22 are preserved where they arise.
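The formula above can be sketched in Python. This is an illustrative sketch of the stated reduction rules, not the server's implementation, and the function names are mine:

```python
def reduce_number(n):
    """Digit-sum a number down to 1-9, preserving master numbers 11 and 22."""
    while n > 9 and n not in (11, 22):
        n = sum(int(d) for d in str(n))
    return n

def personal_cycles(birth_month, birth_day, target_year, target_month, target_day=None):
    """Personal Year/Month/Day per the formula stated above."""
    personal_year = reduce_number(
        reduce_number(birth_month) + reduce_number(birth_day) + reduce_number(target_year)
    )
    personal_month = reduce_number(personal_year + target_month)
    # Personal Day is only defined when a day is supplied, mirroring the tool contract
    personal_day = (
        reduce_number(personal_month + target_day) if target_day is not None else None
    )
    return {
        "personal_year": personal_year,
        "personal_month": personal_month,
        "personal_day": personal_day,
    }
```

For the documented example inputs (birth 1985-11-12, target 2026-05-01) this yields Personal Year 6, a master Personal Month of 11, and Personal Day 3.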

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_numerology_profile — see personal cycles alongside core numbers.

SECTION: INPUT CONTRACT
date — Birth date in YYYY-MM-DD format. Example: '1985-11-12'
year (optional int) — Target year. Defaults to current calendar year. Example: 2026
month (optional int 1–12) — Target month. Defaults to current month. Example: 5
day (optional int 1–31) — Target day. Personal Day is only returned when day is provided. Defaults to null (Personal Day omitted). Example: 1

SECTION: OUTPUT CONTRACT
data.personal_year (int — 1–9 or master 11/22)
data.personal_month (int — 1–9 or master 11/22)
data.personal_day (int or null — null when day parameter is not provided)
data.target_year (int — echoed)
data.target_month (int — echoed)
data.target_day (int or null — echoed)

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — all validation is upstream. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_personal_year — returns Personal Year only, no month or day breakdown. asterwise_get_numerology_profile — core name numbers; personal_year field is null there.

Parameters (JSON Schema)
day (optional)
date (required)
year (optional)
month (optional)
response_format (required)

Output Schema
result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true, which the description does not contradict. The description adds value by detailing that personal_day is null unless the day parameter is provided, and explicitly lists the output fields.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is well-structured with clear sections (INPUT CONTRACT, OUTPUT CONTRACT, DO NOT CONFUSE WITH). It is concise and front-loaded with the main purpose, using only essential sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains input defaults and output structure, and distinguishes from a sibling. With an output schema available, it adequately covers context, though its error contract is minimal and edge cases go unmentioned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must explain parameters. It explains date format (YYYY-MM-DD) and defaults for year, month, day. However, it does not mention the required 'response_format' parameter with enum values (markdown/json), which is a gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns Personal Year, Personal Month, and Personal Day numbers for a birth date and target date. It also distinguishes itself from the sibling tool asterwise_get_personal_year by noting that the sibling only provides basic personal year without month/day.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description provides clear guidance on default behavior (defaults to today if not provided) and a 'DO NOT CONFUSE WITH' section to differentiate from asterwise_get_personal_year. However, it lacks explicit when-not-to-use or alternative tools beyond that one sibling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_personality_number (A)
Read-only, Idempotent

Calculates the Personality number from consonants in the full name. All non-vowels (BCDFGHJKLMNPQRSTVWXYZ) contribute — Y is always a consonant. Reduces each name part separately, preserving master numbers 11, 22, 33.
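As an illustration, the consonant rule above maps letters via the standard Pythagorean table (A=1 through I=9, then repeating J=1 through Z=8); treating this as the server's exact mapping is an assumption on my part:

```python
VOWELS = set("AEIOU")  # Y is deliberately absent: it always counts as a consonant here

def letter_value(ch):
    """Standard Pythagorean value: A=1 ... I=9, J=1 ... R=9, S=1 ... Z=8."""
    return (ord(ch) - ord("A")) % 9 + 1

def reduce_number(n):
    """Digit-sum down to 1-9, preserving master numbers 11, 22, 33."""
    while n > 9 and n not in (11, 22, 33):
        n = sum(int(d) for d in str(n))
    return n

def personality_number(full_name):
    """Sum consonant values, reducing each name part separately, then reduce the total."""
    total = 0
    for part in full_name.upper().split():
        consonants = [c for c in part if c.isalpha() and c not in VOWELS]
        total += reduce_number(sum(letter_value(c) for c in consonants))
    return reduce_number(total)
```

Under this mapping, 'Arjun Mehta' reduces to 6 (Arjun) plus 5 (Mehta), giving the master number 11.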

SECTION: WHAT THIS TOOL COVERS The Personality number is the outer mask — how others perceive you before they know you well. It governs first impressions, physical presentation, and the social interface between the private self (Soul Urge) and the world.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_numerology_profile — see all five core numbers together.

SECTION: INPUT CONTRACT
name — Full legal name as used at birth. Example: 'Arjun Mehta', 'Sofia Rossi'
Y is always a consonant — not treated as a vowel.

SECTION: OUTPUT CONTRACT
data.number (int — Personality number; master numbers 11/22/33 preserved)
data.is_master_number (bool)
data.karmic_debt_number (int or null)

SECTION: RESPONSE FORMAT response_format=json — structured JSON. response_format=markdown — human-readable. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: upstream failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_expression_number — all letters (Expression = Soul Urge + Personality). asterwise_get_soul_urge_number — vowels only.

Parameters (JSON Schema)
name (required)
response_format (required)

Output Schema
result (required)
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint and idempotentHint. The description adds that it reduces each name part separately, which provides some behavioral insight. However, it does not disclose potential errors, input constraints, or other side effects beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the core purpose. The 'OUTPUT CONTRACT' section is somewhat awkward but does not detract significantly. It is efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains what the tool calculates and lists output fields via the contract. Since an output schema exists, the description is sufficiently complete for a simple numerology tool. The lack of parameter details is a minor gap given the schema presence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description should compensate. It mentions 'full name' for the name parameter but does not specify format or rules. The response_format parameter is not described. Minimal added value for understanding parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calculates the Personality number from consonants in the full name, reducing each name part separately, and represents outer personality. This distinguishes it from sibling tools like expression number and soul urge number, which have different focuses.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when outer personality is needed, but does not explicitly state when to use it versus alternatives. No when-not-to-use notes or contextual exclusions are provided, leaving the agent to infer from the purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_personal_year (A)
Read-only, Idempotent

Looks up the Personal Year theme for the current calendar cycle from a name and birth date using only month and day inputs server-side.

SECTION: WHAT THIS TOOL COVERS Endpoint returns Personal Year data derived from birth month/day against the running calendar year on the server — there is no extra year argument in the tool schema. Expected response keys (pending live confirmation): personal_year_number (int), theme (string), interpretation (string), advice (string), favorable_actions[] (string array), challenges[] (string array). asterwise_get_numerology_profile leaves personal_year null; use this tool when Personal Year detail is required.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_numerology_profile — see other core numbers first. AFTER: None.

SECTION: INPUT CONTRACT Only name and date are submitted; the active calendar year is chosen upstream automatically.

SECTION: OUTPUT CONTRACT
personal_year_number (int) — expected
theme (string) — expected
interpretation (string) — expected
advice (string) — expected
favorable_actions[] (string array) — expected
challenges[] (string array) — expected
(Schema not yet confirmed from live response; fields above reflect tool design.)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Cannot request arbitrary calendar years via this tool — only the server-selected current year.

SECTION: DO NOT CONFUSE WITH asterwise_get_numerology_profile — personal_year field there is null; this endpoint supplies the annual theme. asterwise_get_varshaphal — Vedic solar return, not Pythagorean Personal Year.

Parameters (JSON Schema)
date (required)
name (required)
response_format (required)

Output Schema
result (required)
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. While annotations indicate read-only, open-world, idempotent, and non-destructive traits, the description details server-side computation ('using only month and day inputs server-side'), automatic year selection ('active calendar year is chosen upstream automatically'), response format options (json/markdown), error handling (INTERNAL_ERROR for upstream failures), and constraints ('Cannot request arbitrary calendar years'). This enriches the agent's understanding of how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT), making it easy to scan. Each sentence adds value—no fluff or repetition. It efficiently conveys purpose, usage, parameters, output, and errors without unnecessary verbosity, demonstrating excellent information density and organization.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involving numerology calculations) and the presence of annotations and an output schema, the description is highly complete. It covers purpose, sibling differentiation, workflow, input behavior, output fields, response formats, compute class, error scenarios, and edge cases. The output schema likely details response structure, so the description appropriately focuses on contextual nuances rather than repeating schema information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining parameter roles. It clarifies that 'date' uses 'month and day inputs' (implying year is ignored), 'name' is required for lookup, and 'response_format' controls output serialization (json for programmatic use, markdown for human-readable reports). However, it doesn't specify exact date format or name constraints, leaving some gaps in parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'looks up the Personal Year theme for the current calendar cycle from a name and birth date using only month and day inputs server-side.' It specifies the exact verb ('looks up'), resource ('Personal Year theme'), and scope ('current calendar cycle'), and explicitly distinguishes it from sibling tools like 'asterwise_get_numerology_profile' and 'asterwise_get_varshaphal' by explaining their differences.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives. It states 'use this tool when Personal Year detail is required' and contrasts it with 'asterwise_get_numerology_profile' (which leaves personal_year null) and 'asterwise_get_varshaphal' (which is Vedic, not Pythagorean). It also includes a 'BEFORE' recommendation to use 'asterwise_get_numerology_profile' first for core numbers, offering clear workflow context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_pitra_dosha (A)
Read-only

Detects and analyses Pitru Dosha (Pitru Shapa — Ancestral Curse) using all five classical combinations from BPHS Chapter 83.

SECTION: WHAT THIS TOOL COVERS Standalone Pitru Dosha endpoint with deeper analysis than the pitru_dosha field inside asterwise_get_doshas. Returns: presence flag, severity (mild/moderate/severe), which of the 5 BPHS Ch.83 combinations triggered, Sun analysis (house, sign, debilitation, afflictions), 9th lord analysis (identity, house, debilitation, afflictions), all contributing factors, cancellation conditions (Jupiter protective), classical symptoms, and remedies. Primary classical symptom per BPHS Ch.83: denial of progeny or difficulties with children. Afflicting planets: Saturn, Rahu, Mars, Ketu. Source: Brihat Parashara Hora Shastra Ch.83 (Purvajanma Shapa Adhyaya).

SECTION: WORKFLOW BEFORE: None — birth data computes everything needed. AFTER: asterwise_get_puja_suggestions — recommend remedial pujas for Sun/Mars.

SECTION: INPUT CONTRACT birth — BirthData (date, time, lat, lon, timezone).

SECTION: OUTPUT CONTRACT
data.present (bool)
data.severity (string — 'mild', 'moderate', 'severe', or null)
data.severity_note (string or null)
data.bphs_combinations_triggered[] — each: combination (string), description (string), factors[] (string array), weight (int)
data.bphs_combinations_count (int)
data.sun_analysis{}: house (int), sign_index (int), debilitated (bool), afflictions[] (string array)
data.ninth_lord_analysis{}: planet (string), house (int), sign_index (int), debilitated (bool), afflictions[] (string array)
data.all_factors[] (string array — deduplicated list of all triggers)
data.cancellations[] (string array — Jupiter protective conditions)
data.interpretation (string)
data.classical_symptoms[] (string array)
data.remedies[] (string array)
data.classical_source (string)

SECTION: COMPUTE CLASS MEDIUM_COMPUTE — natal chart computation.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): BirthData Pydantic violations → MCP INVALID_PARAMS INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_doshas — returns pitru_dosha as one of twelve doshas with less detail. Use this tool when dedicated Pitru Dosha analysis is needed.

Parameters (JSON Schema)
birth (required): Birth data for a single person.
response_format (required)

Output Schema
result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate a safe read operation (readOnlyHint=true, destructiveHint=false). The description adds valuable behavioral details such as compute class (MEDIUM_COMPUTE), error contracts, and the scope of analysis (5 classical combinations, Sun and 9th lord analysis), which go beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (SECTION, WORKFLOW, INPUT/OUTPUT CONTRACT, etc.) and front-loads the purpose. While verbose, each section contributes useful information, such as output details and error handling, that would help an agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers all necessary aspects: purpose, workflow, input contract, full output contract (even though an output schema exists, the description lists all fields), compute class, error contract, and differentiation from siblings. Given the moderate complexity of an astrological dosha tool, this is comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 50% coverage with descriptions for birth subfields and response_format. The 'INPUT CONTRACT' section repeats parameter names but adds no new meaning beyond what is in the schema. The description does not address the response_format parameter, leaving its semantics to the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Detects and analyses Pitru Dosha' with a specific verb and resource. It explicitly distinguishes from sibling tool 'asterwise_get_doshas' by noting this tool provides dedicated and deeper analysis, making its purpose clear and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'DO NOT CONFUSE WITH' section that names the alternative tool and explains when to use this one ('dedicated Pitru Dosha analysis'). It also provides a workflow with BEFORE and AFTER steps, giving clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_planet_nature (A)
Read-only, Idempotent

Returns classical graha (planet) properties for all nine planets or a single planet as described in Brihat Parashara Hora Shastra (BPHS) Chapter 3.

SECTION: WHAT THIS TOOL COVERS Returns tattva (element), guna (Sattvic/Rajasic/Tamasic), gender, caste/varna, natural benefic/malefic nature, direction, color, presiding deity, ruling day of week, metal, body part governed, and naisargika maitri (natural friends, enemies, neutrals) for each of the nine grahas: Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu. Rahu and Ketu include classical_note explaining the limits of BPHS Chapter 3 for shadow nodes.

SECTION: WORKFLOW BEFORE: None — standalone reference. AFTER: asterwise_get_puja_suggestions — propitiation for a specific graha.

SECTION: INPUT CONTRACT planet (optional): One of Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu. Omit to get all nine planets.

SECTION: OUTPUT CONTRACT
Single planet: data.planet, data.tattva, data.guna, data.gender, data.caste, data.nature, data.direction, data.color, data.deity, data.day, data.metal, data.body_part, data.friends[], data.enemies[], data.neutrals[], data.classical_note, data.source
All planets: data.planets{} — object keyed by planet name, each with above fields

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unknown planet name → MCP INTERNAL_ERROR INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_puja_suggestions — ritual propitiation per planet, not BPHS properties. asterwise_get_rudraksha — bead recommendations per planet, not natal properties.

Parameters (JSON Schema)
planet (optional)
response_format (required)

Output Schema
result (required)
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds compute class (FAST_LOOKUP), error contract, and output field details, enhancing transparency beyond annotations without contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections, though slightly verbose. Front-loaded with main purpose. Every section adds value but could be trimmed slightly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers all aspects: input, output (with fields), error contract, workflow, compute class, and sibling differentiation. Output schema exists, so return values are adequately documented. No gaps given the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, but the description's 'INPUT CONTRACT' section explains the planet parameter and its optionality. However, the response_format parameter (required, enum) is not explained in the description, leaving a minor gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns classical graha properties for all nine planets or a single planet per BPHS Chapter 3. It distinguishes from siblings via the 'DO NOT CONFUSE WITH' section, naming specific alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Includes 'WORKFLOW' with BEFORE (none) and AFTER (asterwise_get_puja_suggestions), plus 'DO NOT CONFUSE WITH' explicitly lists two sibling tools with different purposes, providing clear when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_porutham (A)
Read-only, Idempotent

Runs the Tamil ten-porutham checklist for two charts, counts passes out of ten, surfaces Rajju/Vedha classical condition booleans, and returns per-porutham evidence objects.

SECTION: WHAT THIS TOOL COVERS Classical Tamil Nadu marriage screening: Dinam, Ganam, Mahendra, Stree Deergha, Yoni, Rasi, Rasiyathipaty, Rajju, Vedha, Vasya — each with passed plus type-specific fields (counts, ganas, scores, rajju pairs, vedha flags, distances). Rajju and Vedha classical conditions elevate rajju_veto/vedha_veto/hard_veto booleans. It is not twelve-koota Thirumana (asterwise_get_thirumana_porutham) or North Indian Ashtakoota (asterwise_get_compatibility).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart per native. AFTER: asterwise_get_thirumana_porutham — extended twelve-koota read if needed.

SECTION: INPUT CONTRACT Two BirthData objects per global contract.

SECTION: OUTPUT CONTRACT
data.total_passed (int — out of 10)
data.total_poruthams (int — 10)
data.compatibility_level (string)
data.rajju_veto (bool)
data.vedha_veto (bool)
data.hard_veto (bool)
data.breakdown{} — keyed by porutham name (Dinam, Ganam, Mahendra, Stree Deergha, Yoni, Rasi, Rasiyathipaty, Rajju, Vedha, Vasya): passed (bool)
Dinam: count (int), position (int)
Ganam: boy_gana (string), girl_gana (string)
Rasiyathipaty/Vasya: score (float)
Rajju: boy_rajju (string), girl_rajju (string), is_veto (bool)
Vedha: is_veto (bool)
Stree Deergha: distance (int), description (string)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — hard_veto aggregates Rajju/Vedha classical condition outcomes — read breakdown before narrating success.

SECTION: DO NOT CONFUSE WITH asterwise_get_thirumana_porutham — twelve poruthams including Nadi and Varna, not ten. asterwise_get_dashakoot — South Indian float scoring, not Tamil pass grid.
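Per the edge-case note above, a client should inspect the veto booleans before reporting a favourable match. A minimal guard, assuming the parsed data object carries the OUTPUT CONTRACT fields listed above (the function name is mine):

```python
def summarize_porutham(data):
    """Turn asterwise_get_porutham output into a one-line verdict, vetoes first."""
    if data["hard_veto"]:
        # hard_veto aggregates the Rajju/Vedha classical conditions
        blocked = [k for k in ("rajju_veto", "vedha_veto") if data.get(k)]
        return "VETO: " + ", ".join(blocked)
    return (
        f"{data['total_passed']}/{data['total_poruthams']} poruthams passed "
        f"({data['compatibility_level']})"
    )
```

Checking hard_veto before narrating total_passed keeps a high pass count from masking a classical block.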

Parameters (JSON Schema)
person1 (required): Birth data for a single person.
person2 (required): Birth data for a single person.
response_format (required)

Output Schema
result (required)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond annotations: it specifies the 'MEDIUM_COMPUTE' class, details error handling contracts (INVALID_PARAMS, INTERNAL_ERROR), explains edge cases like hard_veto aggregation, and describes the two response format options (json vs markdown). No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections could be more concise (e.g., the OUTPUT CONTRACT lists all fields but could be summarized). Overall, it's appropriately sized for the tool's complexity with minimal wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich output schema (detailed in OUTPUT CONTRACT), and comprehensive annotations, the description is complete. It covers purpose, differentiation from siblings, workflow guidance, input/output contracts, response formats, compute class, error handling, and edge cases. The output schema eliminates the need to describe return values in the description itself.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (moderate). The description adds minimal parameter semantics beyond the schema, mainly stating 'Two BirthData objects per global contract' and mentioning the response_format parameter's effect. However, it doesn't explain the semantics of person1/person2 fields or provide additional context about the BirthData structure that isn't already in the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'runs the Tamil ten-porutham checklist for two charts, counts passes out of ten, surfaces Rajju/Vedha/hard vetoes, and returns per-porutham evidence objects.' It clearly distinguishes this tool from sibling tools like asterwise_get_thirumana_porutham (twelve poruthams) and asterwise_get_compatibility (North Indian Ashtakoota), providing specific verb+resource differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit workflow guidance: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart per native' and 'AFTER: asterwise_get_thirumana_porutham — extended twelve-koota read if needed.' It also clearly states when NOT to use this tool versus alternatives in the 'DO NOT CONFUSE WITH' section, naming specific sibling tools and their differences.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_prashna_chart (A)
Read-only · Idempotent

Casts a Prashna chart for the query instant using supplied date, time, place, and a single-topic keyword, then returns houses, Moon diagnostics, verdict, and cusps.

SECTION: WHAT THIS TOOL COVERS Horary workflow: maps one of the approved keywords to a primary house, evaluates Moon (phase, VOC, affliction), applies ithsala flags, aggregates graha dignities/houses from Prashna Lagna, and emits verdict/confidence/score. Not natal life analysis (asterwise_get_natal_chart) and not KP cusps from birth (asterwise_get_kp_chart).

SECTION: WORKFLOW BEFORE: None — standalone for horary. AFTER: None.

SECTION: INPUT CONTRACT question must be exactly one of: self, wealth, siblings, property, children, health, marriage, death, travel, career, gains, loss. Full sentences are not validated locally and are rejected upstream → MCP INTERNAL_ERROR at the tool layer. PrashnaInput enforces date/time/lat/lon/ayanamsa patterns locally.

SECTION: OUTPUT CONTRACT
data.ayanamsa (string)
data.question (string — keyword echoed)
data.primary_house (int)
data.ithsala_applying (bool)
data.ithsala_separating (bool)
data.query_utc (string — ISO UTC)
data.lagna: rashi_index (int), rashi (string), longitude (float), lord (string)
data.house_analysis: house (int), rashi_index (int), rashi (string), lord (string), lord_dignity (string), lord_house (int), lord_longitude (float), occupants[] (string array)
data.moon: longitude (float), rashi_index (int), rashi (string), nakshatra (string), phase (string — 'waxing' or 'waning'), dignity (string), void_of_course (bool), afflicted_moon (bool), applying_to_benefic (bool)
data.verdict: verdict (string — 'favorable', 'mixed', or 'unfavorable'), confidence (string — 'high', 'medium', or 'low'), score (int — negative unfavorable, positive favorable)
data.planets{} — per planet: longitude (float), rashi_index (int), rashi (string), house (int), is_retrograde (bool), dignity (string)
data.house_cusps{} — keys '1'..'12': rashi_index (int), rashi (string), lord (string), longitude (float)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — PrashnaInput Pydantic violations (date, time, lat, lon, ayanamsa) → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — bad question tokens surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Moon void-of-course flagged classically negative for outcomes.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — requires birth data, not query-moment prashna. asterwise_get_kp_chart — natal KP from birth time, not horary keyword mapping.
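Because the INPUT CONTRACT rejects free-text questions only upstream (surfacing as MCP INTERNAL_ERROR rather than INVALID_PARAMS), a client can pre-validate the keyword locally before calling the tool. A minimal sketch using the twelve approved topics from the contract; the helper name is illustrative, not part of the server:

```python
# The twelve approved single-topic keywords from the INPUT CONTRACT.
PRASHNA_TOPICS = {
    "self", "wealth", "siblings", "property", "children", "health",
    "marriage", "death", "travel", "career", "gains", "loss",
}

def normalise_question(question: str) -> str:
    """Map a raw question string onto an approved keyword, failing
    fast locally instead of letting a bad token surface later as an
    upstream MCP INTERNAL_ERROR."""
    token = question.strip().lower()
    if token not in PRASHNA_TOPICS:
        raise ValueError(
            f"question must be one of the twelve approved keywords, got {question!r}"
        )
    return token
```

A full sentence such as "Will I get the job?" fails this check immediately, which is preferable to discovering the contract violation only at the tool layer.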

Parameters (JSON Schema)
Name | Required | Description | Default
prashna | Yes | Prashna (horary) query parameters.

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations. While annotations indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, the description details the horary workflow, Moon diagnostics, verdict scoring, and error handling (e.g., bad question tokens cause INTERNAL_ERROR). It also notes edge cases like 'Moon void-of-course flagged classically negative for outcomes,' which provides important behavioral insight not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), which aids readability. However, it is quite lengthy and includes some redundant details (e.g., repeating output fields that are already in the output schema). While informative, it could be more concise by focusing only on value-added information beyond structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich output schema, and comprehensive annotations, the description is highly complete. It covers purpose, usage guidelines, workflow, input constraints, output details, error handling, and sibling tool distinctions. The presence of an output schema means the description doesn't need to explain return values in depth, and it effectively supplements the structured data with necessary context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds some parameter semantics, such as specifying that 'question must be exactly one of: self, wealth, siblings...' and warning that 'Full sentences are not validated locally and are rejected upstream.' However, with 100% schema description coverage, the schema already documents all parameters thoroughly. The description provides additional context but doesn't significantly enhance understanding beyond what the schema offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'casts a Prashna chart for the query instant' and specifies it returns 'houses, Moon diagnostics, verdict, and cusps.' It explicitly distinguishes from sibling tools asterwise_get_natal_chart and asterwise_get_kp_chart by stating this is for horary analysis, not natal life analysis or KP cusps from birth.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'Not natal life analysis (asterwise_get_natal_chart) and not KP cusps from birth (asterwise_get_kp_chart).' It also specifies this is for 'horary workflow' and is 'standalone for horary,' clearly defining its intended context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_puja_suggestions (A)
Read-only · Idempotent

Returns classical puja (ritual worship) recommendations for planetary propitiation sourced from Agni Purana, Matsya Purana, Skanda Purana, and BPHS Chapter 84.

SECTION: WHAT THIS TOOL COVERS For each of the nine grahas, returns: puja name, presiding deity, day of week, specific offerings (flowers, grains, incense), grain associated, beej mantra, and classical source text. Used by practitioners to recommend planetary remedies based on chart analysis.

SECTION: WORKFLOW BEFORE: asterwise_get_natal_chart — identify afflicted planets before recommending pujas. AFTER: asterwise_get_rudraksha — complementary bead-based remedy.

SECTION: INPUT CONTRACT planet (optional): One of Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu. Omit to get all nine planets.

SECTION: OUTPUT CONTRACT Single planet: data.planet, data.puja_name, data.deity, data.day, data.offerings[], data.grain, data.mantra, data.source All planets: data.planets{} — object keyed by planet name

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unknown planet name → MCP INTERNAL_ERROR INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_remedies — personalised remedies from natal chart analysis. asterwise_get_rudraksha — bead recommendations, not puja rituals.
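Since the OUTPUT CONTRACT returns two shapes (flat fields for a single planet, or data.planets{} keyed by planet name when the parameter is omitted), a caller can normalise both with one small helper. A hypothetical sketch against the field names above:

```python
def iter_puja_rows(data: dict):
    """Yield (planet, record) pairs regardless of whether the call
    was made for a single planet or for all nine planets."""
    if "planets" in data:
        # All-planets shape: data.planets{} keyed by planet name.
        yield from data["planets"].items()
    else:
        # Single-planet shape: flat fields including data.planet.
        yield data["planet"], data
```

Downstream code can then treat every response as a uniform stream of per-planet records.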

Parameters (JSON Schema)
Name | Required | Description | Default
planet | No
response_format | Yes

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint, indicating safe read-only behavior. The description adds value by specifying COMPUTE CLASS as FAST_LOOKUP and detailing output contracts (single vs. all planets) and error contracts, which provide context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (COVERS, WORKFLOW, CONTRACTS, COMPUTE CLASS, DO NOT CONFUSE). Every section adds value without redundancy. Information is front-loaded with the core purpose in the first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations and an output schema, the description provides sufficient context for a retrieval tool. It includes input/output contracts, error handling, and workflow integration. Minor gap: the response_format parameter could be elaborated, but the output contract partly covers the two formats.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description adds meaning for the planet parameter (explaining that omitting it returns all nine planets). However, the response_format parameter is not described, leaving its purpose unclear. The OUTPUT CONTRACT partially compensates by showing the structure for both markdown and json outputs, but a direct explanation is missing.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns classical puja recommendations for planetary propitiation from specific sources. It distinguishes from sibling tools like asterwise_get_remedies and asterwise_get_rudraksha in a dedicated section, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

A WORKFLOW section explicitly states the recommended preceding tool (asterwise_get_natal_chart) and following tool (asterwise_get_rudraksha). The 'DO NOT CONFUSE WITH' section lists alternative tools with brief distinctions, providing clear when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_rahu_kaal (A)
Read-only · Idempotent

Computes Rahu Kaal, Gulika Kaal, and Yamaganda Kaal intervals from diurnal length at a location and marks whether Rahu Kaal is active now in local time.

SECTION: WHAT THIS TOOL COVERS Returns sunrise/sunset anchors plus three inauspicious bands with start, end, duration_minutes (~93 for Rahu Kaal), and is_active on Rahu Kaal. Polar latitudes where sunrise/sunset cannot be solved fail upstream. It does not return Panchanga tithi/nakshatra (asterwise_get_panchanga) or scored muhurta search (asterwise_get_muhurta).

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_choghadiya — broader auspicious/inauspicious grid for the day.

SECTION: INPUT CONTRACT LocationInput validates date pattern and coordinates locally.

SECTION: OUTPUT CONTRACT
data.date (string)
data.sunrise (string — HH:MM local)
data.sunset (string — HH:MM local)
data.rahu_kaal: start (string — HH:MM local), end (string — HH:MM local), duration_minutes (int — typically 93), is_active (bool)
data.gulika_kaal — same shape as data.rahu_kaal
data.yamaganda_kaal — same shape as data.rahu_kaal

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid LocationInput fields → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — None — polar or astronomical failures surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Polar latitudes may make sunrise/sunset undefined for the solver → MCP INTERNAL_ERROR.

SECTION: DO NOT CONFUSE WITH asterwise_get_choghadiya — full day/night slot tables, not only the three kaal bands. asterwise_get_panchanga — Panchanga limbs, not kaal timers.
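The ~93-minute duration in the OUTPUT CONTRACT follows from the classical rule the summary names: the diurnal arc is split into eight equal parts, and Rahu Kaal occupies a fixed part per weekday. A minimal sketch of that rule; the tool computes this upstream from real sunrise/sunset, and the segment table below is the traditional mapping, not taken from this page:

```python
from datetime import datetime

# 1-based eighth of the day occupied by Rahu Kaal, keyed by
# datetime.weekday() (Monday=0 ... Sunday=6), per the classical table.
RAHU_SEGMENT = {0: 2, 1: 7, 2: 5, 3: 6, 4: 4, 5: 3, 6: 8}

def rahu_kaal(sunrise: datetime, sunset: datetime):
    """Split the diurnal arc into eight equal parts and return the
    (start, end) of the part assigned to Rahu for that weekday."""
    eighth = (sunset - sunrise) / 8
    seg = RAHU_SEGMENT[sunrise.weekday()]
    start = sunrise + eighth * (seg - 1)
    return start, start + eighth
```

For a 12-hour day each part is 90 minutes; the ~93 minutes quoted above corresponds to a slightly longer diurnal arc. Polar latitudes, where sunrise/sunset is undefined, cannot feed this rule, which matches the documented INTERNAL_ERROR edge case.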

Parameters (JSON Schema)
Name | Required | Description | Default
location | Yes | For tools that need location but not birth time.

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable behavioral context beyond annotations: it specifies polar latitude failures (INTERNAL_ERROR), describes the compute class as FAST_LOOKUP, explains error contracts (INVALID_PARAMS vs INTERNAL_ERROR), and details the response format options (json vs markdown).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.) that make information easy to find. While comprehensive, some sections could be more concise: the OUTPUT CONTRACT section repeats field names that could be inferred from context, and the ERROR CONTRACT section is quite detailed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and detailed output schema, the description provides excellent contextual completeness. It covers purpose, usage guidelines, behavioral details, error handling, sibling differentiation, and response formats: everything needed for an agent to understand when and how to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all parameters. The description adds minimal parameter-specific information beyond the schema: it mentions that LocationInput validates the date pattern and coordinates locally, but doesn't provide additional semantic context about parameter usage or meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes three specific inauspicious time intervals (Rahu Kaal, Gulika Kaal, Yamaganda Kaal) from diurnal length at a location and marks whether Rahu Kaal is active now. It distinguishes from siblings by explicitly naming asterwise_get_choghadiya and asterwise_get_panchanga as different tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance in multiple sections: 'WHAT THIS TOOL COVERS' clarifies what it returns and what it doesn't (contrasting with siblings), 'WORKFLOW' specifies it's standalone and suggests asterwise_get_choghadiya as a follow-up, and 'DO NOT CONFUSE WITH' explicitly names alternative tools with their purposes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_remedies (A)
Read-only · Idempotent

Derives Parashari remedial prescriptions from planetary weakness and dusthana lordship and returns mantra, lifestyle, charity, and dignity tables.

SECTION: WHAT THIS TOOL COVERS Builds data.recommended_remedies[] for grahas judged weak or afflicted plus data.planet_dignities[] for all nine planets. It is not Lal Kitab totka advice (asterwise_get_lal_kitab_remedies) nor a curated gem safety brief (asterwise_get_gemstone_recommendations). Empty recommended_remedies[] means no weak planets were flagged — not an error.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — chart context before choosing remedies. AFTER: asterwise_get_gemstone_recommendations — optional focused gem briefing.

SECTION: INPUT CONTRACT BirthData follows the global contract.

SECTION: OUTPUT CONTRACT
data.recommended_remedies[] — each object: planet (string), dignity (string — e.g. 'debilitated', 'enemy', 'neutral'), rashi (string), house (int), is_dusthana_lord (bool), reason (string), mantra (string), repetitions (int — typically 108), deity (string), gemstone (string), colour (string), metal (string), fast_day (string), charity (string), action_daily (string), action_weekly (string)
data.planet_dignities[] — nine objects: planet, dignity, rashi, house (int)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — Dates before 1800 or after 2100 → MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Empty data.recommended_remedies[] when no weak grahas — valid success.

SECTION: DO NOT CONFUSE WITH asterwise_get_lal_kitab_remedies — household Lal Kitab totkas, not mantra/gem Parashari rows. asterwise_get_gemstone_recommendations — gemstone roles and contraindications only, not full lifestyle remedies.
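Because the edge-case note states that an empty recommended_remedies[] is a valid success rather than an error, a consumer should branch on the empty list explicitly instead of treating it as a failure. A hypothetical sketch against the field names in the OUTPUT CONTRACT (the test values are illustrative only):

```python
def summarise_remedies(data: dict) -> str:
    """Render data.recommended_remedies[] for display."""
    # Per the edge-case note: an empty list means no weak or afflicted
    # grahas were flagged, which is a valid success, not an error.
    remedies = data.get("recommended_remedies", [])
    if not remedies:
        return "No weak planets flagged; no remedies recommended."
    return "; ".join(
        f"{r['planet']} ({r['dignity']}, house {r['house']}): "
        f"{r['mantra']} x{r['repetitions']}"
        for r in remedies
    )
```

This keeps "no remedies needed" distinct from upstream failures, which arrive as MCP INTERNAL_ERROR instead of an empty list.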

Parameters (JSON Schema)
Name | Required | Description | Default
birth | Yes | Birth data for a single person.
response_format | Yes

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this: it explains that empty recommended_remedies[] is valid (not an error), specifies the compute class as MEDIUM_COMPUTE, details error contracts (e.g., date range limits, internal errors), and describes response format options. No contradictions with annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHAT THIS TOOL COVERS, WORKFLOW, OUTPUT CONTRACT), front-loads key information, and every sentence adds value without redundancy. It efficiently covers purpose, usage, parameters, output, errors, and distinctions in an organized manner.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and detailed output schema, the description is highly complete. It covers purpose, usage guidelines, workflow, input/output contracts, response formats, compute class, error handling, and sibling distinctions. With an output schema provided, it doesn't need to explain return values in detail, and it adds sufficient context for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with detailed descriptions for birth parameters but none for response_format. The description adds minimal parameter semantics beyond the schema: it mentions 'BirthData follows the global contract' and explains response_format options (json vs markdown), but doesn't fully compensate for the lack of schema description on response_format. Given the partial coverage, a baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Derives Parashari remedial prescriptions from planetary weakness and dusthana lordship and returns mantra, lifestyle, charity, and dignity tables.' This is a specific verb+resource combination that clearly distinguishes it from sibling tools like asterwise_get_lal_kitab_remedies and asterwise_get_gemstone_recommendations, which are explicitly called out as different.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives in the 'DO NOT CONFUSE WITH' section, names specific sibling tools to avoid confusion, and includes a 'WORKFLOW' section with recommended 'BEFORE' and 'AFTER' tools, offering clear context and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_rudraksha (A)
Read-only · Idempotent

Returns Rudraksha bead recommendations per planet as per Shiva Purana Vidyeshvara Samhita Chapter 25.

SECTION: WHAT THIS TOOL COVERS For each of the nine planets, returns: mukhi (face count), presiding deity per Shiva Purana, exact beej mantra, recommended metal for stringing, wearing day, mala bead count (108), recommended wearing finger, spiritual benefits, and a classical_note clarifying that the Shiva Purana assigns Mukhis to deities — the planetary correspondence is traditional astrological synthesis, not scriptural. Includes the known deity discrepancy for 8 Mukhi (Bhairava per Shiva Purana vs Vinayaka per Rudrakshajabala Upanishad).

SECTION: WORKFLOW BEFORE: asterwise_get_natal_chart — identify planets needing support. AFTER: None.

SECTION: INPUT CONTRACT planet (optional): One of Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn, Rahu, Ketu. Omit to get all nine planets.

SECTION: OUTPUT CONTRACT Single planet: data.planet, data.mukhi (int), data.presiding_deity, data.mantra, data.metal, data.wearing_day, data.mala_beads (int), data.wearing_finger, data.benefits, data.classical_note, data.source All planets: data.planets{} — object keyed by planet name

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (upstream): Unknown planet name → MCP INTERNAL_ERROR INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_gemstone_recommendations — Ratna-style gemstones from natal chart. asterwise_get_puja_suggestions — ritual worship, not bead recommendations.

Parameters (JSON Schema)
Name | Required | Description | Default
planet | No
response_format | Yes

Output Schema (JSON Schema)
Name | Required | Description
result | Yes
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, idempotentHint, destructiveHint) already indicate a safe, idempotent lookup. The description adds extensive behavioral detail: the scriptural source, a known deity discrepancy, output structure, error contracts, and compute class. It does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is lengthy but highly structured with clear sections (e.g., WHAT, WORKFLOW, INPUT CONTRACT, OUTPUT CONTRACT). Each section adds value, though some content (e.g., exact output fields) could be abbreviated. Still well-organized and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (optional parameter, multiple output fields), the description covers purpose, usage workflow, parameter details, output structure, error handling, and sibling distinctions. The output contract provides enough detail without needing to reiterate return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (no descriptions in JSON Schema). The description explains the 'planet' parameter well (accepted values, default, omission behavior) but does not describe the 'response_format' parameter at all, leaving a gap. The output contract partially compensates for missing param info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns Rudraksha bead recommendations per planet, citing a specific scriptural source (Shiva Purana). It explicitly lists what data is returned for each planet and includes a disambiguation section that distinguishes it from sibling tools like asterwise_get_gemstone_recommendations and asterwise_get_puja_suggestions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The WORKFLOW section specifies that asterwise_get_natal_chart should be used before this tool to identify planets needing support, providing explicit usage context. The DO NOT CONFUSE WITH section names two alternative tools, offering clear guidance on when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_soul_urge_number (A)
Read-only · Idempotent

Calculates the Soul Urge (Heart's Desire) number from vowels in the full name. Only A, E, I, O, U are treated as vowels — Y is always a consonant in this system. Reduces each name part separately before summing, preserving master numbers 11, 22, 33.

SECTION: WHAT THIS TOOL COVERS The Soul Urge reveals the inner motivation — what the soul craves, what drives choices at the deepest level. It is the hidden engine beneath the Expression. This is the most private of the three core name numbers.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_personality_number — complete the name number trinity.

SECTION: INPUT CONTRACT name — Full legal name as used at birth. Example: 'Arjun Mehta', 'Sofia Rossi' Y is always treated as a consonant — not a vowel.

SECTION: OUTPUT CONTRACT data.number (int — Soul Urge number; 11/22/33 preserved as master) data.is_master_number (bool) data.karmic_debt_number (int or null)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — validation is upstream. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_expression_number — all letters, not vowels only. asterwise_get_personality_number — consonants only, not vowels.

Parameters (JSON Schema)
Name             Required  Description  Default
name             Yes
response_format  Yes

Output Schema
Name    Required  Description
result  Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true. The description adds context beyond annotations: it specifies the reduction per name part and the treatment of Y as a consonant. The output contract is also disclosed. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, front-loading the core purpose in the first sentence. It includes a clear output contract section without unnecessary words. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the presence of an output schema (implied by the output contract), the description covers the essential aspects: purpose, behavior, and output fields. The only minor gap is parameter explanation, but the tool's parameters are straightforward. Overall, it is sufficiently complete for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, meaning the schema itself provides no parameter descriptions. The description does not explain the 'name' parameter (though its purpose is obvious) or the 'response_format' enum values beyond what the schema lists. For a tool with no schema descriptions and 2 parameters, the description should provide more parameter-level context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates the Soul Urge (Heart's Desire) number from vowels in the full name, with specific rules (reduce each name part separately, Y as consonant). This distinguishes it from sibling numerological tools like 'get_expression_number' or 'get_personality_number'.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The context implies it is for obtaining the soul urge number, but no exclusions or alternative tool suggestions are given. The description is adequate but lacks proactive usage instructions.

asterwise_get_special_ascendants (A)
Read-only, Idempotent

Calls atmakaraka and ishta-devata endpoints sequentially and merges their payloads into top-level atmakaraka and ishta_devata objects for one BirthData.

SECTION: WHAT THIS TOOL COVERS Jaimini/BPHS spiritual layer: eight-karaka mapping, soul significator graha, navamsa-based ishta devata inference, twelfth-house occupants, and D9 positions map. It is not general prediction, medical timing, or matchmaking scoring. Two planets tied by degree use classical highest-longitude resolution without raising an error.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — understand chart basics before devotional pointers. AFTER: None.

SECTION: INPUT CONTRACT Wrapper returns { atmakaraka: , ishta_devata: } — not a flat data.* root; consumers must read nested .data fields inside each branch per upstream shape.

SECTION: OUTPUT CONTRACT Top-level merge:
atmakaraka — upstream POST /v1/astro/atmakaraka body; use atmakaraka.data:
  karaka_to_planet{} (eight karakas to planet names)
  planet_to_karaka{}
  atmakaraka (string)
  atmakaraka_sign (string)
  atmakaraka_nakshatra (string)
  details{} — per karaka: planet, rashi, nakshatra, longitude
ishta_devata — upstream POST /v1/astro/ishta-devta body; use ishta_devata.data:
  atmakaraka (string)
  karakamsha_lagna (string)
  karakamsha_lagna_index (int)
  jivanmuktamsa_planet (string)
  navamsa_lagna (string)
  navamsa_lagna_index (int)
  twelfth_house_sign (string)
  twelfth_house_index (int)
  planets_in_12th[] (string array)
  ishta_devta_planet (string)
  deity (string)
  description (string)
  method (string)
  d9_positions{} — per planet: { sign (string), sign_num (int) }
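Because this wrapper merges two upstream bodies rather than exposing a flat data.* root, consumers read nested .data fields inside each branch. A minimal sketch, with illustrative values; only the field names come from the output contract:

```python
# Illustrative payload shape only: field names follow the OUTPUT CONTRACT,
# but the values here are made up. Note the nested .data under each
# top-level branch; there is no flat data.* root on this wrapper.
result = {
    "atmakaraka": {
        "data": {"atmakaraka": "Venus", "atmakaraka_sign": "Libra"},
    },
    "ishta_devata": {
        "data": {"ishta_devta_planet": "Venus", "deity": "Lakshmi"},
    },
}

# Read each branch's nested .data, not a top-level data key.
soul_planet = result["atmakaraka"]["data"]["atmakaraka"]
deity = result["ishta_devata"]["data"]["deity"]
```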

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData validated via Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout on either sequential call → MCP INTERNAL_ERROR

Edge cases: — Identical-degree planets: classical tie-break applies; no error.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — general chart; does not compute ishta devata workflow. asterwise_get_char_dasha — timing system using karakas, not deity discovery.

Parameters (JSON Schema)
Name             Required  Description                      Default
birth            Yes       Birth data for a single person.
response_format  Yes

Output Schema
Name    Required  Description
result  Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable behavioral context: it discloses the sequential call workflow, tie-breaking rules for identical-degree planets, error handling (upstream failures become MCP INTERNAL_ERROR), and compute class (MEDIUM_COMPUTE). No contradiction with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, OUTPUT CONTRACT, etc.). Every sentence adds value: no redundant information, no fluff. It's comprehensive yet efficiently organized.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (spiritual calculations with two sequential API calls), the description is complete: it explains purpose, workflow, input/output contracts, response formats, compute class, error handling, and sibling distinctions. With annotations covering safety and an output schema existing, the description provides all necessary context beyond structured fields.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (birth object has descriptions, response_format lacks description). The description compensates by explaining the response_format parameter's effect (json vs markdown modes return identical data) and clarifying the birth parameter's purpose (BirthData for the calculation). It doesn't detail individual birth fields, but the schema covers those well.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool calls two specific endpoints (atmakaraka and ishta-devata) and merges their payloads for one BirthData, distinguishing it from siblings like asterwise_get_natal_chart (general chart) and asterwise_get_char_dasha (timing system). It specifies the spiritual layer coverage (Jaimini/BPHS) and exclusions (not general prediction, medical timing, or matchmaking).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance: RECOMMENDED to use asterwise_get_natal_chart first for chart basics, and clarifies what this tool does NOT cover (general prediction, medical timing, matchmaking). It explicitly distinguishes from sibling tools (asterwise_get_natal_chart, asterwise_get_char_dasha) in the 'DO NOT CONFUSE WITH' section.

asterwise_get_tamil_panchanga (A)
Read-only, Idempotent

Returns Tamil-specific Panchanga for a date and location: all four inauspicious periods (Rahu Kalam, Yamagandam, Kuligai, Emagandam), Nalla Neram (auspicious daytime windows between inauspicious periods), and the Tamil solar month name based on the Sun's sidereal sign at sunrise.

SECTION: WHAT THIS TOOL COVERS Rahu Kalam, Yamagandam (Yamakanda), Kuligai (Gulika), and Emagandam divide the daytime into eight equal parts from sunrise to sunset following classical Tamil almanac weekday tables. Nalla Neram is every gap between the four inauspicious periods — the auspicious windows left for commencing ventures. Tamil solar month follows the Sun's Lahiri sidereal sign at local sunrise (Chithirai when Sun is in Mesha, through Panguni when Sun is in Meena). This tool does not return Vedic Panchanga limbs (asterwise_get_panchanga) or the standard Rahu/Gulika/Yamaganda breakdown used in North Indian tradition (asterwise_get_rahu_kaal).

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_panchanga — for full Vedic five-limb panchanga of the same date.

SECTION: INPUT CONTRACT date: YYYY-MM-DD format. Either location (city name) OR latitude + longitude + timezone must be provided.

SECTION: OUTPUT CONTRACT
data.date (string — YYYY-MM-DD)
data.sunrise (string — HH:MM local time)
data.sunset (string — HH:MM local time)
data.tamil_month (string — Tamil solar month name, e.g. 'Chithirai', 'Vaikasi')
data.rahu_kalam, data.yamagandam, data.kuligai, data.emagandam — each: start, end (HH:MM), duration_minutes (int), is_active (bool)
data.nalla_neram: list of { start (HH:MM), end (HH:MM) } objects

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP — sunrise computation + lookup tables, no full natal chart.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — all validation upstream. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: — Polar latitudes where sunrise cannot be computed → MCP INTERNAL_ERROR. — Emagandam part table: Sun=5, Mon=4, Tue=3, Wed=2, Thu=8, Fri=1, Sat=7.
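The eight-part daytime division and the Emagandam weekday table above can be sketched as follows. This is a hypothetical illustration of the classical scheme, not the server's code; the minutes-since-midnight representation is an assumption.

```python
# Hypothetical sketch of one inauspicious-period computation using the
# Emagandam weekday part table quoted above (Sun=5, Mon=4, Tue=3, Wed=2,
# Thu=8, Fri=1, Sat=7). Times are minutes since midnight; the 1-based
# part index selects one of eight equal sunrise-to-sunset segments.
EMAGANDAM_PART = {"Sun": 5, "Mon": 4, "Tue": 3, "Wed": 2,
                  "Thu": 8, "Fri": 1, "Sat": 7}

def emagandam_window(sunrise_min: int, sunset_min: int, weekday: str):
    part = EMAGANDAM_PART[weekday]            # 1-based segment index
    segment = (sunset_min - sunrise_min) / 8  # daytime split into 8 equal parts
    start = sunrise_min + (part - 1) * segment
    return start, start + segment
```

With sunrise at 06:00 (360) and sunset at 18:00 (1080), Wednesday's window (part 2) runs 07:30 to 09:00.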

SECTION: DO NOT CONFUSE WITH asterwise_get_rahu_kaal — North Indian Rahu/Gulika/Yamaganda only; no Emagandam, Nalla Neram, or Tamil month. asterwise_get_panchanga — five Vedic limbs (tithi, vara, nakshatra, yoga, karana); not Tamil-specific periods.

Parameters (JSON Schema)
Name             Required  Description  Default
date             Yes
latitude         No
location         No
timezone         No
longitude        No
response_format  Yes

Output Schema
Name    Required  Description
result  Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint. The description adds useful behavioral context: compute class is FAST_LOOKUP (no full chart), error contract covers polar latitudes and upstream failures, and response format options. No contradiction with annotations.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While somewhat long, every section adds value and there is no redundancy.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers all necessary aspects: input requirements with format constraints, detailed output contract with field names and types, error handling, compute class, and differentiation from similar tools. No gaps given the tool's complexity.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so description compensates well: explains date format, requirement for location or lat/lon/timezone, and response_format enumeration. Additional details like defaults for optional parameters are provided.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns Tamil-specific Panchanga including four inauspicious periods, Nalla Neram, and Tamil solar month. It distinguishes from sibling tools like asterwise_get_rahu_kaal and asterwise_get_panchanga in the 'DO NOT CONFUSE WITH' section.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'WORKFLOW' section indicates it is standalone with no prerequisites, and suggests using asterwise_get_panchanga for full Vedic panchanga. The 'DO NOT CONFUSE WITH' section explicitly lists when to use alternative tools, providing clear context.

asterwise_get_tarot_card (A)
Read-only, Idempotent

Returns full structured data for a single card identified by its slug ID. Useful for card detail pages, single-card lookups, and displaying a specific card after the user selects one by name.

SECTION: WHAT THIS TOOL COVERS Returns one card object from the 78-card Rider-Waite-Smith deck. The slug is the card's unique identifier in lowercase-hyphenated format. All fields are identical to what asterwise_get_tarot_cards returns per card.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT card_id — Slug identifier for the card. Must be exact. Major Arcana examples: 'the-fool', 'the-magician', 'the-high-priestess', 'the-empress', 'the-emperor', 'the-hierophant', 'the-lovers', 'the-chariot', 'strength', 'the-hermit', 'wheel-of-fortune', 'justice', 'the-hanged-man', 'death', 'temperance', 'the-devil', 'the-tower', 'the-star', 'the-moon', 'the-sun', 'judgement', 'the-world' Minor Arcana pattern: '{rank}-of-{suit}' Examples: 'ace-of-wands', 'two-of-cups', 'ten-of-swords', 'page-of-pentacles', 'knight-of-wands', 'queen-of-cups', 'king-of-swords'
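The minor-arcana slug pattern above can be produced mechanically. A hypothetical convenience helper, not part of the Asterwise API:

```python
# Hypothetical helper, not part of the Asterwise API: builds a minor-arcana
# slug in the documented '{rank}-of-{suit}' format (lowercase, hyphens,
# no spaces). Major Arcana slugs are fixed strings and must come from the
# list in the input contract.
def minor_arcana_slug(rank: str, suit: str) -> str:
    return f"{rank.strip().lower()}-of-{suit.strip().lower()}"
```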

SECTION: OUTPUT CONTRACT data — single card object: id (string — slug), name (string), arcana_type ('major'|'minor'), suit (string|null) number (int — 0=Fool, 1–21=Major; 1=Ace, 11=Page, 12=Knight, 13=Queen, 14=King) element (string), astrology_correspondence (string) keywords_upright[] (string array), keywords_reversed[] (string array) upright_meaning (string), reversed_meaning (string) yes_no ('yes'|'no'|'maybe'), description (string — visual imagery)

SECTION: RESPONSE FORMAT response_format=json — single card object as JSON. response_format=markdown — human-readable card detail.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: Unknown card_id returns 404 upstream → MCP INTERNAL_ERROR If the card is not found, check slug format: lowercase, hyphens, no spaces.

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_cards — returns all 78 cards in one call. asterwise_draw_tarot_cards — random draw, not a specific card.

Parameters (JSON Schema)
Name             Required  Description  Default
card_id          Yes
response_format  Yes

Output Schema
Name    Required  Description
result  Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, destructiveHint. The description adds context by specifying the output contract ('data — single card object with all metadata fields'), which goes beyond annotations. No contradictions.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with clear sections (INPUT CONTRACT, OUTPUT CONTRACT), front-loaded, and every sentence is informative. No unnecessary words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity, annotations, and existence of output schema, the description adequately covers input and output expectations. Could include error handling or rate limits, but not required for completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so description must carry burden. It provides examples for card_id (slug identifier), but does not explain response_format parameter despite enum values. Adds value for card_id but incomplete for response_format.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'returns' and the resource 'full data for a single card' and specifies identification method 'by its slug ID'. It distinguishes from siblings like asterwise_get_tarot_cards (plural) by focusing on a single card.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies using this tool when you have a specific slug and need full card data, but does not explicitly compare to alternatives like asterwise_get_tarot_cards or asterwise_get_tarot_card_of_the_day. No guidance on when not to use.

asterwise_get_tarot_card_of_the_day (A)
Read-only, Idempotent

Returns a deterministic daily tarot card seeded by SHA-256 hash of the date string. The same card is returned for all callers on the same date — this is intentional. The daily card is not a reading for an individual but a collective daily energy.

SECTION: WHAT THIS TOOL COVERS Deterministic daily oracle: one card with its upright or reversed orientation (also deterministic when allow_reversed=true). The SHA-256 seed ensures that no two consecutive days produce the same card except by mathematical coincidence. The active_meaning field pre-computes the correct interpretation for the orientation — callers do not need to branch on is_reversed.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_tarot_three_card_spread — for deeper daily reading context.

SECTION: INPUT CONTRACT date (optional string YYYY-MM-DD) — Date to get the card for. Defaults to today. Example: '2026-05-01' allow_reversed (optional bool) — Default: false. When true: reversed state is also deterministic (seeded by date+'_rev'). When false: card is always upright regardless of date.
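The deterministic seeding described above can be sketched as follows. Only the seeding scheme (SHA-256 of the date string, and date + '_rev' for orientation) is taken from the description; the deck ordering and the exact hash-to-index derivation are assumptions.

```python
# Hypothetical sketch of the deterministic daily selection. The seeding
# scheme comes from the tool description; how the hash maps to a card
# index is an assumption for illustration.
import hashlib

def daily_card_index(date_str: str, deck_size: int = 78) -> int:
    digest = hashlib.sha256(date_str.encode()).digest()
    return int.from_bytes(digest, "big") % deck_size

def daily_reversed(date_str: str) -> bool:
    # Orientation is seeded separately by date + '_rev'.
    digest = hashlib.sha256((date_str + "_rev").encode()).digest()
    return digest[0] % 2 == 1  # deterministic, roughly 50/50 across dates
```

Every caller passing the same date gets the same index and orientation, which matches the documented "collective daily energy" behaviour.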

SECTION: OUTPUT CONTRACT data.date (string — YYYY-MM-DD, the date this card represents) data.card — full card object (same shape as asterwise_get_tarot_card) data.is_reversed (bool) data.active_meaning (string — upright_meaning when not reversed, reversed_meaning when reversed) data.active_keywords[] (string array — upright or reversed keywords per orientation)

SECTION: RESPONSE FORMAT response_format=json — full card object with metadata. response_format=markdown — human-readable daily card report.

SECTION: COMPUTE CLASS FAST_LOOKUP — deterministic, no randomness.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — date is validated upstream. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_draw_tarot_cards — random draw, different every call. asterwise_get_tarot_three_card_spread — positional reading with question context.

Parameters (JSON Schema)
Name             Required  Description  Default
date             No
allow_reversed   No
response_format  Yes

Output Schema
Name    Required  Description
result  Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description goes beyond annotations by explaining deterministic behavior, seeding mechanism (SHA-256 of date), and that same card is returned for all requests on the same date. Annotations include readOnlyHint, idempotentHint, and destructiveHint, which are consistent. No contradictions.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise with clear structure: general explanation, INPUT CONTRACT, OUTPUT CONTRACT. Every sentence adds value, no fluff. Well-organized for quick understanding.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given schema and output schema, description covers date and allow_reversed well, and includes output structure. Missing explanation of response_format parameter, but overall complete for a simple tool. Small gap prevents perfect score.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description adds meaning to parameters: explains that date defaults to today and is optional, and allow_reversed is optional with default false. Schema coverage is 0%, so description compensates. However, response_format parameter is not explained in description, only in output contract indirectly.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it returns a deterministic daily tarot card, seeded by SHA-256 hash of date, ensuring same card for all users on same day. This distinguishes it from sibling tools like asterwise_get_tarot_card (retrieves a specific card) and asterwise_draw_tarot_cards (random draw).

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly states deterministic daily behavior and that all users see the same card on the same day, providing context for when to use (daily, deterministic) vs alternatives (random draws). It does not directly name alternatives but implies usage context clearly.

asterwise_get_tarot_cards (A)
Read-only, Idempotent

Returns the complete 78-card Rider-Waite-Smith deck with full metadata. Each card includes id (slug), name, arcana_type (major/minor), suit, number, element, astrology_correspondence, upright and reversed meanings, keywords for both orientations, yes/no polarity, and visual description.

SECTION: WHAT THIS TOOL COVERS The complete 78-card Rider-Waite-Smith deck as structured JSON. Every card includes both upright and reversed meanings as separate fields, making orientation-aware interpretation automatic — the caller does not need to branch on is_reversed. The visual description field (imagery of the card) is a differentiator not present in most tarot APIs. Use this endpoint to populate card databases, build card browsers, filter by element or astrology correspondence, or batch-load the deck for offline use.

SECTION: WORKFLOW BEFORE: None — standalone catalogue endpoint. AFTER: asterwise_draw_tarot_cards or asterwise_get_tarot_three_card_spread — use card data from this endpoint to build enriched display layers.

SECTION: INPUT CONTRACT response_format — Required: markdown | json (same as all Asterwise tools). No other parameters.

SECTION: OUTPUT CONTRACT data[] — 78 card objects, each: id (slug e.g. 'the-fool', 'ace-of-wands') name, arcana_type, suit (null for major arcana), number element, astrology_correspondence keywords_upright[], keywords_reversed[] upright_meaning, reversed_meaning yes_no ('yes'|'no'|'maybe'), description
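The filtering use-cases mentioned above (by element or astrology correspondence) operate directly on the documented data[] shape. A minimal sketch; the two sample cards and their element values are illustrative, not taken from the live catalogue:

```python
# Sketch of client-side filtering over the documented data[] card shape.
# Sample cards are illustrative placeholders, not live catalogue data.
cards = [
    {"id": "the-fool", "arcana_type": "major", "suit": None, "element": "Air"},
    {"id": "ace-of-wands", "arcana_type": "minor", "suit": "wands", "element": "Fire"},
]

fire_cards = [c for c in cards if c["element"] == "Fire"]
majors = [c for c in cards if c["arcana_type"] == "major"]
```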

SECTION: RESPONSE FORMAT response_format=json serialises the complete 78-card array as indented JSON. response_format=markdown renders a structured human-readable card catalogue. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP — data is static; no ephemeris or randomness involved.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — no input parameters beyond response_format. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_major_arcana — returns only the 22 Major Arcana subset. asterwise_get_tarot_suit — returns only the 14 cards of a single suit. asterwise_draw_tarot_cards — returns a random draw, not the catalogue.

Parameters (JSON Schema)
Name             Required  Description  Default
response_format  Yes

Output Schema
Name    Required  Description
result  Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds the output contract (data fields) but does not disclose additional behavioral traits like rate limits or caching. Since annotations cover safety profile, the description adds moderate value.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear section header 'SECTION: OUTPUT CONTRACT' and a bulleted list of fields. However, it could be more concise by removing the full field listing since an output schema is present.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With an output schema provided, the description details the exact structure of each returned card object. The tool returns a static dataset (the full deck), and the description covers all necessary information for an agent to use it correctly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter is 'response_format' with enum [markdown, json]. The schema is self-explanatory, and the description does not elaborate further. Schema coverage is 0%, but the parameter is simple. Baseline 3 is appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'the complete 78-card Rider-Waite-Smith deck with full metadata,' specifying each card's fields. This distinguishes it from siblings like 'get_tarot_card' (single card) and 'get_tarot_suit' (cards by suit).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for retrieving the full deck but does not explicitly state when to use it versus alternatives like 'get_tarot_card' or 'get_tarot_suit.' No when-not-to-use or conditions are given.

asterwise_get_tarot_celtic_cross (A)
Read-only

Ten-card Celtic Cross spread — the most comprehensive traditional tarot layout. Draws 10 unique cards using cryptographic randomness and assigns each to one of the 10 classical Celtic Cross positions.

SECTION: WHAT THIS TOOL COVERS The Celtic Cross examines a situation from 10 angles simultaneously:
Position 1 (present) — the core situation
Position 2 (challenge) — what crosses or complicates it
Position 3 (root) — unconscious foundation or distant past
Position 4 (past) — recent events that led here
Position 5 (possible_outcome) — what could happen if current energy continues
Position 6 (near_future) — what is coming in the next weeks
Position 7 (self) — how you see yourself / your attitude
Position 8 (external) — how others see you or environmental factors
Position 9 (hopes_and_fears) — what you hope for or fear
Position 10 (outcome) — the most likely final resolution
All position meanings are included in the response — callers do not need external tables.

SECTION: WORKFLOW BEFORE: None — standalone reading, or follow asterwise_get_tarot_three_card_spread when a more detailed examination of the same question is needed. AFTER: None.

SECTION: INPUT CONTRACT allow_reversed (bool, default false) — Each card independently has 50% reversal chance. question (optional string, max 500 chars) — The question or situation being examined. Example: 'Should I accept the job offer in London?'
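The draw mechanics described here can be sketched in Python; the deck stand-in, function name, and record shape below are illustrative, not the server's implementation:

```python
import secrets

# Sketch of the documented mechanics: 10 unique cards drawn with
# cryptographic randomness, each independently reversed with 50%
# probability when allow_reversed is true.
POSITIONS = ["present", "challenge", "root", "past", "possible_outcome",
             "near_future", "self", "external", "hopes_and_fears", "outcome"]

def draw_celtic_cross(deck, allow_reversed=False):
    rng = secrets.SystemRandom()      # CSPRNG-backed random source
    cards = rng.sample(deck, 10)      # 10 unique cards, no repeats
    return [
        {
            "position": pos,
            "card": card,
            "is_reversed": allow_reversed and rng.random() < 0.5,
        }
        for pos, card in zip(POSITIONS, cards)
    ]

deck = [f"card_{i}" for i in range(78)]   # stand-in for the 78-card deck
spread = draw_celtic_cross(deck, allow_reversed=True)
```

The key property mirrored here is uniqueness of the ten cards; reversal is decided per card, not per spread.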

SECTION: OUTPUT CONTRACT
data.spread_type (string — 'celtic_cross')
data.positions[] — 10 objects in order [present, challenge, root, past, possible_outcome, near_future, self, external, hopes_and_fears, outcome], each with:
  card — full card object
  is_reversed (bool)
  position (string — named position key)
  position_meaning (string — what this position represents)
  active_meaning (string — orientation-appropriate interpretation)
  active_keywords[] (string array)
data.question (string or null — echoed)

SECTION: RESPONSE FORMAT response_format=json — full 10-card spread object. response_format=markdown — formatted Celtic Cross reading.

SECTION: COMPUTE CLASS FAST_LOOKUP — cryptographic randomness, no ephemeris.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_three_card_spread — 3 positions only; use for simpler questions. asterwise_draw_tarot_cards — free draw with no positional meaning. asterwise_get_tarot_yes_no — binary answer, not positional analysis.

Parameters (JSON Schema)
Name | Required | Description | Default
question | No | |
allow_reversed | No | |
response_format | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | Yes |
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations by detailing the output contract (positions, meanings, keywords) and input contract (allow_reversed, question). It is consistent with readOnlyHint annotations and provides transparency about the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with input and output contract sections, front-loading the main purpose. While it could be slightly more concise, every sentence adds value and there is no superfluous information.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the spread and the presence of an output schema, the description is sufficiently complete, explaining the structure and content of the spread. It lacks mention of error handling or limitations, but these are acceptable for a tarot tool.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description explains two parameters (allow_reversed and question) with defaults and constraints, but fails to mention the required 'response_format' parameter, which appears in the schema but is not addressed. Schema coverage is 0%, so the description partially compensates but misses a critical parameter.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a 'Ten-card Celtic Cross spread' and differentiates from sibling tools by its comprehensive nature and explicit listing of all 10 positions.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies this is the most comprehensive spread but does not provide explicit guidance on when to use it versus other tarot tools like three-card spreads or single cards. No alternatives or when-not conditions are mentioned.

asterwise_get_tarot_major_arcana (A)
Read-only, Idempotent

Returns all 22 Major Arcana cards (The Fool through The World) as a structured array. Major Arcana represent universal archetypes and major life themes.

SECTION: WHAT THIS TOOL COVERS The 22 Major Arcana are the foundation of the tarot — they deal with karmic and spiritual lessons, major life events, and universal forces. They are numbered 0 (The Fool) through 21 (The World). Each has an astrological correspondence and elemental association.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_draw_tarot_cards — draw from this subset by filtering by arcana_type.

SECTION: INPUT CONTRACT response_format — Required: markdown | json.

SECTION: OUTPUT CONTRACT data[] — 22 card objects, each identical to asterwise_get_tarot_card output. Ordered 0–21 (The Fool through The World).

SECTION: RESPONSE FORMAT response_format=json — array of 22 card objects. response_format=markdown — formatted list.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_cards — full 78-card deck including Minor Arcana. asterwise_get_tarot_suit — 14 Minor Arcana cards by suit.

Parameters (JSON Schema)
Name | Required | Description | Default
response_format | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | Yes |
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare the tool as read-only, idempotent, and non-destructive. The description adds that it returns a fixed set of cards but does not disclose any additional behavioral traits like output size or format details, which are partially covered by output schema.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with action, no unnecessary words. Every word adds value.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (fixed list, read-only), the description explains what Major Arcana are and that all 22 cards are returned. Combined with annotations and output schema, it is mostly complete, though it could explicitly note the output is static.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter (response_format) is not described; the schema provides its enum values, but the description adds no explanation of how format choice affects output. With 0% schema coverage, the description should compensate but does not.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the tool returns all 22 Major Arcana cards with a verb and resource, and distinguishes from sibling tools like get_tarot_card or get_tarot_suit by explicitly stating 'all 22 Major Arcana'.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like asterwise_get_tarot_card or asterwise_draw_tarot_cards. The description does not mention prerequisites, exclusions, or context for selection.

asterwise_get_tarot_suit (A)
Read-only, Idempotent

Returns all 14 cards in a given Minor Arcana suit as a structured array.

SECTION: WHAT THIS TOOL COVERS Each of the four suits has 14 cards: Ace through 10 plus Page, Knight, Queen, King. Elemental associations: Wands=fire (action, career, creativity), Cups=water (emotions, relationships, intuition), Swords=air (intellect, conflict, truth), Pentacles=earth (material, money, body, practical matters).

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT suit — One of exactly: 'wands', 'cups', 'swords', 'pentacles'. Case-insensitive. Any other value is rejected locally with MCP INVALID_PARAMS.
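A client-side mirror of this validation rule might look like the following; `normalize_suit` and the ValueError are stand-ins for the MCP-layer INVALID_PARAMS rejection:

```python
# Valid suit values per the input contract; comparison is case-insensitive.
VALID_SUITS = {"wands", "cups", "swords", "pentacles"}

def normalize_suit(suit: str) -> str:
    """Lowercase the input and reject anything outside the four suits."""
    s = suit.strip().lower()
    if s not in VALID_SUITS:
        # Stand-in for the local MCP INVALID_PARAMS rejection.
        raise ValueError(f"invalid suit: {suit!r}")
    return s
```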

SECTION: OUTPUT CONTRACT data[] — 14 card objects for the requested suit, each identical to asterwise_get_tarot_card output. Ordered Ace through King.

SECTION: RESPONSE FORMAT response_format=json — array of 14 card objects. response_format=markdown — formatted list.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): — suit not in {wands, cups, swords, pentacles} → MCP INVALID_PARAMS immediately. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_major_arcana — 22 Major Arcana, not suit-based. asterwise_get_tarot_cards — full 78-card catalogue.

Parameters (JSON Schema)
Name | Required | Description | Default
suit | Yes | |
response_format | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | Yes |
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds the fact that it returns 14 cards per suit. This is useful but does not go beyond what annotations already cover. No contradictions.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: one sentence for the main function, followed by a clear input contract section. Every sentence is informative with no fluff.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (not shown but indicated), the description does not need to cover return values. It provides sufficient context: what the tool returns (all 14 cards), input constraints (suit enumeration), and additional mapping to elements/meanings. Complete for this tool's complexity.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions are absent (0% coverage), so the description compensates by defining the valid suit values and their meanings (e.g., Wands=fire/career). This adds significant semantic value beyond the raw schema. The response_format parameter is not elaborated, but its enum values are in the schema.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Returns all 14 cards in a given suit,' specifying verb, resource, and scope. It distinguishes from siblings like asterwise_get_tarot_card by focusing on an entire suit.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides the suit-element mapping which helps context but does not explicitly state when to use this tool versus alternatives like asterwise_get_tarot_card or asterwise_get_tarot_cards. Use cases are implied.

asterwise_get_tarot_three_card_spread (A)
Read-only

Past / Present / Future spread. Draws 3 unique cards using cryptographic randomness and assigns each to a named positional slot with an interpretive context.

SECTION: WHAT THIS TOOL COVERS The three-card spread is the most widely used tarot layout. Position 1 (past) shows what led to the current situation. Position 2 (present) shows current energy and the core issue. Position 3 (future) shows the likely outcome if current energy continues. Each card returns position, position_meaning, and active_meaning — the complete interpretive context is included; callers do not need external position tables.

SECTION: WORKFLOW BEFORE: None — standalone reading. AFTER: asterwise_get_tarot_celtic_cross — for deeper 10-position analysis of same question.

SECTION: INPUT CONTRACT allow_reversed (bool, default false) — Each card independently has 50% reversal chance. question (optional string, max 500 chars) — The question being asked. Setting a question is strongly recommended for coherent readings. Example: 'What should I focus on in my career this month?' The question is echoed in the response but does not affect card selection.

SECTION: OUTPUT CONTRACT
data.spread_type (string — 'three_card')
data.positions[] — 3 objects in order [past, present, future], each with:
  card — full card object
  is_reversed (bool)
  position (string — 'past'|'present'|'future')
  position_meaning (string — what this position represents in the spread)
  active_meaning (string — orientation-appropriate card interpretation)
  active_keywords[] (string array)
data.question (string or null — echoed)
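A consumer-side sketch of walking this structure, using a fabricated response fragment (card names and meanings below are invented for illustration; real responses carry full card objects):

```python
# Fabricated response shaped like the output contract; placeholder data only.
sample = {
    "spread_type": "three_card",
    "question": None,
    "positions": [
        {"position": "past", "card": {"name": "The Tower"}, "is_reversed": False,
         "active_meaning": "sudden upheaval"},
        {"position": "present", "card": {"name": "The Star"}, "is_reversed": False,
         "active_meaning": "hope and renewal"},
        {"position": "future", "card": {"name": "Six of Wands"}, "is_reversed": True,
         "active_meaning": "delayed recognition"},
    ],
}

def summarize(spread):
    """Render one line per positional slot, flagging reversals."""
    lines = []
    for p in spread["positions"]:
        mark = " (reversed)" if p["is_reversed"] else ""
        lines.append(f"{p['position']}: {p['card']['name']}{mark} - {p['active_meaning']}")
    return lines
```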

SECTION: RESPONSE FORMAT response_format=json — full spread object. response_format=markdown — formatted three-card reading.

SECTION: COMPUTE CLASS FAST_LOOKUP — cryptographic randomness, no ephemeris.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_draw_tarot_cards — free draw with no positional meaning. asterwise_get_tarot_celtic_cross — 10-card spread with comprehensive positional coverage. asterwise_get_tarot_yes_no — single-card binary answer, no positional structure.

Parameters (JSON Schema)
Name | Required | Description | Default
question | No | |
allow_reversed | No | |
response_format | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | Yes |
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true. The description adds value by detailing the output contract (spread_type, positions, meanings, keywords) and the fact that cards are unique. No contradictions with annotations.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections and front-loaded key information. It is concise but includes some redundant section headers (e.g., 'SECTION: INPUT CONTRACT') that slightly add verbosity. Still, every sentence serves a purpose.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, output schema provided in description), the description fully explains inputs, outputs, and positioning. It is complete for an agent to select and invoke correctly, especially with the detailed output contract.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 3 parameters with 0% description coverage. The description explicitly covers 'allow_reversed' (bool, default false) and 'question' (optional string, max 500 chars) but omits 'response_format' (enum markdown/json), which is a required parameter. Thus, it adds meaning for most but not all parameters.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it draws 3 unique cards assigned to Past, Present, Future positions, which precisely defines the tool's action and output. It differentiates from siblings like asterwise_draw_tarot_cards (generic) and asterwise_get_tarot_cards (full catalogue) by specifying the spread type.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for past/present/future readings but does not explicitly compare to other tarot tools (e.g., asterwise_get_tarot_yes_no, asterwise_get_tarot_celtic_cross) or state when not to use this tool. No exclusions or alternative suggestions are provided.

asterwise_get_tarot_yes_no (A)
Read-only

Draws one card and returns a yes, no, or maybe answer with confidence level. The answer is derived from the card's built-in yes_no polarity and its orientation.

SECTION: WHAT THIS TOOL COVERS Quick binary oracle using the classical tarot yes/no system. Each card in the Rider-Waite-Smith deck has a pre-assigned polarity (yes/no/maybe). Reversal introduces uncertainty — a yes-polarity card reversed becomes maybe rather than no. This allows nuanced answers: strong yes, leaning toward yes, leaning toward no, strong no, or genuinely unclear.

Answer logic (exact):
yes-polarity card + upright → answer='yes', confidence='strong'
yes-polarity card + reversed → answer='maybe', confidence='leaning'
no-polarity card + upright → answer='no', confidence='strong'
no-polarity card + reversed → answer='maybe', confidence='leaning'
maybe-polarity card (any orientation) → answer='maybe', confidence='unclear'
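The table above is small enough to express directly; a sketch of the same mapping (the function name is ours, the logic is taken from the contract):

```python
def yes_no_answer(polarity: str, is_reversed: bool) -> tuple:
    """Map a card's built-in polarity ('yes'/'no'/'maybe') and orientation
    to (answer, confidence) per the documented answer logic."""
    if polarity == "maybe":
        return ("maybe", "unclear")    # any orientation
    if is_reversed:
        return ("maybe", "leaning")    # reversal introduces uncertainty
    return (polarity, "strong")        # upright yes- or no-polarity card
```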

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_tarot_three_card_spread — for more context when the yes/no answer is 'maybe' or the situation needs elaboration.

SECTION: INPUT CONTRACT allow_reversed (bool, default true) — Recommended to keep true for nuanced answers. Set false only if you want strictly yes/no with no maybe results from reversal. question (optional string, max 500 chars) — The yes/no question being asked. Example: 'Should I accept this job offer?' Example: 'Will the project launch on time?'

SECTION: OUTPUT CONTRACT
data.card — full card object
data.is_reversed (bool)
data.answer (string — 'yes'|'no'|'maybe')
data.confidence (string — 'strong' when card directly says yes/no; 'leaning' when reversed card; 'unclear' when maybe-polarity card)
data.active_meaning (string — orientation-appropriate interpretation)
data.question (string or null — echoed)

SECTION: RESPONSE FORMAT response_format=json — full yes/no result object. response_format=markdown — formatted oracle response.

SECTION: COMPUTE CLASS FAST_LOOKUP — cryptographic randomness, no ephemeris.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_tarot_three_card_spread — positional reading, not binary answer. asterwise_draw_tarot_cards — free draw without answer logic.

Parameters (JSON Schema)
Name | Required | Description | Default
question | No | |
allow_reversed | No | |
response_format | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | Yes |
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description fully discloses behavioral traits: it creates no side effects (consistent with readOnlyHint), provides detailed answer logic, and specifies input and output contracts. It adds significant context beyond annotations, such as how reversed cards affect answers.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (ANSWER LOGIC, INPUT CONTRACT, OUTPUT CONTRACT), making it easy to parse. Every sentence adds value, and there is no redundant text.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the presence of an output schema (implied by the description's output contract), the description covers the essential aspects. It could be slightly improved by indicating the expected format of the response (e.g., JSON/markdown) and linking to the output schema.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It explains allow_reversed and question parameters with additional context (e.g., keeping allow_reversed true is recommended for nuanced answers). However, the required response_format parameter is not described, though its enum values are self-explanatory.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool draws one card for a yes/no answer, with a clear verb and resource. It distinguishes itself from sibling tools like asterwise_draw_tarot_cards (multi-card) and asterwise_get_tarot_celtic_cross (spreads), making its purpose unambiguous.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides answer logic and input recommendations (e.g., allow_reversed default true recommended for yes/no). While it doesn't explicitly list alternative tools or when not to use, the specific purpose makes usage clear. Could be improved by mentioning that for nuanced readings, other spreads might be more appropriate.

asterwise_get_thirumana_porutham (A)
Read-only, Idempotent

Evaluates twelve Tamil Thirumana poruthams for two charts, tracks veto severity for Rajju, and returns an expanded breakdown map with tradition metadata.

SECTION: WHAT THIS TOOL COVERS Extends the ten-porutham screen with Nadi and Varna plus richer Rajju severity coding (5=Siro most severe, down to 1=Pada least severe) and labels. Same veto booleans as shorter Tamil tools. Not Ashtakoota (asterwise_get_compatibility) nor Dashakoot (asterwise_get_dashakoot).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_porutham — quick ten-koota pass before twelve-koota depth. AFTER: None.

SECTION: INPUT CONTRACT Two BirthData objects per global contract.

SECTION: OUTPUT CONTRACT
data.total_passed (int — out of 12)
data.total_poruthams (int — 12)
data.compatibility_level (string)
data.rajju_veto (bool)
data.vedha_veto (bool)
data.hard_veto (bool)
data.rajju_severity (int — 5=Siro most severe, 4=Kantha, 3=Udara, 2=Kati, 1=Pada least severe)
data.rajju_severity_label (string — body part name)
data.tradition (string — 'Tamil Thirumana Porutham')
data.breakdown{} — twelve keys with full porutham names (e.g. 'Gana Porutham'), each with: passed (bool) plus type-specific fields following the same patterns as asterwise_get_porutham (counts, scores, rajju pairs, vedha flags, etc.)
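The severity coding lends itself to a small consumer sketch; the label map comes from the contract, while the message strings and the fallback label are illustrative choices of ours:

```python
# Rajju severity coding per the output contract: 5 is most severe.
RAJJU_SEVERITY_LABELS = {5: "Siro", 4: "Kantha", 3: "Udara", 2: "Kati", 1: "Pada"}

def interpret_vetoes(data: dict) -> str:
    """Summarize the hard-veto flag and Rajju body-zone severity
    from a response's documented fields."""
    label = RAJJU_SEVERITY_LABELS.get(data["rajju_severity"], "unknown")
    if data["hard_veto"]:
        return f"hard veto (Rajju zone: {label})"
    return f"no hard veto ({data['total_passed']}/{data['total_poruthams']} passed)"
```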

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — rajju_severity encodes body-zone risk — pair with rajju_veto boolean.

SECTION: DO NOT CONFUSE WITH asterwise_get_porutham — ten poruthams only, no Nadi/Varna keys. asterwise_get_compatibility — North Indian 36-point schema, not Tamil porutham map.

Parameters (JSON Schema)
Name | Required | Description | Default
person1 | Yes | Birth data for a single person. |
person2 | Yes | Birth data for a single person. |
response_format | Yes | |

Output Schema (JSON Schema)
Name | Required | Description
result | Yes |
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable context beyond annotations by specifying the compute class (MEDIUM_COMPUTE), error handling details (INVALID_PARAMS, INTERNAL_ERROR), and edge case explanations (rajju_severity encoding). While annotations cover safety (readOnlyHint=true, destructiveHint=false), the description provides operational context that helps the agent understand performance expectations and failure modes.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear section headings (WHAT THIS TOOL COVERS, WORKFLOW, etc.) and efficiently organized information. While comprehensive, every section serves a purpose - distinguishing from siblings, providing workflow guidance, documenting contracts, and preventing confusion. Some redundancy exists between OUTPUT CONTRACT and RESPONSE FORMAT sections.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological compatibility evaluation with 12 poruthams), the description provides excellent context. It explains the tool's scope, workflow positioning, input/output contracts, response formats, compute class, error handling, and sibling distinctions. With annotations covering safety and an output schema existing, the description focuses appropriately on domain-specific context rather than repeating structured data.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 67% schema description coverage, the description adds minimal parameter semantics beyond what's in the schema. It mentions 'Two BirthData objects per global contract' and references the response_format parameter, but doesn't provide additional context about person1/person2 structure or response_format implications beyond what the schema already documents. The baseline 3 is appropriate given the moderate schema coverage.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool evaluates twelve Tamil Thirumana poruthams for two charts, tracks veto severity for Rajju, and returns an expanded breakdown map with tradition metadata. It specifically distinguishes this from sibling tools like asterwise_get_porutham (ten poruthams only) and asterwise_get_compatibility (North Indian schema).

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'BEFORE: RECOMMENDED — asterwise_get_porutham — quick ten-koota pass before twelve-koota depth' and includes a 'DO NOT CONFUSE WITH' section naming specific sibling tools and their differences.

asterwise_get_transits (A)
Read-only, Idempotent

Lists sign ingresses and retrograde/direct stations for all planets between two dates against the natal context and returns chronological astronomical events.

SECTION: WHAT THIS TOOL COVERS Aggregates ingress rows (with sankranti flag for Sun) and station rows with Julian day, local ISO timestamps, and ecliptic longitude. The upstream API enforces ordering and a 24-month window; violations are not pre-checked in this MCP layer. It does not return today's full Gochar row (asterwise_get_gochar) or dasha-transit scores (asterwise_get_dasha_transits).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — clarifies the natal reference for the same birth data. AFTER: asterwise_get_gochar — optional current snapshot after scanning the range.

SECTION: INPUT CONTRACT from_date and to_date are strings in the format expected by the upstream API (typically YYYY-MM-DD). Range cap (24 months), date order, and validity are enforced upstream, not locally.
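Since the MCP layer does not pre-check these constraints, a caller can catch obvious mistakes locally instead of paying for an upstream INTERNAL_ERROR. A minimal sketch, assuming ISO YYYY-MM-DD strings and approximating the 24-month cap as 731 days (the exact upstream cutoff is not documented here):

```python
from datetime import date

def precheck_range(from_date: str, to_date: str) -> None:
    """Reject obviously bad ranges before calling the upstream API."""
    start = date.fromisoformat(from_date)
    end = date.fromisoformat(to_date)
    if end <= start:
        raise ValueError("to_date must fall after from_date")
    # Approximate the 24-month window; the exact upstream rule may differ.
    if (end - start).days > 731:
        raise ValueError("range exceeds the 24-month cap")
```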

SECTION: OUTPUT CONTRACT
data.ingresses[] — each:
  planet (string)
  from_sign (int — 0–11)
  to_sign (int — 0–11)
  jd (float)
  date_iso (string — ISO datetime, local time)
  is_sankranti (bool — true for Sun ingresses only)
  retrograde_ingress (bool)
data.stations[] — each:
  planet (string)
  station_type (string — 'retrograde' or 'direct')
  jd (float)
  date_iso (string — ISO datetime)
  longitude (float)
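Because ingresses and stations arrive in two separate arrays, downstream code often wants a single chronological event stream. A sketch that merges them on the shared jd field (field names come from the output contract above; the sample values below are invented for illustration):

```python
def merge_events(ingresses, stations):
    """Merge the two arrays into one stream sorted by Julian day."""
    events = [dict(e, kind="ingress") for e in ingresses]
    events += [dict(e, kind="station") for e in stations]
    return sorted(events, key=lambda e: e["jd"])
```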

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — None — range/order violations surface as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Exceeding the 24-month cap or bad date order fails upstream and is not turned into MCP INVALID_PARAMS here.

SECTION: DO NOT CONFUSE WITH asterwise_get_gochar — single-day transit snapshot with houses from Moon/Lagna, not ingress/station lists. asterwise_get_dasha_transits — dasha lord vs transit scoring for today only.

Parameters (JSON Schema)
Name            | Required | Description                     | Default
birth           | Yes      | Birth data for a single person. |
to_date         | Yes      |                                 |
from_date       | Yes      |                                 |
response_format | Yes      |                                 |

Output Schema

Name   | Required | Description
result | Yes      |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it discloses the 24-month window constraint ('upstream API enforces ordering and a 24-month window'), clarifies that violations are not pre-checked locally, and explains error handling ('range/order violations surface as MCP INTERNAL_ERROR'). It also notes the 'MEDIUM_COMPUTE' class. However, it doesn't detail rate limits or authentication needs, which could be relevant for a tool with openWorldHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., 'WHAT THIS TOOL COVERS', 'WORKFLOW', 'INPUT CONTRACT'), making it easy to scan. However, it is quite lengthy with multiple detailed sections, which may be excessive for a tool with only 4 parameters. Some redundancy exists, such as repeating 'upstream API' constraints in multiple sections. While informative, it could be more concise by consolidating related points.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological calculations with a nested 'birth' object), the description is highly complete. It covers purpose, usage guidelines, behavioral traits (e.g., 24-month cap, error handling), parameter semantics, and output details (with a full 'OUTPUT CONTRACT' section). Annotations provide safety hints, and an output schema exists (implied by 'Has output schema: true'), so the description doesn't need to explain return values exhaustively. It addresses all critical aspects for an AI agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 25% (only the 'birth' object has descriptions), so the description must compensate. It adds significant parameter semantics: it explains that 'from_date and to_date are strings in the format expected by the upstream API (typically YYYY-MM-DD)', clarifies the 'range cap (24 months), date order, and validity are enforced upstream, not locally', and details the 'response_format' options ('json serialises... use this for programmatic parsing... response_format=markdown renders... as a human-readable report'). This goes well beyond the sparse schema, though it doesn't fully document all parameters like the nested 'birth' object details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Lists sign ingresses and retrograde/direct stations for all planets between two dates against the natal context and returns chronological astronomical events.' This specifies the exact verb ('Lists'), resource ('sign ingresses and retrograde/direct stations'), scope ('all planets between two dates'), and context ('against the natal context'), distinguishing it from siblings like asterwise_get_gochar and asterwise_get_dasha_transits.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states 'BEFORE: RECOMMENDED — asterwise_get_natal_chart — clarifies the natal reference' and 'AFTER: asterwise_get_gochar — optional current snapshot after scanning the range.' It also includes a 'DO NOT CONFUSE WITH' section naming specific sibling tools and explaining their differences, such as 'asterwise_get_gochar — single-day transit snapshot... not ingress/station lists' and 'asterwise_get_dasha_transits — dasha lord vs transit scoring for today only.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_varshaphal (A)
Read-only, Idempotent

Computes the annual Tajika-style solar return for a four-digit civil year and returns Muntha, Pancha Adhikari metrics, Tajika aspects, and varshaphal positions.

SECTION: WHAT THIS TOOL COVERS Varshaphal engine: solar return instant, year lord, muntha block, varsha pati election, five pancha adhikaris with bala components, tajika aspect arrays, and pairwise Tajika geometry including ithsala/musaripha flags. year must mean calendar year (e.g. 2026), not biological age — not enforced locally; wrong integers chart the wrong annual return. Not lifetime Vimshottari (asterwise_get_dasha) nor generic transit ingress lists (asterwise_get_transits).

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — baseline radix before annual overlay. AFTER: None.

SECTION: INPUT CONTRACT year is a plain int sent as target_year upstream; callers must supply the true Gregorian return year, not age.
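Because feeding an age instead of a civil year silently charts the wrong return, a caller-side guard is cheap insurance. A minimal sketch; the 1900–2100 bounds are an assumption for illustration, not an upstream limit:

```python
def check_target_year(year: int) -> int:
    """Reject values that look like an age rather than a civil year."""
    if not 1900 <= year <= 2100:
        raise ValueError(f"{year} does not look like a four-digit civil year")
    return year
```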

SECTION: OUTPUT CONTRACT
data.target_year (int)
data.ayanamsa (string)
data.solar_return_utc (string — ISO)
data.solar_return_jd (float)
data.natal_sun_longitude (float)
data.natal_lagna (string)
data.natal_lagna_index (int)
data.year_lord (string — planet name)
data.muntha: rashi_index (int), rashi (string), age_years (float), muntha_lord (string)
data.planets{} — keys Sun..Ketu: longitude (float), rashi_index (int), rashi (string), degree (float), is_retrograde (bool), speed (float)
data.varshaphal_ascendant_longitude (float)
data.varshaphal_ascendant_sign (string — Sanskrit sign name derived from the ascendant longitude)
data.varshaphal_ascendant_sign_index (int — sign index 0-11, where 0=Mesha and 11=Meena)
data.varsha_pati: planet (string), role (string), pancha_vargeeya_bala (float), kshetra_bala (float), uchcha_bala (float), election_used_strongest_without_aspect (bool)
data.pancha_adhikaris[] — five objects: role (string), planet (string), pancha_vargeeya_bala, kshetra_bala, uchcha_bala, hadda_bala, dreshkana_bala, navamsa_bala (floats), pending_components_note (string), aspects_ascendant (bool), tajika_aspect_angles_matched[] (array), separation_from_asc_deg (float)
data.pancha_vargeeya_bala{} — keyed by role (float values)
data.tajika_aspects[] — per Pancha Adhikari (structure per upstream)
data.tajika_planet_pairs[] — each: planet_a, planet_b (strings), house_a, house_b (int), diff_ab, diff_ba (float), aspect_ab, aspect_ba (strings or floats per upstream), is_ithsala (bool), is_musaripha (bool), faster_planet (string), orb_degrees (float)
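Given the data.tajika_planet_pairs array in the output contract, pulling out the applying (ithsala) combinations is a one-liner. A sketch over the documented field names; the pair values in the test are invented for illustration:

```python
def ithsala_pairs(pairs):
    """Return (planet_a, planet_b) tuples flagged as applying (ithsala)."""
    return [(p["planet_a"], p["planet_b"]) for p in pairs if p["is_ithsala"]]
```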

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — year not range-checked here.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Feeding age instead of civil year silently mis-orients the return — caller responsibility.

SECTION: DO NOT CONFUSE WITH asterwise_get_dasha — multi-decade Vimshottari, not one solar return. asterwise_get_transits — ingress/station timeline, not annual Tajika chart.

Parameters (JSON Schema)
Name            | Required | Description                     | Default
year            | Yes      |                                 |
birth           | Yes      | Birth data for a single person. |
response_format | Yes      |                                 |

Output Schema

Name   | Required | Description
result | Yes      |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the annotations. While annotations indicate read-only, non-destructive, idempotent operations, the description specifies compute intensity ('MEDIUM_COMPUTE'), error handling (INVALID_PARAMS, INTERNAL_ERROR), and critical constraints like using civil year not age and the silent mis-orientation risk. It doesn't contradict annotations, but could mention rate limits or caching behavior for a more complete picture.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), each containing only essential information. Every sentence serves a distinct purpose—defining scope, guiding usage, detailing inputs/outputs, or differentiating from siblings—with no redundant or verbose content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters with nested objects, rich output schema), the description is highly complete. It covers purpose, usage guidelines, input semantics, detailed output structure, compute class, error handling, and sibling differentiation. With an output schema provided, the description appropriately focuses on explaining the data's meaning rather than restating structure, making it fully adequate for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 33% schema description coverage, the description compensates well by adding semantic context. It explains that 'year' must be a Gregorian civil year (not age), clarifies the 'birth' object's purpose for baseline radix, and distinguishes 'response_format' options (json vs markdown). However, it doesn't detail all nested properties of 'birth' (e.g., ayanamsa, timezone), leaving some schema gaps partially addressed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('computes the annual Tajika-style solar return') and resource ('for a four-digit civil year'), distinguishing it from siblings like asterwise_get_dasha (multi-decade Vimshottari) and asterwise_get_transits (ingress/station timeline). It explicitly lists the outputs (Muntha, Pancha Adhikari metrics, Tajika aspects, and varshaphal positions), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives, with a dedicated 'DO NOT CONFUSE WITH' section naming asterwise_get_dasha and asterwise_get_transits. It also includes a 'WORKFLOW' section recommending asterwise_get_natal_chart as a prerequisite and stating there's no follow-up tool, offering clear contextual boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_varshaphal_harsha_bala (A)
Read-only, Idempotent

Computes Harsha Bala (positional happiness score) for all 7 classical planets in a Varshaphal solar return chart. Maximum 20 per planet (4 components × 5 points each). Source: Tajika Neelakanthi / Varsha Tantra.

SECTION: WHAT THIS TOOL COVERS Harsha Bala is entirely distinct from Pancha Vargeeya Bala. Pancha Vargeeya Bala measures mathematical strength (sign dignity, exaltation arc, divisional chart fractions). Harsha Bala measures whether a planet is positionally comfortable — does it occupy the right house, sign, hemisphere, and return type for its nature? A planet with high Pancha Vargeeya Bala (60/80) but zero Harsha Bala has the capacity to deliver results, but does so through stress, delay, and frustration. A Year Lord (Varsha Pati) with 0 Harsha Bala signals a difficult year even when it wins the Pancha Adhikari election.

4 Components (5 points each, maximum 20):

  1. STHANA — Happy house placement in the Varsha chart (measured from Varsha Ascendant, not natal Ascendant): Sun=9th, Moon=3rd, Mars=6th, Mercury=1st, Jupiter=11th, Venus=5th, Saturn=12th.

  2. SWAKSHETRA/UCCHA — Planet in its own sign (Swakshetra) or sign of exaltation (Uccha).

  3. PUM-STRI (Gender/Hemisphere) — Male planets (Sun, Mars, Jupiter) prefer houses 7–12 (visible hemisphere). Female planets (Moon, Venus, Saturn) prefer houses 1–6 (invisible hemisphere). Mercury earns this component unconditionally.

  4. DINA-RATRI (Day/Night) — Male planets (Sun, Mars, Jupiter) prefer a daytime solar return. Female planets (Moon, Venus, Saturn) prefer a nighttime return. Mercury earns this component unconditionally.
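The Sthana, Pum-Stri, and Dina-Ratri rules above are mechanical enough to sketch directly. A hedged reimplementation of the scoring rules as stated, not the server's actual code; the Swakshetra/Uccha component is omitted because it needs sign-dignity tables not listed here:

```python
HAPPY_HOUSE = {"Sun": 9, "Moon": 3, "Mars": 6, "Mercury": 1,
               "Jupiter": 11, "Venus": 5, "Saturn": 12}
MALE = {"Sun", "Mars", "Jupiter"}      # prefer houses 7-12 and day returns
FEMALE = {"Moon", "Venus", "Saturn"}   # prefer houses 1-6 and night returns

def partial_harsha_bala(planet: str, varsha_house: int, is_day_return: bool) -> int:
    """Score components 1, 3, and 4 (5 points each) per the stated rules."""
    points = 0
    if varsha_house == HAPPY_HOUSE[planet]:            # 1. Sthana
        points += 5
    if (planet == "Mercury"                            # 3. Pum-Stri
            or (planet in MALE and 7 <= varsha_house <= 12)
            or (planet in FEMALE and 1 <= varsha_house <= 6)):
        points += 5
    if (planet == "Mercury"                            # 4. Dina-Ratri
            or (planet in MALE and is_day_return)
            or (planet in FEMALE and not is_day_return)):
        points += 5
    return points
```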

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_varshaphal — identify the Year Lord (Varsha Pati) before interpreting Harsha Bala. The Year Lord's Harsha Bala is the most actionable number in this response. AFTER: asterwise_get_varshaphal_saham — use Harsha Bala to assess whether each Saham lord can deliver its theme with ease or difficulty.

SECTION: INPUT CONTRACT Same as asterwise_get_varshaphal — BirthData plus target_year. target_year (required int): The Gregorian civil year of the solar return. Not age. time (required): Solar return ascendant and house positions are time-sensitive.

SECTION: OUTPUT CONTRACT
data.target_year (int)
data.ayanamsa (string)
data.solar_return_utc (string — ISO UTC of solar return moment)
data.is_day_return (bool — true if solar return falls between sunrise and sunset; directly determines Dina-Ratri component results for all planets)
data.varshaphal_ascendant_longitude (float — Varsha Ascendant in degrees; all house placements are measured from this)
data.planets[] — 7 objects (Sun, Moon, Mars, Mercury, Jupiter, Venus, Saturn in that order):
  planet (string — planet name)
  harsha_bala (int — total score 0–20)
  max_harsha_bala (int — always 20)
  varsha_house (int — planet's house in the Varsha chart, 1–12, from Varsha Ascendant)
  rashi_index (int — 0–11, 0=Mesha)
  rashi (string — Sanskrit sign name)
  components{} — four keys:
    sthana{}: earned (bool), points (int — 0 or 5), happy_house (int), actual_house (int), description (string)
    swakshetra_uccha{}: earned (bool), points (int — 0 or 5), in_own_sign (bool), in_exaltation (bool), current_rashi_index (int)
    pum_stri{}: earned (bool), points (int — 0 or 5), gender (string — 'male'|'female'|'neutral'), happy_hemisphere (string), actual_house (int)
    dina_ratri{}: earned (bool), points (int — 0 or 5), happy_period (string), actual_period (string — 'day' or 'night')
  interpretation (string — qualitative label: 'Excellent', 'Strong', 'Moderate', 'Weak', or 'Very weak')
data.classical_note (string — methodology and interpretation guidance)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS SLOW_COMPUTE — internally runs the full solar return computation before deriving Harsha Bala.

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Rahu and Ketu are excluded — Harsha Bala is defined only for the 7 classical grahas. — Polar latitudes where sunrise/sunset cannot be computed affect is_day_return → MCP INTERNAL_ERROR. — target_year is a Gregorian year, not age.

SECTION: DO NOT CONFUSE WITH asterwise_get_varshaphal — returns Pancha Vargeeya Bala (mathematical strength out of 80) for the Pancha Adhikaris; Harsha Bala (positional happiness out of 20) is a completely different measurement returned by this tool. asterwise_get_varshaphal_saham — derives sensitive zodiac points; this tool scores planet positional comfort. asterwise_get_chart_strength — Shadbala for the natal chart, not Tajika Harsha Bala for the solar return.

Parameters (JSON Schema)
Name            | Required | Description                     | Default
year            | Yes      |                                 |
birth           | Yes      | Birth data for a single person. |
response_format | Yes      |                                 |

Output Schema

Name   | Required | Description
result | Yes      |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint true, and destructiveHint false. The description adds extensive behavioral context: it notes SLOW_COMPUTE classification, explains the error contract (INVALID_PARAMS, INTERNAL_ERROR), mentions edge cases like polar latitudes affecting is_day_return, and describes the exclusion of Rahu/Ketu. No contradictions with annotations; the description enhances transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT/OUTPUT CONTRACT, etc.) and uses bold headings. However, it is lengthy (over 500 words) and somewhat repetitive, e.g., differentiation from other tools appears multiple times. The front-loading is effective, but brevity could be improved. Still, the structure aids readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 components, interpretation labels, workflow, error handling) and the presence of an output schema, the description is exceptionally complete. It explains each component of Harsha Bala, provides qualitative labels, includes a classical note, and specifies compute class and error contracts. All necessary context for correct usage is present.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 33% (only some of the birth fields carry descriptions). The description's 'INPUT CONTRACT' section adds context beyond the schema: it states that 'target_year (required int): The Gregorian civil year of the solar return. Not age' and notes that 'time' is necessary because positions are time-sensitive. While it doesn't detail every subfield of 'birth', the description and the schema descriptions together cover enough. A score above the baseline is warranted because the input contract clarifies the intended use.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states that the tool 'computes Harsha Bala (positional happiness score) for all 7 classical planets in a Varshaphal solar return chart,' clearly identifying the verb (computes) and resource (Varshaphal solar return chart). It distinguishes itself from siblings like asterwise_get_varshaphal (Pancha Vargeeya Bala) and asterwise_get_varshaphal_saham (sensitive points) in the 'DO NOT CONFUSE WITH' section, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a 'WORKFLOW' section with explicit before/after recommendations: BEFORE — asterwise_get_varshaphal to identify Year Lord, AFTER — asterwise_get_varshaphal_saham. It also has a 'DO NOT CONFUSE WITH' section that clarifies when not to use this tool. This provides comprehensive guidance on when and how to use the tool relative to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_varshaphal_saham (A)
Read-only, Idempotent

Computes all 10 Tajika Saham (sensitive points) for a Varshaphal solar return chart. Sahams are the Tajika equivalent of Arabic Parts — mathematically derived zodiac points that focus the annual horoscope on specific life themes.

SECTION: WHAT THIS TOOL COVERS Saham formula: (A - B + Ascendant) % 360, with a conditional +30° correction applied when the Ascendant does not fall in the forward zodiacal arc from B to A. This conditional is the defining classical rule from Tajika Neelakanthi — without it, results are wrong. Day and night formulas differ: the Minuend and Subtrahend swap based on whether the solar return falls during daytime or nighttime at the birth location. Punya Saham (Fortune) is always computed first because Yashas (Fame) and Mahatmya (Status) use it as an operand. The Saham lord (planet ruling the sign where the Saham falls) is the Sahamesha — its strength, house placement, and Tajika aspects to the Varsha Ascendant determine whether the theme manifests positively or is obstructed.

10 Sahams returned:
  punya — Fortune and Luck (Moon-Sun day / Sun-Moon night)
  vidya — Education and Learning (Sun-Jupiter day)
  yashas — Fame and Reputation (Jupiter-Punya day) — uses Punya as operand
  mitra — Friends and Allies (Jupiter-Venus day)
  mahatmya — Greatness and Status (Punya-Mars day) — uses Punya as operand
  asha — Desires and Fulfillment (Saturn-Venus day)
  karmakarya — Action and Profession (Mars-Mercury day)
  vyapara — Business and Trade (Mars-Saturn day)
  vivaha — Marriage and Relationships (Venus-Saturn day)
  santapa — Sorrow and Stress (Saturn-Moon day)
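The conditional correction described in the coverage section can be sketched as follows. This is one plausible reading of the classical rule, not the server's verified implementation:

```python
def saham(a: float, b: float, asc: float) -> float:
    """(A - B + Asc) % 360, plus 30 degrees when the Ascendant falls
    outside the forward zodiacal arc from B to A."""
    point = (a - b + asc) % 360.0
    arc_b_to_a = (a - b) % 360.0
    asc_from_b = (asc - b) % 360.0
    if asc_from_b > arc_b_to_a:  # Ascendant not between B and A going forward
        point = (point + 30.0) % 360.0
    return point
```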

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_varshaphal — understand the base solar return chart (year lord, Muntha, Varsha Ascendant) before interpreting Saham lords. The Saham is meaningless without knowing which house it occupies from the Varsha Ascendant. AFTER: asterwise_get_varshaphal_harsha_bala — assess the Saham lord's positional happiness score to determine ease or difficulty of manifestation.

SECTION: INPUT CONTRACT Same as asterwise_get_varshaphal — BirthData plus target_year. target_year (required int): The Gregorian calendar year of the solar return. Not age — the civil year (e.g. 2026). Feeding age instead of year silently produces the wrong return. time (required): Solar return Ascendant is time-sensitive. Accurate birth time is required for reliable Saham interpretation.

SECTION: OUTPUT CONTRACT
data.target_year (int — calendar year of the solar return)
data.ayanamsa (string — ayanamsa system used, e.g. 'lahiri')
data.solar_return_utc (string — ISO UTC timestamp of solar return moment)
data.is_day_return (bool — true if solar return occurs between sunrise and sunset; determines which Saham formula variant is used)
data.varshaphal_ascendant_longitude (float — Varsha Ascendant in degrees; all 10 Saham longitudes are computed relative to this)
data.total (int — always 10)
data.sahams[] — 10 objects in order [punya, vidya, yashas, mitra, mahatmya, asha, karmakarya, vyapara, vivaha, santapa]:
  slug (string — lowercase key, e.g. 'punya')
  name (string — full display name, e.g. 'Punya Saham')
  theme (string — life area, e.g. 'Fortune and Luck')
  longitude (float — Saham longitude in degrees 0–360)
  rashi_index (int — 0–11, 0=Mesha)
  rashi (string — Sanskrit sign name, e.g. 'Mesha')
  degree_in_sign (float — degrees within the sign)
  saham_lord (string — classical lord of the sign where Saham falls)
  formula_used (string — describes whether day or night formula was applied and which planets were operands, e.g. 'day: Moon - Sun + Asc')
data.classical_note (string — methodology note)
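Since the workflow note stresses that a Saham is meaningless without its house from the Varsha Ascendant, that house can be derived from the documented fields. A sketch, assuming whole-sign (rashi) houses; the server may compute houses differently:

```python
def saham_house(saham_rashi_index: int, varsha_asc_longitude: float) -> int:
    """Whole-sign house (1-12) of a Saham counted from the Varsha Ascendant."""
    asc_rashi = int(varsha_asc_longitude // 30) % 12
    return (saham_rashi_index - asc_rashi) % 12 + 1
```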

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS SLOW_COMPUTE — internally runs the full solar return computation (binary-search Sun longitude + house computation) before deriving Sahams.

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — BirthData Pydantic only.

INVALID_PARAMS (upstream): — None — upstream rejection surfaces as MCP INTERNAL_ERROR.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Day/night determination uses sunrise/sunset at the birth coordinates for the solar return date. Polar latitudes where sunrise cannot be computed → MCP INTERNAL_ERROR. — target_year is a Gregorian year, not age — always verify the caller passes the civil year.

SECTION: DO NOT CONFUSE WITH asterwise_get_varshaphal — returns the full base solar return chart including Muntha, year lord, and planet positions; Saham points are not included there. asterwise_get_varshaphal_harsha_bala — scores planet positional happiness; this tool computes zodiac points, not planet positions. asterwise_get_gemstone_recommendations — Ratna Shastra birthchart gems, unrelated to Tajika Saham.

Parameters (JSON Schema)
Name            | Required | Description                     | Default
year            | Yes      |                                 |
birth           | Yes      | Birth data for a single person. |
response_format | Yes      |                                 |

Output Schema

Name   | Required | Description
result | Yes      |
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, and idempotentHint true, with destructiveHint false. The description adds substantial behavioral context: conditional +30° correction, day/night formula swap, Punya computed first, Saham lord concept, error cases (polar latitudes, target_year not age), compute class 'SLOW_COMPUTE', and error contract. All consistent with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-structured with clear sections (WHAT, WORKFLOW, INPUT, OUTPUT, ERROR, etc.) and is front-loaded with purpose. Every section adds value given the complexity of Tajika Saham. It could be slightly more concise, but the length is appropriate for the subject.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Description covers all necessary context: what tool does, formulas, dependencies (Punya as operand), workflow before/after, input constraints, detailed output contract, error handling, and compute class. With annotations and output schema present, the description is fully complete for an AI agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 33% (only birth and response_format are described in the schema; year is missing). The description compensates with an 'INPUT CONTRACT' section detailing target_year (a Gregorian year, not age), the time requirement, and the response_format options. This adds meaning beyond the schema, though the year format could also be documented in the schema itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it computes all 10 Tajika Saham for a Varshaphal solar return chart. It uses specific verb+resource ('computes all 10 Tajika Saham') and distinguishes from siblings like asterwise_get_varshaphal (base chart), asterwise_get_varshaphal_harsha_bala (scores), and asterwise_get_gemstone_recommendations (unrelated), with explicit 'DO NOT CONFUSE WITH' section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow: 'BEFORE: RECOMMENDED — asterwise_get_varshaphal' and 'AFTER: asterwise_get_varshaphal_harsha_bala'. It also lists siblings in 'DO NOT CONFUSE WITH' with clear differentiation. This gives the agent strong context on when to use and alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_aspects (Grade: A)
Read-only, Idempotent

Calculates all active aspects between a supplied set of planetary longitudes. Accepts a dictionary of body name to tropical ecliptic longitude and returns every aspect within the Hand Table 2 orbs.

SECTION: WHAT THIS TOOL COVERS Flexible aspect calculator that works with any set of positions — natal planets, transit planets, progressed planets, or custom hypothetical points. Uses Robert Hand Table 2 orbs: conjunction/opposition 5°, square/trine 5°, sextile 3°, semisextile/quincunx/semisquare/sesquiquadrate 1.5°. Returns is_applying based on relative speeds if speeds are provided, otherwise assumed separating.

SECTION: WORKFLOW BEFORE: None — standalone; or use asterwise_get_western_natal to get positions first. AFTER: None.

SECTION: INPUT CONTRACT positions — dict mapping planet/body name (string) to tropical longitude (float 0–360). Must contain at least 2 entries. Example: {'Sun': 229.6, 'Moon': 221.8, 'Mars': 189.6, 'Jupiter': 309.6} Names can be any string — the tool does not enforce planet names.

SECTION: OUTPUT CONTRACT data.aspects[] — each: planet_a (string), planet_b (string), type (string — aspect name), exact_angle (float), orb (float), is_applying (bool) data.orbs_used — dict of aspect type to orb value used data.body_count (int — number of input bodies) data.aspect_count (int — number of aspects found)

SECTION: RESPONSE FORMAT response_format=json — aspect grid object. response_format=markdown — formatted aspect table.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): — fewer than 2 bodies in positions dict → MCP INVALID_PARAMS immediately. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_natal — computes both positions and aspects from birth data. asterwise_get_western_synastry — inter-chart aspects between two people.
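The orb matching the description specifies (shortest-arc separation checked against the Hand Table 2 orbs) can be sketched as follows. This is an illustrative reimplementation, not the server's code, and it omits the is_applying speed logic:

```python
# Illustrative orb matching per Robert Hand Table 2; not the server's code.
ORBS = {
    "conjunction": (0.0, 5.0), "opposition": (180.0, 5.0),
    "square": (90.0, 5.0), "trine": (120.0, 5.0),
    "sextile": (60.0, 3.0),
    "semisextile": (30.0, 1.5), "quincunx": (150.0, 1.5),
    "semisquare": (45.0, 1.5), "sesquiquadrate": (135.0, 1.5),
}

def find_aspects(positions: dict[str, float]) -> list[dict]:
    names = sorted(positions)
    aspects = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            sep = abs(positions[a] - positions[b]) % 360.0
            if sep > 180.0:
                sep = 360.0 - sep  # shortest arc between the two bodies
            for kind, (angle, orb) in ORBS.items():
                if abs(sep - angle) <= orb:
                    aspects.append({"planet_a": a, "planet_b": b,
                                    "type": kind,
                                    "orb": round(abs(sep - angle), 2)})
    return aspects

# The example positions from the input contract yield a Jupiter-Mars
# trine (orb 0.0) and a Jupiter-Moon square (orb 2.2).
print(find_aspects({"Sun": 229.6, "Moon": 221.8,
                    "Mars": 189.6, "Jupiter": 309.6}))
```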

Parameters (JSON Schema)
- positions (required)
- response_format (required)

Output Schema
- result (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the annotations (readOnlyHint, idempotentHint, etc.), the description adds specific behavioral context: uses Robert Hand Table 2 orbs with defined orbs for major, sextile, and minor aspects. It also details the output contract with aspects, orbs_used, and counters.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections for input contract, output contract, and 'do not confuse with'. It is concise, front-loaded with the core action, and every sentence adds value.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of calculating aspects with multiple orbs, the description covers all necessary context: input format, output structure, default orbs, and differentiation from a sibling tool. The presence of an output schema further supports completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. The positions parameter is well-described with type and example, but the response_format parameter (enum of 'markdown' or 'json') is not explained in the description, leaving a gap. The output contract implies some structure but doesn't map to the parameter.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates all active aspects between planetary positions, using a specific verb and resource. It distinguishes from the sibling tool asterwise_get_western_natal, which computes aspects from birth data.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool (provide dictionary of positions) and warns not to confuse with asterwise_get_western_natal, providing clear context and alternatives.

asterwise_get_western_compatibility (Grade: A)
Read-only, Idempotent

Overall compatibility score (0–100) between two natal charts. Scores element affinity, synastry aspects between personal planets (Sun, Moon, Venus, Mars), and Sun/Moon/rising sign comparisons.

SECTION: WHAT THIS TOOL COVERS Weighted proprietary scoring model combining elemental harmony, personal-planet synastry hits, and luminaries/rising affinity labels. Higher score = more harmonious synergy — interpret relatively, not as fate.

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal per person optional. AFTER: asterwise_get_western_synastry — drill into raw aspects if score needs detail.

SECTION: INPUT CONTRACT person1, person2 — WesternBirthData each.

SECTION: OUTPUT CONTRACT data.overall_score (int 0-100) data.element_score (int 0-100) data.aspect_score (int 0-100) data.sun_sign_affinity, data.moon_sign_affinity, data.rising_sign_affinity — 'harmonious'|'neutral'|'challenging' data.person1_sun, data.person2_sun, data.person1_moon, data.person2_moon data.key_aspects[] — aspects between personal planets

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: Score is our proprietary model — not universal. Use for relative comparison, not absolute judgment.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_synastry — raw aspects, no score. asterwise_get_western_zodiac_compatibility — sign-only, no birth data.
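The weighting of the proprietary model is not published, so the sketch below is purely illustrative of how element, aspect, and affinity sub-scores could blend into one 0-100 figure. The weights and the affinity point values are invented for the example, not the server's:

```python
# Toy blend -- hypothetical weights, NOT the server's proprietary model.
AFFINITY_POINTS = {"harmonious": 100, "neutral": 50, "challenging": 0}

def blend_score(element_score: int, aspect_score: int,
                affinities: list[str]) -> int:
    """Illustrative composite: 40% elements, 40% aspects, 20% affinities."""
    affinity_avg = sum(AFFINITY_POINTS[a] for a in affinities) / len(affinities)
    return round(0.4 * element_score + 0.4 * aspect_score + 0.2 * affinity_avg)

print(blend_score(80, 60, ["harmonious", "neutral", "challenging"]))  # 66
```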

Parameters (JSON Schema)
- person1 (required): Birth data for Western astrology tools (tropical zodiac).
- person2 (required): Birth data for Western astrology tools (tropical zodiac).
- response_format (required)

Output Schema
- result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, openWorldHint, idempotentHint, destructiveHint=false) already indicate a safe, read-only operation. The description adds output contract details but does not disclose additional behavioral traits like data freshness, rate limits, or error conditions. The output contract gives some transparency about return values.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-organized with a clear 'OUTPUT CONTRACT' section. While verbose, it is front-loaded and each sentence adds value. It could be more concise by removing redundancy with the schema.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (nested objects, detailed output), the description provides a comprehensive output contract covering all JSON fields. Annotations address safety and idempotency. The combination of schema, annotations, and description gives sufficient context for effective use.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already provides detailed descriptions for nested objects (person1, person2) and enums for response_format. Description coverage is 67%, and the description adds no extra meaning beyond what's in the schema. It does not explain parameter relationships or constraints.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes an overall compatibility score (0-100) for two natal charts, detailing sub-scores for element affinity, synastry aspects, and sign comparisons. It distinguishes itself from sibling tools like 'asterwise_get_western_synastry' by focusing on a composite score rather than detailed aspect lists.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as 'asterwise_get_compatibility' or 'asterwise_get_western_synastry'. It does not mention prerequisites, limitations, or recommendations for different use cases.

asterwise_get_western_composite (Grade: A)
Read-only, Idempotent

Midpoint composite chart for two people (Robert Hand method). Each composite planet is the midpoint of the two natal positions. Returns composite planets with dignities and internal aspects.

SECTION: WHAT THIS TOOL COVERS Single synthetic chart representing the relationship as its own entity — not person A vs person B overlaid. Midpoints handle wrap-around at 360° correctly.
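The wrap-around handling matters: the midpoint of 350° and 10° is 0°, not 180°. A minimal sketch of a shorter-arc midpoint, which is one common way (assumed here, not confirmed by the source) to resolve the two candidate midpoints:

```python
def composite_midpoint(lon_a: float, lon_b: float) -> float:
    """Midpoint of two ecliptic longitudes along the shorter arc."""
    diff = (lon_b - lon_a) % 360.0
    if diff > 180.0:
        diff -= 360.0  # go the short way around the zodiac
    return (lon_a + diff / 2.0) % 360.0

print(composite_midpoint(350.0, 10.0))   # 0.0, not 180.0
print(composite_midpoint(100.0, 140.0))  # 120.0
```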

SECTION: WORKFLOW BEFORE: asterwise_get_western_synastry — examine inter-chart aspects before composite. AFTER: None.

SECTION: INPUT CONTRACT person1, person2 — WesternBirthData each. house_system ignored.

SECTION: OUTPUT CONTRACT data.planets[] — 10 composite planets with name, longitude, sign, degree_in_sign, dignity, dignity_score data.ascendant_longitude, data.ascendant_sign, data.ascendant_sign_index data.aspects[] — internal aspects within the composite chart

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~600ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_synastry — two charts, inter-chart aspects vs composite (one midpoint chart). asterwise_get_western_compatibility — numeric score vs structural composite chart.

Parameters (JSON Schema)
- person1 (required): Birth data for Western astrology tools (tropical zodiac).
- person2 (required): Birth data for Western astrology tools (tropical zodiac).
- response_format (required)

Output Schema
- result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, etc. The description adds the specific method (Robert Hand) and output details but does not disclose additional behavioral traits beyond annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: a single sentence followed by a structured output contract. Every sentence is informative with no wasted words.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description provides a detailed output contract listing planets, ascendant, and aspects. For a complex tool, it is fairly complete, though it could mention limitations or prerequisites.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool description does not provide any additional meaning for parameters beyond what the input schema already covers. Schema descriptions for person1 and person2 are present, so the description adds no value for parameter semantics.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: compute a midpoint composite chart for two people using the Robert Hand method. It specifies the output (composite planets with dignities and internal aspects), which distinguishes it from siblings like synastry or compatibility tools.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. With many sibling tools like asterwise_get_western_synastry and asterwise_get_western_compatibility, explicit usage context is missing.

asterwise_get_western_horoscope (Grade: A)
Read-only, Idempotent

Fetches an AI-synthesised Western sun-sign horoscope for a chosen horizon and returns structured guidance fields plus metadata about the model and period.

SECTION: WHAT THIS TOOL COVERS Calls the upstream western horoscope service for a tropical sun sign and a period of daily, weekly, monthly, or yearly. Uses the tropical zodiac (not sidereal). Content is grounded in current sky aspects, slow planet positions, and the solar season — not Vedic transit rules. It does not compute a personal natal chart, divisional charts, or dasha — only sign-level tropical transit-flavoured copy tied to the requested horizon. No remedy field — Western tradition has no planetary remedy system.

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_western_natal — if the user needs a personalised tropical chart beyond sign-general copy.

SECTION: INPUT CONTRACT period is constrained to the tool schema enum (daily, weekly, monthly, yearly). sun_sign accepts English zodiac names only (Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpio, Sagittarius, Capricorn, Aquarius, Pisces). No Sanskrit aliases — this is Western astrology. response_format selects JSON vs markdown rendering only.

SECTION: OUTPUT CONTRACT data.content: headline (string) narrative (string) love (string) career (string) money (string) body (string) power_window (string) caution_window (string) closing_message (string) phases[] (monthly only — array of phase objects with phase_number, start_date, end_date, title, narrative) year_theme (string — yearly only) chapters[] (yearly only — array of chapter objects with chapter_number, start_date, end_date, title, narrative) auspicious_months[] (yearly only — string array of month names) landmark_dates[] (yearly only — array of {date, event} objects) data.model_used (string — AI model version label) data.generated_at (string — ISO UTC) data.period_key (string — YYYY-MM-DD for daily; YYYY-W## for weekly; YYYY-MM for monthly; YYYY for yearly) data.horizon (string — 'daily', 'weekly', 'monthly', or 'yearly') data.sun_sign (string — lowercase English, e.g. 'aries') data.zodiac_type (string — 'western')
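As an illustration of the period_key formats in the output contract, a client could normalise dates like this. ISO week numbering is assumed for the weekly key; the description does not state which week convention the server uses:

```python
from datetime import date

def period_key(d: date, horizon: str) -> str:
    # Maps a date to the documented period_key shape per horizon.
    if horizon == "daily":
        return d.isoformat()                     # YYYY-MM-DD
    if horizon == "weekly":
        iso_year, iso_week, _ = d.isocalendar()  # assumes ISO weeks
        return f"{iso_year}-W{iso_week:02d}"     # YYYY-W##
    if horizon == "monthly":
        return f"{d.year}-{d.month:02d}"         # YYYY-MM
    return str(d.year)                           # YYYY

print(period_key(date(2026, 5, 1), "weekly"))
```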

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — Invalid period enum or other Pydantic field violations on the tool schema → MCP INVALID_PARAMS

INVALID_PARAMS (upstream): — Unknown or unsupported sun_sign → MCP INTERNAL_ERROR at the tool layer (upstream rejection).

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR — Horoscope not yet generated for the current period → MCP INTERNAL_ERROR with status not_generated

Edge cases: — Sun-sign content only; not a substitute for birth-chart analysis. — If a period's horoscope has not yet been generated by the cron, returns 404 upstream (surfaces as INTERNAL_ERROR). — No remedy field in western horoscopes by design.

SECTION: DO NOT CONFUSE WITH asterwise_get_horoscope — Vedic Moon-sign horoscope using sidereal zodiac, not Western tropical sun-sign. asterwise_get_western_natal — full personalised tropical chart from birth data, not sign-general editorial copy.

Parameters (JSON Schema)
- period (required)
- sun_sign (required)
- response_format (required)

Output Schema
- result (required)
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, openWorldHint, idempotentHint, destructiveHint) are all true/non-destructive. The description adds context about tropical zodiac, lack of personal data, no remedy field, and error handling for ungenerated periods, which goes beyond what annotations convey.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections, but it is quite verbose. The main purpose is front-loaded, and each sentence adds value, though some details (e.g., full output contract) could be shortened if the output schema is already present.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the complexity (multiple horizons, varying output fields), the description covers input constraints, output fields per horizon, error contracts, compute class, and edge cases. The output schema exists, so return value details are already provided. Everything needed for correct invocation is present.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description fully compensates. It explains the period enum values, constrains sun_sign to English names, and clarifies response_format selects rendering mode only. This adds essential meaning missing from the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool fetches an AI-synthesized Western sun-sign horoscope for a chosen horizon. It explicitly contrasts with Vedic and personalized charts, distinguishing it from siblings like asterwise_get_horoscope and asterwise_get_western_natal.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'DO NOT CONFUSE WITH' section directly lists alternative tools and their differences. The 'WORKFLOW' section indicates this tool is standalone and suggests a follow-up tool for personalized charts, providing clear guidance on when to use it versus alternatives.

asterwise_get_western_lunar_return (Grade: A)
Read-only, Idempotent

Next lunar return chart after a given date. Finds the next moment the Moon returns to its natal tropical longitude (approximately every 27.3 days) and builds a complete Western natal chart for that moment at the birth location.

SECTION: WHAT THIS TOOL COVERS Monthly emotional reset chart — similar workflow to solar return, but the cadence is the lunar tropical month (~27.3 days, the Moon's return to the same longitude), not the tropical year.
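The ~27.3-day cadence gives a cheap first guess for when the next return falls; a real implementation would refine that guess against an ephemeris. A rough estimator using the Moon's mean daily motion, with the refinement step omitted:

```python
# Coarse first-guess arithmetic only -- not an ephemeris computation.
TROPICAL_MONTH_DAYS = 27.321582  # mean Moon return period to a fixed longitude

def estimate_next_return_jd(natal_lon: float, current_lon: float,
                            after_jd: float) -> float:
    """Approximate Julian day of the next lunar return after after_jd."""
    mean_daily_motion = 360.0 / TROPICAL_MONTH_DAYS  # about 13.18 deg/day
    gap_deg = (natal_lon - current_lon) % 360.0      # degrees Moon must travel
    return after_jd + gap_deg / mean_daily_motion
```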

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal. AFTER: None.

SECTION: INPUT CONTRACT birth — WesternBirthData. after_date (optional YYYY-MM-DD) — find next return after this date. Defaults to today.

SECTION: OUTPUT CONTRACT Same shape as solar return: planet='Moon', natal_longitude, return_utc, return_jd, chart

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~500ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: Returns the NEXT lunar return after after_date — if after_date is omitted, returns the next return from today.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_solar_return — annual Sun return vs lunar_return (monthly Moon return).

Parameters (JSON Schema)
- birth (required): Birth data for Western astrology tools (tropical zodiac).
- after_date (optional)
- response_format (required)

Output Schema
- result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint, idempotentHint, etc. Description adds that it computes the next return based on natal tropical longitude (about every 27.3 days) and builds a chart. No contradictions; adds moderate behavioral context.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two clear sections (INPUT CONTRACT, OUTPUT CONTRACT) with no redundant sentences. Every sentence adds value. Efficiently structured.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low schema coverage, the description provides necessary context: what the tool does, input requirements (birth, after_date, response_format), and output shape. Missing explicit details on output fields but references 'same shape as solar return'.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (33%), but description explains 'after_date' purpose and default, and mentions output contract. Compensates for missing schema docs for after_date and response_format.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'get' and resource 'western lunar return chart' clearly stated. Distinguishes from sibling tools by explicitly mentioning 'Moon' and 'lunar return', and contrasts with 'solar return'.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description states when to use (find next lunar return after a given date) and defaults to today. No explicit when-not-to-use or alternatives, but references solar return shape, providing implicit context.

asterwise_get_western_moon_calendar (Grade: A)
Read-only, Idempotent

Returns lunar phase data for every day in a calendar month as a structured array. Each element is a complete daily phase object identical to asterwise_get_western_moon_phase.

SECTION: WHAT THIS TOOL COVERS Month-at-a-glance lunar calendar for any year/month combination. Useful for building moon phase widgets, identifying full and new moon dates, planning tools based on lunar cycles, and content calendars. Defaults to the current month.

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: None.

SECTION: INPUT CONTRACT year (optional int) — Target year. Defaults to current year. Example: 2026 month (optional int 1–12) — Target month. Defaults to current month. Example: 5 (May) Values outside 1–12 are rejected locally with MCP INVALID_PARAMS.

SECTION: OUTPUT CONTRACT data[] — array of daily phase objects, one per calendar day in the month. Each object is identical to asterwise_get_western_moon_phase output.

SECTION: RESPONSE FORMAT response_format=json — array of daily phase objects. response_format=markdown — month view with day-by-day phases.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~500ms for 30 days)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): — month outside 1–12 → MCP INVALID_PARAMS immediately. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_moon_phase — single-day phase only.

Parameters (JSON Schema)
- year (optional)
- month (optional)
- response_format (required)

Output Schema
- result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true, destructiveHint=false, so the description adds value by explaining the output structure and default behavior. No contradictions.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is mostly concise with two paragraphs. The 'SECTION: OUTPUT CONTRACT' is unnecessary given an output schema exists, but overall it's not lengthy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description covers the main points: monthly aggregation, defaults, and reference to sibling tool output. Minor missing details like error handling but acceptable.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It explains defaulting behavior but does not specify the expected format for year and month (e.g., integer ranges) or the meaning of response_format enum values.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns lunar phase data for every day in a given month, with specific use cases like building moon phase calendars. It distinguishes from the sibling tool asterwise_get_western_moon_phase by implying aggregation over a month.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions defaults (current month if no year/month given) and use cases, but does not explicitly state when to use this vs alternative tools or when not to use it.

asterwise_get_western_moon_phase (Grade: A)
Read-only, Idempotent

Calculates the tropical lunar phase for any date using Swiss Ephemeris. Returns the phase name, phase angle, illumination percentage, Moon age in days, and the next major phase transition. Verified against moongiant.com.

SECTION: WHAT THIS TOOL COVERS Eight-phase tropical lunar cycle: New Moon (0°), Waxing Crescent, First Quarter (90°), Waxing Gibbous, Full Moon (180°), Waning Gibbous, Last Quarter (270°), Waning Crescent. Phase angle is Sun–Moon elongation in degrees (0–360). Illumination percentage is derived from the phase angle using the cosine formula. Moon age is days since last New Moon.
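The cosine relation mentioned above maps elongation directly to illuminated fraction. A small check of that standard formula (the function name is illustrative; the output-contract field is illumination_pct):

```python
import math

def illumination_pct(phase_angle_deg: float) -> float:
    """Illuminated fraction from Sun-Moon elongation: 0 deg new, 180 deg full."""
    return (1.0 - math.cos(math.radians(phase_angle_deg))) / 2.0 * 100.0

print(illumination_pct(0.0))    # 0.0 (New Moon)
print(illumination_pct(90.0))   # about 50 (First Quarter)
print(illumination_pct(180.0))  # 100.0 (Full Moon)
```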

SECTION: WORKFLOW BEFORE: None — standalone. AFTER: asterwise_get_western_moon_calendar — get the full month's phase data.

SECTION: INPUT CONTRACT date (optional string YYYY-MM-DD) — Date to compute phase for. Defaults to today. Example: '2026-05-01'

SECTION: OUTPUT CONTRACT data.date (string — YYYY-MM-DD) data.phase_name (string — one of the eight canonical phase names) data.phase_angle (float — 0–360°, Sun–Moon elongation) data.illumination_pct (float — 0–100) data.moon_age_days (float — days since New Moon) data.moon_longitude (float — tropical ecliptic longitude) data.sun_longitude (float — tropical ecliptic longitude) data.is_waxing (bool — true from New to Full Moon) data.next_phase_name (string — next major phase) data.next_phase_date (string — approximate YYYY-MM-DD of next major phase)

SECTION: RESPONSE FORMAT response_format=json — structured phase data. response_format=markdown — human-readable moon report.

SECTION: COMPUTE CLASS FAST_LOOKUP

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — date validated upstream. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_moon_calendar — full monthly day-by-day phase table. asterwise_get_panchanga — Vedic tithi system (lunar day based on 12° arc increments).

Parameters (JSON Schema)
Name             Required  Description  Default
date             No
response_format  Yes

Output Schema

Parameters (JSON Schema)
Name     Required  Description
result   Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly, idempotent, etc.), description details output fields (phase name, angle, illumination, etc.), tropical zodiac scope, and default behavior. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured: purpose first, then output contract, then confusion avoidance. Every sentence is informative, no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given tool simplicity and existence of output schema, description covers defaults, scope, output fields, and sibling differentiation. Complete for agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only 2 params with 0% schema description coverage. Description explains date parameter default behavior ('defaults to today'), adding value. response_format is self-explanatory given its enum. Could elaborate more but sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'calculate' and resource 'lunar phase' with scope 'tropical zodiac'. It distinguishes from sibling asterwise_get_panchanga, making purpose precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use (any date) and when not to (for Vedic tithi, use asterwise_get_panchanga). Also notes default to today if no date given, providing clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_natal (A)
Read-only, Idempotent

Calculate a complete Western natal chart using the tropical zodiac and Swiss Ephemeris. Returns 10 planet positions with Placidus (or chosen) house placements, essential dignities per Ptolemy/Lilly/Hand, all active aspects using Robert Hand Table 2 orbs, and element/modality/hemisphere balance statistics.

SECTION: WHAT THIS TOOL COVERS Tropical natal chart: Sun, Moon, Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto. Each planet returns tropical longitude, sign, house (1–12), retrograde flag, dignity label (domicile/exaltation/detriment/fall/peregrine), dignity score (Lilly weights: domicile +5, exaltation +4, triplicity +3, term +2, face +1, detriment -5, fall -4), is_exaltation_degree (within 1° of exact exaltation), dignity_disputed (true for outer planets where exaltation/fall is disputed among modern astrologers). Aspects use Hand Table 2 orbs: conjunction/opposition 5°, square/trine 5°, sextile 3°, minor aspects 1.5°. Accuracy verified against astro-seek.com to within 0.01° for all 10 planets. Not Vedic sidereal (asterwise_get_natal_chart).
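
The quoted Hand Table 2 orbs make aspect detection easy to sketch. The helper below is illustrative only; the minor-aspect list is an assumption, since the description quotes just the 1.5° minor orb without enumerating the minors:

```python
# Aspect detection with the Hand Table 2 orbs quoted above.
# (angle, orb) per aspect; the two minor aspects are assumed examples.
ASPECTS = {
    "conjunction": (0, 5.0), "sextile": (60, 3.0), "square": (90, 5.0),
    "trine": (120, 5.0), "opposition": (180, 5.0),
    "semisextile": (30, 1.5), "quincunx": (150, 1.5),
}

def separation(lon_a: float, lon_b: float) -> float:
    """Shortest angular distance between two ecliptic longitudes (0-180)."""
    d = abs(lon_a - lon_b) % 360
    return min(d, 360 - d)

def find_aspect(lon_a: float, lon_b: float):
    """Return (name, orb) for the tightest matching aspect, or None."""
    sep = separation(lon_a, lon_b)
    hits = [(name, abs(sep - angle))
            for name, (angle, orb) in ASPECTS.items()
            if abs(sep - angle) <= orb]
    return min(hits, key=lambda h: h[1]) if hits else None
```

For example, longitudes 10° and 130° are 120° apart and register as an exact trine, while a 45° separation matches nothing under these orbs.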

SECTION: WORKFLOW BEFORE: None — this tool is standalone. AFTER: asterwise_get_western_transits_daily — layer current transits over this natal chart. AFTER: asterwise_get_western_synastry — compare this chart against a partner's chart. AFTER: asterwise_get_western_solar_return — annual return chart for the current year.

SECTION: INPUT CONTRACT birth.date — YYYY-MM-DD. Example: '1985-11-12' birth.time — HH:MM (24-hour local time). Example: '06:45' birth.lat — Decimal degrees, north positive. Example: 19.076 (Mumbai) birth.lon — Decimal degrees, east positive. Example: 72.8777 (Mumbai) birth.timezone — IANA timezone string. Example: 'Asia/Kolkata', 'America/New_York', 'Europe/Rome', 'UTC'. Default: UTC. IMPORTANT: Timezone defaults to UTC — always supply the correct local timezone for accurate house cusps. An incorrect timezone shifts the Ascendant. birth.house_system — 'placidus' (default, most common), 'koch', 'equal', 'whole_sign'. Placidus is standard for most Western traditions. Whole sign is traditional/Hellenistic. NOTE: house_system is accepted here but silently ignored by transit, return, synastry, composite, and progression endpoints — those always use the birth location coordinates without house-system selection. ayanamsa — not present in this schema; the chart is always tropical regardless of any value supplied.
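
Assembled from the input contract above, a request body might look like the following. The field names come from the contract; the exact wrapper shape around birth and response_format is an assumption:

```python
# Hypothetical request payload for asterwise_get_western_natal,
# built from the documented input contract. Wrapper shape assumed.
payload = {
    "birth": {
        "date": "1985-11-12",        # YYYY-MM-DD
        "time": "06:45",             # HH:MM, 24-hour local time
        "lat": 19.076,               # decimal degrees, north positive
        "lon": 72.8777,              # decimal degrees, east positive
        "timezone": "Asia/Kolkata",  # IANA name; defaults to UTC if omitted
        "house_system": "placidus",  # or koch / equal / whole_sign
    },
    "response_format": "json",       # or "markdown"
}
```

Note that supplying the correct IANA timezone matters most here: leaving the UTC default shifts the Ascendant and every house cusp.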

SECTION: OUTPUT CONTRACT data.zodiac (string — 'tropical') data.house_system (string — the system used) data.ascendant — { longitude (float), sign (string), sign_index (int 0–11), degree_in_sign (float) } data.mc — same shape as ascendant data.planets[] — 10 objects (Sun through Pluto): name (string), longitude (float), sign (string), sign_index (int 0–11) degree_in_sign (float), house (int 1–12) is_retrograde (bool), dignity (string), dignity_score (int) is_exaltation_degree (bool), dignity_disputed (bool) data.houses[] — 12 objects: house (int 1–12), cusp_longitude (float), sign (string) sign_index (int 0–11), degree_in_sign (float) data.aspects[] — each: planet_a (string), planet_b (string), type (string) exact_angle (float), orb (float), is_applying (bool) data.elements — { fire (int), earth (int), air (int), water (int), dominant (string) } data.modalities — { cardinal (int), fixed (int), mutable (int), dominant (string) } data.hemisphere — { eastern (int), western (int), northern (int), southern (int) } data.ayanamsa_value (float — 0.0 for tropical) data.ayanamsa_used (string — 'tropical') data.birth_time_unknown (bool — always false)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable natal report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~300ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): — WesternBirthData Pydantic violations (date pattern, time pattern, lat/lon bounds) → MCP INVALID_PARAMS INVALID_PARAMS (upstream): — None expected for valid coordinates and dates post-1800. INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: — Polar latitudes (above ~65°N or below ~65°S) may cause Placidus house calculation failure; use whole_sign or equal house system for polar births. — time='00:00' accepted; lagna-sensitive results are unreliable for unknown birth times.

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — Vedic sidereal chart using Lahiri ayanamsa; different zodiac, different house system, different planet set (9 grahas vs 10 tropical planets). asterwise_get_western_aspects — takes raw longitudes as input; use when you already have positions and don't need full chart computation.

Parameters (JSON Schema)
Name             Required  Description                                                Default
birth            Yes       Birth data for Western astrology tools (tropical zodiac).
response_format  Yes

Output Schema

Parameters (JSON Schema)
Name     Required  Description
result   Yes

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, idempotentHint, etc. The description adds extensive behavioral details: 10 planets, Placidus house system, Hand Table 2 orbs for aspects, dignities per Ptolemy/Lilly/Hand, elements/modalities/hemispheres. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is long but well-organized into sections (WHAT THIS TOOL COVERS, INPUT/OUTPUT CONTRACT, DO NOT CONFUSE). It is front-loaded with the main purpose and each section adds value. Slightly verbose but appropriate for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 planets, aspects, dignities, statistics), the description is remarkably complete: it covers input requirements, output structure with shapes, sibling differentiation, and exceptions like ignored ayanamsa. Output schema exists, so return values are fully documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 50% schema coverage, the description provides detailed parameter semantics: birth.date format (YYYY-MM-DD), time (HH:MM), lat/lon with coordinate ranges, timezone IANA, house_system options including default. It also clarifies that ayanamsa is ignored, compensating for the schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool calculates a complete Western natal chart using tropical zodiac and Swiss Ephemeris, listing detailed outputs. It explicitly distinguishes from the sibling tool asterwise_get_natal_chart (Vedic sidereal), ensuring correct selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'DO NOT CONFUSE WITH' section directly tells when not to use this tool and provides the exact sibling name, giving explicit guidance on usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_planetary_return (B)
Read-only, Idempotent

Next return chart for any planet after a given date. Finds the exact moment the specified planet returns to its natal tropical longitude and builds a complete Western natal chart for that moment at the birth location.

SECTION: WHAT THIS TOOL COVERS Generalised return for any of the 10 classical tropical bodies — Mercury returns (yearly-ish), Venus (~1 year), Mars (~2 years), Jupiter (~12 years), Saturn (~29 years), through Pluto (~248 years).

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal. AFTER: None.

SECTION: INPUT CONTRACT birth — WesternBirthData. planet — one of Sun, Moon, Mercury, Venus, Mars, Jupiter, Saturn, Uranus, Neptune, Pluto. after_date (optional YYYY-MM-DD) — defaults to today.

SECTION: OUTPUT CONTRACT Same shape as solar return: planet name, natal_longitude, return_utc, return_jd, chart

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS SLOW_COMPUTE for outer planets (Jupiter+ return can search years ahead)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): planet not in the valid 10-planet set → MCP INVALID_PARAMS. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: Neptune return takes ~165 years — only useful for generational analysis, not individual lifetime prediction. Pluto return: ~248 years, never completes in one lifetime.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_solar_return — Sun-only shortcut. asterwise_get_western_lunar_return — Moon-only shortcut.

Parameters (JSON Schema)
Name             Required  Description                                                Default
birth            Yes       Birth data for Western astrology tools (tropical zodiac).
planet           Yes
after_date       No
response_format  Yes

Output Schema

Parameters (JSON Schema)
Name     Required  Description
result   Yes

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, idempotentHint) already indicate safety. Description adds output shape details but no extra behavioral context beyond what annotations imply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise, well-structured with input/output sections, and contains no unnecessary words. Front-loaded with main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Although an output schema exists, the description does not fully document the input parameters (e.g., the birth object's fields), which the low schema coverage requires it to. Incomplete relative to the parameter complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Description covers 'planet' and 'after_date' parameters partially, but the birth object (with 6 fields) is not elaborated beyond schema. Schema coverage is low (25%), so description should compensate but does not.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it finds the next return chart for a planet after a given date, builds a Western natal chart for that moment, and lists allowed planets. This clearly distinguishes from siblings like solar return.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs. alternatives like solar or lunar returns. No when-to-use or when-not-to-use conditions provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_secondary_progressions (A)
Read-only, Idempotent

Secondary progressed chart using the day-for-a-year method. Each day after birth symbolises one year of life (one ephemeris day maps to one tropical year of 365.2421904 days). Returns all 10 progressed planet positions, progressed Ascendant and MC (True Quotidian method), and the solar arc.
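
The day-for-a-year arithmetic above reduces to a two-line mapping:

```python
TROPICAL_YEAR = 365.2421904  # days, as quoted in the description

def progressed_jd(birth_jd: float, target_jd: float) -> tuple:
    """Day-for-a-year mapping: each ephemeris day after birth stands
    for one tropical year of life. Returns (age_years, progressed_jd)."""
    age_years = (target_jd - birth_jd) / TROPICAL_YEAR
    # Advance the ephemeris by one day per year of age.
    return age_years, birth_jd + age_years
```

So for a 30-year-old, the progressed chart is simply the real sky 30 days after birth, which is why the progressed Moon moves roughly 12° per year of life while the outer planets barely shift.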

SECTION: WHAT THIS TOOL COVERS Naibod-style secondary directions: Moon advances ~12°/year of life, Mercury/Venus track close to Sun speed, outer planets crawl. Includes progressed lunation phases internally.

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal. AFTER: asterwise_get_western_solar_arc — compare uniform arc vs individual motion.

SECTION: INPUT CONTRACT birth — WesternBirthData. target_date (optional YYYY-MM-DD) — the date to progress to. Defaults to today.

SECTION: OUTPUT CONTRACT data.target_date, data.progressed_jd, data.age_years data.solar_arc (float — ~1° per year) data.natal_sun_longitude, data.progressed_sun_longitude data.progressed_planets[] — 10 objects: name, longitude, sign, degree_in_sign, is_retrograde, dignity, dignity_score data.progressed_ascendant, data.progressed_ascendant_sign data.progressed_mc, data.progressed_mc_sign

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~400ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: Progressed ASC uses True Quotidian method (actual house calculation at progressed JD) not Solar Arc approximation — this is the more accurate approach. Progressed MC uses Solar Arc approximation (natal MC + solar arc) which is standard.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_solar_arc — all planets move by one uniform arc. asterwise_get_western_transits_daily — real-time sky, not symbolic progression.

Parameters (JSON Schema)
Name             Required  Description                                                Default
birth            Yes       Birth data for Western astrology tools (tropical zodiac).
target_date      No
response_format  Yes

Output Schema

Parameters (JSON Schema)
Name     Required  Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint false. The description adds specific behavioral details: the day-for-a-year method, that it returns all 10 planet positions, and that it uses True Quotidian for angles. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (INPUT CONTRACT, OUTPUT CONTRACT, DO NOT CONFUSE WITH), making it easy to parse. It is somewhat verbose but every section adds value. Front-loaded with the method explanation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of secondary progressions, the description fully explains the method, input defaults, output structure, and differentiates from a similar tool. The output contract provides a clear understanding of return values, even though no formal output schema is provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (33%) but the description compensates by explaining the target_date parameter (optional, defaults to today). The birth object's inner fields are described in the schema. The description does not add further semantics for birth parameters or response_format beyond what schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool computes secondary progressed charts using the day-for-a-year method, with explicit listing of outputs. It also distinguishes from the sibling tool asterwise_get_western_solar_arc, making purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a dedicated 'DO NOT CONFUSE WITH' section that differentiates from solar arc progressions. It also explains the default behavior for target_date. However, it does not broadly discuss when to use secondary progressions versus other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_solar_arc (A)
Read-only, Idempotent

Solar Arc Directions for a target date. The solar arc (progressed Sun minus natal Sun) is applied uniformly to every natal planet and angle — approximately 1° per year. Unlike secondary progressions, all planets advance at the same rate.

SECTION: WHAT THIS TOOL COVERS Naibod arc or solar arc directions — one delta longitude applied to every natal body and the angles. Classic predictive technique for timing outer events.
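
The uniform-arc mechanism can be sketched in a few lines: one delta longitude applied to every natal body, wrapping at 360°:

```python
def direct_positions(natal_longitudes: dict, solar_arc: float) -> dict:
    """Solar Arc Directions: add one uniform arc to every natal
    longitude. (Contrast with secondary progressions, where each body
    moves at its own astronomical rate.)"""
    return {name: (lon + solar_arc) % 360
            for name, lon in natal_longitudes.items()}
```

With a solar arc of ~40° (roughly age 40), a natal Sun at 200° directs to 240° and a natal Moon at 355° wraps around to 35°.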

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal. AFTER: None.

SECTION: INPUT CONTRACT birth — WesternBirthData. target_date (optional YYYY-MM-DD) — defaults to today.

SECTION: OUTPUT CONTRACT data.target_date, data.solar_arc, data.age_years data.natal_sun_longitude, data.progressed_sun_longitude data.directed_planets[] — 10 objects: name, natal_longitude, directed_longitude, sign, degree_in_sign, dignity, dignity_score data.directed_ascendant, data.directed_ascendant_sign data.directed_mc, data.directed_mc_sign

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~400ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: All planets advance at the same rate (solar arc ≈ 1°/year). This is Solar Arc Directions — different from secondary progressions where each planet moves at its own astronomical speed.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_secondary_progressions — each planet moves at its own rate. asterwise_get_western_transits_daily — real-time transits, not arc directions.

Parameters (JSON Schema)
Name             Required  Description                                                Default
birth            Yes       Birth data for Western astrology tools (tropical zodiac).
target_date      No
response_format  Yes

Output Schema

Parameters (JSON Schema)
Name     Required  Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and non-destructive. The description adds behavioral details: uniform advancement rate (~1°/year), output structure (directed planets, angles, etc.). No contradictions. The added context is meaningful but not critical beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (INPUT/OUTPUT CONTRACT, DO NOT CONFUSE WITH). It is front-loaded with the core purpose. Every sentence adds value; no fluff. Despite length, it remains scannable and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrology calculation with multiple outputs), the description provides a comprehensive input contract, output contract, and sibling differentiation. The existence of an output schema reduces the burden on the description, but it still lists output fields clearly. Complete for reliable agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 33% (birth object is described in schema, but target_date and response_format lack schema descriptions). The description partially compensates by explaining target_date format and default, but does not address response_format or birth subfields beyond what schema provides. Baseline 3 for low coverage with partial compensation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Solar Arc Directions for a target date' and explains the mechanism (uniform ~1°/year). It distinguishes from the sibling tool 'asterwise_get_western_secondary_progressions' in a dedicated section, leaving no ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'DO NOT CONFUSE WITH' section explicitly tells when to use this tool vs secondary progressions. It also notes that target_date defaults to today. However, it does not provide broader when-to-use or when-not-to-use guidance beyond the sibling comparison.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_solar_return (A)
Read-only, Idempotent

Solar return chart for a given year. Finds the exact moment the Sun returns to its natal tropical longitude and builds a complete Western natal chart for that moment at the birth location. Provide the year as an integer (e.g. 2026).

SECTION: WHAT THIS TOOL COVERS Annual solar return — the chart cast for the precise instant the transiting Sun reaches the natal Sun's longitude, at the birth location (relocated return charts are not supported). The embedded data.chart is a full tropical chart at that instant.
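
The iterative Sun-longitude search behind the return moment can be sketched with a Newton-style step. The mean-Sun model below is a stand-in assumption for the Swiss Ephemeris the server actually uses:

```python
MEAN_MOTION = 360 / 365.2421904  # mean solar motion, degrees per day

def sun_longitude(jd: float) -> float:
    """Crude mean-Sun longitude; a placeholder for the real ephemeris."""
    return (280.460 + MEAN_MOTION * (jd - 2451545.0)) % 360

def find_solar_return(natal_lon: float, start_jd: float) -> float:
    """Step forward until the Sun next reaches natal_lon after start_jd."""
    jd = start_jd
    for _ in range(50):  # Newton-style refinement converges quickly
        delta = (natal_lon - sun_longitude(jd)) % 360  # degrees still to go
        if min(delta, 360 - delta) < 1e-7:
            return jd
        jd += delta / MEAN_MOTION  # days needed at the current rate
    return jd
```

With a real ephemeris the Sun's speed varies through the year, so the step is repeated until the residual drops below tolerance, which matches the ~800ms iterative search noted in the compute class.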

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal — understand natal chart before reading return. AFTER: None.

SECTION: INPUT CONTRACT birth — WesternBirthData. house_system ignored (chart uses return computation defaults). year (int) — calendar year of the return (e.g. 2026), not age.

SECTION: OUTPUT CONTRACT data.planet — 'Sun' data.natal_longitude (float — natal Sun tropical longitude) data.return_utc (string — ISO 8601 UTC moment of return) data.return_jd (float — Julian Day of return) data.chart — full Western natal chart at return moment

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~800ms, iterative Sun longitude search)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: year param is the calendar year of the return (e.g., 2026), not the age. Feeding age instead of year silently produces the wrong return chart.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_lunar_return — Moon return, ~monthly. asterwise_get_varshaphal — Vedic Tajika solar return — different system.

Parameters (JSON Schema)
Name             Required  Description                                                Default
year             Yes
birth            Yes       Birth data for Western astrology tools (tropical zodiac).
response_format  Yes

Output Schema

Parameters (JSON Schema)
Name     Required  Description
result   Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and idempotentHint=true, and the description adds behavioral context: it finds the exact return moment and builds a full chart. The output contract details the return data fields, providing transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is fairly concise with a clear structure: a purpose sentence, usage example, and an output contract section. While the output contract could be considered redundant if an output schema exists, it adds clarity and is well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and the presence of an output schema (implied), the description covers the essential aspects: what it does, the required input (year and birth data), and the output contract. It does not explain edge cases, but it is sufficient for the agent to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 3 parameters, but the description adds meaning by explaining the 'year' parameter and including an example. It also outlines the output contract, which compensates for the schema's moderate coverage (33%). The description provides more context than the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes a solar return chart for a given year, specifying the exact moment of Sun's return and building a full Western natal chart. This distinguishes it from sibling return tools like lunar or planetary returns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage by specifying the 'year' input with an example (2026). It implicitly guides the agent to use this tool for solar returns versus other return types, but does not explicitly state when not to use it or compare to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_synastry (A)
Read-only, Idempotent

Aspect grid between two natal charts using the tropical zodiac. Returns all inter-chart aspects using Robert Hand Table 2 orbs. Useful for relationship compatibility analysis.

SECTION: WHAT THIS TOOL COVERS Bidirectional aspect matrix: every person1 planet to every person2 planet within orb. Does not produce a compatibility score — raw geometry only. House overlays are not included.

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal per person — understand charts individually first. AFTER: asterwise_get_western_composite — midpoint chart for the relationship itself.

SECTION: INPUT CONTRACT person1, person2 — each WesternBirthData (date, time, lat, lon, timezone). house_system ignored for synastry payload.

SECTION: OUTPUT CONTRACT data.aspects[] — person1_planet, person2_planet, type, exact_angle, orb data.total_aspects

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~600ms, two natal charts + aspect grid)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_composite — one merged midpoint chart vs synastry (two charts overlaid). asterwise_get_western_compatibility — proprietary 0–100 score vs raw aspects.
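Since the tool returns raw geometry rather than a score, clients typically filter the aspect list themselves. A minimal Python sketch, assuming a hypothetical response already fetched from the tool; field names follow the OUTPUT CONTRACT above and all values are illustrative:

```python
# Hypothetical synastry response; field names mirror the documented
# OUTPUT CONTRACT (person1_planet, person2_planet, type, exact_angle, orb).
response = {
    "data": {
        "aspects": [
            {"person1_planet": "Sun", "person2_planet": "Moon",
             "type": "trine", "exact_angle": 120.0, "orb": 1.2},
            {"person1_planet": "Venus", "person2_planet": "Mars",
             "type": "square", "exact_angle": 90.0, "orb": 5.8},
        ],
        "total_aspects": 2,
    }
}

def tight_aspects(resp, max_orb=3.0):
    """Keep only inter-chart aspects within max_orb degrees."""
    return [a for a in resp["data"]["aspects"] if a["orb"] <= max_orb]

hits = tight_aspects(response)
print([(a["person1_planet"], a["person2_planet"], a["type"]) for a in hits])
```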

Parameters (JSON Schema):
  person1 (required): Birth data for Western astrology tools (tropical zodiac).
  person2 (required): Birth data for Western astrology tools (tropical zodiac).
  response_format (required): no description in schema.

Output Schema:
  result (required): no description in schema.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, idempotent, and non-destructive behavior. The description adds that it uses specific Robert Hand Table 2 orbs and returns a structured set of aspects, which provides behavioral context beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at about four sentences, with clear sections for input and output contracts. It front-loads the primary purpose and avoids unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations (readOnlyHint, idempotentHint) and a well-detailed input schema, the description covers the essential purpose, parameters, and output structure. It lacks mention of potential limitations (e.g., planetary scope or orb details) but is sufficient for typical use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67% (response_format lacks a description). The description repeats the parameter structure from the schema without adding new semantic details (e.g., format constraints or examples). Thus, it adds minimal value beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it computes an aspect grid between two natal charts using tropical zodiac and Robert Hand Table 2 orbs, specifically for relationship compatibility. This distinguishes it from sibling tools like single-chart aspects or composite charts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions it is useful for relationship compatibility analysis, but does not explicitly state when to use this tool versus siblings like asterwise_get_western_aspects or asterwise_get_western_compatibility. No exclusions or alternatives are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_transits_daily (A)
Read-only, Idempotent

Current sky positions vs natal chart for a single day. Returns all 10 planets with tropical longitudes and active aspects to natal positions using transit orbs (Hand Planets in Transit): major 3°, sextile 2°, minor 1°. Provide start_date for a specific day; defaults to today.

SECTION: WHAT THIS TOOL COVERS Single-day transit snapshot: where each planet is now versus where each natal planet was at birth, with applying/separating transit-to-natal aspects using tighter transit orbs than natal chart orbs. Excludes progressions and solar arc — pure transit ephemeris.

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal — establish natal chart first. AFTER: asterwise_get_western_transits_weekly — for week view.

SECTION: INPUT CONTRACT birth — WesternBirthData (date, time, lat, lon, timezone). house_system ignored for this endpoint. start_date (optional YYYY-MM-DD) — defaults to today.

SECTION: OUTPUT CONTRACT data.date, data.transit_planets[] — name, longitude, sign, is_retrograde, aspects_to_natal[] data.aspects[] — transit_planet, natal_planet, type, orb, is_applying data.total_aspects

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~400ms)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: Transit orbs use Hand Planets in Transit table (smaller than natal orbs): major 3°, sextile 2°, minor 1°.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_transits_weekly — 7 days vs 1 day. asterwise_get_western_transits_monthly — 30-day window vs single day.
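A minimal sketch of consuming the daily contract, here keeping only applying aspects; the payload below is hypothetical and only mirrors the documented field names:

```python
# Hypothetical single-day transit response shaped like the documented
# OUTPUT CONTRACT; values are illustrative only.
response = {
    "data": {
        "date": "2026-01-15",
        "aspects": [
            {"transit_planet": "Saturn", "natal_planet": "Sun",
             "type": "conjunction", "orb": 0.4, "is_applying": True},
            {"transit_planet": "Jupiter", "natal_planet": "Moon",
             "type": "sextile", "orb": 1.9, "is_applying": False},
        ],
        "total_aspects": 2,
    }
}

# Applying aspects are still tightening, so they are usually the
# interesting subset for a single-day snapshot.
applying = [a for a in response["data"]["aspects"] if a["is_applying"]]
print([(a["transit_planet"], a["natal_planet"]) for a in applying])
```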

Parameters (JSON Schema):
  birth (required): Birth data for Western astrology tools (tropical zodiac).
  start_date (optional): no description in schema.
  response_format (required): no description in schema.

Output Schema:
  result (required): no description in schema.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations (readOnlyHint, idempotentHint, destructiveHint) already establish safety. The description adds value by detailing the orbs (major 3°, sextile 2°, minor 1°) and providing an output contract, which goes beyond annotations. No contradictions detected.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear output contract, but it is somewhat verbose. The main purpose is front-loaded, though the output section could be more concise. Most sentences add value, but some redundancy remains.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the existing output schema, and the many sibling transit tools, the description covers purpose, orbs, and output structure adequately. However, it lacks comparative context with siblings and could better guide the agent on when this tool is the best choice.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With schema coverage at 33%, most parameters lack description in the schema. The description partially compensates by explaining start_date (defaults to today) and referencing the birth object in context, but it does not detail all sub-parameters. Overall, it adds meaningful context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes 'current sky positions vs natal chart for a single day' with specific details (10 planets, tropical longitudes, aspects using transit orbs). The name 'daily' distinguishes it from siblings 'weekly' and 'monthly', providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions providing a start_date with a default of today, but it does not explicitly guide when to use this tool versus alternatives like the weekly or monthly transit tools. Usage is implied by the name but lacks explicit when-not-to-use or comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_transits_monthly (A)
Read-only, Idempotent

30-day transit window vs natal chart. Returns day-by-day transit snapshots plus peak aspects (active 10+ days in the window). Use start_date to set the month start; defaults to today.

SECTION: WHAT THIS TOOL COVERS Full-month transit calendar: 30 consecutive daily snapshots vs the same natal chart, with peak_aspects for long-duration hits. Largest payload of the transit trio.

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal. AFTER: None.

SECTION: INPUT CONTRACT birth — WesternBirthData. house_system ignored. start_date (optional YYYY-MM-DD) — month start; defaults to today.

SECTION: OUTPUT CONTRACT data.start_date, data.end_date data.days[] — 30 daily transit objects data.peak_aspects[] — aspects active on 10 or more days

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS SLOW_COMPUTE (~15s for 30 days)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR Edge cases: Large payload — 30 daily snapshots. Use json format for downstream processing.

SECTION: DO NOT CONFUSE WITH asterwise_get_western_transits_daily — 1 day. asterwise_get_western_transits_weekly — 7 days.
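The peak_aspects rule (aspects active on 10 or more days) can be reproduced from data.days[] alone. A sketch under the assumption that each day is reduced to a set of aspect keys; the 30-day calendar below is invented for illustration:

```python
from collections import Counter

# Hypothetical 30-day calendar mirroring data.days[]: each entry lists the
# aspect keys active that day (real snapshots carry full aspect objects).
days = ([["Saturn conj Sun", "Mars sq Moon"]] * 12
        + [["Mars sq Moon"]] * 3
        + [[]] * 15)

# Count on how many days each aspect appears across the window.
counts = Counter(a for day in days for a in day)

# peak_aspects per the contract: active on 10 or more of the 30 days.
peak = sorted(a for a, n in counts.items() if n >= 10)
print(peak)
```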

Parameters (JSON Schema):
  birth (required): Birth data for Western astrology tools (tropical zodiac).
  start_date (optional): no description in schema.
  response_format (required): no description in schema.

Output Schema:
  result (required): no description in schema.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, idempotent, non-destructive. Description adds value by specifying the output includes peak aspects active 10+ days and an output contract. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus a structured output contract. Front-loaded with key details. Efficient, though the output contract could be more readable.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with an output schema and three parameters (one nested), the description covers purpose, parameter usage, and return structure. The output contract provides clarity on what is returned, making it complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only 33% of parameters are described in the schema (birth has full description, start_date and response_format lack descriptions). The description adds minimal parameter guidance (start_date usage) but does not compensate for the low coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool retrieves a 30-day transit window against a natal chart, returning daily snapshots and peak aspects. The name and description distinguish it from daily/weekly transit siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description mentions using start_date to set month start with default today. While it does not explicitly exclude other options, the sibling names imply monthly vs daily/weekly, and the guidance is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_transits_weekly (A)
Read-only, Idempotent

7-day transit window vs natal chart. Returns day-by-day transit snapshots plus peak aspects (active 4+ days in the window). Use start_date to set the week start; defaults to today.

SECTION: WHAT THIS TOOL COVERS Rolling seven-day transit analysis with one snapshot per day (same structure as daily transits) plus peak_aspects highlighting aspects that persist across multiple days.

SECTION: WORKFLOW BEFORE: asterwise_get_western_natal. AFTER: asterwise_get_western_transits_monthly — for full month.

SECTION: INPUT CONTRACT birth — WesternBirthData. house_system ignored. start_date (optional YYYY-MM-DD) — week start; defaults to today.

SECTION: OUTPUT CONTRACT data.start_date, data.end_date data.days[] — 7 daily transit objects (same shape as daily transits) data.peak_aspects[] — aspects active on 4 or more days

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE (~2s for 7 days)

SECTION: ERROR CONTRACT INVALID_PARAMS (local): WesternBirthData validation failures. INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_transits_daily — single day. asterwise_get_western_transits_monthly — 30 days vs 7.

Parameters (JSON Schema):
  birth (required): Birth data for Western astrology tools (tropical zodiac).
  start_date (optional): no description in schema.
  response_format (required): no description in schema.

Output Schema:
  result (required): no description in schema.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate that readOnlyHint, openWorldHint, and idempotentHint are true and destructiveHint is false. The description adds behavioral details about the output (day-by-day snapshots and peak aspects active 4+ days) and includes an output contract, which goes beyond the annotations. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise with three sentences plus a clear 'OUTPUT CONTRACT' section. It front-loads the core purpose and adds structured detail without wasteful text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (as per context signals), the description provides a useful output contract. The tool has three parameters and a decent amount of guidance. However, the abundance of sibling transit tools means more explicit differentiation would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 33%, with only the 'birth' object fields described. The description adds meaning to the 'start_date' parameter (defaults to today, sets the week start), but 'response_format' is never explained, so the low coverage is only partially compensated.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides a 7-day transit window compared against the natal chart, returning daily snapshots and peak aspects. The weekly scope in the name sets it apart from other transit tools, but the description does not explicitly contrast it with the daily or monthly siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions using start_date to set the week start but provides no explicit guidance on when to use this tool versus alternatives like daily or monthly transit tools. The usage context is implied but not clarified with when-not-to-use or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_western_zodiac_compatibility (A)
Read-only, Idempotent

Sign-to-sign compatibility without birth data. Based on element and modality affinity. Fast — no ephemeris calculation required.

SECTION: WHAT THIS TOOL COVERS Lookup-table compatibility using sign elements (fire/earth/air/water) and modalities (cardinal/fixed/mutable). No houses, no Moon phase, no Venus-Mars geometry.

SECTION: WORKFLOW BEFORE: None — no birth data needed. AFTER: asterwise_get_western_compatibility — when full charts are available.

SECTION: INPUT CONTRACT sign1, sign2 — English zodiac names (Aries … Pisces).

SECTION: OUTPUT CONTRACT data.sign1, data.sign2 data.element1, data.element2 data.modality1, data.modality2 data.element_affinity, data.modality_affinity — 'harmonious'|'neutral'|'challenging' data.overall_score (int 0-100) data.description (string)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data.

SECTION: COMPUTE CLASS FAST_LOOKUP — no ephemeris, pure table lookup.

SECTION: ERROR CONTRACT INVALID_PARAMS (local): None — sign validation upstream. INTERNAL_ERROR: Any upstream API failure → MCP INTERNAL_ERROR

SECTION: DO NOT CONFUSE WITH asterwise_get_western_compatibility — requires full birth data, more accurate. asterwise_get_western_synastry — aspect geometry between two full charts.
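The element half of the affinity lookup can be sketched from the signs listed in the INPUT CONTRACT. This is a hypothetical re-creation: the server's actual table, its 'neutral' cases, and its 0-100 scoring are not published, so the sketch below only applies the classical fire+air / earth+water convention collapsed to two outcomes:

```python
# Element of each English zodiac sign (Aries ... Pisces), per the contract.
ELEMENT = {
    "Aries": "fire", "Leo": "fire", "Sagittarius": "fire",
    "Taurus": "earth", "Virgo": "earth", "Capricorn": "earth",
    "Gemini": "air", "Libra": "air", "Aquarius": "air",
    "Cancer": "water", "Scorpio": "water", "Pisces": "water",
}
# Classical pairings: same element, fire+air, and earth+water harmonise.
HARMONIOUS = {frozenset({"fire", "air"}), frozenset({"earth", "water"})}

def element_affinity(sign1, sign2):
    """Toy element affinity; the real tool also returns 'neutral'."""
    e1, e2 = ELEMENT[sign1], ELEMENT[sign2]
    if e1 == e2 or frozenset({e1, e2}) in HARMONIOUS:
        return "harmonious"
    return "challenging"

print(element_affinity("Aries", "Libra"))  # fire + air
```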

Parameters (JSON Schema):
  sign1 (required): no description in schema.
  sign2 (required): no description in schema.
  response_format (required): no description in schema.

Output Schema:
  result (required): no description in schema.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate safe read-only idempotent operation. The description adds that it's fast (no ephemeris) and based on element/modality affinity, providing behavioral insight beyond annotations. No contradiction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (INPUT CONTRACT, OUTPUT CONTRACT), concise sentences, and no wasted words. It is front-loaded with key purpose and fast nature.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has an output schema, so description needn't detail return values. It provides a clear output contract listing fields. Given the simple zodiac compatibility task, the description is complete and sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but the description's INPUT CONTRACT lists valid zodiac sign names for sign1 and sign2, which adds key meaning. It doesn't describe response_format in text, but its enum values are self-explanatory. Overall, description compensates well for missing schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's for sign-to-sign compatibility without birth data, based on element and modality affinity. It distinguishes itself from sibling tools like asterwise_get_compatibility that likely require birth data. The verb 'get' and resource 'western_zodiac_compatibility' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (quick compatibility without birth data) and contrasts with tools needing ephemeris. It doesn't explicitly list alternatives, but the context of 'without birth data' and 'fast' gives clear usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_yogas (A)
Read-only, Idempotent

Evaluates the natal chart for named classical yogas and returns category, formation text, classical results, modern summaries, and keywords per hit.

SECTION: WHAT THIS TOOL COVERS Uses Parashari-style yoga detection (BPHS / Phaladeepika lineage) to emit an open-ended data.yogas[] list; categories include raja_yogas, dhana_yogas, pancha_mahapurusha, chandra_yogas, surya_yogas, kartari_yogas, or unknown while metadata is pending. It does not return the Graha Drishti matrix (see asterwise_get_natal_chart data.graha_drishti), Shadbala scores, or divisional charts. New yogas may appear without API versioning.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — same birth tuple should be understood before interpreting yoga names. AFTER: asterwise_get_doshas — complementary affliction scan on the same chart.

SECTION: INPUT CONTRACT BirthData follows the global contract. time='00:00' is accepted without flag; yoga house logic may be wrong if true birth time is unknown.

SECTION: OUTPUT CONTRACT data.yogas[] — each object: yoga_name (string) category (string — 'raja_yogas', 'dhana_yogas', 'pancha_mahapurusha', 'chandra_yogas', 'surya_yogas', 'kartari_yogas', or 'unknown' while metadata is pending) formation (string — empty when yoga has no JSON entry yet) classical_results (string — empty when yoga has no JSON entry yet) modern_summary (string) keywords[] (string array) data.birth_time_unknown (bool)

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): — Dates before 1800 or after 2100 → MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: — Any upstream API failure or timeout → MCP INTERNAL_ERROR

Edge cases: — Graha Drishti lives only on asterwise_get_natal_chart, not in data.yogas[].

SECTION: DO NOT CONFUSE WITH asterwise_get_natal_chart — supplies graha_drishti and base chart rows, not the yoga catalogue. asterwise_get_panchanga — Panchanga yoga (Sun+Moon sum) is unrelated to these natal yogas.
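Because category may be 'unknown' and formation/classical_results may be empty strings while metadata is pending, clients should read data.yogas[] defensively. A sketch with a hypothetical payload that follows the documented fields:

```python
# Hypothetical yogas response; per the contract, category can be 'unknown'
# and formation/classical_results can be empty while metadata is pending.
response = {
    "data": {
        "yogas": [
            {"yoga_name": "Gajakesari Yoga", "category": "raja_yogas",
             "formation": "Jupiter in a kendra from the Moon",
             "classical_results": "Fame and lasting reputation",
             "modern_summary": "Strong public standing",
             "keywords": ["fame", "intellect"]},
            {"yoga_name": "Newly Added Yoga", "category": "unknown",
             "formation": "", "classical_results": "",
             "modern_summary": "", "keywords": []},
        ],
        "birth_time_unknown": False,
    }
}

# Treat entries with an 'unknown' category or empty formation as pending
# metadata rather than fully documented detections.
documented = [y for y in response["data"]["yogas"]
              if y["category"] != "unknown" and y["formation"]]
print([y["yoga_name"] for y in documented])
```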

Parameters (JSON Schema):
  birth (required): Birth data for a single person.
  response_format (required): no description in schema.

Output Schema:
  result (required): no description in schema.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations: it discloses that 'New yogas may appear without API versioning' (open-world behavior), explains compute class (MEDIUM_COMPUTE), details error contracts (INVALID_PARAMS, INTERNAL_ERROR), and clarifies edge cases like 'Graha Drishti lives only on asterwise_get_natal_chart'. While annotations cover readOnly, openWorld, idempotent, and destructive hints, the description provides practical implementation details that enhance transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, INPUT CONTRACT, etc.), making it easy to navigate. While comprehensive, some sections could be more concise (e.g., the OUTPUT CONTRACT lists all fields but repeats schema information). Overall, it's front-loaded with purpose and efficiently organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations, and output schema, the description provides excellent completeness. It covers purpose, usage guidelines, behavioral details, input/output contracts, error handling, compute class, and sibling distinctions. The output schema exists, so the description appropriately focuses on explaining the yoga evaluation context rather than return value details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 50% schema description coverage, the description adds some parameter context: it explains that 'BirthData follows the global contract' and clarifies that 'time="00:00" is accepted without flag; yoga house logic may be wrong if true birth time is unknown'. However, it doesn't fully compensate for the schema coverage gap by explaining all parameter meanings or providing examples beyond what's in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Evaluates the natal chart for named classical yogas' and specifies what it returns (category, formation text, classical results, etc.). It explicitly distinguishes from siblings like asterwise_get_natal_chart and asterwise_get_panchanga in the 'DO NOT CONFUSE WITH' section, providing specific differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit workflow guidance: 'BEFORE: RECOMMENDED — asterwise_get_natal_chart' and 'AFTER: asterwise_get_doshas'. It also clearly states what the tool does NOT cover (Graha Drishti, Shadbala, divisional charts) and provides specific sibling tool distinctions, giving comprehensive when-to-use and when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

asterwise_get_yogini_dasha (A)
Read-only, Idempotent

Computes the eight-Yogini, 36-year Yogini Dasha cycle with two-level period trees and DD/MM/YYYY boundaries from birth data.

SECTION: WHAT THIS TOOL COVERS Returns Mahadasha rows under data.periods.root[] (not data.periods[]), each with Yogini name, ruling planet, Julian and calendar dates, and sub-periods for Antar only (two levels total). The eight Yoginis map to year-lengths 1–8 summing to 36 years per cycle. It does not validate or refuse charts outside classical Yogini applicability; it does not output Vimshottari or Char Dasha.

SECTION: WORKFLOW BEFORE: RECOMMENDED — asterwise_get_natal_chart — establishes birth context for interpreting Yogini lords. AFTER: asterwise_get_dasha — optional Vimshottari comparison for the same native.

SECTION: INPUT CONTRACT Tree lives at data.periods.root[] — agents must not expect a top-level data.periods array. Calendar strings in periods use DD/MM/YYYY. BirthData follows the global contract.

SECTION: OUTPUT CONTRACT data.periods.root[] — array of Mahadasha objects:
- yogini (string — e.g. 'Pingala')
- planet (string — ruling planet)
- start_jd (float)
- end_jd (float)
- start_date (string — DD/MM/YYYY)
- end_date (string — DD/MM/YYYY)
- sub[] — Antardasha objects with the same fields (max two levels total)
Plus data.birth_time_unknown (bool).

SECTION: RESPONSE FORMAT response_format=json serialises the complete response as indented JSON — use this for programmatic parsing, typed clients, and downstream tool chaining. response_format=markdown renders the same data as a human-readable report. Both modes return identical underlying data — no fields are added, removed, or filtered by either mode.
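The output contract above can be exercised with a short parsing sketch. The response dict below is hand-built to match the documented shape — field values are illustrative, not real ephemeris output:

```python
from datetime import datetime

# Hand-built sample following the documented contract: the period tree
# lives under data.periods.root[], NOT a top-level data.periods[] array.
response = {
    "data": {
        "birth_time_unknown": False,
        "periods": {
            "root": [
                {
                    "yogini": "Pingala",
                    "planet": "Sun",
                    "start_jd": 2451545.0,
                    "end_jd": 2452275.5,
                    "start_date": "01/01/2000",  # DD/MM/YYYY per contract
                    "end_date": "01/01/2002",
                    "sub": [
                        {
                            "yogini": "Mangala",
                            "planet": "Moon",
                            "start_jd": 2451545.0,
                            "end_jd": 2451910.0,
                            "start_date": "01/01/2000",
                            "end_date": "01/01/2001",
                            "sub": [],  # max two levels total
                        }
                    ],
                }
            ],
        },
    }
}

def parse_date(s: str) -> datetime:
    """Calendar strings use DD/MM/YYYY, not ISO order."""
    return datetime.strptime(s, "%d/%m/%Y")

def flatten(rows, level=1):
    """Yield (level, yogini, planet, start, end) for Maha and Antar rows."""
    for row in rows:
        yield (level, row["yogini"], row["planet"],
               parse_date(row["start_date"]), parse_date(row["end_date"]))
        yield from flatten(row.get("sub", []), level + 1)

periods = list(flatten(response["data"]["periods"]["root"]))
for level, yogini, planet, start, end in periods:
    print("  " * (level - 1) + f"{yogini} ({planet}): "
          f"{start:%Y-%m-%d} to {end:%Y-%m-%d}")
```

Note the explicit `%d/%m/%Y` format string: parsing these boundaries with an ISO-assuming parser would silently swap day and month for dates before the 13th.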

SECTION: COMPUTE CLASS MEDIUM_COMPUTE

SECTION: ERROR CONTRACT INVALID_PARAMS (local — caught before upstream call): None — all validation is upstream.

INVALID_PARAMS (upstream): None — upstream rejection surfaces as MCP INTERNAL_ERROR at the tool layer.

INTERNAL_ERROR: Any upstream API failure or timeout → MCP INTERNAL_ERROR.

Edge cases: — Root key is data.periods.root, not data.periods.
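That edge case — the tree living under data.periods.root rather than data.periods — is easy to guard against in client code. A minimal defensive accessor, with the shape checks assumed from the descriptions of this tool and asterwise_get_dasha:

```python
def get_period_rows(response: dict) -> list:
    """Return the top-level dasha rows regardless of tree placement.

    Per this tool's contract the tree is a dict with a "root" key;
    asterwise_get_dasha is described as using a plain data.periods[]
    array, so both shapes are accepted (an assumption, not verified
    against live output).
    """
    periods = response.get("data", {}).get("periods")
    if isinstance(periods, dict) and "root" in periods:
        return periods["root"]          # Yogini / Ashtottari shape
    if isinstance(periods, list):
        return periods                  # Vimshottari shape
    raise ValueError("unexpected periods shape")

# Usage:
rows = get_period_rows({"data": {"periods": {"root": [{"yogini": "Ulka"}]}}})
print(rows[0]["yogini"])  # Ulka
```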

SECTION: DO NOT CONFUSE WITH asterwise_get_dasha — Vimshottari planet periods with data.periods[] and optional levels 1–5, not Yogini names. asterwise_get_ashtottari_dasha — 108-year system with data.periods.root[] but planet-based rows, not Yoginis.

Parameters (JSON Schema)

Name | Required | Description | Default
birth | Yes | Birth data for a single person. | —
response_format | Yes | — | —

Output Schema

Parameters (JSON Schema)

Name | Required | Description
result | Yes | —
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, destructiveHint=false, etc., covering safety. The description adds valuable behavioral context: 'It does not validate or refuse charts outside classical Yogini applicability', 'Tree lives at data.periods.root[] — agents must not expect a top-level data.periods array', 'Calendar strings in periods use DD/MM/YYYY', and 'response_format=json serialises... response_format=markdown renders...'. These clarify constraints and output behavior beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (WHAT THIS TOOL COVERS, WORKFLOW, etc.), making it easy to scan. However, it is moderately long due to detailed output and error contracts; some redundancy exists (e.g., repeating root array location). Most sentences add value, but it could be more concise in places.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (astrological computation), rich annotations, and detailed output schema, the description is highly complete. It covers purpose, workflow, input/output contracts, error handling, compute class, and sibling distinctions. The output schema exists, so the description appropriately focuses on structural nuances (e.g., root array location) rather than re-listing return values.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (birth object well-described, response_format not). The description adds minimal parameter semantics: 'BirthData follows the global contract' and mentions response_format options but without explaining the enum values. It compensates somewhat for the coverage gap but doesn't fully detail the two parameters beyond what's implied by context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
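Given that schema gap, a client has to infer the parameter shapes from prose. A hypothetical call payload — the birth fields and the json/markdown enum values are assumptions drawn from the description, not from the published schema:

```python
# Hypothetical tool-call arguments for asterwise_get_yogini_dasha.
# Parameter names come from the schema table; the birth fields and the
# response_format enum values are illustrative assumptions only.
call_args = {
    "birth": {
        "date": "15/08/1990",   # assumed DD/MM/YYYY, matching output dates
        "time": "06:30",
        "latitude": 28.6139,
        "longitude": 77.2090,
    },
    # "json" for programmatic parsing, "markdown" for a human report;
    # both return identical underlying data per the description.
    "response_format": "json",
}

assert call_args["response_format"] in {"json", "markdown"}
print(sorted(call_args))  # ['birth', 'response_format']
```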

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Computes the eight-Yogini, 36-year Yogini Dasha cycle with two-level period trees and DD/MM/YYYY boundaries from birth data.' This specifies the exact computation (Yogini Dasha), resource (birth data), output structure (two-level period trees), and date format. It explicitly distinguishes from siblings like 'asterwise_get_dasha' (Vimshottari) and 'asterwise_get_ashtottari_dasha' (planet-based).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit 'SECTION: WORKFLOW' with 'BEFORE' and 'AFTER' recommendations, naming specific sibling tools ('asterwise_get_natal_chart' for context, 'asterwise_get_dasha' for comparison). 'SECTION: DO NOT CONFUSE WITH' explicitly lists alternative tools and explains differences in output structure and content, providing clear when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
