CoverSavvy Term Life Insurance Rates

Ownership verified

Server Details

Get realistic term life insurance rates from the CoverSavvy Term Life Index. Query monthly premiums by age (20–65), gender, health class, coverage ($100K–$4M), and term length (10–30 years). Rates are averages of the 2–3 lowest quotes from leading U.S. life insurance carriers.

Status: Healthy
Last Tested
Transport: Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

4 tools
compare_health_classes (A)

Given a fixed profile (age, gender, coverage, term), returns rates across all health classes (preferred, standard, standard_tobacco). Helps explain the cost impact of health rating or tobacco use.

Parameters (JSON Schema)

- age (required): Applicant age (20–65)
- term (required): Term length in years: 10, 15, 20, 25, or 30
- gender (required)
- coverage (required): Coverage amount in dollars (e.g. 500000 for $500K). Call list_options for valid values.
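To make the schema concrete, here is a sketch of a plausible argument payload for compare_health_classes, built only from the constraints documented above. The specific values (age 40, male, $500K) are illustrative assumptions, and in practice valid coverage amounts should be confirmed via list_options first:

```python
# Illustrative arguments for compare_health_classes, assembled from the
# documented constraints. Values are examples, not live data.
args = {
    "age": 40,            # must fall within 20-65
    "gender": "male",
    "coverage": 500_000,  # dollars: 500000 means $500K; confirm via list_options
    "term": 20,           # one of 10, 15, 20, 25, 30
}

# Client-side checks mirroring the documented ranges, useful before
# spending a tool call on an invalid profile.
assert 20 <= args["age"] <= 65
assert args["term"] in (10, 15, 20, 25, 30)
```

The server would then return one rate per health class (preferred, standard, standard_tobacco) for this fixed profile.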
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It discloses the return behavior (rates for preferred, standard, and standard_tobacco classes) but omits error handling, rate limits, caching behavior, or what happens with invalid coverage amounts. Just meets minimum viable disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence front-loads the core function; second sentence provides the value proposition. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description adequately hints at return values by enumerating the health classes, but falls short of describing the response structure, format, or error conditions. Sufficient for basic invocation but leaves operational gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 75% schema description coverage, the baseline is 3. The description merely lists the parameters ('age, gender, coverage, term') without adding semantic context beyond the schema (e.g., explaining that gender and age drive the rate differentials, or clarifying the coverage constraint by pointing to list_options).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('returns rates across all health classes') and the resource (health class ratings) given a fixed profile. It implicitly distinguishes from siblings like query_rates (single rate lookup) and compare_terms (cross-term comparison) by specifying the comparison dimension is health classes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides an implied use case ('Helps explain the cost impact of health rating or tobacco use'), but lacks explicit when-to-use guidance versus alternatives like query_rates or compare_terms. It also fails to mention the prerequisite of fetching valid coverage amounts from list_options, despite the schema hint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_terms (A)

Given a fixed profile (age, gender, health class, coverage), returns rates across all available term lengths (10, 15, 20, 25, 30 years). Helps answer 'should I get a 20 or 30 year term?'

Parameters (JSON Schema)

- age (required): Applicant age (20–65)
- gender (required)
- coverage (required): Coverage amount in dollars (e.g. 500000 for $500K). Call list_options for valid values.
- health_class (required): preferred = best non-tobacco rate, standard = average non-tobacco rate, standard_tobacco = tobacco user rate
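A sketch of a plausible compare_terms payload follows; the profile values are illustrative assumptions, and the coverage amount should be validated against list_options before the call:

```python
# Illustrative arguments for compare_terms: the profile is held fixed,
# and the server returns one rate per available term length.
args = {
    "age": 35,
    "gender": "female",
    "coverage": 1_000_000,        # confirm against list_options
    "health_class": "preferred",  # or "standard", "standard_tobacco"
}

# Client-side checks mirroring the documented constraints.
assert 20 <= args["age"] <= 65
assert args["health_class"] in ("preferred", "standard", "standard_tobacco")
```

Note that term is deliberately absent: varying the term length is what this tool does, so passing one would be a category error.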
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It lists specific available term lengths (10-30 years) and indicates multiple rates are returned, but lacks details on idempotency, side effects, rate limits, or the specific return data structure that would be expected for a read-only lookup tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. The first sentence front-loads the core functionality with parameter grouping and specific outputs; the second provides the concrete use case. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description partially compensates by indicating 'rates' are returned across multiple term lengths, but lacks specifics on return structure (objects vs arrays, field names). The input requirements are well-covered through schema and description. Adequate but gaps remain for a 4-parameter tool without output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75% (high). The description adds conceptual value by grouping the four parameters as a 'fixed profile,' helping the agent understand they function as a unit. However, it doesn't add syntax details or examples beyond what the schema already provides for individual fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (returns rates), resource (insurance rates), and unique scope (across term lengths 10-30 years). It distinguishes from siblings by emphasizing the 'fixed profile' constraint versus varying term lengths, clearly differentiating it from compare_health_classes and query_rates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage context through the example question 'should I get a 20 or 30 year term?' and the 'fixed profile' framing implies when to use this (holding demographics constant). References list_options for valid coverage values. Lacks explicit 'when not to use' or direct sibling comparisons, but the use case is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_options (A)

Returns all valid parameter values. Call this first to know what ages, coverages, terms, genders, and health classes are available.

Parameters (JSON Schema)

No parameters
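Since list_options takes no input and its response format is not documented on this page, the shape below is an assumption based on the five value domains the description enumerates; the coverage amounts in particular are illustrative only:

```python
# Assumed shape of a list_options response, inferred from the tool
# description (ages, coverages, terms, genders, health classes).
# Coverage values here are placeholders, not actual index data.
options = {
    "ages": list(range(20, 66)),
    "terms": [10, 15, 20, 25, 30],
    "genders": ["male", "female"],
    "health_classes": ["preferred", "standard", "standard_tobacco"],
    "coverages": [100_000, 250_000, 500_000, 1_000_000, 2_000_000],
}

# An agent would validate candidate inputs against these lists before
# calling query_rates or the comparison tools.
candidate_coverage = 500_000
assert candidate_coverage in options["coverages"]
assert 30 in options["terms"]
```

This is why the description says to call it first: the other three tools all reject values outside these domains.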

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses the data domains returned (ages, coverages, terms, genders, health classes), but omits operational details like return format, caching behavior, or idempotency that would be useful for a discovery endpoint.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first states purpose, second states workflow position and enumerates return domains. Front-loaded with critical information and appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description adequately compensates by listing the specific value domains returned. Sufficient for tool complexity, though return format specification would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so schema coverage is trivially 100%. The baseline of 4 applies, as there are no parameters requiring semantic clarification beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Returns' with clear resource 'all valid parameter values' and distinguishes from siblings by enumerating specific domains (ages, coverages, terms, genders, health classes) that are likely inputs to the comparison tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance with 'Call this first', clearly positioning it as a prerequisite step. Lacks explicit naming of sibling alternatives, but the sequencing instruction effectively signals when to use this versus the comparison tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_rates (A)

Returns the monthly and annual premium for a specific profile. All 5 parameters are required.

Parameters (JSON Schema)

- age (required): Applicant age (20–65)
- term (required): Term length in years: 10, 15, 20, 25, or 30
- gender (required)
- coverage (required): Coverage amount in dollars (e.g. 500000 for $500K). Call list_options for valid values.
- health_class (required): preferred = best non-tobacco rate, standard = average non-tobacco rate, standard_tobacco = tobacco user rate
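A sketch of a full query_rates payload follows; the profile values are illustrative assumptions. The check at the end guards the one constraint the description does state, that all five parameters are required:

```python
# Illustrative full payload for query_rates. Per the tool description,
# no parameter may be omitted.
args = {
    "age": 45,
    "term": 30,
    "gender": "male",
    "coverage": 250_000,          # confirm against list_options
    "health_class": "standard",
}

required = {"age", "term", "gender", "coverage", "health_class"}
assert set(args) == required, "query_rates requires all 5 parameters"
```

The response is a single monthly and annual premium for this exact profile; to see rates across health classes or term lengths, use the comparison tools instead.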
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return values (monthly/annual premiums) but omits safety characteristics (read-only/idempotent), error behaviors for invalid profiles, or rate limiting details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: front-loaded with purpose, followed immediately by critical invocation constraint. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description appropriately specifies the return values (monthly/annual premiums). It acknowledges all parameters are required, matching the schema complexity. Minor gap: does not reference sibling tools for comparative workflows.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80% (high), establishing a baseline of 3. The description emphasizes that all 5 parameters are required, reinforcing the schema's 'required' array, but does not add syntax guidance, formatting details, or semantic relationships beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action ('Returns') and resource ('monthly and annual premium') for a 'specific profile'. Implicitly distinguishes from comparison siblings (compare_health_classes, compare_terms) via 'specific profile' phrasing, though it does not explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Notes the constraint that 'All 5 parameters are required,' which aids invocation. However, it lacks guidance on when to use this single-profile query versus the comparison siblings (compare_health_classes, compare_terms) or when to call list_options first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
