CoverSavvy Term Life Insurance Rates
Server Details
Get realistic term life insurance rates from the CoverSavvy Term Life Index. Query monthly premiums by age (20–65), gender, health class, coverage ($100K–$4M), and term length (10–30 years). Rates are averages of the 2–3 lowest carrier quotes from leading U.S. life insurance carriers.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
4 tools

compare_health_classes
Given a fixed profile (age, gender, coverage, term), returns rates across all health classes (preferred, standard, standard_tobacco). Helps explain the cost impact of health rating or tobacco use.
| Name | Required | Description | Default |
|---|---|---|---|
| age | Yes | Applicant age (20–65) | |
| term | Yes | Term length in years: 10, 15, 20, 25, or 30 | |
| gender | Yes | | |
| coverage | Yes | Coverage amount in dollars (e.g. 500000 for $500K). Call list_options for valid values. | |
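Over the wire, a call to this tool takes the shape of a standard MCP `tools/call` request. The sketch below builds one as a plain Python dict; the argument values are illustrative (valid coverage amounts should come from list_options first), and the request envelope follows the generic MCP convention rather than anything this server documents.

```python
import json

# Hypothetical MCP tools/call request for compare_health_classes.
# Argument values are illustrative; fetch valid coverage amounts
# from list_options before calling.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "compare_health_classes",
        "arguments": {
            "age": 35,
            "gender": "male",
            "coverage": 500000,
            "term": 20,
        },
    },
}

print(json.dumps(request, indent=2))
```

Note that health_class is deliberately absent: this tool varies it across all three classes for you.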
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It discloses the return behavior (rates for preferred, standard, and standard_tobacco classes) but omits error handling, rate limits, caching behavior, or what happens with invalid coverage amounts. Just meets minimum viable disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. First sentence front-loads the core function; second sentence provides the value proposition. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description adequately hints at return values by enumerating the health classes, but falls short of describing the response structure, format, or error conditions. Sufficient for basic invocation but leaves operational gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 75% schema description coverage, the baseline is 3. The description merely lists the parameters ('age, gender, coverage, term') without adding semantic context beyond the schema (e.g., explaining that gender and age drive the rate differentials, or clarifying the coverage constraint referencing list_options).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('returns rates across all health classes') and the resource (health class ratings) given a fixed profile. It implicitly distinguishes from siblings like query_rates (single rate lookup) and compare_terms (cross-term comparison) by specifying the comparison dimension is health classes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an implied use case ('Helps explain the cost impact of health rating or tobacco use'), but lacks explicit when-to-use guidance versus alternatives like query_rates or compare_terms. Also fails to mention the prerequisite of valid coverage amounts from list_options despite the schema hint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_terms
Given a fixed profile (age, gender, health class, coverage), returns rates across all available term lengths (10, 15, 20, 25, 30 years). Helps answer 'should I get a 20 or 30 year term?'
| Name | Required | Description | Default |
|---|---|---|---|
| age | Yes | Applicant age (20–65) | |
| gender | Yes | | |
| coverage | Yes | Coverage amount in dollars (e.g. 500000 for $500K). Call list_options for valid values. | |
| health_class | Yes | preferred = best non-tobacco rate, standard = average non-tobacco rate, standard_tobacco = tobacco user rate | |
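Since the server reports errors only after the fact, a client can catch obvious mistakes up front. This is a minimal client-side validation sketch of a compare_terms profile against the constraints documented above; it is an assumption that gender and coverage enumerations must still be checked against list_options, which this sketch does not do.

```python
# Client-side sanity check of a compare_terms "fixed profile".
# Gender and coverage values are not validated here; a real client
# should check them against the enumerations from list_options.
VALID_HEALTH_CLASSES = {"preferred", "standard", "standard_tobacco"}

def validate_profile(age, gender, coverage, health_class):
    errors = []
    if not 20 <= age <= 65:
        errors.append("age must be between 20 and 65")
    if health_class not in VALID_HEALTH_CLASSES:
        errors.append(f"health_class must be one of {sorted(VALID_HEALTH_CLASSES)}")
    if coverage <= 0:
        errors.append("coverage must be a positive dollar amount")
    return errors

print(validate_profile(35, "female", 500000, "preferred"))  # []
print(validate_profile(70, "male", 500000, "smoker"))       # two errors
```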
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It lists specific available term lengths (10-30 years) and indicates multiple rates are returned, but lacks details on idempotency, side effects, rate limits, or the specific return data structure that would be expected for a read-only lookup tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first sentence front-loads the core functionality with parameter grouping and specific outputs; the second provides the concrete use case. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema exists, the description partially compensates by indicating 'rates' are returned across multiple term lengths, but lacks specifics on return structure (objects vs arrays, field names). The input requirements are well-covered through schema and description. Adequate but gaps remain for a 4-parameter tool without output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 75% (high). The description adds conceptual value by grouping the four parameters as a 'fixed profile,' helping the agent understand they function as a unit. However, it doesn't add syntax details or examples beyond what the schema already provides for individual fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (returns rates), resource (insurance rates), and unique scope (across term lengths 10-30 years). It distinguishes from siblings by emphasizing the 'fixed profile' constraint versus varying term lengths, clearly differentiating it from compare_health_classes and query_rates.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage context through the example question 'should I get a 20 or 30 year term?' and the 'fixed profile' framing implies when to use this (holding demographics constant). References list_options for valid coverage values. Lacks explicit 'when not to use' or direct sibling comparisons, but the use case is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_options
Returns all valid parameter values. Call this first to know what ages, coverages, terms, genders, and health classes are available.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
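The "call this first" workflow might look like the sketch below. The tool publishes no output schema, so the response shape here is an assumption (a dict of valid values per parameter), and the coverage tiers are illustrative values within the documented $100K–$4M range, not the server's actual list.

```python
# Assumed shape of a list_options response; the real response format
# is undocumented, and the coverage tiers below are illustrative only.
options = {
    "ages": list(range(20, 66)),
    "terms": [10, 15, 20, 25, 30],
    "genders": ["male", "female"],
    "health_classes": ["preferred", "standard", "standard_tobacco"],
    "coverages": [100_000, 250_000, 500_000, 1_000_000, 2_000_000, 4_000_000],
}

def nearest_coverage(requested, valid):
    # Snap an arbitrary dollar amount to the closest valid coverage tier.
    return min(valid, key=lambda v: abs(v - requested))

print(nearest_coverage(600_000, options["coverages"]))  # 500000
```

Snapping a user's requested amount to a valid tier avoids the undocumented error behavior for invalid coverage values that the assessments above flag.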
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses the data domains returned (ages, coverages, terms, genders, health classes), but omits operational details like return format structure, caching behavior, or idempotency that would be useful for a discovery endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states purpose, second states workflow position and enumerates return domains. Front-loaded with critical information and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately compensates by listing the specific value domains returned. Sufficient for tool complexity, though return format specification would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has 0 parameters and 100% schema coverage trivially. Baseline 4 applies as there are no parameters requiring semantic clarification beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Returns' with clear resource 'all valid parameter values' and distinguishes from siblings by enumerating specific domains (ages, coverages, terms, genders, health classes) that are likely inputs to the comparison tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance with 'Call this first', clearly positioning it as a prerequisite step. Lacks explicit naming of sibling alternatives, but the sequencing instruction effectively signals when to use this versus the comparison tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_rates
Returns the monthly and annual premium for a specific profile. All 5 parameters are required.
| Name | Required | Description | Default |
|---|---|---|---|
| age | Yes | Applicant age (20–65) | |
| term | Yes | Term length in years: 10, 15, 20, 25, or 30 | |
| gender | Yes | | |
| coverage | Yes | Coverage amount in dollars (e.g. 500000 for $500K). Call list_options for valid values. | |
| health_class | Yes | preferred = best non-tobacco rate, standard = average non-tobacco rate, standard_tobacco = tobacco user rate | |
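Because query_rates rejects partial profiles ("All 5 parameters are required"), a client can surface missing fields before making the call. A minimal sketch, assuming profiles are passed around as plain dicts:

```python
# The five parameters query_rates requires, per its schema.
REQUIRED = {"age", "gender", "coverage", "term", "health_class"}

def missing_params(arguments):
    # Report which required parameters are absent, sorted for stable output.
    return sorted(REQUIRED - arguments.keys())

profile = {"age": 42, "gender": "female", "coverage": 250000, "term": 30}
print(missing_params(profile))  # ['health_class']
```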
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the return values (monthly/annual premiums) but omits safety characteristics (read-only/idempotent), error behaviors for invalid profiles, or rate limiting details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: front-loaded with purpose, followed immediately by critical invocation constraint. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description appropriately specifies the return values (monthly/annual premiums). It acknowledges all parameters are required, matching the schema complexity. Minor gap: does not reference sibling tools for comparative workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 80% (high), establishing a baseline of 3. The description emphasizes that all 5 parameters are required, reinforcing the schema's 'required' array, but does not add syntax guidance, formatting details, or semantic relationships beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Returns') and resource ('monthly and annual premium') for a 'specific profile'. Implicitly distinguishes from comparison siblings (compare_health_classes, compare_terms) via 'specific profile' phrasing, though it does not explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Notes the constraint that 'All 5 parameters are required,' which aids invocation. However, it lacks guidance on when to use this single-profile query versus the comparison siblings (compare_health_classes, compare_terms) or when to call list_options first.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!