Server Details

Insurance brokerage for AI agents — quote, bind, and settle in USDC

Status: Healthy
Transport: Streamable HTTP
Repository: srotzin/hive-mcp-insurance-broker
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 3 of 3 tools scored. Lowest: 3.5/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clear, distinct purpose: listing products, requesting quotes, and summarizing the day's activity. No overlap in functionality.

Naming Consistency: 5/5

All tool names follow a consistent 'insurance_' prefix followed by a noun (products, quote, today), making them easy to distinguish.

Tool Count: 4/5

Three tools is minimal but appropriate for a broker-only service that forwards quotes and lists products without policy management.

Completeness: 3/5

Covers core broker functions but lacks tools for quote history comparison or user account management, though domain scope justifies limited surface.

Available Tools

3 tools
insurance_products (A)

List all available coverage products across providers (Nexus Mutual, Sherlock, Risk Harbor, InsurAce). Returns provider, type, capacity, and current cost-of-coverage where the upstream exposes it. Real third-party listings — Hive is broker-only and does not underwrite.

Parameters

No parameters
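The description names the returned fields (provider, type, capacity, cost-of-coverage) but the server publishes no output schema. A minimal sketch of how an agent might rank such a result by capacity; the field names and sample records below are assumptions for illustration, not the server's actual response shape:

```python
# Hypothetical shape of an insurance_products result; field names are
# assumptions based on the description (provider, type, capacity, cost).
products = [
    {"provider": "nexus_mutual", "type": "protocol", "capacity_usd": 5_000_000, "cost_pct": 2.6},
    {"provider": "sherlock", "type": "protocol", "capacity_usd": 1_200_000, "cost_pct": None},
    {"provider": "insurace", "type": "protocol", "capacity_usd": 800_000, "cost_pct": 1.9},
]

def rank_by_capacity(items):
    """Return products sorted by capacity, largest first.

    Tolerates missing cost_pct, since cost-of-coverage is only
    returned 'where the upstream exposes it'.
    """
    return sorted(items, key=lambda p: p["capacity_usd"], reverse=True)

top = rank_by_capacity(products)
```

Note the `None` cost: any consumer of this tool should treat cost-of-coverage as optional, per the description.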

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes return fields and notes data completeness ('where the upstream exposes it'). States real third-party listings and broker-only role, giving behavioral context beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first covers purpose and providers, second covers returns and nature. No filler, every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with no parameters and no output schema, the description is complete. It specifies providers, return fields, and the broker-only role.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters in schema, so baseline is 4. Description adds no parameter info, but that's appropriate as there are none. Schema coverage is 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'List' and resource 'all available coverage products across providers' with explicit provider names. Distinguishes from siblings by implying a broad listing vs. quoting or current data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides context on usage: it returns an overview of all products. There is no explicit guidance on when not to use it or which alternative tools exist, but the broker-only note helps the agent infer appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

insurance_quote (A)

Route a quote request to one or all underwriters. Hive forwards the request to the underwriter's own quote endpoint and returns the response verbatim. Hive does NOT bind coverage, accept premium, or take custody.

Parameters

- protocol (required): Protocol/product identifier (e.g. '2' for Nexus Mutual Aave v2, or the productId from /products)
- provider (optional): Provider key. If omitted, the quote routes to all four providers. One of: nexus_mutual, sherlock, risk_harbor, insurace
- duration_days (required): Coverage duration in days (1–365)
- cover_amount_usd (required): Notional coverage in USD
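The schema constraints above (duration 1–365, four known provider keys, provider fan-out when omitted) can be pre-checked on the client side before issuing the call. A minimal validation sketch; the function name and error messages are illustrative and not part of the server's API:

```python
# Provider keys taken from the tool's parameter schema.
PROVIDERS = {"nexus_mutual", "sherlock", "risk_harbor", "insurace"}

def validate_quote_args(protocol, duration_days, cover_amount_usd, provider=None):
    """Return a list of schema violations for an insurance_quote call.

    An empty list means the arguments satisfy the documented constraints.
    Omitting `provider` is valid: the quote then routes to all four providers.
    """
    errors = []
    if not protocol:
        errors.append("protocol is required")
    if provider is not None and provider not in PROVIDERS:
        errors.append(f"unknown provider: {provider}")
    if not 1 <= duration_days <= 365:
        errors.append("duration_days must be between 1 and 365")
    if cover_amount_usd <= 0:
        errors.append("cover_amount_usd must be positive")
    return errors
```

For example, `validate_quote_args("2", 30, 100_000)` passes, while a 400-day duration or an unrecognized provider key is rejected before the request ever reaches the underwriters.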
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It explains that the tool forwards the request verbatim to the underwriter and returns the response without modification. It also explicitly states that Hive does not bind coverage, accept premium, or take custody, which are crucial behavioral traits. Missing details on error handling or timeouts, but still substantial transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences with no fluff. The first sentence states the purpose, the second explains the mechanism, and the third clarifies limitations. Every sentence earns its place, and the most important information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters, no output schema, and no annotations, the description provides a good level of completeness. It covers purpose, behavior, and non-actions. However, it lacks details about the response format or error scenarios. Still, it is sufficient for an AI agent to understand the tool's basic functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the description adds no extra meaning. The schema already describes each parameter adequately. The description's mention of 'one or all' is already implied by the provider parameter description. Baseline score of 3 is appropriate since no additional semantic value is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Route a quote request to one or all underwriters.' It specifies the action (route), resource (quote request), and target (underwriters). It distinguishes from siblings by focusing on quoting, while siblings likely deal with products and today's data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context on when to use the tool: for obtaining a quote, and notes that omitting the provider routes to all. It explicitly states what the tool does NOT do (bind coverage, accept premium, take custody), which helps avoid misuse. However, it does not mention alternatives or when not to use it beyond the negative statements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

insurance_today (A)

24-hour rollup: total listing count + top providers by capacity. Returns request count and quote count for the rolling window.

Parameters

No parameters
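The rollup described above (listing count, top providers by capacity, request and quote counts over a rolling 24-hour window) is straightforward to reproduce client-side from raw records. A sketch under assumed data shapes; the event and listing structures below are hypothetical, since the server exposes only the aggregated result:

```python
from collections import Counter
from datetime import datetime, timedelta, timezone

def rollup_24h(events, listings, now):
    """Compute a 24-hour summary in the spirit of insurance_today.

    `events` are hypothetical {"kind": "request"|"quote", "at": datetime}
    records; `listings` are {"provider": str, "capacity_usd": number} records.
    """
    window_start = now - timedelta(hours=24)
    recent = [e for e in events if e["at"] >= window_start]  # rolling window
    capacity = Counter()
    for listing in listings:
        capacity[listing["provider"]] += listing["capacity_usd"]
    return {
        "listing_count": len(listings),
        "top_providers": [p for p, _ in capacity.most_common(3)],
        "request_count": sum(1 for e in recent if e["kind"] == "request"),
        "quote_count": sum(1 for e in recent if e["kind"] == "quote"),
    }

now = datetime.now(timezone.utc)
events = [
    {"kind": "request", "at": now - timedelta(hours=2)},
    {"kind": "quote", "at": now - timedelta(hours=5)},
    {"kind": "quote", "at": now - timedelta(hours=30)},  # outside the window
]
listings = [
    {"provider": "nexus_mutual", "capacity_usd": 5_000_000},
    {"provider": "sherlock", "capacity_usd": 1_200_000},
]
summary = rollup_24h(events, listings, now)
```

The 30-hour-old quote is excluded by the rolling window, which is the behavior the phrase "rolling window" in the description implies.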

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses it's a rollup with counts and top providers, but lacks details on refresh rate, time zone, or whether data is cached. No annotations to override, description carries full burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences that directly state purpose and outputs. No unnecessary text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple 0-parameter rollup, the description adequately covers return values. Could mention refresh interval or time window specifics, but sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters, so schema provides no semantics. Description adds meaningful context by explaining what the tool returns (listing count, top providers, request/quote counts).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it's a 24-hour rollup returning listing count, top providers, request count, and quote count. It distinguishes from siblings (insurance_products and insurance_quote) by being an aggregate summary, but does not explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus its siblings. It mentions a 'rolling window' but does not specify when the tool is appropriate or give exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
