Hive Insurance Broker
Insurance brokerage for AI agents — quote, bind, and settle in USDC
Server Details
- Status: Healthy
- Last Tested: —
- Transport: Streamable HTTP
- URL: —
- Repository: srotzin/hive-mcp-insurance-broker
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 3 of 3 tools scored. Lowest: 3.5/5.
- Each tool has a clear, distinct purpose: listing products, requesting quotes, and daily summary. No overlap in functionality.
- All tool names follow a consistent 'insurance_' prefix followed by a noun (products, quote, today), making them easy to distinguish.
- Three tools is a minimal surface, but appropriate for a broker-only service that forwards quotes and lists products without policy management.
- Covers core broker functions but lacks tools for quote history comparison or user account management, though the domain scope justifies a limited surface.
Available Tools
3 tools

insurance_products
List all available coverage products across providers (Nexus Mutual, Sherlock, Risk Harbor, InsurAce). Returns provider, type, capacity, and current cost-of-coverage where the upstream exposes it. Real third-party listings — Hive is broker-only and does not underwrite.
No parameters.
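To illustrate how an agent might consume the listings this tool returns, here is a small, hypothetical filtering helper. The field names (`provider`, `type`, `capacity_usd`) are assumptions inferred from the tool description, not a documented response schema.

```python
# Hypothetical sketch: narrowing an insurance_products response.
# Field names are assumed from the description ("provider, type,
# capacity"), not taken from a published schema.

def filter_products(products, provider=None, min_capacity_usd=0):
    """Return listings matching a provider key with sufficient capacity."""
    return [
        p for p in products
        if (provider is None or p["provider"] == provider)
        and p.get("capacity_usd", 0) >= min_capacity_usd
    ]

# Illustrative sample data in the assumed shape.
sample = [
    {"provider": "nexus_mutual", "type": "protocol_cover", "capacity_usd": 5_000_000},
    {"provider": "sherlock", "type": "audit_cover", "capacity_usd": 250_000},
]
```

A client could combine both filters, e.g. `filter_products(sample, provider="nexus_mutual", min_capacity_usd=1_000_000)`, to shortlist providers before requesting quotes.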
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes return fields and notes data completeness ('where the upstream exposes it'). States real third-party listings and broker-only role, giving behavioral context beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first covers purpose and providers, second covers returns and nature. No filler, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters and no output schema, the description is complete. It specifies providers, return fields, and the broker-only role.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so baseline is 4. Description adds no parameter info, but that's appropriate as there are none. Schema coverage is 100%.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'List' and resource 'all available coverage products across providers' with explicit provider names. Distinguishes from siblings by implying a broad listing vs. quoting or current data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage context: the tool returns an overview of all products. There are no explicit when-not cases or alternative tools, but the description clarifies the broker-only nature, helping the agent infer appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
insurance_quote
Route a quote request to one or all underwriters. Hive forwards the request to the underwriter's own quote endpoint and returns the response verbatim. Hive does NOT bind coverage, accept premium, or take custody.
| Name | Required | Description | Default |
|---|---|---|---|
| protocol | Yes | Protocol/product identifier (e.g. '2' for Nexus Mutual Aave v2, or the productId from /products) | |
| provider | No | Provider key. If omitted, quote routes to all four providers. One of: nexus_mutual, sherlock, risk_harbor, insurace | |
| duration_days | Yes | Coverage duration in days (1–365) | |
| cover_amount_usd | Yes | Notional coverage in USD |
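The parameters above map directly onto an MCP `tools/call` request. The sketch below builds such a request client-side and enforces the documented constraints (`duration_days` in 1–365, the four known provider keys). The JSON-RPC envelope follows the MCP convention; treat the helper itself as illustrative, not official SDK code.

```python
# Sketch: constructing a tools/call payload for insurance_quote.
# Provider keys and the 1-365 day range come from the parameter table;
# the envelope shape is the standard MCP JSON-RPC form.

PROVIDERS = {"nexus_mutual", "sherlock", "risk_harbor", "insurace"}

def build_quote_request(protocol, duration_days, cover_amount_usd,
                        provider=None, request_id=1):
    if not 1 <= duration_days <= 365:
        raise ValueError("duration_days must be in 1-365")
    if provider is not None and provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    args = {
        "protocol": protocol,
        "duration_days": duration_days,
        "cover_amount_usd": cover_amount_usd,
    }
    if provider is not None:
        # Omitting provider routes the quote to all four providers.
        args["provider"] = provider
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "insurance_quote", "arguments": args},
    }
```

For example, `build_quote_request("2", 30, 100_000)` produces a request with no `provider` key, which the description says fans out to all four underwriters.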
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It explains that the tool forwards the request verbatim to the underwriter and returns the response without modification. It also explicitly states that Hive does not bind coverage, accept premium, or take custody, which are crucial behavioral traits. Missing details on error handling or timeouts, but still substantial transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences with no fluff. The first sentence states the purpose, the second explains the mechanism, and the third clarifies limitations. Every sentence earns its place, and the most important information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and no annotations, the description provides a good level of completeness. It covers purpose, behavior, and non-actions. However, it lacks details about the response format or error scenarios. Still, it is sufficient for an AI agent to understand the tool's basic functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the description adds no extra meaning. The schema already describes each parameter adequately. The description's mention of 'one or all' is already implied by the provider parameter description. Baseline score of 3 is appropriate since no additional semantic value is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Route a quote request to one or all underwriters.' It specifies the action (route), resource (quote request), and target (underwriters). It distinguishes from siblings by focusing on quoting, while siblings likely deal with products and today's data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear context on when to use the tool: for obtaining a quote, and notes that omitting the provider routes to all. It explicitly states what the tool does NOT do (bind coverage, accept premium, take custody), which helps avoid misuse. However, it does not mention alternatives or when not to use it beyond the negative statements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
insurance_today
24-hour rollup: total listing count + top providers by capacity. Returns request count and quote count for the rolling window.
No parameters.
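The description's "rolling window" can be made concrete with a short sketch: counting events whose timestamps fall within the last 24 hours. How Hive actually computes its rollup is not documented here; this only illustrates the concept the description implies.

```python
# Illustrative sketch of a rolling 24-hour rollup: request and quote
# counts over the trailing day. The event shape (timestamp, kind) is
# an assumption for demonstration, not Hive's internal model.
from datetime import datetime, timedelta, timezone

def rolling_counts(events, now=None, window=timedelta(hours=24)):
    """events: iterable of (timestamp, kind), kind in {'request', 'quote'}."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - window
    recent = [kind for ts, kind in events if ts > cutoff]
    return {
        "request_count": recent.count("request"),
        "quote_count": recent.count("quote"),
    }
```

Because the window is computed from `now` at call time, repeated calls drift forward continuously rather than resetting at midnight, which matches the "rolling" phrasing.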
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses it's a rollup with counts and top providers, but lacks details on refresh rate, time zone, or whether data is cached. No annotations to override, description carries full burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 17 words, directly states purpose and outputs. No unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 0-parameter rollup, the description adequately covers return values. Could mention refresh interval or time window specifics, but sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters, so schema provides no semantics. Description adds meaningful context by explaining what the tool returns (listing count, top providers, request/quote counts).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's a 24-hour rollup returning listing count, top providers, request count, and quote count. It distinguishes from siblings (insurance_products and insurance_quote) by being an aggregate summary, but does not explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings. It mentions 'rolling window' but does not specify when it is appropriate or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
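Before publishing, it can help to sanity-check the file locally. The sketch below validates only the structure shown in the example above (`$schema` value, a non-empty `maintainers` list with plausible email addresses); Glama may enforce additional rules server-side.

```python
# Minimal local check of a /.well-known/glama.json file before
# publishing. Validates only the fields shown in the example; this is
# not Glama's official validation logic.
import json
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
EXPECTED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def validate_glama_json(text):
    doc = json.loads(text)
    if doc.get("$schema") != EXPECTED_SCHEMA:
        raise ValueError("unexpected or missing $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    for m in maintainers:
        if not EMAIL_RE.match(m.get("email", "")):
            raise ValueError(f"invalid maintainer email: {m!r}")
    return doc
```

Remember that passing this check is not sufficient on its own: the email must also match the one on your Glama account for verification to succeed.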
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!