Hive Gateway
Server Details
Unified gateway hosting 5 Hive Civilization MCP servers (evaluator, trade, depin, compute-grid…
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: srotzin/hive-mcp-gateway
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 38 of 38 tools scored. Lowest: 3.1/5.
Tools are cleanly grouped by domain prefixes (e.g., audit-readiness__, compute-grid__), and within each group, every tool has a distinct purpose. For example, morph tools cover different brood operations, trade tools cover invoice lifecycle, and dispute tools offer distinct signals. No two tools appear to do the same thing.
All tools follow a consistent pattern: domain prefix (e.g., morph__, trade__) followed by verb_noun in snake_case. Verbs are descriptive (get, list, create) and nouns are specific. Even repetitive prefixes (audit-readiness__audit_) are applied uniformly, maintaining a predictable structure.
With 38 tools covering multiple domains, the count is high but justified by the breadth of services (audit, compute, DePIN, dispute, evaluation, insurance, morph, trade). The morph domain alone contributes 15 tools, which is reasonable for its complex identity system. Slightly above typical expectations but still well-scoped.
Each domain offers core functionality without critical gaps. For example, evaluator has submit/get/attest, trade has create/get/dispute, and morph provides comprehensive brood management. Minor missing operations (e.g., update/delete for DePIN listings) are gaps, but agents can still complete main workflows.
Available Tools
38 tools

audit-readiness__audit_get_tier_pricing (Grade: A)
Returns the four published HiveAudit tier prices and bracket thresholds: STARTER ($500, <$500K exposure), STANDARD ($1,500, <$5M), ENTERPRISE ($2,500, <$50M), FEDERAL ($7,500/yr, ≥$50M or federal agency). Inlined — no backend call.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
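Since the listing above does not publish the gateway URL, here is a minimal call sketch assuming the MCP Python SDK's Streamable HTTP client, with a placeholder endpoint:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder endpoint; the listing above does not publish the gateway URL.
GATEWAY_URL = "https://example.com/mcp"

async def main() -> None:
    # Streamable HTTP is the transport this gateway advertises.
    async with streamablehttp_client(GATEWAY_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # Zero-parameter tool: returns the four published tier prices inline.
            result = await session.call_tool(
                "audit-readiness__audit_get_tier_pricing", arguments={}
            )
            print(result.content)

asyncio.run(main())
```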
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explicitly states 'Inlined — no backend call,' which informs the agent of fast, reliable execution. This compensates for the lack of annotations. It also discloses the exact return data, so the agent knows what to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence packs all essential information: what it returns, the specific data (tier names, prices, thresholds), and a behavioral note. No wasted words; perfect front-loading.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description fully explains the return value with concrete examples. The tool is simple and static, so this level of detail is sufficient and complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters (the input schema is empty), and schema coverage is 100%. The description adds value by detailing the exact return structure (tiers, prices, thresholds) that the schema cannot convey. Baseline adjusted upward for clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the four published HiveAudit tier prices and bracket thresholds, with specific dollar amounts and exposure limits. This verb+resource+detail is unambiguous and distinct from the sibling audit_readiness_score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like audit_readiness_score. However, the purpose is self-explanatory for retrieving pricing, and the sibling tools list is available. A direct 'use this for pricing, not for scoring' would improve it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
audit-readiness__audit_readiness_score (Grade: A)
Compute a multi-jurisdictional AI compliance readiness score for an organization. Returns penalty exposure (EUR + USD), specific compliance gaps citing the regulation article, recommended audit tier (STARTER/STANDARD/ENTERPRISE/FEDERAL), and the nearest enforcement deadline. Penalty math sources EU AI Act Art 99, Colorado AI Act SB 24-205, CCPA, Cal SB 942, NYC LL 144, HIPAA. Free, no auth, rate-limited 10/IP/hr.
| Name | Required | Description | Default |
|---|---|---|---|
| company | No | Organization name (optional; populates the assessment record). | |
| sectors | No | Industries: ["finance", "healthcare", "employment", "education", "lending", "insurance", "criminal_justice", "biometric", "critical_infrastructure"]. | |
| frameworks | Yes | Regulations to score: ["eu_ai_act", "co_ai_act", "ccpa", "ca_sb942", "nyc_ll144", "hipaa", "gdpr", "nist_ai_rmf"]. | |
| agent_count | Yes | Number of distinct AI agents in production. | |
| jurisdictions | Yes | Where the system operates: ["EU", "US-CO", "US-CA", "US-NY", ...]. Drives which regulations apply. | |
| data_volume_records | Yes | Total records processed (drives CCPA / GDPR scoping). | |
| organization_country | Yes | ISO 3166-1 alpha-2 country code (e.g. "US", "DE", "FR", "GB"). | |
| monthly_inference_calls | Yes | Inference call volume per month (drives tier selection). |
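A sketch of an arguments payload for this tool, assembled from the parameter table above; every value is illustrative:

```python
import json

# Illustrative arguments for audit-readiness__audit_readiness_score,
# built from the parameter table above; all values are made up.
arguments = {
    "company": "Acme Robotics",            # optional; labels the assessment record
    "sectors": ["finance", "employment"],
    "frameworks": ["eu_ai_act", "nyc_ll144"],
    "agent_count": 12,
    "jurisdictions": ["EU", "US-NY"],      # drives which regulations apply
    "data_volume_records": 2_500_000,      # drives CCPA / GDPR scoping
    "organization_country": "US",          # ISO 3166-1 alpha-2
    "monthly_inference_calls": 8_000_000,  # drives tier selection
}

# The gateway expects these inside a standard MCP tools/call request.
print(json.dumps({"name": "audit-readiness__audit_readiness_score",
                  "arguments": arguments}, indent=2))
```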
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses free access, no auth, and rate limits. However, it lacks details on data persistence, idempotency, or potential side effects, making it minimally adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise at five sentences and front-loaded with the main action; each sentence adds distinct value (purpose, outputs, sources, constraints). No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description explains return values (penalty exposure, gaps, tier, deadline) and cites specific laws. It covers the main use case but omits error handling or edge cases. Complete enough for a computation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by summarizing return values but does not enhance parameter meanings beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it 'Computes a multi-jurisdictional AI compliance readiness score for an organization,' specifying the verb and resource. It distinguishes from the sibling tool 'audit_get_tier_pricing' by focusing on scoring rather than pricing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides usage guidelines like 'Free, no auth, rate-limited 10/IP/hr,' but does not explicitly contrast with the sibling tool or specify when not to use this tool. Implicit differentiation is present but not explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute-grid__computegrid_get_capacity (Grade: A)
Read-only capacity view from the Capacity Listener fleet. Per spec section 8: NO bids, NO hedges, NO positions, NO derivatives — pure read-only telemetry.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavior. It states read-only and references a spec, but omits details like return format or any limitations, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, front-loading the key information and avoiding any redundant or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description should elaborate on return values. It specifies the source (Capacity Listener fleet) but lacks details on the output structure, making it somewhat incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters with 100% coverage, so no parameter clarification is needed. The description adds context about the tool's purpose and scope.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a read-only capacity view from the Capacity Listener fleet, distinguishing it from siblings by explicitly stating 'NO bids, NO hedges, NO positions, NO derivatives'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear exclusion criteria (no bids, hedges, etc.) and references spec section 8 for context, though it does not explicitly state when to use this tool over siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute-grid__computegrid_list_agents (Grade: A)
List the 15-agent compute grid fleet across all 6 driver types. Returns agent type, count, and revenue model. No auth required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that no authentication is needed and that it lists all agents and driver types. It does not mention potential side effects or data freshness, but the operation is inherently read-only and safe. Additional details like rate limits or staleness would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 15 words, efficiently conveying the core purpose and key details (scope and response content). No extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (no parameters, no output schema), the description adequately explains the tool's action and return values. The addition of 'No auth required' is useful context. It could mention if the list is static or real-time, but overall sufficient for a list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to add parameter information. The baseline for no parameters is 4, as the tool is fully defined by its purpose and return values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'List' with a clear resource: '15-agent compute grid fleet across all 6 driver types'. It specifies what is returned: 'agent type, count, and revenue model'. It distinguishes from siblings like 'get_capacity' and 'verify_proof' which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'No auth required', which is a helpful precondition. However, it does not provide explicit guidance on when to use this tool over alternatives (e.g., when to use 'get_capacity' instead) or any exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute-grid__computegrid_verify_proof (Grade: A)
Submit a compute job for verification by the Verification Fleet (4 agents). Returns Groth16-style proof. $0.001/proof in USDC.
| Name | Required | Description | Default |
|---|---|---|---|
| driver | Yes | Source driver; one of: ionet, render, akash, aleo, custom | |
| job_id | Yes | Job ID to verify | |
| submitter_did | Yes | DID of the submitting agent | |
| claimed_output_hash | Yes | SHA-256 of claimed output |
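A hypothetical arguments payload, assuming the claimed output hash is sent as a plain hex digest (the exact encoding is not specified):

```python
import hashlib
import json

# Illustrative arguments for compute-grid__computegrid_verify_proof;
# the job ID, DID, and output bytes are invented for this sketch.
claimed_output = b"...job output bytes..."
arguments = {
    "driver": "ionet",                          # one of the listed source drivers
    "job_id": "job-placeholder-001",            # ID issued by the source driver
    "submitter_did": "did:hive:agent:example",
    # SHA-256 per the parameter description; hex encoding is an assumption.
    "claimed_output_hash": hashlib.sha256(claimed_output).hexdigest(),
}
print(json.dumps(arguments, indent=2))
```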
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It mentions the Verification Fleet and cost, but does not disclose whether the call is synchronous or asynchronous, side effects, rate limits, or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. It effectively communicates the action, fleet size, return type, and cost in a compact form.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description provides purpose, return type, and cost but lacks details on blocking behavior, error handling, and lifecycle of the proof. It is adequate but has gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with descriptions for all parameters. The description adds little beyond the schema, only clarifying the overall purpose and return type. Baseline is 3, and the description does not elevate it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool submits a compute job for verification and returns a Groth16-style proof. It distinguishes from siblings like evaluator__evaluator_submit_job by focusing on verification rather than submission.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for verifying a job result after submission, but it does not explicitly state when to use this tool versus alternatives (e.g., evaluator__evaluator_submit_job) or mention prerequisites like having a job_id from a prior submission.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
depin__depin_create_listing (Grade: B)
List physical infrastructure capacity (storage TB, compute cores, GPU VRAM, bandwidth Mbps, sensor sample rate, etc.). 22 metadata fields supported. Match fee 0.15%.
| Name | Required | Description | Default |
|---|---|---|---|
| kind | No | Listing kind (default: depin_provider) | |
| region | No | Geographic region | |
| vram_gb | No | VRAM for GPU providers (GB) | |
| agent_id | Yes | Operator agent ID | |
| gpu_model | No | GPU model for GPU providers | |
| unit_label | Yes | Pricing unit, e.g. 'per TB-month', 'per GPU-hour' | |
| capacity_gb | No | Capacity for storage providers (GB) | |
| operator_did | Yes | Operator DID for trust scoring | |
| payout_address | Yes | Settlement address | |
| unit_rate_usdc | Yes | Price per unit in USDC | |
| throughput_mbps | No | Throughput for bandwidth providers (Mbps) | |
| provider_category | Yes | One of: storage, compute, gpu, bandwidth, energy, sensor, wireless | |
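A sketch of a GPU-provider listing built from the required and category-specific fields above; all values are invented:

```python
import json

# Illustrative GPU-provider listing for depin__depin_create_listing;
# every value here is a placeholder.
arguments = {
    "agent_id": "agent-gpu-007",
    "operator_did": "did:hive:operator:example",
    "payout_address": "0x0000000000000000000000000000000000000000",  # placeholder
    "provider_category": "gpu",
    "unit_label": "per GPU-hour",
    "unit_rate_usdc": 1.25,
    # Category-specific optional fields for GPU providers:
    "gpu_model": "RTX 4090",
    "vram_gb": 24,
    "region": "us-east",
    # "kind" is omitted; it defaults to depin_provider per the schema.
}
print(json.dumps(arguments, indent=2))
```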
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It does not mention side effects (e.g., on-chain recording), permissions, rate limits, or what happens upon creation. The only extra info (22 metadata fields) is ambiguous and may confuse agents.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (two sentences) and front-loads the purpose. However, the phrase '22 metadata fields supported' is vague and potentially misleading given the schema only shows 12 parameters. Every sentence should earn its place; the second sentence adds useful fee info but creates confusion.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no output schema and no annotations, the description is incomplete. It does not explain return values (e.g., listing ID), success/failure indicators, or the discrepancy between 22 mentioned fields and 12 schema parameters. Critical behavioral context is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameter descriptions are already clear. The description adds value by mentioning the match fee (0.15%) and a count of metadata fields (22), though the latter conflicts with the schema showing 12 parameters. This adds context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists physical infrastructure capacity and provides specifics like '22 metadata fields supported' and 'Match fee 0.15%'. It identifies the verb (list) and resource (infrastructure capacity), distinguishing it from sibling tools like depin_get_match_fee and depin_list_providers, though not explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing infrastructure capacity but provides no explicit guidance on when to use this tool versus alternatives. No exclusions or context about prerequisites are given, relying on the tool's name and sibling context to infer purpose.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
depin__depin_get_match_fee (Grade: A)
Get the current DePIN marketplace match fee (currently 0.15%). Returned alongside settlement currencies and chains.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool returns the fee along with settlement currencies and chains, giving insight into the output. It does not mention side effects (none expected) or permissions, but for a read-only tool with no parameters, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: one sentence that front-loads the main purpose. Every word earns its place, with no wasted characters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately conveys the return information (fee, settlement currencies, chains). For a simple getter with no parameters, it is complete enough for an agent to understand what to expect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, and the description does not need to add parameter info. The baseline for 0 parameters is 4, and the description adds no further context, which is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the current DePIN marketplace match fee, which is a specific verb+resource combination. It distinguishes itself from siblings by focusing on the match fee, unlike other depin tools that handle listings or providers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly provide usage guidance, such as when to use this tool versus alternatives. It implies a simple read operation but lacks context on prerequisites or scenarios where this fee data is needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
depin__depin_list_providers (Grade: A)
List DePIN provider listings. Filter by category (storage, compute, gpu, bandwidth, energy, sensor, wireless), region, or capacity. No auth required.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | Filter by region (e.g. us-east, eu-west) | |
| provider_category | No | One of: storage, compute, gpu, bandwidth, energy, sensor, wireless | |
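Both parameters are optional filters, so a minimal filtered call might look like this (values illustrative):

```python
import json

# Both parameters are optional filters; omit them entirely to list everything.
arguments = {
    "provider_category": "storage",
    "region": "eu-west",
}
print(json.dumps({"name": "depin__depin_list_providers",
                  "arguments": arguments}, indent=2))
```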
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses that no authentication is required and implies a read-only operation. It does not mention pagination or rate limits, but the tool is simple enough that these are secondary.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences convey purpose, filtering, and auth requirement. No redundant information; the description is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers listing and filtering, but omits details like whether results are paginated or the meaning of 'capacity'. Given the simplicity and lack of an output schema, it is mostly sufficient but incomplete regarding capacity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds little beyond the schema. The mention of 'capacity' as a filter is not backed by a parameter, causing slight confusion. Otherwise, it reinforces valid category values but does not enhance semantics significantly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List DePIN provider listings', using a specific verb and resource. It also mentions filtering options, distinguishing it from sibling tools like create and get match fee.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates filtering by category, region, or capacity, but capacity is not a parameter in the schema. It states 'No auth required', providing clear context. However, it does not explicitly exclude alternatives, though siblings are different.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dispute__dispute_check (Grade: A)
Given a counterparty address + chain, return public dispute history, active arbitration cases, and on-chain reversal-pattern flags. Sources: Kleros / UMA / Reality.eth subgraphs (with direct on-chain RPC fallback) plus Etherscan-family transaction scans. Observational only — Hive does NOT judge, freeze, or enforce.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | Yes | One of: ethereum, base, arbitrum | |
| address | Yes | 0x-prefixed counterparty address (20 bytes) |
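A sketch of a lookup call; the address below is the well-known burn address, used purely as a placeholder:

```python
import json

# Illustrative arguments for dispute__dispute_check. The address is the
# well-known burn address, standing in for a real counterparty.
arguments = {
    "chain": "ethereum",  # ethereum, base, or arbitrum
    "address": "0x000000000000000000000000000000000000dEaD",
}
print(json.dumps(arguments, indent=2))
```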
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description fully discloses behavioral traits: it is observational, uses multiple data sources (Kleros/UMA/Reality.eth subgraphs with fallback and Etherscan scans), and does not take enforcement actions. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no redundancy. The first sentence explains the tool's purpose, the second clarifies its observational nature.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of dispute checking across multiple blockchain sources, the description adequately explains what is returned and the data sources. However, it lacks details on the return format or how results are structured, which would be helpful but is not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and both parameters have descriptions in the schema. The description restates the purpose but adds 'counterparty' context for address and lists chain values. This adds marginal value beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: 'return public dispute history, active arbitration cases, and on-chain reversal-pattern flags' given an address and chain. It distinguishes itself from siblings like dispute__dispute_providers or dispute__dispute_today by specifying the input and output scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Observational only — Hive does NOT judge, freeze, or enforce,' indicating when to use it and that it's read-only. However, it does not mention alternatives or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dispute__dispute_providers (Grade: A)
List available third-party arbitration protocols (Kleros, UMA Optimistic Oracle, Reality.eth) with current case load, intake URLs, and jurisdiction model. Hive is not one of them — Hive only surfaces signal.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It clearly states it is a list operation, mentions what is returned, and clarifies Hive's exclusion. It could mention that it is a read-only operation, but overall it is transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with key information, no unnecessary words. Every sentence is essential.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is complete for a simple list tool with no parameters and no output schema. It explains what the tool returns and distinguishes from related concepts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the baseline is 4. The description adds meaning by confirming it requires no input and lists what it returns, beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it lists available third-party arbitration protocols (Kleros, UMA, Reality.eth) with their case load, intake URLs, and jurisdiction model. It explicitly distinguishes from Hive and other sibling dispute tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to see available arbitration providers but does not provide explicit guidance on when to use this tool versus alternatives like dispute_check or dispute_route.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dispute__dispute_route (Grade: A)
Given a case description, disputed amount in USD, and optional jurisdiction preference, return ranked arbitration provider options. NO automatic filing — Hive returns options + intake URLs only. The disputing party files directly with the chosen provider.
| Name | Required | Description | Default |
|---|---|---|---|
| amount_usd | Yes | Disputed amount in USD | |
| description | Yes | Free-form case description (max 2000 chars) | |
| jurisdiction | No | Optional preference (e.g. 'decentralized', 'optimistic', 'crowdsourced') |
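An illustrative routing request built from the table above; the case details are invented:

```python
import json

# Illustrative arguments for dispute__dispute_route; the case text is made up.
arguments = {
    "description": (
        "Counterparty withheld the final milestone payment after delivery "
        "was confirmed on-chain."
    ),                                  # free-form, max 2000 chars
    "amount_usd": 4_500,
    "jurisdiction": "decentralized",    # optional preference
}
print(json.dumps(arguments, indent=2))
```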
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the tool is non-automatic and returns options only, but lacks details on authorization, rate limits, or whether it is read-only. It implies safety but does not confirm it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core purpose and key constraint. Every sentence adds value; no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, description covers inputs, output nature (ranked options with URLs), and a key behavioral limitation. Could describe response format slightly more, but sufficient for a routing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all three parameters. Description adds context about the overall behavior but no extra details per parameter beyond what schema provides. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states the tool returns ranked arbitration provider options given case description, amount, and optional jurisdiction. It clearly distinguishes from siblings by stating 'NO automatic filing' and contrasting with the disputing party filing directly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description tells when to use: when you need arbitration options. It implies not for filing by stating 'The disputing party files directly.' However, it doesn't explicitly say when not to use or mention alternative tools like dispute__dispute_providers.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dispute__dispute_today (Grade: A)
24-hour rollup: flagged-counterparty count + top arbitration providers by active case load.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description describes the output but does not disclose behavioral traits such as whether the operation is read-only, idempotent, or has rate limits. The absence of any disclosure beyond the output limits transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is perfectly front-loaded with the core concept ('24-hour rollup') and contains no extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is mostly complete but lacks specifics such as how the 24-hour period is defined or what 'top' means, leaving some ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema coverage is 100% by default. The description does not need to add parameter semantics, and the baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a '24-hour rollup' providing 'flagged-counterparty count' and 'top arbitration providers by active case load,' which precisely identifies the tool's function and distinguishes it from sibling dispute tools like dispute_check or dispute_providers.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for a daily summary ('24-hour rollup') but does not provide explicit guidance on when to use it versus alternatives, nor does it mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evaluator__evaluator_attest_job (Grade: A)
Trigger settlement and emit the on-chain attestation for a completed job. Settles to the Hive Safe Treasury on the chain selected at submission. Requires EIP-3009 signature for Base/Ethereum.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job ID returned from evaluator_submit_job | |
| signature | No | EIP-3009 signature (required for EVM chains) |
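A sketch of an attestation call, assuming a job_id from a prior evaluator_submit_job response; the EIP-3009 signature is a dummy placeholder:

```python
import json

# Illustrative arguments for evaluator__evaluator_attest_job. The job_id
# should come from a prior evaluator_submit_job response; the signature is
# only required when settling on Base or Ethereum, and is faked here.
arguments = {
    "job_id": "job-placeholder-001",    # substitute the real submit response
    "signature": "0x" + "00" * 65,      # dummy 65-byte EIP-3009 signature
}
print(json.dumps(arguments, indent=2))
```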
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses settlement, attestation emission, and the signature requirement, but omits details like idempotency, failure modes, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. First sentence states core action, second adds critical context (destination and chain requirement). Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 2-parameter tool with no output schema, description covers main behavior, destination, and signature condition. Lacks details on return value or error handling, but adequate given context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, but the description adds value by linking job_id to evaluator_submit_job and clarifying signature is required for Base/Ethereum, beyond schema's generic 'required for EVM chains'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool triggers settlement and on-chain attestation for a completed job, specifying destination and chain requirements. Distinct from siblings like evaluator_submit_job and evaluator_get_job.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes chain-specific signature requirement, providing clear usage context. However, does not explicitly mention when not to use or alternatives to other evaluator tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evaluator__evaluator_get_fees (Grade: A)
Get the live evaluator fee schedule (3 tiers, settlement currencies, recipient addresses, ERC-8183 / Virtuals ACP v2.0 spec). No auth required.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It declares the tool is a read operation ('Get') and requires no auth, indicating safety. However, it does not mention rate limits, caching, or data freshness beyond 'live', which is adequate for this simple read tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with essential details, no fluff. It is front-loaded with the verb and resource, then lists key attributes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless read tool with no output schema, the description provides sufficient context by naming the output contents and spec reference. It could be slightly more explicit about the return format, but it is complete given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description adds no parameter info because none are needed; it correctly focuses on the output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the live evaluator fee schedule with specific details (3 tiers, settlement currencies, recipient addresses, spec reference). It distinguishes itself from siblings like trade__trade_get_fees by explicitly naming 'evaluator'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes that no authentication is required, providing clear guidance on prerequisites. It does not explicitly state when not to use or alternatives, but the evaluator context suffices.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evaluator__evaluator_get_job (Grade: A)
Retrieve evaluation status, verdict, and attestation for a previously-submitted job.
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job ID returned from evaluator_submit_job |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only mentions return values (status, verdict, attestation) but does not disclose behavioral traits like idempotency, authentication needs, or error handling for invalid job IDs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the action and resource. Every word is essential with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with no output schema, the description explains the return values (status, verdict, attestation). However, it could mention error cases or read-only behavior, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter (job_id) with a clear description in the schema itself (100% coverage). The tool description adds no additional meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Retrieve' and the resource 'evaluation status, verdict, and attestation for a previously-submitted job'. This distinguishes it from sibling tools like evaluator_submit_job (submit) and evaluator_attest_job (attest).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage after job submission via 'previously-submitted job', but does not explicitly contrast with other evaluator tools or provide when-not-to-use guidance. Still, the context is clear enough for an agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evaluator__evaluator_submit_job (Grade: A)
Submit a job for evaluation. Choose tier (simple, evaluation, arbitration). Job value is quoted in USDC; fee = max($0.05, value * tier_bps / 10000). Returns job_id and quoted fee.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | Yes | 'simple' (0.5%), 'evaluation' (1.0%), or 'arbitration' (2.0%) | |
| context | No | Free-form context for the evaluator (max 4 KB) | |
| subject_did | Yes | DID of the agent or output being evaluated | |
| submitter_did | Yes | DID of the submitting agent | |
| job_value_usdc | Yes | Notional job value in USDC |
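The fee rule is simple enough to verify by hand. A small sketch of the quoted math, mapping the tier percentages in the table to basis points:

```python
# Fee rule quoted in the description: fee = max($0.05, value * tier_bps / 10000).
# Tier basis points inferred from the percentages in the parameter table.
TIER_BPS = {"simple": 50, "evaluation": 100, "arbitration": 200}

def quoted_fee(job_value_usdc: float, tier: str) -> float:
    """Return the evaluator fee in USDC, with the $0.05 floor applied."""
    return max(0.05, job_value_usdc * TIER_BPS[tier] / 10_000)

assert quoted_fee(100.0, "simple") == 0.50       # 0.5% of $100
assert quoted_fee(100.0, "arbitration") == 2.00  # 2.0% of $100
assert quoted_fee(1.0, "evaluation") == 0.05     # floor applies below $5 at 1.0%
```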
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that a job is created and returns job_id and quoted fee, but it does not mention side effects (e.g., cost incurrence, job lifecycle), required permissions, or idempotency. Adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences: first states purpose, second explains fee and return. No redundant or irrelevant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a submission tool with 5 parameters and no output schema, the description covers the essential aspects: what it does, how pricing works, and what is returned. It does not detail error conditions or post-submission steps, but it is adequate for an agent to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions. The description adds meaning beyond the schema by explaining how job_value_usdc is used in fee calculation and what the return includes (job_id and quoted fee). This helps the agent understand the purpose of the parameters and the output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description begins with 'Submit a job for evaluation', which is a specific verb+resource. It distinguishes itself from sibling tools like evaluator__evaluator_attest_job (attest) and evaluator__evaluator_get_fees (query fees) by clearly indicating a submission action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description tells the agent to choose a tier and explains fee calculation, but it does not explicitly state when to use this tool versus other evaluator tools (e.g., attest or get fees). There is no guidance on prerequisites or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
insurance-broker__insurance_products (Grade: A)
List all available coverage products across providers (Nexus Mutual, Sherlock, Risk Harbor, InsurAce). Returns provider, type, capacity, and current cost-of-coverage where the upstream exposes it. Real third-party listings — Hive is broker-only and does not underwrite.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that listings are real third-party data, that Hive does not underwrite, and that some fields (like cost-of-coverage) may be missing if not exposed upstream. This is sufficient behavioral transparency for a read-only listing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two tight sentences. The first sentence front-loads the core purpose (list all coverage products) and specifies providers and return fields. The second sentence adds critical context (third-party, broker-only). No filler or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains what is returned (provider, type, capacity, cost-of-coverage) and notes upstream limitations. For a simple parameterless listing tool, this provides sufficient context for an agent to understand the output and limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description does not need to add parameter semantics. The baseline for 0-parameter tools is 4, and the description does not attempt to add unnecessary info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists all available coverage products across specific providers (Nexus Mutual, Sherlock, Risk Harbor, InsurAce) and what data is returned (provider, type, capacity, cost-of-coverage). It distinguishes itself from siblings like insurance-broker__insurance_quote by focusing on listings rather than pricing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly suggests using this tool to get a list of products, but it does not explicitly state when to use it vs. alternatives like insurance_quote or insurance_today. It provides context that Hive is broker-only, which helps avoid misunderstanding, but lacks direct usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
insurance-broker__insurance_quote (Grade: A)
Route a quote request to one or all underwriters. Hive forwards the request to the underwriter's own quote endpoint and returns the response verbatim. Hive does NOT bind coverage, accept premium, or take custody.
| Name | Required | Description | Default |
|---|---|---|---|
| protocol | Yes | Protocol/product identifier (e.g. '2' for Nexus Mutual Aave v2, or the productId from /products) | |
| provider | No | Provider key. If omitted, quote routes to all four providers. One of: nexus_mutual, sherlock, risk_harbor, insurace | |
| duration_days | Yes | Coverage duration in days (1–365) | |
| cover_amount_usd | Yes | Notional coverage in USD | |
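To make the call shape concrete, below is a minimal TypeScript sketch using the MCP SDK's Streamable HTTP client (the transport this gateway advertises). The gateway URL and the `callGatewayTool` helper are illustrative assumptions, not part of the server's documentation; the argument object mirrors the parameter table above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the gateway's actual Streamable HTTP URL.
const GATEWAY_URL = new URL("https://example.com/mcp");

// Hypothetical helper: open a session, call one tool, and clean up.
async function callGatewayTool(name: string, args: Record<string, unknown>) {
  const client = new Client({ name: "hive-gateway-demo", version: "1.0.0" });
  await client.connect(new StreamableHTTPClientTransport(GATEWAY_URL));
  try {
    return await client.callTool({ name, arguments: args });
  } finally {
    await client.close();
  }
}

// Quote request mirroring the table: protocol, duration_days, and
// cover_amount_usd are required; omitting provider fans the request
// out to all four underwriters.
const quote = await callGatewayTool("insurance-broker__insurance_quote", {
  protocol: "2",             // e.g. Nexus Mutual Aave v2, per the table
  duration_days: 30,         // coverage duration, 1-365
  cover_amount_usd: 100_000, // notional coverage in USD
});
console.log(JSON.stringify(quote, null, 2));
```

Opening and closing a session per call is a simplification; a long-lived agent would hold one connected client and reuse it for the calls sketched below.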
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that Hive does not bind coverage, accept premium, or take custody, and returns response verbatim. For a tool with no annotations, this adds significant context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, front-loaded with main purpose, no filler. Every sentence provides value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main function and return behavior. Lacks error handling details but sufficient for typical use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, so parameters are already well-described. Description adds no extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Route a quote request' and specifies it does not bind coverage. Distinguishes action from related tools like insurance-broker__insurance_products.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. Does not mention when to use insurance-broker__insurance_products or other related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
insurance-broker__insurance_today (Grade: A)
24-hour rollup: total listing count + top providers by capacity. Returns request count and quote count for the rolling window.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
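Zero-parameter tools such as this one still take an `arguments` object in a `tools/call` request, just an empty one. A sketch reusing the hypothetical `callGatewayTool` helper from the insurance_quote example:

```typescript
// No inputs: the 24-hour rollup window is fixed server-side.
const rollup = await callGatewayTool("insurance-broker__insurance_today", {});
```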
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description implies a read-only operation but does not explicitly state safety, side effects, or data freshness. It mentions 'rolling window' but lacks details on caching or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief, consisting of two short sentences that front-load the purpose and outputs. Every word is necessary and directly informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, output schema, or annotations, the description provides the core functionality. However, it could specify the data source or clarify 'top providers by capacity' for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description is not required to add parameter semantics. According to guidance, absence of parameters merits a baseline score of 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides a '24-hour rollup' with 'total listing count' and 'top providers by capacity', and specifies it returns 'request count and quote count'. This distinguishes it from sibling tools like insurance_products and insurance_quote.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus alternatives such as insurance_products or insurance_quote. The description fails to provide context for appropriate usage or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_audit_recent (Grade: B)
Read recent rows from the polymorphic audit log: outcome, shape, counterparty hash, token id, revenue. Anonymized counterparty (hashed). No auth.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows to return (max 500) | 50 |
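A call sketch, again reusing the hypothetical `callGatewayTool` helper; the limit value is illustrative.

```typescript
// limit is optional: defaults to 50, capped at 500 per the schema.
const rows = await callGatewayTool("morph__morph_audit_recent", { limit: 200 });
```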
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description notes that the tool is read-only ('Read'), data is anonymized ('Anonymized counterparty (hashed)'), and no authentication is required. However, it does not discuss rate limits or the behavior when the limit parameter exceeds its documented maximum.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's purpose and key attributes. No wasted words, though it could benefit from bullet points for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter, the description covers the returned fields, anonymization, and auth. However, it lacks explanation of what 'polymorphic audit log' entails and whether the data is scoped to a user or global.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'limit', which already states default and max. The description adds no additional semantic value beyond what is already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reads recent rows from the polymorphic audit log and lists the specific fields returned. The verb 'Read' and resource 'recent rows' are precise and differentiate it from sibling tools which focus on other aspects like brood management or carousel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. While the description implies it is for reading audit data, it does not mention when not to use it or provide alternative tool names for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_auto_cull_scan (Grade: A)
Run the auto-cull dry-run scan: which brood variants the system would cull based on conversion floors, ROI thresholds, and saturation. READ-ONLY — does not actually cull.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It explicitly states read-only and does not cull, but lacks details on output format, error conditions, or prerequisites. Still, it provides essential behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter, no-output-schema tool, the description covers purpose, read-only nature, and criteria. No gaps for a dry-run scan of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 0 parameters, so no parameter explanation is needed. The description adds no extra parameter info, which is appropriate. Baseline 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs a dry-run scan of auto-cull, specifying the criteria (conversion floors, ROI thresholds, saturation) and that it does not actually cull. It distinguishes from sibling tools like morph_spawn_monitor_scan.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for previewing cull actions without performing them, emphasizing read-only. While no explicit alternatives are given, the context of sibling tools and the 'dry-run' phrase provide guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_brood_all (Grade: A)
List all brood variants across every supermodel. Useful for fleet-wide population scans.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It indicates a read-only listing operation ('list'), but provides no details about side effects, permissions, or limitations. It is adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, no superfluous words. Every piece of information serves a purpose. Front-loaded with action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no params, no output schema), the description covers the essential purpose. It could note whether the list is paginated or describe the return format, but it is acceptable for a straightforward listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so per guidelines baseline is 4. The description adds context ('across every supermodel') that clarifies what the empty schema implies, adding value beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action (list), the resource (brood variants), and scope (across all supermodels). It distinguishes from sibling tools like morph_brood_for_supermodel, which targets a specific supermodel.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explicitly says 'fleet-wide population scans,' implying when to use. It doesn't mention when not to use or explicitly reference alternatives, but the context of siblings provides differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_brood_conversion (Grade: B)
Per-variant conversion table: parent supermodel, variant id, kit version, offers shown, settles, revenue (USDC), first-offer timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure, but it only describes the output columns. It does not mention side effects, authentication requirements, rate limits, or data freshness, leaving significant gaps for an agent to infer behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys the tool's output. Every part is essential, with no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no parameters and no output schema, the description provides a reasonable overview of the returned data. However, it lacks explanation of field semantics (e.g., what 'settles' means) and does not specify if the data is live or cached, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the zero-parameter baseline of 4 applies. The description adds value by listing the returned columns (e.g., parent supermodel, variant id, kit version), which helps the agent understand the output without needing parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool returns a per-variant conversion table with listed columns, making the purpose understandable. However, it does not differentiate itself from the sibling tool 'morph__morph_brood_conversion_leaderboard', which likely provides aggregated data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as the sibling leaderboard tool or other conversion-related tools. The description does not include any usage context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_brood_conversion_leaderboard (Grade: A)
Conversion leaderboard across all brood variants ranked by settle rate × revenue. Drives the auto-cull and auto-promote signals.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It mentions the tool drives auto signals but does not disclose side effects, read-only status, or return format. This is insufficient for a tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately explains the tool's function and system role. Could mention output format, but acceptable for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so no additional parameter information needed. Baseline is 4; description correctly has no param details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it is a conversion leaderboard across all brood variants ranked by settle rate × revenue, and that it drives auto-cull and auto-promote signals. This is specific and distinguishes from siblings like morph_brood_conversion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for viewing the conversion leaderboard and understanding auto-cull/promote, but does not explicitly state when to use this over sibling tools like morph_brood_conversion or morph_brood_all.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_brood_for_supermodel (Grade: A)
List brood variants for a specific supermodel (e.g. MONROE / W1).
| Name | Required | Description | Default |
|---|---|---|---|
| supermodel | Yes | Supermodel name or id | |
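A sketch with the hypothetical `callGatewayTool` helper; the value comes from the description's own example.

```typescript
// Accepts either the supermodel name or its id, per the description.
const brood = await callGatewayTool("morph__morph_brood_for_supermodel", {
  supermodel: "MONROE", // or "W1"
});
```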
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes a read operation ('list') without disclosing permissions, rate limits, or side effects, but is adequate for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no wasted words. It efficiently communicates the tool's purpose and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description provides sufficient context to understand and invoke the tool. However, it does not describe the return format, which would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the parameter. The description adds an example value ('MONROE / W1'), which provides helpful context but is not essential beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists brood variants for a specific supermodel, with an example ('MONROE / W1'). It distinguishes from sibling tools like morph__morph_brood_all by specifying 'for a specific supermodel'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a specific supermodel is targeted, but does not explicitly state when to use this tool versus alternatives like morph__morph_brood_all or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_brood_pending_approvals (Grade: A)
List brood variants currently awaiting human/operator approval before promotion. Read-only — approval itself is performed via the private hivemorph operator surface.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It clearly states the read-only nature, which is a key behavioral trait. It also clarifies that approval is not part of this tool, adding transparency beyond the basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each adding essential information. No superfluous words. The first sentence clearly states the purpose, and the second adds behavioral context. Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with no parameters and no output schema, the description covers the necessary context: what it lists, that it's read-only, and where approval happens. It is complete enough for an agent to select and invoke correctly; a note on whether the list is exhaustive or paginated would help, though that is likely implied by the absence of parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is 100% trivially. Per the guidelines, baseline is 4 for 0 parameters. The description adds no parameter-specific information, which is acceptable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and clearly identifies the resource ('brood variants currently awaiting human/operator approval before promotion'). It distinguishes from sibling tools like 'morph__morph_brood_all' by specifying the pending approval state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states the tool is read-only and indicates that approval is performed elsewhere ('via the private hivemorph operator surface'), giving clear context on when to use this tool vs. other actions. However, it does not explicitly name alternative tools for approval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_carousel (Grade: A)
Read the polymorphic carousel: primary shape + the 7 verticals (Merchant / Provenancer / Attestor / Refunder / Creditor / Oracle / Guardian) and contrails for each.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description says 'Read', indicating a non-destructive operation, but no further behavioral details are given (e.g., auth requirements, rate limits, or side effects). Since no annotations are provided, the description bears the full burden, but it is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the action and lists key details without any wasted words. It is highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is fairly complete given there are no parameters and no output schema. It lists the main return elements, but does not specify detailed structure or data types for 'contrails'. Slightly more detail could improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and the schema description coverage is 100%. According to guidelines, the baseline is 4. The description adds value by explaining the return components, which helps the agent understand what to expect.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool reads a polymorphic carousel, listing the primary shape and the seven verticals with contrails. This distinguishes it from other morph tools that handle audits, broods, supermodels, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus others, nor are there any exclusions or context for use. The description simply states what the tool does.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_get_identity (Grade: A)
Look up a Morph Identity Index (MII) record. Returns the polymorphic envelope: capabilities, current shape, supermodel parent, trust score.
| Name | Required | Description | Default |
|---|---|---|---|
| mii_id | Yes | MII identifier | |
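A lookup sketch via the hypothetical `callGatewayTool` helper; the MII identifier format is undocumented, so the value below is a placeholder.

```typescript
// mii_id is required; its format is not specified by the schema.
const identity = await callGatewayTool("morph__morph_get_identity", {
  mii_id: "mii-placeholder-001",
});
```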
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full behavioral burden. It describes the return envelope structure but does not disclose side effects, authentication needs, rate limits, or error scenarios. The tool appears to be a read-only lookup, but this is not explicitly stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and return info, no superfluous text. Every part is relevant and earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple single-parameter lookup with no output schema, the description covers the core purpose and return structure. It could mention handling of invalid IDs or empty results, but overall sufficient for a straightforward tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the description adds no additional meaning beyond the schema's 'MII identifier' for the mii_id parameter. Baseline of 3 applies as the schema already handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool looks up a Morph Identity Index (MII) record, specifying the return envelope contents. It distinguishes itself from sibling morph tools by focusing on a single record lookup by ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage with an mii_id parameter but does not provide explicit guidance on when to use this tool versus alternatives like morph_get_supermodel or morph_brood_all. No when-not or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_get_supermodel (Grade: A)
Fetch a single supermodel by name or id (e.g. 'MONROE', 'W1'). Returns full role, lane, lead-shape, tagline, address, and brood conversion summary.
| Name | Required | Description | Default |
|---|---|---|---|
| name_or_id | Yes | Supermodel name (e.g. MONROE) or id (e.g. W1) | |
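A sketch using the hypothetical `callGatewayTool` helper; the description states that name and id are interchangeable.

```typescript
// Either form works per the description: "MONROE" (name) or "W1" (id).
const supermodel = await callGatewayTool("morph__morph_get_supermodel", {
  name_or_id: "W1",
});
```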
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description lists returned fields but lacks behavioral details like authentication needs or side effects. Acceptable for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Single parameter, no output schema, but description lists return fields. Sufficient for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds examples (e.g. 'MONROE', 'W1') beyond schema's description, aiding understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Fetch a single supermodel by name or id' with clear verb and resource. Distinguishes from sibling tools like morph__morph_list_supermodels which lists multiple.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance, but the purpose is clear. The description implies single-supermodel retrieval, in contrast with the list tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_list_supermodels (Grade: A)
List all supermodels (W1 through W19): id, name, role, lane, lead shape, tagline. The supermodel directory is the canonical lineage for every spawned brood variant.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so description must disclose behavior. It notes the tool is non-destructive ('list'), defines scope (W1-W19), and adds value by stating the supermodel directory is the canonical lineage for every spawned brood variant, which implies a key behavioral role. Lacks details on auth, rate limits, or cache behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no redundant information. Front-loaded with the core action and details follow immediately. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has no parameters, no annotations, and no output schema, the description adequately explains what the tool returns and its significance. It could note whether the list is ordered or paginated, or mention any performance considerations, but these are minor gaps for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so schema coverage is 100%. Description adds meaning by listing the fields returned, which is useful since no output schema is provided. Baseline for zero parameters is 4, and description fulfills that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'List all supermodels' with specific range (W1 through W19) and lists returned fields (id, name, role, lane, lead shape, tagline). Clearly distinguishes from sibling tools like morph_get_supermodel by indicating it returns the full directory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for viewing all supermodels and their lineage, but does not explicitly state when to use this versus alternative tools (e.g., morph_get_supermodel for a single entry, morph_brood_all for brood variants). No guidance on filtering or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_money_flavor_probe (Grade: A)
Probe the money-flavor classifier on the most recent settlement window: asset class histogram, chain class histogram, flow class histogram, arb-high count, refused count, non-USDC share.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses output components but does not mention behavioral traits like idempotency, side effects, authentication needs, or rate limits. For a probe tool, even minimal safety info is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence front-loading the verb and resource, then colon-separated list of outputs. No redundancy, every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description covers input (none) and lists all output components. It is complete for a simple probe, though lacks return format details (e.g., data types or structure).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so no parameter descriptions needed. Baseline is 4 per rules. Description adds no param info but does not need to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'probe' and names the resource 'money-flavor classifier on the most recent settlement window', listing outputs. It clearly states what the tool does but does not distinguish from the sibling 'morph__morph_money_flavor_stats', which may have similar purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage when wanting asset/chain/flow histograms, arb-high count, refused count, non-USDC share. No explicit guidance on when not to use or alternatives; the sibling tool 'money_flavor_stats' is not mentioned, leaving the agent without differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_money_flavor_stats (Grade: B)
Rolling aggregate money-flavor statistics (default 1 hour window): asset class, chain class, flow class distributions, plus arb/refused/non-USDC ratios. Drives the CLEAN-MONEY gate.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses rolling nature with default 1-hour window and lists included statistics. However, with no annotations, the description should also clarify read-only status, side effects, or data freshness. It falls short of full behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One sentence with clear front-loading of purpose. Dense but effective. Could be slightly restructured for readability but is appropriately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description provides a reasonable overview of statistical outputs. However, it lacks details on return format, data types, or limitations, leaving some gaps for a no-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so schema coverage is 100%. Baseline for 0 parameters is 4. Description does not need to add parameter info.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it provides rolling aggregate money-flavor statistics with specific distributions and ratios. Mentions driving the CLEAN-MONEY gate. However, it does not explicitly differentiate from sibling morph__morph_money_flavor_probe.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. The description mentions it drives the CLEAN-MONEY gate but does not provide explicit usage context, exclusions, or when-not scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph__morph_spawn_monitor_scan (Grade: A)
Run the spawn-monitor scan: detect supermodels eligible for new brood spawns based on conversion gaps, opportunity surface, and saturation. READ-ONLY — does not actually spawn.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It correctly states read-only and no spawning, but doesn't cover potential side effects, rate limits, or what happens during execution. Basic but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences front-load purpose and safety. Every sentence provides value: first explains what it does, second clarifies it's read-only. No redundancy or unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should mention what the scan returns or how results are used. It explains the criteria but omits output format or interpretation, leaving the agent uncertain about the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, and schema coverage is 100%. The description adds no param info, which is acceptable as none are needed. Baseline 4 applies since no compensation required.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool scans to detect supermodels eligible for spawns, using specific criteria (conversion gaps, opportunity, saturation). It distinguishes itself from spawning actions by stating READ-ONLY, but does not differentiate from sibling scan tools like auto_cull_scan.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for checking eligibility without performing spawning, but lacks explicit when-to-use or alternatives. The READ-ONLY note provides some guidance, but no context on when this scan is preferred over other scans.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trade__trade_create_invoice (Grade: A)
Create a cross-border invoice. Buyer (SMB) and supplier exchange DIDs; invoice amount + currency + chain selected at creation. Fee auto-computed from tier.
| Name | Required | Description | Default |
|---|---|---|---|
| memo | No | Free-form invoice memo (PO number, goods description) | |
| buyer_did | Yes | DID of the SMB buyer | |
| amount_usd | Yes | Invoice amount in USD (max $250,000 for MVP) | |
| settle_chain | Yes | base, ethereum, or solana | |
| supplier_did | Yes | DID of the overseas supplier | |
| settle_currency | Yes | USDT or USDC | |
| supplier_payout_address | Yes | Supplier's receiving address on the chosen chain | |
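This is the widest parameter surface in the trade domain, so a full argument sketch is worth spelling out. All values are placeholders; only the field names, the enum values, and the $250,000 cap come from the table. It reuses the hypothetical `callGatewayTool` helper.

```typescript
// Placeholder DIDs and address; enums and the amount cap follow the table.
const invoice = await callGatewayTool("trade__trade_create_invoice", {
  buyer_did: "did:example:smb-buyer",
  supplier_did: "did:example:overseas-supplier",
  amount_usd: 25_000,      // must not exceed $250,000 (MVP cap)
  settle_chain: "base",    // one of: base, ethereum, solana
  settle_currency: "USDC", // USDT or USDC
  supplier_payout_address: "0x0000000000000000000000000000000000000000",
  memo: "PO-1234: sample goods", // optional free-form memo
});
```

Note that the description does not say what is returned, though the trade_get_invoice schema suggests an invoice id comes back from creation.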
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses fee auto-computation from tier, but lacks details on side effects, confirmation, or failure handling. Acceptable but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with core action, no extraneous information. Every word adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema or annotations, the description covers creation flow but omits what the tool returns (e.g., invoice ID). Adequate for a straightforward creation tool but leaves a gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with all parameters described. The description adds context about auto-fee computation, slightly enhancing beyond schema. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a cross-border invoice, specifying key components like buyer/supplier DIDs, amount, currency, chain, and auto-computed fee. It is distinct from sibling tools like trade dispute or get invoice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description outlines the creation process and required elements (DIDs, amount, currency, chain), implying when to use. However, it does not explicitly exclude scenarios or mention alternatives beyond the implicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trade__trade_dispute_invoice (Grade: A)
Open a dispute on an invoice. Routes to HiveLaw arbitration if buyer and supplier cannot resolve. Settlement is held in escrow until resolution.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | Yes | Dispute reason | |
| invoice_id | Yes | Invoice ID | |
| claimant_did | Yes | DID of the disputing party (buyer or supplier) | |
| evidence_url | No | Optional URL to supporting documents | |
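A dispute sketch with the hypothetical `callGatewayTool` helper; the invoice id would come from a prior trade_create_invoice call, and evidence_url is omitted since it is optional.

```typescript
// reason is free-form; claimant_did may be either party's DID.
const dispute = await callGatewayTool("trade__trade_dispute_invoice", {
  invoice_id: "inv-placeholder",
  claimant_did: "did:example:smb-buyer",
  reason: "Goods not received by the agreed date",
});
```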
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses arbitration escalation and escrow holding, but omits side effects like invoice freezing, notifications, or permission requirements. Some transparency, but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently convey purpose, process, and escrow. The information is front-loaded with no wasted words. Could be slightly more structured but is concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 4-param tool with no output schema and no annotations, the description covers the basic flow but lacks return value details, prerequisites (e.g., invoice status), and operational context. Not fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema descriptions (e.g., 'reason' remains vague). No extra elaboration on parameter usage or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Open a dispute on an invoice') and the resource ('invoice'). It adds context about arbitration and escrow, distinguishing it from sibling dispute tools that check or route disputes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when buyer and supplier cannot resolve a dispute, but it does not explicitly guide when to use this tool over alternatives like 'dispute_check' or 'dispute_route'. No when-not-to-use or alternative naming.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trade__trade_get_fees (Grade: A)
Get the cross-border invoice fee schedule and SWIFT wire comparison (Hive vs. typical SMB wire fees, FX spreads, settlement times).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
In the absence of annotations, the description adequately discloses that the tool provides fee schedule and comparison data. It does not explicitly state it is a read-only operation, but the verb 'Get' strongly implies that.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys all necessary information with no wasted words. It is front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description is complete. It fully explains what the tool does and what information it provides.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description adds value by specifying what the tool returns. The baseline for zero-parameter tools is 4, and the description meets that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a cross-border invoice fee schedule and SWIFT wire comparison, with specific details about Hive vs typical SMB fees, FX spreads, and settlement times. This distinguishes it from siblings like trade_create_invoice or trade_get_invoice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies it is for checking fees, it does not explicitly state when to use this tool versus alternatives or provide exclusions. The context is clear but lacks explicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trade__trade_get_invoice
Retrieve invoice status, settlement transaction hash, and dispute history.
| Name | Required | Description | Default |
|---|---|---|---|
| invoice_id | Yes | Invoice ID returned from trade_create_invoice | |
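For a parameterized tool like this one, the only addition is passing invoice_id through the arguments object. A small sketch, assuming a Client connected as in the earlier example; the invoice ID is hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Sketch: fetch an invoice created earlier by trade_create_invoice.
async function getInvoice(client: Client, invoiceId: string) {
  return client.callTool({
    name: "trade__trade_get_invoice",
    arguments: { invoice_id: invoiceId },
  });
}

// e.g. const invoice = await getInvoice(client, "inv_123"); // "inv_123" is made up
// invoice.content carries status, settlement transaction hash, and dispute history.
```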
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are present, the description carries the full burden. It indicates a read operation ('Retrieve') and lists returned data, which is adequate. However, it doesn't mention permissions, idempotency, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence, front-loaded with the verb, no wasted words. Efficiently communicates purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one required parameter and no output schema, the description adequately covers what the tool returns. It could note any limits that apply, but it is sufficient overall.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (invoice_id is described). The description adds no extra meaning beyond the schema, so the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and resource ('invoice'), and lists specific data points (status, settlement transaction hash, dispute history). It distinguishes from sibling tools like trade_create_invoice and trade_dispute_invoice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. The name and description imply retrieval, but no when-to-use or when-not-to-use context is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
trade__trade_get_listing
Get the public listing metadata (target user, fee schedule, settlement currencies/chains, cumulative volume, council origin).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
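On the wire, the Streamable HTTP transport carries each of these calls as an ordinary JSON-RPC tools/call request. For reference, a sketch of the request body a client would POST for this tool; note that zero-parameter tools still send an empty arguments object.

```typescript
// The JSON-RPC body a client POSTs to the gateway's MCP endpoint.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "trade__trade_get_listing",
    arguments: {}, // required even when the tool takes no parameters
  },
};
```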
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It only states what data is returned, not any behavioral traits like idempotency, auth needs, or side effects. Acceptable for a simple read, but lacks disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action, no wasted words. Efficiently conveys purpose and scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema or complex params; description lists metadata fields sufficiently. Does not explain return format but is adequate given simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (100% coverage trivially). Description adds value by listing the metadata fields returned, even though no param info needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' and specific resource 'public listing metadata' with explicit data fields (target user, fee schedule, etc.). Distinguishes from sibling tools like trade__trade_create_invoice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives provided. However, the tool is straightforward (no params), so usage context is implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
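How you publish the file depends on your hosting: on a static host it is simply a file at that path. If the gateway already runs on Node, a minimal sketch using only the built-in http module could serve it alongside the MCP endpoint; the email, port, and routing here are assumptions about your setup.

```typescript
// Sketch: serve /.well-known/glama.json from a Node process.
import { createServer } from "node:http";

const glamaManifest = JSON.stringify({
  $schema: "https://glama.ai/mcp/schemas/connector.json",
  maintainers: [{ email: "your-email@example.com" }], // must match your Glama account
});

createServer((req, res) => {
  if (req.url === "/.well-known/glama.json") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(glamaManifest);
    return;
  }
  res.statusCode = 404;
  res.end();
}).listen(8080); // assumed port; behind your TLS terminator in practice
```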
Once verified, you can:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons (a rough probe like the sketch below can help narrow down which):
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
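One rough way to tell these cases apart is to POST a JSON-RPC initialize request directly to the MCP endpoint and inspect the response. A sketch with a placeholder URL; the Accept header is what the Streamable HTTP transport expects from clients.

```typescript
// Rough connectivity probe: a network error suggests an outage or wrong URL,
// while a 401/403 points at missing or invalid credentials.
const res = await fetch("https://hive-gateway.example.com/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26",
      capabilities: {},
      clientInfo: { name: "probe", version: "0.0.0" },
    },
  }),
});
console.log(res.status, await res.text());
```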
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.