Hive Morph
Server Details
HiveMorph polymorphic identity and capability tokens for autonomous agents
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: srotzin/hive-mcp-morph
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across all 14 tools scored.
Most tools have clearly distinct purposes, targeting different data sets (audit log, brood variants, supermodels, money flavor). A slight overlap between 'morph_brood_all' and 'morph_brood_for_supermodel' could cause confusion, but the descriptions clarify scope (fleet-wide vs. per-supermodel).
All tools follow the consistent pattern 'morph_verb_noun' in snake_case (e.g., 'morph_audit_recent', 'morph_brood_all'). No mixed conventions or irregular verbs.
14 tools is well within the ideal 3-15 range. Each tool serves a distinct purpose in the domain of monitoring and analytics, without unnecessary overlap or bloat.
The server covers all major read operations for the Hive Morph domain: audit, brood variants, supermodels, money flavor, and scanning. As a monitoring interface, it lacks write/mutation tools, which appears intentional (only dry-run scans). Minor gaps, such as the lack of direct access to a single audit entry or to historical data, are acceptable.
Available Tools
14 tools

morph_audit_recent (B)
Read recent rows from the polymorphic audit log: outcome, shape, counterparty hash, token id, revenue. Anonymized counterparty (hashed). No auth.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max rows to return (max 500) | 50 |
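Shown below is a minimal sketch of how an MCP client would invoke this tool, assuming a standard JSON-RPC `tools/call` request; the `id` value is arbitrary, and nothing here is specific to this server beyond the tool name and its `limit` argument:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "morph_audit_recent",
    "arguments": { "limit": 100 }
  }
}

Per the MCP specification, the reply is a result envelope whose content array carries the payload, typically as text. The server publishes no output schema, so the text body is left unspecified here:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "..." }
    ]
  }
}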
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must convey behavioral traits. It mentions 'No auth', and 'Read' implies non-destructive use, but it does not discuss idempotency, rate limits, or side effects. Basic transparency is present but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys purpose and key details. It is front-loaded and contains no filler, but lacks sectioned structure or bullet points for easier scanning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the basic functionality and field list, but fails to specify ordering (e.g., 'recent' by time) or pagination behavior. For a simple read tool with one parameter, it is mostly complete but leaves some ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the single parameter 'limit', explaining its purpose and constraints. The description adds no additional parameter semantics beyond what the schema already provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reads recent rows from the polymorphic audit log and lists the specific fields returned (outcome, shape, counterparty hash, token id, revenue). It also notes anonymization and lack of auth. This distinguishes it from sibling tools like morph_auto_cull_scan or morph_brood_all, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide guidance on when to use this tool versus alternatives, nor does it explain limitations or prerequisites. It only states what the tool does without context for appropriate usage scenarios or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
morph_auto_cull_scan (A)
Run the auto-cull dry-run scan: which brood variants the system would cull based on conversion floors, ROI thresholds, and saturation. READ-ONLY — does not actually cull.
No parameters.
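For parameterless tools such as this one, the same `tools/call` pattern applies with an empty arguments object (again a sketch; the `id` is arbitrary):

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "morph_auto_cull_scan",
    "arguments": {}
  }
}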
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for behavioral transparency. It explicitly states 'READ-ONLY — does not actually cull,' which is a strong signal of safety. It does not disclose runtime duration or incidental effects such as logging or state changes, but the read-only guarantee covers the most critical aspect.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with the key information front-loaded: first sentence states the action and criteria, second sentence emphasizes read-only nature. No superfluous words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the purpose and safety, but lacks detail on output format or what the agent can expect to receive (e.g., a list of variants, scores, or a summary). Since there is no output schema and no annotation, the description should briefly mention the nature of the output to avoid ambiguity.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema description coverage is 100%. The description does not need to add parameter details since none exist. The baseline for 0-parameter tools is 4; the description does mention the decision criteria (conversion floors, ROI thresholds, saturation), though these are not parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it runs a dry-run scan for auto-culling based on specific criteria (conversion floors, ROI thresholds, saturation) and explicitly distinguishes it from actual culling by labeling it READ-ONLY. This differentiates it from sibling tools that likely perform real actions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage for previewing culling decisions before execution, it does not explicitly state when to use this tool versus sibling tools like morph_audit_recent or morph_spawn_monitor_scan. No alternatives or exclusions are mentioned.
morph_brood_all (A)
List all brood variants across every supermodel. Useful for fleet-wide population scans.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It only says 'list,' which implies read-only, but lacks details on pagination, rate limits, performance impact, or what 'all' means. This is insufficient for a tool with no annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no unnecessary words. The first sentence states the purpose, and the second provides a use case. Perfectly concise and front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list-all tool with no parameters and no output schema, the description adequately covers purpose and usage. However, it could be improved by mentioning if the output is paginated or includes summary details, but overall it is complete enough.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so schema description coverage is 100%. The baseline for zero-parameter tools is 4, and no additional parameter details are required.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'List all brood variants across every supermodel,' specifying the verb (list) and resource (brood variants across supermodels). It distinguishes from siblings like 'morph_brood_for_supermodel' which likely filters by a specific supermodel.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'Useful for fleet-wide population scans,' implying when to use for a broad overview. It doesn't explicitly mention alternatives or when not to use, but the context and sibling names make the distinction clear.
morph_brood_conversion (A)
Per-variant conversion table: parent supermodel, variant id, kit version, offers shown, settles, revenue (USDC), first-offer timestamp.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description implies a read operation but does not disclose behavioral details such as whether it requires authentication, rate limits, or the effect of calling it.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence with a colon-separated list is clear and efficient. There is no extraneous information, though it could be more structured (e.g., bullet points).
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and zero parameters, the description adequately explains the return format (table with listed columns). Could mention if it returns all data or ordering, but overall sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters; baseline is 4. Description adds value by listing the output columns, compensating for the empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns a per-variant conversion table with specific columns like parent supermodel, variant id, kit version, etc. It is distinct from siblings like morph_brood_all (all broods) and morph_brood_conversion_leaderboard.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. No mention of prerequisites, context, or exclusions.
morph_brood_conversion_leaderboard (B)
Conversion leaderboard across all brood variants ranked by settle rate × revenue. Drives the auto-cull and auto-promote signals.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool is a leaderboard and drives signals, but it does not disclose whether invocation causes side effects, if it is read-only, or any safety implications. The phrase 'drives the auto-cull and auto-promote signals' could imply it triggers actions, but it is ambiguous.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, both front-loaded. The first sentence defines the tool's output, and the second adds valuable context. No extraneous words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not specify the fields or format of the leaderboard. While it mentions ranking by settle rate × revenue, it lacks details on what data each row contains. Sibling tools exist but are not compared.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so baseline is 4. The description adds meaning by explaining the ranking criteria (settle rate × revenue) and the purpose (drives signals), which goes beyond the empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a leaderboard across brood variants ranked by settle rate × revenue. It also mentions it drives auto-cull and auto-promote signals, which adds context. However, it does not explicitly differentiate from sibling tools like morph_brood_conversion, which might also provide conversion data.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies it is used for internal signals (auto-cull and auto-promote), but it does not give explicit guidance on when to use this tool versus alternatives such as morph_brood_all or morph_brood_conversion. The usage context is suggested but not clearly delineated.
morph_brood_for_supermodel (A)
List brood variants for a specific supermodel (e.g. MONROE / W1).
| Name | Required | Description | Default |
|---|---|---|---|
| supermodel | Yes | Supermodel name or id | |
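A sketch of the call using the MONROE example from the tool's own description (per the parameter's "name or id" contract, the id form "W1" should work equally):

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "morph_brood_for_supermodel",
    "arguments": { "supermodel": "MONROE" }
  }
}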
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
As no annotations exist, the description must convey behavioral traits. It states 'List', which implies a read-only, non-destructive operation. This is sufficient for a simple retrieval tool, though it could mention response format or pagination if applicable.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 10 words, including a useful example. Every word earns its place, with no redundancy. Excellent conciseness.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one required parameter and no output schema, the description is complete: it explains the operation and provides a usage example. No additional information is necessary for an agent to use it correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema describes the 'supermodel' parameter as 'name or id'. The description adds value by providing a concrete example (MONROE / W1), which clarifies the expected format. Schema coverage is 100%, so baseline is 3; the example elevates to 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'List brood variants' and the scope 'for a specific supermodel', with a concrete example (MONROE / W1). It distinguishes itself from siblings like morph_brood_all (all broods) and morph_brood_conversion.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage guidance via an example (e.g. MONROE / W1), implying the tool is for querying a specific supermodel. It does not explicitly state alternatives or when not to use, but the context of sibling tools makes it clear.
morph_brood_pending_approvals (A)
List brood variants currently awaiting human/operator approval before promotion. Read-only — approval itself is performed via the private hivemorph operator surface.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Explicitly declares the tool is read-only, which is crucial for behavioral understanding. With no annotations, the description carries full burden and adds this key trait. Could mention pagination or response format, but essentially covers the core behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no wasted words. Highly efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless listing tool with no output schema, the description covers purpose, behavior (read-only), and boundary (approval not included). Complete within its scope.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has zero parameters with 100% coverage, so baseline score of 4 applies. Description adds no param info, but none is needed.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool lists brood variants pending human approval, using specific verb 'list' and resource 'brood variants pending approvals'. Distinguishes from siblings like morph_brood_all by specifying the filter on approval status.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit context on when to use (to see pending approvals) and directs that approval itself is done elsewhere. Lacks mention of alternative sibling tools for comparison, but the guidance is sufficient.
morph_carousel (A)
Read the polymorphic carousel: primary shape + the 7 verticals (Merchant / Provenancer / Attestor / Refunder / Creditor / Oracle / Guardian) and contrails for each.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description only states the tool is a read operation. It lacks details on side effects, authentication needs, rate limits, or data freshness, providing minimal behavioral insight.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, information-dense sentence, efficiently conveying the tool's purpose and output components without wasted text.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description sufficiently explains the returned data structure. However, lacking details on format or nesting, it could be more complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so schema coverage is 100%. The description does not need to add parameter info, meeting the baseline of 4.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Read' and specifies the resource 'polymorphic carousel' with detailed components (primary shape, 7 verticals, contrails). It distinguishes from sibling tools by being specific to the carousel data structure.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. The description implies its purpose but does not provide when/when-not conditions or mention sibling tools.
morph_get_identity (A)
Look up a Morph Identity Index (MII) record. Returns the polymorphic envelope: capabilities, current shape, supermodel parent, trust score.
| Name | Required | Description | Default |
|---|---|---|---|
| mii_id | Yes | MII identifier | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must convey behavioral traits. It only states what is returned, not whether the operation is safe, idempotent, or requires permissions. No read-only or side-effect disclosure.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, each adding distinct value: first sentence states purpose, second lists return fields. No fluff, front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema, no annotations), the description fully explains what it does and what it returns. No gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the single parameter description is adequate. The description adds no extra meaning beyond the schema, earning the baseline 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Look up' and the resource 'Morph Identity Index (MII) record', and specifies the return fields, distinguishing it from sibling tools that focus on other functions like auditing or scanning.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like morph_get_supermodel or other lookup tools. The description does not mention exclusions or prerequisites.
morph_get_supermodel (A)
Fetch a single supermodel by name or id (e.g. 'MONROE', 'W1'). Returns full role, lane, lead-shape, tagline, address, and brood conversion summary.
| Name | Required | Description | Default |
|---|---|---|---|
| name_or_id | Yes | Supermodel name (e.g. MONROE) or id (e.g. W1) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry the full burden. It discloses that the tool returns full details, but does not mention read-only nature, error handling for missing entities, or any side effects. Adequate but not rich.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence that front-loads the action, parameters, and return info. No waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get tool with no output schema, the description covers what it fetches and how to identify the supermodel. However, it omits behavior when the entity is not found, which is a common edge case.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description adds only examples ('MONROE', 'W1') beyond the schema's basic string definition. This is helpful but does not provide significant additional meaning.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Fetch a single supermodel by name or id' with examples, clearly specifying the verb, resource, and identifier. It also lists the return fields, distinguishing it from sibling tools like morph_list_supermodels.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied by describing a single-object fetch, but no explicit guidance is given on when to prefer this over siblings like morph_list_supermodels or other tools. The description lacks when-not-to-use or alternative conditions.
morph_list_supermodels (A)
List all supermodels (W1 through W19): id, name, role, lane, lead shape, tagline. The supermodel directory is the canonical lineage for every spawned brood variant. Free up to 100 calls/day per agent-DID; $0.001/call thereafter via x402 USDC settlement on Base.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool lists all supermodels with specific fields, frames the directory as the canonical lineage (implying a read-only, authoritative source), and states the free-call quota and x402 per-call pricing. It does not disclose pagination or ordering. For a simple list with no parameters, this is adequate but not exhaustive.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main action and scope. No extraneous information. Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters, no output schema, and low complexity, the description is complete. It tells the agent what the tool lists, the fields, and its significance as the canonical lineage. Nothing essential is missing.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so schema coverage is trivially 100%. The description adds value by specifying the output fields (id, name, etc.) and the context of canonical lineage, which goes beyond what the schema provides. With zero parameters, the description compensates effectively.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'List', the resource 'supermodels', the scope 'all (W1 through W19)', and enumerates the fields returned (id, name, role, lane, lead shape, tagline). It also distinguishes from siblings like morph_get_supermodel by implying it returns all supermodels.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage as the definitive lineage directory but does not explicitly state when to use this tool versus alternatives like morph_brood_all or morph_brood_for_supermodel. No when-not-to-use guidance is provided.
morph_money_flavor_probe (A)
Probe the money-flavor classifier on the most recent settlement window: asset class histogram, chain class histogram, flow class histogram, arb-high count, refused count, non-USDC share.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description does not disclose whether this tool is read-only or has side effects. It only lists outputs without explaining behavioral traits or prerequisites.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is clear but somewhat long; it could be split for readability. It mentions all key information without extraneous content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description rightly explains return values. However, missing context like read-only nature, permissions, or frequency. Adequate for a simple probe tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so description does not need to explain parameter semantics. It adds value by detailing the return values, which are not in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('probe') and the resource ('money-flavor classifier on the most recent settlement window') and lists specific outputs. It distinguishes itself from sibling 'morph_money_flavor_stats' by focusing on the latest window.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when/when-not or alternative guidance. The context 'most recent settlement window' implies a specific use case, but no comparison with siblings like 'morph_money_flavor_stats'.
morph_money_flavor_stats (A)
Rolling aggregate money-flavor statistics (default 1 hour window): asset class, chain class, flow class distributions, plus arb/refused/non-USDC ratios. Drives the CLEAN-MONEY gate.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the rolling aggregate behavior over a 1-hour window and lists the metrics. With no annotations, the description carries the full burden, but it does not explicitly state that the tool is read-only, non-destructive, or any authentication/rate-limit requirements. The behavioral traits are partially disclosed but not comprehensively.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence packed with relevant information (window, metrics, purpose). There is no wasted text, and the key details are front-loaded. It is appropriately concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 0 parameters and no output schema, the description explains what the tool does and the types of metrics. It could be more complete by describing the output format (e.g., whether it returns percentages or counts, single summary or time series). However, for a simple aggregation tool, it is largely sufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so schema coverage is 100% (vacuously). The description does not need to add parameter meaning. Baseline is 4 for zero parameters, and the description does not detract.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides rolling aggregate money-flavor statistics with specific metrics (asset class, chain class, flow class distributions, arb/refused/non-USDC ratios) and drives the CLEAN-MONEY gate. It distinguishes from sibling 'morph_money_flavor_probe' by being a statistical aggregation vs a probe.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions a default 1-hour window and that it drives the CLEAN-MONEY gate, implying a specific use case. However, it does not explicitly state when to use this tool versus alternatives like 'morph_money_flavor_probe' or provide conditions for not using it. Guidance is implied but not explicit.
morph_spawn_monitor_scan (A)
Run the spawn-monitor scan: detect supermodels eligible for new brood spawns based on conversion gaps, opportunity surface, and saturation. READ-ONLY — does not actually spawn.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description takes full responsibility for behavioral disclosure. It explicitly states the tool is READ-ONLY and does not spawn, which are critical behavioral traits. This adds value beyond any structured fields and informs the agent of safety.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences with no wasted words. The first sentence clearly states the action and criteria, and the second sentence adds the critical read-only constraint. It is front-loaded and efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and no output schema, the description is mostly sufficient. However, it does not explain the output format or what happens after the scan, which could be important for an agent. Minimal context is provided, leaving some gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the description has nothing to add. According to guidelines, 0 parameters gets a baseline of 4. The description does not need to compensate, and it does not repeat schema information.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states that the tool runs a spawn-monitor scan to detect supermodels eligible for new brood spawns based on conversion gaps, opportunity surface, and saturation. It clearly defines the verb 'run/detect' and the resource 'spawn-monitor scan,' and distinguishes itself from siblings by declaring it is READ-ONLY and does not actually spawn.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for inspection without spawning, which helps an agent decide when to use it. It explicitly states it is read-only, preventing misuse for actual spawning. However, it does not explicitly list alternative tools for spawning, so it misses some guidance.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.