Cascade Protocol
Server Details
Zero-value tracer token system that tracks AI agent activity across the internet. Agents earn tokens by submitting threat intelligence traces, with free trust verification (verify_trust) and paid threat intelligence feeds. 8 tools: submit_trace, check_token_balance, mutate_token, get_trace_schema, verify_trust (free) plus get_threat_feed, bulk_verify_trust, query_trace_analytics (paid).
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across 8 of 8 tools scored. Lowest: 3.4/5.
Most tools have distinct purposes, but there is some overlap between verify_trust and bulk_verify_trust, and between get_threat_feed and query_trace_analytics, which could cause confusion. The descriptions help clarify the differences (e.g., batch vs. single verification, raw feed vs. analytics), but the boundaries are not entirely clear at first glance.
The naming follows a consistent verb_noun, snake_case pattern throughout (e.g., check_token_balance, get_trace_schema, submit_trace), with only a minor deviation in mutate_token, whose verb could be more descriptive. This makes the tools readable and predictable.
With 8 tools, the count is well-scoped for a trust and trace analytics server. Each tool appears to serve a specific function within the domain, such as verification, token management, trace submission, and analytics, without feeling bloated or insufficient.
The tool set covers key operations for trust verification, token handling, trace submission, and analytics, with clear workflows. However, there are minor gaps, such as no explicit tools for managing API keys or handling token transfers, which agents might need to work around using existing tools or external methods.
Available Tools
8 tools

bulk_verify_trust (A)
[PAID] Enterprise batch trust verification. Accepts an array of agent IDs or token IDs and returns trust results for all in one call. Requires a paid API key. Pricing: $0.05 per agent/token checked, minimum 10 per call. Use for fleet management, supply-chain trust audits, or any scenario where you need to verify hundreds of agents at once. Obtain an API key at /api-keys.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | Yes | Your paid API key. Obtain one at /api-keys. Credits are deducted per agent/token checked. | |
| targets | Yes | Array of targets to verify. Each entry must have either agent_id or token_id (or both). Minimum 10 entries per call. | |
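Because the description quantifies both the price and the batch minimum, an agent can estimate spend before calling. A minimal client-side sketch; the helper name and the cents representation are ours, while the $0.05 rate, the 10-entry minimum, and the agent_id/token_id rule come from the description and schema above:

```python
# Client-side sanity check for a bulk_verify_trust payload, based on the
# documented pricing rules: $0.05 per target, minimum 10 targets per call.
# Hypothetical helper; the server enforces these rules on its own side too.
COST_CENTS_PER_TARGET = 5   # $0.05 per agent/token checked
MIN_TARGETS = 10            # minimum entries per call

def estimate_bulk_verify_cost_cents(targets: list[dict]) -> int:
    if len(targets) < MIN_TARGETS:
        raise ValueError(f"bulk_verify_trust requires at least {MIN_TARGETS} targets")
    for i, target in enumerate(targets):
        if "agent_id" not in target and "token_id" not in target:
            raise ValueError(f"target {i} needs agent_id or token_id (or both)")
    return len(targets) * COST_CENTS_PER_TARGET

fleet = [{"agent_id": f"agent-{n:03d}"} for n in range(12)]
print(estimate_bulk_verify_cost_cents(fleet))  # 60 cents for 12 targets
```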
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing pricing ($0.05 per agent/token, minimum 10 per call), authentication requirements (requires paid API key), and rate/volume constraints. It could improve by specifying response format or error handling, but covers key operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with key information front-loaded (paid enterprise batch verification), followed by pricing, usage scenarios, and authentication details. Every sentence adds value, though it could be slightly more concise by integrating some details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a paid API tool with no annotations and no output schema, the description provides good context about authentication, pricing, and usage scenarios. It could improve by describing the return format or error cases, but covers the essential operational constraints well given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description adds minimal value beyond the schema: it mentions the API key requirement and minimum 10 entries, but these are already covered in the schema descriptions. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'batch trust verification' on 'agent IDs or token IDs', specifying it's for 'hundreds of agents at once'. This distinguishes it from the sibling 'verify_trust' tool by emphasizing bulk/enterprise scale and batch processing capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'for fleet management, supply-chain trust audits, or any scenario where you need to verify hundreds of agents at once'. It also distinguishes from alternatives by being the bulk/enterprise version compared to the sibling 'verify_trust' tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_token_balance (A)
Query the token balance and trust metrics for a given agent identity. Returns total tokens, breakdown by origin (earned/bought/trace-credited), and aggregate trust score.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | The agent identity to query | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses the return structure (total tokens, breakdown, trust score) which is valuable behavioral information. However, it doesn't mention permissions, rate limits, or whether this is a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with zero waste. First sentence states the action and target, second sentence details the return structure. Perfectly front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter query tool with no output schema, the description provides good completeness by detailing the return structure. It could be more complete by mentioning whether this requires authentication or has rate limits, but it adequately covers the core functionality.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with only one parameter documented in the schema. The description adds context by specifying this is for 'agent identity' querying, which aligns with the schema's 'agent_id' description. For a single parameter tool, this provides adequate semantic context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('query', 'returns') and resources ('token balance', 'trust metrics', 'agent identity'). It distinguishes from siblings by focusing on balance/trust querying rather than verification, mutation, or analytics operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (querying balance/trust for an agent) but doesn't explicitly state when to use this versus alternatives like 'verify_trust' or 'bulk_verify_trust'. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_threat_feed (A)
[PAID] Aggregated threat intelligence feed from all trace submissions across the Cascade network. Returns threat patterns, top reporting agents, severity breakdown, and recent alerts. Requires a paid API key with sufficient credits. Pricing: $0.10 per query. Obtain an API key at /api/api-keys.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max number of individual threat entries to return (1-100). Default: 25 | |
| api_key | Yes | Your Cascade API key. Required for access. Obtain at /api/api-keys. | |
| severity | No | Filter by severity level (from trace metadata.severity). Default: "all" | |
| time_range | No | Time window for the feed. Default: "24h" | |
| threat_type | No | Filter by trace type. "alert" for security threats, "observation" for environmental scans, or "all" for everything. Default: "all" | |
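The schema's defaults can be mirrored client-side so an agent only pays the $0.10 query fee for a request it actually intends to make. A sketch assuming a hypothetical builder function; the parameter names and defaults are taken from the table above, and the API key value is a placeholder:

```python
# Build a get_threat_feed call, filling in the defaults documented in the schema.
# Hypothetical client-side helper; each call to the real tool costs $0.10.
def build_threat_feed_params(api_key: str, **overrides) -> dict:
    params = {
        "api_key": api_key,      # required; credits are deducted per query
        "limit": 25,             # max threat entries to return (1-100), default 25
        "severity": "all",       # filter by trace metadata.severity, default "all"
        "time_range": "24h",     # default 24-hour window
        "threat_type": "all",    # "alert", "observation", or "all"
    }
    unknown = set(overrides) - set(params)
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    params.update(overrides)
    return params

print(build_threat_feed_params("ck_live_xxx", threat_type="alert")["threat_type"])  # alert
```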
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries full burden. It discloses important behavioral traits: paid service requirement, pricing ($0.10 per query), and API key acquisition method. However, it doesn't mention rate limits, error handling, or response format details that would be helpful for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences. The first sentence efficiently describes the tool's purpose and output. The second sentence provides necessary behavioral context (paid service, pricing, key acquisition). No wasted words, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters, no output schema, no annotations), the description provides adequate but incomplete context. It covers the core purpose and payment requirements but lacks details about response format, error conditions, or how this tool relates to sibling tools. For a paid query tool with multiple parameters, more guidance would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description doesn't add any parameter-specific information beyond what's already documented in the schema. It mentions API key requirements but this is already covered in the schema's api_key parameter description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving aggregated threat intelligence feed with specific content (threat patterns, top reporting agents, severity breakdown, recent alerts). It specifies the source (all trace submissions across Cascade network) but doesn't differentiate from sibling tools like 'query_trace_analytics' which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that it requires a paid API key with sufficient credits and provides pricing information, which gives some usage context. However, it doesn't explicitly state when to use this tool versus alternatives like 'query_trace_analytics' or 'submit_trace', nor does it provide exclusion criteria or comparison with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trace_schema (A)
Returns the expected trace submission format so agents know how to structure their data for the submit_trace tool.
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It indicates this is a read operation ('Returns') but doesn't disclose behavioral traits like rate limits, authentication needs, or response format details. The description adds some context about supporting submit_trace but lacks comprehensive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the tool's purpose and usage without any wasted words. It's appropriately sized and front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is reasonably complete. It explains what the tool returns and why it's useful. However, without annotations or output schema, it could benefit from more detail about the return format or any constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately doesn't discuss parameters since there are none, and the schema fully documents the empty input structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verb ('Returns') and resource ('expected trace submission format'), and distinguishes it from its sibling 'submit_trace' by explaining it provides the format needed for that tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'so agents know how to structure their data for the submit_trace tool.' It provides clear context and names the alternative tool (submit_trace) that this tool supports.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mutate_token (A)
Trigger a deterministic SHA-256 state transition on a token. Records the full mutation trail. New state = SHA-256(previous_state + ":" + handler_id). Mutations are immutable and append-only.
| Name | Required | Description | Default |
|---|---|---|---|
| token_id | Yes | UUID of the token to mutate | |
| handler_id | Yes | Identifier of the agent/handler performing the mutation | |
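The description pins the transition down exactly, which means the mutation trail can be replayed and verified offline. A minimal sketch of the documented formula, new_state = SHA-256(previous_state + ":" + handler_id); the UTF-8 encoding and hex-digest output are assumptions the description leaves open, and the handler IDs are made up:

```python
import hashlib

# The state transition documented above:
#   new_state = SHA-256(previous_state + ":" + handler_id)
# Deterministic, so any party holding the append-only mutation trail can
# replay it and verify the token's current state.
def next_token_state(previous_state: str, handler_id: str) -> str:
    return hashlib.sha256(f"{previous_state}:{handler_id}".encode()).hexdigest()

# Replaying the trail of handlers reproduces the same final state.
state = "genesis"
for handler in ["agent-alpha", "agent-beta"]:
    state = next_token_state(state, handler)
replayed = next_token_state(next_token_state("genesis", "agent-alpha"), "agent-beta")
print(state == replayed)  # True
```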
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it explains the deterministic nature of the mutation, records a full trail, specifies the exact state transition formula, and notes that mutations are immutable and append-only. This covers important aspects like mutability constraints and logging behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with three sentences that each add distinct value: the core action, the recording behavior, and the technical details plus immutability. There's zero wasted text and it's front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description covers the core operation well but leaves gaps. It explains what the tool does and key behavioral constraints, but doesn't address potential side effects, error conditions, or what the agent should expect as a result (since there's no output schema). This is adequate but has clear room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters adequately. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain how token_id or handler_id relate to the mutation process). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Trigger a deterministic SHA-256 state transition on a token') and distinguishes it from sibling tools like 'check_token_balance' or 'verify_trust' by focusing on mutation rather than querying or verification. It specifies the exact cryptographic operation and resource being modified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'submit_trace' or 'bulk_verify_trust'. While it implies this is for state transitions, it doesn't specify use cases, prerequisites, or exclusions that would help an agent choose between sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
query_trace_analytics (A)
[PAID] Intelligence layer on top of the raw threat feed. Runs pattern analysis across all trace submissions: trending threats, anomaly detection (activity spikes), and correlated activity (multi-agent coordination patterns). More powerful than get_threat_feed — this tool reveals WHY patterns are happening, not just what was reported. Pricing: $0.25 per query. Obtain an API key at /api/api-keys. Example queries: "all alert traces in last 24h", "agents with spike in activity", "coordinated sources this week".
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results per analysis section (default 20, max 100). | |
| api_key | Yes | Your Cascade API key. Obtain at /api/api-keys and top up with /api/api-keys/:key/topup. | |
| end_time | No | ISO 8601 end of time range (e.g., "2026-04-15T23:59:59Z"). Defaults to now. | |
| start_time | No | ISO 8601 start of time range (e.g., "2026-04-14T00:00:00Z"). Defaults to 24 hours ago. | |
| threat_type | No | Filter by trace type. Use "alert" to focus on threat traces. Omit to analyze all types. | |
| agent_pattern | No | SQL LIKE pattern to filter by agent_id (e.g., "%threat-agent%"). Use % as wildcard. | |
| analysis_type | No | Which analysis to run. "all" runs all three sections. Default: "all". | |
| source_pattern | No | SQL LIKE pattern to filter by source (e.g., "%gmail%", "github-%"). Use % as wildcard. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden of behavioral disclosure and does well by revealing it's a paid service ($0.25 per query), requires an API key, and provides example queries showing expected usage patterns. However, it doesn't mention rate limits, error behaviors, or what happens when queries exceed limits, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose first, differentiation from sibling tool, pricing information, and practical examples. Every sentence adds value, though the pricing and API key information could be slightly more integrated with the functional description rather than appended. Overall, it's appropriately sized for an 8-parameter analytical tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex analytical tool with 8 parameters, no annotations, and no output schema, the description does well by explaining the analytical capabilities, providing usage examples, and disclosing cost implications. However, it doesn't describe the output format or structure of analysis results, which would be important for an agent to understand what to expect from this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds minimal parameter-specific information beyond the schema, though it provides example queries ('all alert traces in last 24h', 'agents with spike in activity') that help illustrate how parameters might be combined in practice. No additional parameter semantics are explicitly explained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as an 'Intelligence layer on top of the raw threat feed' that 'runs pattern analysis across all trace submissions' with specific capabilities: trending threats, anomaly detection, and correlated activity. It explicitly distinguishes from sibling tool 'get_threat_feed' by stating this tool reveals 'WHY patterns are happening, not just what was reported'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives: 'More powerful than get_threat_feed' and specifies it's for pattern analysis rather than raw data retrieval. It also includes practical usage context with pricing information ($0.25 per query) and API key requirements, helping the agent understand when this tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_trace (A)
Submit a structured trace to the Cascade Trust Engine. Validates against schema, deduplicates via SHA-256 fingerprint, and credits tokens automatically. Public/solo agents: 2 unique traces = 1 token. HopTrace partner code holders: 1 trace = 1 token.
| Name | Required | Description | Default |
|---|---|---|---|
| source | Yes | Origin system (e.g., "gmail-mcp", "github-copilot", "custom-agent") | |
| content | Yes | The trace data payload | |
| agent_id | Yes | Unique identifier for the submitting agent | |
| metadata | No | Optional structured metadata | |
| trace_type | Yes | Category of trace | |
| partner_code | No | HopTrace partner code for 1:1 token exchange rate (omit for standard 2:1 rate) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: validation against schema, deduplication via SHA-256 fingerprint, automatic token crediting, and different token exchange rates based on user type. It doesn't cover error conditions or response formats, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first covers core functionality, the second covers token economics. Every element earns its place, and the description is appropriately sized for a tool with this complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides good coverage of what the tool does, behavioral characteristics, and token economics. It could benefit from mentioning response format or error conditions, but covers the essential context given the structured data available.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (like explaining 'trace_type' enum values or 'content' format). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('submit a structured trace'), target resource ('Cascade Trust Engine'), and core functionality (validation, deduplication, token crediting). It distinguishes from sibling tools like 'verify_trust' or 'get_trace_schema' by focusing on submission rather than verification or retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to submit traces for token crediting) and includes implicit guidance about token exchange rates based on user type (public/solo vs. partner code holders). However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_trust (B)
Check if a given token or agent has a valid trust trail. Returns trust score, verification status, mutation history depth, and trust assessment. Provide either token_id or agent_id (or both).
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | No | Agent identity to check trust for (returns aggregate across all tokens) | |
| token_id | No | UUID of a specific token to verify | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the return values ('trust score, verification status, mutation history depth, and trust assessment'), which adds useful context beyond the input schema. However, it lacks details on error handling, rate limits, authentication needs, or whether this is a read-only operation (though implied by 'Check'). For a tool with no annotations, this is a significant gap in behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by return details and parameter guidance. Every sentence earns its place: the first defines the action, the second lists outputs, and the third provides parameter usage. It's appropriately sized with zero waste, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by listing return values and parameter semantics. However, it lacks details on error cases, performance expectations, or how the trust assessment is derived. For a tool with 2 parameters and no structured output, this is adequate but leaves clear gaps in completeness, especially for behavioral aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds value by clarifying the relationship between parameters: 'Provide either token_id or agent_id (or both),' and noting that agent_id 'returns aggregate across all tokens.' This semantic insight compensates for the high schema coverage, making it more helpful than the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check if a given token or agent has a valid trust trail.' It specifies the verb ('Check') and resource ('token or agent'), and distinguishes it from siblings like 'check_token_balance' or 'mutate_token' by focusing on trust verification rather than balance checking or mutation. However, it doesn't explicitly differentiate from 'bulk_verify_trust' beyond the singular vs. bulk aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance by stating 'Provide either token_id or agent_id (or both),' which helps in parameter selection. It doesn't explicitly say when to use this tool versus alternatives like 'bulk_verify_trust' for multiple checks or 'get_threat_feed' for threat-related data, nor does it mention any prerequisites or exclusions. This leaves some ambiguity in tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
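One way to supply the missing "use X instead of Y when Z" guidance is a routing rule on the caller's side. This is a hypothetical heuristic, not documented server behavior: route single identifiers to `verify_trust` (which is free) and batches to `bulk_verify_trust`:

```python
def pick_verification_tool(ids):
    """Choose between the single and batch trust-verification tools.

    Illustrative heuristic only: one identifier goes to the free
    verify_trust tool; multiple identifiers go to bulk_verify_trust.
    """
    if not ids:
        raise ValueError("no identifiers to verify")
    return "verify_trust" if len(ids) == 1 else "bulk_verify_trust"
```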
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
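Before publishing, you can sanity-check the manifest against the two requirements stated above (the exact schema URL, and a maintainer email matching your Glama account). The validator below is a sketch, not Glama's actual verification logic:

```python
def validate_glama_manifest(doc, account_email):
    """Check a parsed /.well-known/glama.json against the documented shape.

    Returns a list of problems; an empty list means the manifest
    satisfies both documented requirements. This mirrors only what
    the instructions above state, not Glama's internal checks.
    """
    problems = []
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = doc.get("maintainers") or []
    emails = [m.get("email") for m in maintainers if isinstance(m, dict)]
    if account_email not in emails:
        problems.append("no maintainer email matches the Glama account")
    return problems
```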
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
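The three causes above map roughly onto distinct connection symptoms, which can guide triage. The symptom labels below are illustrative, not Glama's actual status codes:

```python
def diagnose_unhealthy(symptom):
    """Map a connection symptom to the likely cause listed above.

    Symptom labels are hypothetical examples for triage, not an
    official taxonomy from the Glama health checker.
    """
    causes = {
        "timeout": "the server is experiencing an outage",
        "dns_error": "the URL of the server is wrong",
        "http_404": "the URL of the server is wrong",
        "http_401": "credentials are missing or invalid",
        "http_403": "credentials are missing or invalid",
    }
    return causes.get(symptom, "unknown; check server logs")
```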
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.