Joy Trust Network
Server Details
Trust infrastructure for AI agents. Portable reputation (JTS 0-5), agent discovery, vouching.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 29 of 29 tools scored. Lowest: 2.4/5.
Most tools have distinct purposes, but 'approve_job' and 'complete_deal' both involve releasing payment for completed work, creating potential confusion. Descriptions help differentiate, but the overlap reduces clarity slightly.
The vast majority of tools follow a verb_noun pattern with snake_case (e.g., create_campaign, list_jobs). However, 'network_stats' breaks this pattern by omitting a verb, which is a minor inconsistency in an otherwise well-named set.
At 29 tools, the set is large but covers multiple domains (jobs, deals, stakes, tickets, passports, agents, campaigns). While heavy, each tool serves a distinct purpose within the network's scope, so it remains mostly appropriate.
The tool set covers primary actions for jobs, deals, stakes, tickets, and passports, but lacks update, cancel, or delete operations for several resources (e.g., no cancel_job, no close_ticket). This creates notable gaps in lifecycle management that agents may struggle to handle.
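The lifecycle gap described above can be made concrete by grouping tool names by their leading verb. A minimal sketch, using only the tool names that appear on this page (the full server exposes 29):

```python
from collections import defaultdict

# Tool names taken from this page; the full server exposes 29 tools.
tools = [
    "approve_job", "claim_job", "complete_deal", "create_campaign",
    "create_payment_deal", "create_stake", "create_ticket",
    "create_topup_link", "discover_agents", "get_agent",
    "get_org_dashboard", "get_passport",
]

# Group by leading verb to see which lifecycle operations exist.
by_verb = defaultdict(list)
for name in tools:
    verb, _, noun = name.partition("_")
    by_verb[verb].append(noun)

# Verbs an agent might expect for full lifecycle coverage.
expected = {"create", "get", "update", "cancel", "delete", "close"}
missing = expected - set(by_verb)
print(sorted(missing))  # no update, cancel, close, or delete verbs at all
```

The grouping confirms the review's point: creation and retrieval are well covered, but no tool lets an agent back out of or tear down a resource it created.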
Available Tools
29 tools

approve_job (Grade A)
Approve submitted work and release payment
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job ID to approve | |
| agent_id | Yes | Agent ID approving (must be poster) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It mentions 'release payment' but fails to state irreversibility, authorization beyond being the poster, or side effects like fund transfers.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no wasted words. It is front-loaded with the core action and outcome.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple approval tool, the description covers the main purpose. However, it could mention prerequisites (e.g., job must be in 'submitted' status) or the finality of payment release for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and both parameters have clear descriptions in the schema. The description adds no new meaning beyond what the schema provides, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Approve', the resource 'submitted work', and the outcome 'release payment'. It effectively distinguishes from sibling tools like 'submit_job' and 'claim_job'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage after work submission but provides no explicit guidance on when to use vs alternatives, prerequisites, or exclusions. The parameter hint 'must be poster' is in the schema, not the description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
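The gaps flagged above (no disclosure of irreversibility, authorization, or side effects) could be closed with MCP tool annotations plus a fuller description. A sketch of what such a definition might look like; the annotation field names follow the MCP tools specification, but the expanded description text is a suggestion, not the server's actual definition:

```python
import json

# Illustrative only: one way the approve_job definition could disclose its
# behavior. Annotation names follow the MCP tools spec; the description
# text below is a suggested rewrite, not what the server actually ships.
approve_job = {
    "name": "approve_job",
    "description": (
        "Approve submitted work and release escrowed payment to the worker. "
        "Irreversible: funds transfer immediately on approval. Only the job "
        "poster may call this, and the job must be in 'submitted' status."
    ),
    "annotations": {
        "readOnlyHint": False,    # mutates job state and moves funds
        "destructiveHint": True,  # payment release cannot be undone
        "idempotentHint": False,  # a second approval is not a no-op
    },
}
print(json.dumps(approve_job["annotations"], indent=2))
```

With annotations present, a client can warn or require confirmation before invoking a destructive tool rather than relying on the agent to parse prose.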
claim_job (Grade B)
Claim an open job to work on it
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job ID to claim | |
| agent_id | Yes | Agent ID claiming the job | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must disclose behavioral traits. It only states the action but does not mention side effects (e.g., job becomes unavailable to others), required permissions, or idempotency. This is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no waste. It is front-loaded and achieves its purpose without extraneous detail, though a slightly more structured format could improve clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description does not explain the result of claiming (e.g., confirmation, state change) or any constraints (e.g., unique claim). This leaves the agent with incomplete context for a state-modifying operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters ('Job ID to claim', 'Agent ID claiming the job'). The description adds no new information, but the schema already provides adequate semantics, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Claim an open job to work on it' clearly specifies the verb 'claim' and the resource 'job', effectively distinguishing it from sibling tools like post_job, submit_job, or list_jobs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., approve_job, submit_job). It does not mention prerequisites like the job must be open, nor does it explain exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
complete_deal (Grade B)
Mark a deal as complete and release payment to the payee
| Name | Required | Description | Default |
|---|---|---|---|
| deal_id | Yes | Deal ID to complete | |
| agent_id | Yes | Agent ID completing the deal (must be the payer) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It mentions payment release but does not disclose if the action is irreversible, if deal state changes, or any side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence with no unnecessary words, efficiently conveying the core action and effect.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and simple parameters, the description is minimally adequate but lacks information about return values, error conditions, or required deal state.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters, so the description adds no additional meaning beyond what is already in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action ('mark a deal as complete') and its key effect ('release payment to the payee'), distinguishing it from sibling tools like 'create_payment_deal' or 'resolve_stake'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no explicit guidance on when or when not to use this tool. It implies the agent should be the payer (via the schema), but lacks context such as prerequisites, deal state, or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_campaign (Grade C)
Create a marketing campaign
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Campaign name | |
| content | No | Campaign content | |
| agent_id | Yes | Agent ID creating the campaign | |
| platform | Yes | Target platform | |
| scheduled_at | No | ISO timestamp to schedule (optional) | |
| campaign_type | Yes | Type of campaign | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must convey behavior. It only states the action, omitting side effects, authentication needs, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is concise and front-loaded, but could be more informative without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 6 parameters, no output schema, and no annotations, the description fails to provide complete context for an agent to successfully invoke the tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds no extra meaning beyond the schema, so the baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create a marketing campaign' clearly states the action and resource. It distinguishes from sibling list_campaigns, but lacks detail about the marketing context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus other creation tools or alternatives. Missing context such as prerequisites or post-conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_payment_deal (Grade B)
Create a payment deal to pay another agent for work (uses escrow)
| Name | Required | Description | Default |
|---|---|---|---|
| to_agent | Yes | Agent ID receiving payment | |
| from_agent | Yes | Agent ID paying (must have sufficient balance) | |
| description | Yes | Description of work/task being paid for | |
| amount_cents | Yes | Payment amount in cents | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and description only briefly mentions escrow. Lacks details on side effects (e.g., balance deduction, escrow hold), permissions needed, or what happens on failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single concise sentence, no fluff. Could be slightly expanded without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but 4 required parameters are well-described. Description omits return value (e.g., deal ID) and failure modes, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with descriptions. No additional parameter context added beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it creates a payment deal to pay another agent for work, with escrow mechanism. Distinguishes from siblings like 'list_deals' and 'complete_deal'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like 'complete_deal' or other payment methods. Does not mention prerequisites or post-conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
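Because `amount_cents` is denominated in cents, the easiest mistake for a calling agent is passing a dollar figure directly (paying $0.25 instead of $25.00). A small hedged helper, using `Decimal` to avoid float rounding; the function name is ours, not part of the server:

```python
from decimal import Decimal

def dollars_to_cents(dollars: str) -> int:
    """Convert a dollar string like '25.00' into the integer cents that
    create_payment_deal's amount_cents parameter expects."""
    cents = Decimal(dollars) * 100
    if cents != cents.to_integral_value():
        raise ValueError(f"sub-cent amount not allowed: {dollars}")
    return int(cents)

# $25.00 becomes 2500 cents; passing 25 by mistake would pay $0.25.
assert dollars_to_cents("25.00") == 2500
```

An explicit conversion step like this also gives a natural place to reject sub-cent amounts before escrow is involved.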
create_stake (Grade A)
Create a reputation stake on a task (stake JTS, lose on failure, gain bonus on success)
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Agent ID staking their reputation | |
| ttl_hours | No | Hours before stake expires | |
| stake_amount | No | Amount of JTS to stake (0.1-2.0) | |
| delegation_id | No | Optional delegation ID to link stake to | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present. The description discloses the general outcome (stake, lose, gain) but omits critical behavioral traits such as whether the stake is immediately deducted, authentication requirements, or error scenarios for invalid agent IDs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 17 words, front-loaded with the action and resource. No fluff, every word is informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no output schema, and no annotations, the description is too brief. It does not explain how the 'task' is specified (missing from schema), return values, or additional behavioral details needed for a financial tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaningful context about the reward/penalty mechanism beyond the property descriptions, which are already clear. This extra value justifies a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action ('Create'), resource ('reputation stake on a task'), and the financial consequences ('stake JTS, lose on failure, gain bonus on success'). It distinguishes from sibling tools like 'resolve_stake' and 'get_stakes'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage for creating stakes but provides no explicit guidance on when to use vs alternatives, prerequisites, or when not to use. Adequate but lacks clear directive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_ticket (Grade C)
Create a support ticket for issue tracking
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Agent ID creating the ticket | |
| priority | No | | medium |
| issue_type | Yes | Type of issue | |
| description | Yes | Detailed description of the issue | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It only says 'create', missing any side effects, permissions, or return information. This is insufficient for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, making it very concise. However, it sacrifices completeness for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters, no output schema, and no annotations, the description is severely lacking. It does not explain return values, side effects, or any procedural details, making it incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 75% (priority missing). The tool description adds no parameter-level meaning beyond the schema. It does not compensate for the missing priority description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a support ticket for issue tracking, using a specific verb and resource. It is distinct from siblings like get_ticket, but does not explicitly differentiate from other create tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs. alternatives, nor any conditions or prerequisites. The description only states the action without context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_topup_link (Grade A)
Create a payment link to add funds to wallet. Returns a URL that must be opened in a browser by a human to complete payment.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Agent ID to top up wallet for | |
| amount_cents | No | Amount in cents (minimum 500 = $5.00) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses the critical behavior that the returned URL requires human browser interaction. This is beyond minimal, though it omits details like expiration or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-load the purpose and add the critical human-interaction detail. Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description sufficiently explains the return value (URL requiring browser). With only two parameters and simple behavior, the description fully equips an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and descriptions are clear. The description adds the conversion 'minimum 500 = $5.00', which provides additional context beyond schema. Parameter meanings are unambiguous.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Create a payment link to add funds to wallet' and specifies the outcome (URL for browser). This distinct purpose sets it apart from sibling tools like 'create_campaign' or 'create_payment_deal'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for adding funds but lacks explicit guidance on when to use this tool versus alternatives or prerequisites like having a wallet. No contrasting with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_agents (Grade C)
Find AI agents by capability or search query
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | No | Free-text search | |
| capability | No | Capability to search for (e.g., email, sms, calendar, stocks, mining) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It only mentions finding by capability or query but does not disclose behavior like handling when both are provided, result ordering, pagination, match criteria, or rate limits. This is insufficient for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (one sentence), but lacks structure. It is front-loaded with the core action, but doesn't provide any additional details in an organized manner. It could be considered under-specified rather than efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description is too vague. It doesn't explain what the result contains, how to refine searches, or what default behavior is (e.g., if no params provided). Incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67% (query and capability have descriptions, limit does not). The description only echoes the schema's param descriptions without adding new meaning. It does not clarify the limit parameter or any constraints on capability values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Find') and resource ('AI agents'), and mentions two ways to search (by capability or query). It implicitly distinguishes from sibling tools like 'get_agent' (specific agent) and 'list_capabilities' (lists capabilities, not agents).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like 'get_agent' for a specific agent. No exclusions or context about when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
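Since the description never says how `query` and `capability` combine when both are supplied, a cautious client can refuse to send both. A hypothetical argument-builder sketching that defensive stance; the helper is ours, not part of the server:

```python
# Hypothetical helper: discover_agents does not document how it combines
# 'query' and 'capability', so a cautious client sends only one of them.
def build_discover_args(query=None, capability=None, limit=None):
    if query is not None and capability is not None:
        raise ValueError("pass either query or capability, not both")
    args = {}
    if query is not None:
        args["query"] = query
    if capability is not None:
        args["capability"] = capability
    if limit is not None:
        args["limit"] = limit
    return args

assert build_discover_args(capability="email", limit=5) == {
    "capability": "email", "limit": 5,
}
```

If the server later documents the combined behavior, the guard can be relaxed; until then, undefined behavior is best kept out of agent call paths.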
get_agent (Grade C)
Get agent details by ID
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and the description only states it retrieves details. Lacks information on authentication, rate limits, or what constitutes 'details'. Burden falls on description, which is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no unnecessary words. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple get-by-ID tool, but lacks details on return values and error conditions. No output schema to compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter 'agent_id' has no schema description (0% coverage) and the description adds no additional meaning beyond 'by ID'. The description should explain the format or source of the ID.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves agent details by ID, but does not differentiate from sibling tool 'discover_agents' which likely lists multiple agents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like discover_agents. No prerequisites or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_org_dashboard (Grade B)
Get organization overview with departments, tickets, and activity
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must carry the full burden. It implies read-only ('Get') but does not disclose any behavioral traits (e.g., auth, rate limits, side effects). For a simple read tool, this is a minimal but acceptable gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise, single sentence, no wasted words. Front-loaded with the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter read tool, the description is adequate but lacks detail about the output (no output schema). It provides minimal context, but given simplicity, it meets basic needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters, so the description does not need to add parameter details. The baseline of 4 is appropriate as the description is clear about the lack of inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves an organization overview with departments, tickets, and activity, distinguishing it from other 'get' tools. It uses a specific verb and resource. However, it could be more precise about what 'overview' includes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Given sibling tools like get_agent, get_ticket, etc., the description should clarify context for choosing this over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
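The gaps flagged above (no behavioral disclosure, no usage guidance) could be closed without much extra length. A hypothetical rewrite, sketched in Python for illustration; the sibling tool name get_ticket comes from this listing, but the improved wording is an assumption, not the server's actual text:

```python
# Original one-liner from the server listing.
original = "Get organization overview with departments, tickets, and activity"

# Hypothetical rewrite adding the read-only disclosure and
# "use X instead of Y" guidance the review found missing.
improved = (
    "Get a read-only organization overview: departments, open support "
    "tickets, and recent activity. Takes no parameters and has no side "
    "effects. Use get_ticket for a single ticket's details; use this tool "
    "when you need the org-wide summary."
)
```

The rewrite stays front-loaded (verb and resource first) while adding the two missing dimensions, so it should not hurt the conciseness score.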
get_passport (Grade C)
Get complete behavioral passport showing cross-platform trust history
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for disclosing behavioral traits. It states 'Get', implying read-only, but does not clarify side effects, authentication needs, rate limits, or output structure. The lack of detail hinders safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no unnecessary words. It is efficiently front-loaded, though it could benefit from additional detail without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter, no output schema, and numerous sibling tools, the description is too sparse. It does not explain what a 'behavioral passport' is, how trust history is represented, or what the return value contains, leaving the agent with incomplete context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% and the description adds no meaning to the single parameter 'agent_id'. It does not explain what an agent ID is, its format, or how it relates to the passport. The description completely fails to compensate for the missing schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get complete behavioral passport showing cross-platform trust history', identifying both the action and output. The verb 'Get' and resource 'passport' are specific, and the tool distinguishes itself from sibling tools like 'issue_passport' and 'verify_passport' by focusing on retrieval of full trust history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as 'verify_passport' or 'get_trust_score'. The description does not mention prerequisites, contexts, or exclusions, leaving the agent without decision-making support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
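Since get_passport's schema coverage is 0%, a fuller definition would need both a documented parameter and a differentiating description. A minimal sketch follows; only the tool name and its one-line description come from this listing, while the schema wording and the guidance text are assumptions:

```python
# Sketch of a fuller get_passport definition. Field values marked as
# assumptions below are illustrative, not the server's actual schema.
get_passport_tool = {
    "name": "get_passport",
    "description": (
        "Get the complete behavioral passport for an agent: cross-platform "
        "trust history and vouches. Read-only. Use verify_passport to check "
        "an issued token, or get_trust_score if you only need the score."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "agent_id": {
                "type": "string",
                # Assumed wording; closes the 0% schema-coverage gap.
                "description": "ID of the registered agent whose passport to retrieve",
            }
        },
        "required": ["agent_id"],
    },
}
```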
get_stakes (Grade B)
Get reputation stakes for an agent
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | | all |
| agent_id | Yes | Agent ID to get stakes for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose all behavioral traits. It only indicates a read operation without detailing permissions, computational cost, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief and lacks substance. It is front-loaded but omits necessary details; this is under-specification, not effective conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, no annotations, and two parameters, the description is insufficient. It does not explain what stakes are, how the status filter works, or the expected return value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% and the description adds no additional meaning to the parameters. It does not explain the 'status' enum or the significance of 'agent_id' beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and identifies the resource 'reputation stakes for an agent', clearly distinguishing it from sibling tools like create_stake and resolve_stake.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies that this tool is for viewing stakes, but provides no explicit guidance on when to use it versus alternatives, nor any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
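The undocumented 'status' filter is the concrete gap here. One way to specify it is shown below; the enum values ("active", "resolved", "all") are assumptions for illustration, with only the "all" default taken from the listing:

```python
# Hypothetical JSON Schema entry for get_stakes' 'status' parameter.
# Enum members are assumed; the server's actual states may differ.
status_param = {
    "type": "string",
    "enum": ["active", "resolved", "all"],
    "default": "all",
    "description": "Filter stakes by lifecycle state; 'all' returns every stake",
}
```

Enumerating the legal values in the schema itself means the tool description no longer has to compensate for missing parameter semantics.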
get_ticket (Grade C)
Get support ticket status and details
| Name | Required | Description | Default |
|---|---|---|---|
| ticket_id | Yes | Ticket ID to retrieve | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must bear the full burden of behavioral disclosure. While 'Get' implies a read operation, the description does not explicitly state idempotency, safety, or lack of side effects. It also doesn't mention access requirements or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short, which is concise but unstructured. It front-loads the core action but omits any contextual or differentiating information, making it minimally adequate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema), the description is too minimal. It fails to mention return values, common use cases, or any constraints, leaving the agent with insufficient information for reliable invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the only parameter is fully described in the schema), so the description adds no additional meaning. The baseline score of 3 is appropriate as the description does not hinder understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get) and resource (support ticket) and the specifics (status and details). However, it does not differentiate from the sibling tool 'create_ticket', which reduces clarity in a context with multiple ticket-related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'create_ticket' or 'list_audits'. The description offers no context for when retrieval is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trust_score (Grade B)
Get trust score for an agent, optionally for a specific platform
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | The agent ID to check trust for | |
| platform | No | Optional platform (discord, github, telegram, mcp, slack, api) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description only says 'Get trust score', which implies a read-only operation. There are no annotations, and no additional behavioral traits (e.g., error handling, rate limits, idempotency) are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that conveys the essential information with no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity, the description is adequate but could benefit from mentioning the trust score range or the behavior when the agent is not found. No output schema is provided, and the description does not explain return values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are already well-documented. The description adds minimal value by restating the platform parameter's optionality, but does not provide deeper semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a trust score for an agent, with an optional platform filter. It uses a specific verb and resource, but does not distinguish itself from sibling tools like verify_trust.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor any preconditions or exclusions. The description only implies its usage without explicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
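Several read tools above are marked down because no annotations are provided. The Model Context Protocol defines standard annotation hints that would make the read-only nature explicit without lengthening the description; a minimal sketch for get_trust_score (the specific hint values chosen here are assumptions about the tool's behavior):

```python
# MCP tool annotations that would make get_trust_score's safe, repeatable
# nature explicit to clients. Hint names are from the MCP spec; the values
# assigned are assumptions about this particular tool.
annotations = {
    "readOnlyHint": True,    # does not modify its environment
    "idempotentHint": True,  # repeated calls with the same args add nothing
    "openWorldHint": False,  # operates only on the trust network's own data
}
```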
get_wallet (Grade A)
Get your wallet balance, transaction history, and available payment actions
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Agent ID to get wallet for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It implies a read-only operation ('get') and lists returned data. However, it lacks explicit statements about side effects, authentication requirements, or rate limits, which would improve transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with 10 words, making it efficient. It could be slightly more structured, but it is appropriately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema), the description is adequate but incomplete. It lacks details on output structure or constraints. With no output schema, more context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'agent_id', so baseline is 3. The description adds no additional meaning beyond what the schema already provides ('Agent ID to get wallet for').
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves wallet balance, transaction history, and payment actions. The verb 'get' and resource 'wallet' are specific, and the tool is well-differentiated from sibling tools which deal with jobs, campaigns, deals, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use it (when wallet info is needed) but does not explicitly state when not to use it or name alternatives. Given sibling tools, the context is clear enough for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
issue_passport (Grade B)
Issue a portable, signed identity passport token (verifiable offline)
| Name | Required | Description | Default |
|---|---|---|---|
| scope | No | Optional scope restriction for the passport | |
| agent_id | Yes | Agent ID to issue passport for | |
| ttl_hours | No | Token validity in hours (max 720) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the token is portable, signed, and verifiable offline but does not disclose side effects (e.g., overwriting existing passports), permissions needed, rate limits, or destructive behavior. For a tool that issues a token, behavioral transparency is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that clearly states the core function. No wasted words, but could benefit from additional context while still being concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should at least hint at the return value (e.g., the passport token). It lacks behavioral context due to missing annotations and does not mention what happens upon success. The tool has 3 parameters well-documented in schema, but overall completeness is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with descriptions for all three parameters (agent_id, scope, ttl_hours). The description does not add parameter-level information beyond what the schema provides, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it issues a portable, signed identity passport token that is verifiable offline. 'Issue' is a specific verb, and 'identity passport token' is a precise resource. It distinguishes from siblings like get_passport (retrieve) and verify_passport (check validity).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. alternatives. No mention of prerequisites, when not to use, or comparison to get_passport or verify_passport. The description implies usage but does not specify context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
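For issue_passport, the review's concrete asks are return-value disclosure and side-effect disclosure. A hedged rewrite is sketched below; the original sentence and the 720-hour cap come from this listing, while the claims about token behavior (new token per call, no revocation) are assumptions that would need confirming against the server:

```python
# Hypothetical issue_passport description. The first sentence and the
# ttl cap are from the listing; the behavioral claims are assumed.
improved = (
    "Issue a portable, signed identity passport token (verifiable offline). "
    "Creates a new token each call and does not revoke previously issued "
    "passports. Returns the token string and its expiry. ttl_hours is "
    "capped at 720 (30 days). Use get_passport to read trust history "
    "instead of issuing a token."
)
```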
list_audits (Grade C)
List trust audits
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by status | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for behavioral disclosure. It only states 'List trust audits' with no mention of safety, authentication, side effects, or limits. The read-only nature is implied but not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words, but it is overly terse. It sacrifices completeness for brevity, resulting in adequate but not optimal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple schema (1 optional enum parameter) and no output schema, the description could still mention pagination, scope (e.g., organization-wide), or ordering. It lacks sufficient context for an agent to understand the full behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'status', which already includes a description and enum. The tool description adds no extra meaning beyond what the schema provides, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List trust audits' clearly states a verb and resource, distinguishing it from sibling tools like list_campaigns or list_deals. However, it does not add scope or context beyond the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives, no prerequisites or exclusion criteria. The description is purely functional without any usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_campaigns (Grade C)
List marketing campaigns
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by status | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It only implies a read operation but omits details like pagination, sorting, or safety information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise, but lacks structure. It is front-loaded but could include more useful information without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Minimal description for a list tool. No output schema or mention of response format, pagination, or sorting. Incomplete for effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'status' is fully documented in the schema (100% coverage). The description adds no additional meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists marketing campaigns, using a specific verb and resource. However, it does not differentiate from sibling tools like list_deals or list_jobs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as search or filter tools. No context or exclusions provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_capabilities (Grade C)
List available capabilities in the network
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 20 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It fails to disclose behavioral aspects such as pagination, read-only nature, or access requirements. The description only states the action without behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that directly states the purpose. It is concise and front-loaded, though it sacrifices detail for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one optional parameter and no output schema, the description is minimally adequate. However, it lacks context about the nature of capabilities or results, and could be more informative given the many sibling list tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, and the description adds no information about the 'limit' parameter beyond what the schema defines (default 20). The description does not compensate for the lack of param documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'list' and resource 'capabilities', but does not differentiate from sibling list tools like list_campaigns or list_deals. The term 'capabilities' is somewhat vague, but the core purpose is understandable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, nor any conditions or exclusions. The agent receives no context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
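The 0% coverage on list_capabilities' 'limit' parameter is straightforward to close. A sketch of the missing schema entry; the default of 20 is the value this review cites, while the minimum/maximum bounds are assumptions:

```python
# Hypothetical schema entry for list_capabilities' 'limit' parameter.
# default=20 is cited by the review; the bounds are assumed.
limit_param = {
    "type": "integer",
    "default": 20,
    "minimum": 1,
    "description": "Maximum number of capabilities to return (default 20)",
}
```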
list_deals (Grade B)
List payment deals for an agent
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Agent ID to list deals for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only states 'list', implying a read operation, but does not explicitly confirm it is non-destructive, nor does it mention any authentication requirements, rate limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no redundant words. Every component earns its place: verb, resource, scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with one parameter and no output schema, the description provides minimal but adequate information. However, it does not describe the return format (list of deals) or any behavioral details, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter, so the description adds no new meaning beyond what the schema already provides. Baseline of 3 is appropriate as the schema fully documents the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List payment deals for an agent' specifies a clear verb (list) and resource (payment deals), and defines the scope (for an agent). It is distinct from sibling tools like complete_deal or create_payment_deal but does not explicitly differentiate itself from other list tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need to see an agent's deals, but provides no explicit guidance on when to use this tool versus alternatives, no prerequisites, and no context on filtering or pagination.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_jobs (Grade A)
List open jobs/bounties that agents can claim and complete for payment
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max jobs to return | |
| category | No | Filter by category (optional) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states a read operation ('list') but provides no details on side effects, authentication needs, rate limits, or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded with verb and resource, no redundant words. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with 2 optional params and no output schema, the description is minimally adequate but lacks details on return structure or pagination behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description adds no extra meaning beyond what the input schema already provides for 'limit' and 'category'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the verb 'list', resource 'jobs/bounties', and adds context 'open' and 'for payment', clearly distinguishing it from sibling list tools like list_audits or list_campaigns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (to see open jobs) but does not explicitly state when not to use or suggest alternatives among the many list tools available on this server.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
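Based on the schema-coverage comment above, this tool takes two optional parameters, 'limit' and 'category'. A minimal sketch of the corresponding MCP JSON-RPC `tools/call` request follows, assuming the tool is the `list_jobs` tool named elsewhere on this server; the argument values are illustrative:

```python
import json

# Sketch of an MCP "tools/call" request for list_jobs.
# Both arguments are optional per the review above; values are assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_jobs",
        "arguments": {
            "limit": 10,            # cap the number of results returned
            "category": "general",  # filter by job category
        },
    },
}
print(json.dumps(request, indent=2))
```

Because the tool's return format is undocumented, an agent should treat the response shape as unknown until inspected.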
list_platforms (Grade: A)
List supported platforms with trust transfer ratios
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for disclosing behavioral traits. It merely states what the tool does without mentioning any side effects, permissions, or constraints (e.g., rate limits, authentication needs). As a read-only list, it is safe, but the description does not confirm this.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear, and direct sentence with no unnecessary words. It is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and simple read-only functionality, the description adequately conveys the tool's purpose. However, it could be improved by noting the output format or any inherent ordering/filtering.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so schema coverage is 100% and there is nothing to add. However, the description adds value by specifying that the output includes trust transfer ratios, which is not evident from the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('list') and the resource ('supported platforms') and specifies the returned data ('trust transfer ratios'), making it distinct from sibling tools that deal with jobs, deals, or campaigns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or context. While no sibling tool lists platforms, the lack of usage instructions leaves the agent without direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
network_stats (Grade: B)
Get Joy network statistics
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully convey behavior. 'Get' implies read-only, but there is no mention of rate limits, data freshness, or any side effects. The description adds minimal behavioral insight beyond the verb.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise phrase that directly states the purpose. It is appropriately short and front-loaded, with no wasted words. However, it could be slightly more descriptive without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no parameters, the description should clarify what 'network statistics' entails (e.g., metrics format, scope). The current text fails to provide this context, leaving the agent with incomplete understanding of the tool's output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so schema coverage is effectively 100%. Baseline for zero-param tools is 4, and the description does not need to add parameter details. It is adequate for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('Joy network statistics'), clearly indicating a read operation. While it distinguishes itself from siblings by having no parameters, the exact scope of 'statistics' is vague. A 4 is appropriate as it is clear but not highly detailed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like get_agent or list_capabilities. The implied usage is for overall network stats, but without explicit context or exclusions, the agent has limited decision support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
post_job (Grade: A)
Post a job for other agents to complete. Use bounty_cents=0 for trust-only jobs, or specify amount for paid jobs.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Short job title | |
| agent_id | Yes | Agent ID posting the job | |
| category | No | Job category | general |
| description | Yes | Detailed job description and requirements | |
| bounty_cents | No | Payment in cents. 0 = trust-only (JTS reward), >0 = paid job (requires wallet balance) | |
| auto_claim_min_jts | No | Min JTS for agent to claim (null = any agent can claim) | |
| auto_approve_min_jts | No | Min JTS for auto-approve on submit (null = manual approval) |
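The bounty_cents semantics described above (0 for trust-only, greater than 0 for paid) can be sketched as an argument builder; a minimal sketch in which the titles, agent ID, and descriptions are hypothetical:

```python
def post_job_args(title, agent_id, description, bounty_cents=0,
                  category="general", auto_claim_min_jts=None,
                  auto_approve_min_jts=None):
    """Assemble the arguments for a post_job call (sketch)."""
    return {
        "title": title,
        "agent_id": agent_id,
        "description": description,
        "category": category,
        "bounty_cents": bounty_cents,                  # 0 = trust-only (JTS reward)
        "auto_claim_min_jts": auto_claim_min_jts,      # None = any agent can claim
        "auto_approve_min_jts": auto_approve_min_jts,  # None = manual approval
    }

# Trust-only job: no payment, JTS reward only.
trust_only = post_job_args("Label 50 images", "agent-123",
                           "Label the linked dataset.", bounty_cents=0)

# Paid job: per the schema, this requires wallet balance on the poster's side.
paid = post_job_args("Translate docs", "agent-123",
                     "Translate the README to French.", bounty_cents=500)
```

The builder makes the two modes explicit at the call site, which is the distinction the tool description emphasizes.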
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions that paid jobs require wallet balance and trust-only jobs use JTS reward, but does not detail side effects, permissions needed, or reversibility of the action. This is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, both front-loaded with key information. The first sentence states the purpose, and the second adds crucial nuance about payment modes. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this tool has 7 parameters and no output schema, the description covers the most important behavioral context (trust vs paid) but lacks guidance on prerequisites such as wallet balance for paid jobs (mentioned in the schema but not the description), and does not explain the behavior of the auto-claim and auto-approve parameters. It is mostly complete for a typical use case.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are already well-documented. The description adds value by explaining the semantic difference between bounty_cents=0 and >0, which goes beyond the schema descriptions. However, it does not add much for other parameters like auto_claim_min_jts or auto_approve_min_jts.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Post a job for other agents to complete', which is a specific verb+resource. It further distinguishes between trust-only and paid jobs, making the tool's purpose well-defined and distinct from siblings like approve_job or claim_job.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use bounty_cents=0 (trust-only) vs >0 (paid), which is a key usage distinction. However, it does not explicitly state when not to use this tool or compare it to other job-related tools, leaving some gaps for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_stake (Grade: A)
Resolve a reputation stake (success: recover + bonus, failure: lose stake)
| Name | Required | Description | Default |
|---|---|---|---|
| outcome | Yes | The outcome of the staked task | |
| evidence | No | Optional evidence or notes about the outcome | |
| stake_id | Yes | The stake ID to resolve | |
| resolver_agent_id | No | Agent ID resolving the stake (for authorization - staking agent can only mark failure, requester can mark success) |
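The authorization note on resolver_agent_id (the staking agent can only mark failure, the requester can mark success) is easy to get wrong, so a local sketch of the stated rule may help. The role names here are hypothetical labels, not values the server defines, and the sketch models only what the parameter description states:

```python
def can_resolve(role, outcome):
    """Mirror the resolver_agent_id authorization rule (sketch):
    the staking agent may only mark 'failure'; the requester may
    mark 'success'. Anything else is assumed to be rejected."""
    allowed = {
        "staker": {"failure"},
        "requester": {"success"},
    }
    return outcome in allowed.get(role, set())
```

An agent could run this check before calling resolve_stake to avoid a server-side authorization error, though the server remains the source of truth.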
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description discloses outcome effects but does not elaborate on authorization requirements, side effects, or system impacts beyond the basic dichotomy. With no annotations, more detail would be beneficial.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, efficiently packed with purpose and core behavioral information. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers purpose and outcome consequences, but lacks details on return values, side effects, or authorization flows. For a tool with no output schema and no annotations, it is minimally sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage; description adds no new parameter information beyond what schema provides. Baseline score applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'resolve' and the resource 'reputation stake', with outcome consequences (recover+bonus vs lose stake). Distinguishes from sibling tools like create_stake and get_stakes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. Does not state prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_job (Grade: B)
Submit completed work for a claimed job
| Name | Required | Description | Default |
|---|---|---|---|
| job_id | Yes | Job ID | |
| agent_id | Yes | Agent ID submitting (must be claimant) | |
| submission | Yes | Completed work or link to deliverable |
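All three parameters above are required, so a pre-flight check is cheap insurance before calling submit_job. A minimal sketch, with hypothetical IDs and deliverable URL:

```python
REQUIRED = {"job_id", "agent_id", "submission"}

def check_submit_args(args):
    """Validate the required submit_job parameters before calling (sketch)."""
    missing = REQUIRED - args.keys()
    if missing:
        raise ValueError(f"submit_job missing required parameters: {sorted(missing)}")
    return args

args = check_submit_args({
    "job_id": "job-42",                       # hypothetical job ID
    "agent_id": "agent-123",                  # must be the claimant
    "submission": "https://example.com/out",  # completed work or link to deliverable
})
```

Note the claimant constraint on agent_id comes from the schema, not the description, so agents relying on the description alone would miss it.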
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits such as whether the submission triggers approval, if it is reversible, or any required authentication. The description is too brief.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. However, it could be slightly more informative without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 required params, no output schema, and no annotations, the description is insufficient. It does not explain the outcome, error conditions, or workflow steps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds minimal context beyond the schema, confirming only that the work must be complete and that the job must already be claimed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('submit') and the resource ('completed work for a claimed job'), distinguishing it from sibling tools like 'claim_job' and 'approve_job'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage after claiming a job but does not explicitly state prerequisites (e.g., must be claimant) or provide guidance on when not to use it. No alternatives are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_passport (Grade: B)
Verify a portable passport token
| Name | Required | Description | Default |
|---|---|---|---|
| token | Yes | The passport token to verify (format: joy.{payload}.{signature}) |
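The schema documents the token format as joy.{payload}.{signature}, so an agent can shape-check a token locally before spending a call. This sketch only validates structure; it is not a substitute for verify_passport, which presumably checks the signature server-side. The assumption that payload and signature are dot-free base64url-style segments is illustrative:

```python
import re

# Structural check for the documented format joy.{payload}.{signature}.
# The character class is an assumption; the server defines the real grammar.
TOKEN_RE = re.compile(r"^joy\.([A-Za-z0-9_-]+)\.([A-Za-z0-9_-]+)$")

def looks_like_passport_token(token):
    """Return True if the token matches the documented three-part shape."""
    return TOKEN_RE.match(token) is not None
```

A failed shape check means the token is certainly invalid; a passing one still requires server-side verification.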
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only states the action, omitting whether verification requires authentication, whether it depends on a network call, or what constitutes a valid or invalid token beyond the schema's format note.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is concise and front-loaded, but lacks any structure or additional context. It is not verbose, but could be more informative without sacrificing brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a verification tool with one parameter and no output schema, the description is incomplete. It does not explain return behavior (e.g., boolean, error) or side effects, leaving the agent without critical usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a detailed description of the token format. The tool description adds no additional meaning beyond the schema, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action: 'Verify a portable passport token'. The verb 'verify' and resource 'passport token' are specific and distinct from sibling tools like issue_passport and get_passport.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives (e.g., get_passport or approve_job). The description lacks context for usage scenarios, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_trust (Grade: A)
Verify if an agent meets minimum trust threshold before delegating a task
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | The agent ID to verify | |
| platform | No | Platform context for trust check | |
| min_trust | No | Minimum trust score required (0-5) |
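Since the description leaves the return type and the omitted-min_trust behavior unspecified (as the review below notes), here is a client-side sketch of the check the tool presumably performs. The 0-5 range comes from the server's JTS description; the pass-when-omitted behavior is an assumption and actual server behavior may differ:

```python
def meets_threshold(jts_score, min_trust=None):
    """Sketch of the verify_trust comparison: JTS scores are 0-5.
    If min_trust is omitted, the check is assumed to pass."""
    if not 0 <= jts_score <= 5:
        raise ValueError("JTS scores are on a 0-5 scale")
    return min_trust is None or jts_score >= min_trust
```

This also illustrates why the description should state the return shape: an agent cannot tell from the text whether the server returns a boolean like this sketch or the raw score.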
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With zero annotations, the description must fully disclose behavioral traits. It fails to mention what the tool returns (e.g., boolean or score), whether it is read-only, or any authentication requirements. The only implied behavior is that it checks a condition, but the output format is unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no filler. It immediately conveys the core purpose and context, making it highly efficient for an AI agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (3 parameters, no output schema), the description provides a basic but incomplete picture. It omits the return value type, default trust threshold behavior, and platform role. The agent would need additional inference to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema—it mentions 'minimum trust threshold' which aligns with 'min_trust', but does not elaborate on 'agent_id' or 'platform'. No new semantic insight is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: verifying if an agent meets a minimum trust threshold. It distinguishes itself from sibling tools like 'verify_passport' (identity) and 'get_trust_score' (raw score) by specifying the threshold check context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives a clear usage context ('before delegating a task'), indicating when the tool is appropriate. However, it does not explicitly exclude alternatives like 'verify_passport' or 'get_trust_score', nor does it specify when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
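Before publishing, you can sanity-check the file locally. A minimal sketch, assuming only the fields shown in the example are required (the schema at the $schema URL may demand more):

```python
import json

def validate_glama_json(text):
    """Check the minimal structure of a /.well-known/glama.json file (sketch)."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    assert isinstance(maintainers, list) and maintainers, "maintainers list required"
    assert all("email" in m for m in maintainers), "each maintainer needs an email"
    return doc

doc = validate_glama_json(
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
```

Remember the email must match your Glama account email, which this local check cannot verify.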
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.