Ownership verified

Server Details

MCP server exposing T54 XRPL x402 seller SKUs (OpenAPI) as tools: structured query, research brief, commerce data, and related routes. HTTP 402 + x402 settlement via broker (XRPL / Base USDC per env). Unified HTTPS host with health at /health.
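The 402-then-settle behavior described above can be sketched from the client side. The status-to-action mapping below is a hypothetical helper written for illustration, not part of the server's documented API; only the HTTP 402 + x402 broker convention itself comes from the listing.

```python
def classify_x402_response(status: int) -> str:
    """Map an HTTP status from an x402-gated route to a client action.

    Per the listing, paid routes may return HTTP 402 until the x402
    broker settles on the configured rail (XRPL or Base USDC), so 402
    is a "pay and retry" signal rather than a hard failure.
    """
    if status == 402:
        return "await-settlement"   # pay via the broker, then retry the same request
    if 200 <= status < 300:
        return "ok"
    if status in (408, 429) or 500 <= status < 600:
        return "retry"              # transient; back off and retry
    return "fail"                   # other 4xx errors are not retryable

print(classify_x402_response(402))  # await-settlement
print(classify_x402_response(200))  # ok
```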

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions [Grade B]

Average 3.5/5 across 12 of 12 tools scored. Lowest: 2.6/5.

Server Coherence [Grade B]
Disambiguation: 3/5

The tool set has two distinct groups: contract security tools (audit, monitor, triage) and T54/x402 API tools (commerce data, airdrop intelligence, etc.), which are clearly separated. However, within the T54 group, tools like 't54_structured_query' and 't54_research_brief' have overlapping purposes for information retrieval, and 't54_x402_request' is a generic fallback that could be confused with specific tools. This creates some ambiguity in agent selection.

Naming Consistency: 2/5

Naming is highly inconsistent across the tool set. The first three tools use a clear 'verb_noun' pattern (e.g., contract_audit), while the remaining nine tools use a 't54_*' prefix with mixed styles: some are descriptive (t54_airdrop_intelligence), others are vague (t54_hello_ping), and one is generic (t54_x402_request). This mix of conventions (snake_case with and without prefixes) and lack of a uniform verb pattern makes the naming chaotic and unpredictable.

Tool Count: 4/5

With 12 tools, the count is reasonable for the server's broad purpose of contract security and agent commerce data. It's not excessive, and each tool appears to serve a specific function, though some overlap exists. The number aligns well with the dual-domain scope, avoiding the extremes of being too thin or too heavy.

Completeness: 3/5

For contract security, the surface is fairly complete with audit, monitor, and triage tools covering key workflows. For the T54/x402 API domain, tools cover data retrieval, health checks, and operations listing, but there are notable gaps: no update or delete operations for managing data, and the reliance on a generic 't54_x402_request' tool suggests missing specialized endpoints. This limits full lifecycle coverage and may lead to agent workarounds.

Available Tools

12 tools
contract_audit: Full contract audit (Base USDC x402) [Grade A]
Idempotent

Run a full 5-phase security audit on up to 3 EVM contract addresses. Handles EIP-1167 proxy/implementation pairs. Returns complete intelligence card including Slither findings, Echidna fuzz results, deployer profiling, EIP-7702 delegate detection, money flow trace, and holder analysis.

Parameters (JSON Schema)

- chainId (optional, default eip155:8453): CAIP-2 chain id, default eip155:8453
- addresses (required): Comma-separated addresses (max 3)

Output Schema (JSON Schema)

- result (required)
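An agent calling contract_audit could pre-validate its arguments against the documented constraints (comma-separated addresses, max 3) before paying for a call. The 0x-address regex and the helper name below are assumptions for illustration, not part of the tool's schema.

```python
import re

# Assumed format for EVM addresses: 0x followed by 40 hex characters.
_EVM_ADDR = re.compile(r"^0x[0-9a-fA-F]{40}$")

def build_audit_args(addresses: list[str], chain_id: str = "eip155:8453") -> dict:
    """Build arguments for contract_audit: comma-separated addresses, max 3."""
    if not 1 <= len(addresses) <= 3:
        raise ValueError("contract_audit accepts between 1 and 3 addresses")
    bad = [a for a in addresses if not _EVM_ADDR.match(a)]
    if bad:
        raise ValueError(f"not EVM addresses: {bad}")
    return {"chainId": chain_id, "addresses": ",".join(addresses)}

args = build_audit_args(["0x" + "ab" * 20])
print(args["chainId"])  # eip155:8453
```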
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate non-read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context beyond this: it specifies the 5-phase audit process, details the returned intelligence card components (e.g., Slither findings, Echidna fuzz results), and mentions handling of EIP-1167 pairs. This enriches understanding of the tool's operational scope and outputs, though it doesn't cover rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific details in a compact second sentence. Every sentence earns its place by conveying essential information without redundancy, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, rich annotations (non-read-only, open-world, idempotent, non-destructive), and the presence of an output schema, the description is complete enough. It outlines the audit process, constraints (up to 3 addresses), and detailed return components, compensating for any gaps without needing to explain return values due to the output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters. The description adds minimal parameter semantics beyond the schema, only implying that 'addresses' are EVM contract addresses and up to 3 are allowed, which is partially covered in the schema's description. This meets the baseline for high schema coverage without significant added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Run a full 5-phase security audit') and resource ('on up to 3 EVM contract addresses'), distinguishing it from siblings like 'contract_triage' or 't54_constitution_audit_lite' by emphasizing comprehensiveness and specific outputs. It explicitly mentions handling EIP-1167 proxy/implementation pairs, which further differentiates its scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for comprehensive security audits on EVM contracts, with a limit of up to 3 addresses. However, it does not explicitly state when not to use it or name alternatives among siblings, such as 'contract_triage' for lighter analysis or 't54_constitution_audit_lite' for less intensive audits.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

contract_monitor_subscribe: Contract monitoring subscription (Base USDC x402) [Grade A]
Idempotent

Subscribe to 30-day continuous monitoring of up to 10 EVM contract addresses. Fires webhook alerts on admin key movement, liquidity drops, claim condition changes, or new critical Slither findings.

Parameters (JSON Schema)

- addresses (required): Up to 10 contract addresses
- thresholds (optional): Optional thresholds object
- webhookUrl (required): HTTPS webhook URL
- durationDays (optional)

Output Schema (JSON Schema)

- result (required)
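A similar pre-flight check fits contract_monitor_subscribe: at most 10 addresses and an HTTPS webhook URL. Whether addresses are passed comma-separated is an assumption here; the schema only says "Up to 10 contract addresses".

```python
from urllib.parse import urlparse

def build_monitor_args(addresses, webhook_url, thresholds=None, duration_days=None):
    """Build arguments for contract_monitor_subscribe.

    Enforces the documented limits: 1-10 addresses and an HTTPS
    webhook URL. Optional fields are omitted rather than sent as null.
    """
    if not 1 <= len(addresses) <= 10:
        raise ValueError("contract_monitor_subscribe accepts 1 to 10 addresses")
    if urlparse(webhook_url).scheme != "https":
        raise ValueError("webhookUrl must be an HTTPS URL")
    args = {"addresses": ",".join(addresses), "webhookUrl": webhook_url}
    if thresholds is not None:
        args["thresholds"] = thresholds
    if duration_days is not None:
        args["durationDays"] = duration_days
    return args

args = build_monitor_args(["0x" + "00" * 20], "https://example.com/hook", duration_days=30)
print(sorted(args))  # ['addresses', 'durationDays', 'webhookUrl']
```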
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it specifies the 30-day duration, webhook alert triggers (admin key movement, liquidity drops, etc.), and monitoring scope (up to 10 addresses). While annotations cover idempotency and non-destructive nature, the description provides specific operational details that enhance understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise: two short sentences that efficiently communicate the core functionality, constraints, and alert types. Every element earns its place with zero wasted words, and it's front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (subscription service with multiple alert types), the description provides complete context about what the tool does, its duration, constraints, and alert mechanisms. With annotations covering safety aspects and an output schema presumably handling return values, the description is appropriately comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 75% schema description coverage, the baseline is 3. The description mentions 'up to 10 EVM contract addresses' which aligns with the addresses parameter, and 'webhook alerts' which relates to webhookUrl, but doesn't add significant meaning beyond what the schema already documents for thresholds and durationDays.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Subscribe to 30-day continuous monitoring'), resource ('EVM contract addresses'), and scope ('up to 10 addresses'). It distinguishes from siblings by focusing on subscription-based monitoring rather than one-time audits or other operations listed in sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context for continuous monitoring with webhook alerts, but doesn't explicitly state when to use this versus alternatives like contract_audit or contract_triage. It provides clear functional context but lacks explicit comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

contract_triage: EVM contract triage (Base USDC x402) [Grade A]
Idempotent

Screen an EVM smart contract address for malicious patterns, honeypots, rug mechanics, and known scam frameworks. Returns risk score (0-100), verdict (SAFE/SUSPICIOUS/MALICIOUS), and top threat flags. Completes in under 30 seconds.

Parameters (JSON Schema)

- chainId (optional, default eip155:8453): CAIP-2 chain id, default eip155:8453
- contractAddress (required): 0x-prefixed EVM address

Output Schema (JSON Schema)

- result (required)
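The triage description suggests a cheap-first workflow: screen with contract_triage and escalate to contract_audit only when the verdict is not SAFE. The sketch below assumes a generic `call_tool(name, args)` MCP invocation hook and stubs it out, since result shapes beyond the documented verdict values are assumptions.

```python
def screen_then_audit(address: str, call_tool) -> dict:
    """Cheap-first pipeline: run contract_triage (< 30s), escalate to the
    full contract_audit only when the verdict is not SAFE.

    `call_tool(name, args)` stands in for an MCP client's tool-invocation
    method; verdict values (SAFE/SUSPICIOUS/MALICIOUS) come from the
    tool description.
    """
    triage = call_tool("contract_triage", {"contractAddress": address})
    if triage.get("verdict") == "SAFE":
        return {"stage": "triage", "result": triage}
    audit = call_tool("contract_audit", {"addresses": address})
    return {"stage": "audit", "result": audit}

# Stub demonstrating the control flow without a live server:
def fake_call(name, args):
    return {"verdict": "SUSPICIOUS"} if name == "contract_triage" else {"report": "full"}

print(screen_then_audit("0x" + "00" * 20, fake_call)["stage"])  # audit
```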
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide hints (non-read-only, open-world, idempotent, non-destructive), but the description adds valuable behavioral context: it specifies a performance constraint ('Completes in under 30 seconds') and outlines the return format (risk score, verdict, threat flags), which are not covered by annotations. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by return details and a performance note. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (security screening), rich annotations, and the presence of an output schema (which handles return values), the description is complete. It covers purpose, scope, performance, and output at a high level, aligning well with the structured data without unnecessary detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters (contractAddress and chainId). The description does not add meaning beyond the schema, such as explaining parameter interactions or constraints, so it meets the baseline of 3 without compensating for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Screen an EVM smart contract address') and the resource type ('EVM smart contract'), distinguishing it from siblings like contract_audit or contract_monitor_subscribe by focusing on rapid triage for malicious patterns rather than comprehensive auditing or monitoring.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for quick security screening ('Completes in under 30 seconds') and mentions specific threat types (malicious patterns, honeypots, etc.), providing clear context. However, it does not explicitly state when to use alternatives like contract_audit for deeper analysis or when not to use this tool, missing explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_agent_commerce_data: Verifiable swarm proof + x402 commerce bundle (premium) [Grade C]
Idempotent

Verifiable swarm proof + x402 commerce bundle (premium)

HTTP GET /x402/v1/agent-commerce-data (operationId agentCommerceData). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.

Parameters (JSON Schema)

- depth (optional, default standard): Query parameter `depth` for operation `agentCommerceData` (GET /x402/v1/agent-commerce-data). Allowed values: standard, full; default: 'standard'.

Output Schema (JSON Schema)

- result (required)
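The single `depth` parameter can be validated client-side against its enum before the paid call; this tiny helper mirrors the schema's allowed values and default, and is illustrative rather than part of any published client.

```python
def commerce_data_args(depth: str = "standard") -> dict:
    """Validate the `depth` query parameter for agentCommerceData.

    Allowed values and the default come directly from the schema:
    standard | full, defaulting to 'standard'.
    """
    allowed = {"standard", "full"}
    if depth not in allowed:
        raise ValueError(f"depth must be one of {sorted(allowed)}, got {depth!r}")
    return {"depth": depth}

print(commerce_data_args())        # {'depth': 'standard'}
print(commerce_data_args("full"))  # {'depth': 'full'}
```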
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key behavioral traits (readOnlyHint=false, openWorldHint=true, idempotentHint=true, destructiveHint=false), so the description's burden is lower. It adds context about potential HTTP 402 responses for paid routes, which is useful beyond annotations. However, it doesn't detail rate limits, authentication needs, or other operational behaviors, offering only moderate additional value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief but front-loaded with the title restatement, followed by technical details. However, it includes redundant information (e.g., repeating the title) and lacks clear structure to guide the agent. While not verbose, it could be more efficiently organized to emphasize key points.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (implied by 'premium' and payment notes), annotations provide good coverage, and an output schema exists, reducing the description's burden. The description mentions payment behavior but omits details on return values, error handling, or integration with siblings. It's minimally adequate but leaves gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the single parameter 'depth' with its enum values and default. The description adds no parameter-specific information beyond what's in the schema. With high schema coverage, the baseline score is 3, as the description doesn't compensate but also doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description restates the title with minimal elaboration, providing only the HTTP method and endpoint. It mentions 'verifiable swarm proof' and 'x402 commerce bundle' but doesn't explain what these are or what the tool actually does. The purpose is vague and doesn't distinguish this tool from siblings like 't54_x402_request' or 't54_structured_query'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes a note about 'Paid routes may return HTTP 402 until the x402 broker settles on the configured rail,' which hints at payment-related usage but doesn't explicitly state when to use this tool versus alternatives. No guidance is provided on prerequisites, context, or comparisons to sibling tools, leaving the agent with little direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_airdrop_intelligence: Airdrop / incentive screening (Farm Score, risk flags) [Grade C]
Idempotent

Airdrop / incentive screening (Farm Score, risk flags)

HTTP GET /x402/v1/airdrop-intelligence (operationId airdropIntelligence). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.

Parameters (JSON Schema)

- topic (optional): Query parameter `topic` for operation `airdropIntelligence` (GET /x402/v1/airdrop-intelligence). Max length 256.
- context_note (optional): Query parameter `context` for operation `airdropIntelligence` (GET /x402/v1/airdrop-intelligence). Max length 4000.
- contractAddress (optional): Optional EVM contract on Base for Tier-1 triage merge

Output Schema (JSON Schema)

- result (required)
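The schema's length caps, and the quirk that the MCP parameter `context_note` maps to the HTTP query parameter `context`, can be encoded in a small argument builder. The helper name and error messages are illustrative; the limits (256 / 4000) come from the schema.

```python
def airdrop_intel_args(topic=None, context_note=None, contract_address=None):
    """Build query args for airdrop-intelligence.

    Note: the MCP-side parameter `context_note` is sent as the HTTP
    query parameter `context` per the schema. All three inputs are
    optional; omitted ones are left out entirely.
    """
    args = {}
    if topic is not None:
        if len(topic) > 256:
            raise ValueError("topic exceeds max length 256")
        args["topic"] = topic
    if context_note is not None:
        if len(context_note) > 4000:
            raise ValueError("context exceeds max length 4000")
        args["context"] = context_note  # renamed on the wire
    if contract_address is not None:
        args["contractAddress"] = contract_address
    return args

print(airdrop_intel_args(topic="points program"))  # {'topic': 'points program'}
```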
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover key traits (non-destructive, idempotent, open-world), but the description adds context about HTTP 402 responses for paid routes, which is useful for understanding potential errors. However, it doesn't elaborate on rate limits, authentication needs, or what 'screening' actually does beyond the title.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is poorly structured: it repeats the title, includes technical HTTP details that clutter the purpose, and lacks front-loaded clarity. Sentences like 'Paid routes may return HTTP 402 until the x402 broker settles on the configured rail' are verbose and obscure the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given annotations, a 100% schema coverage, and an output schema, the description is minimally adequate but incomplete. It fails to explain what 'Farm Score' or 'risk flags' mean, leaving gaps in understanding the tool's output and use despite structured data support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no additional meaning about 'topic', 'context_note', or 'contractAddress', sticking to the baseline since the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description restates the title ('Airdrop / incentive screening') and mentions 'Farm Score, risk flags' but lacks a specific verb or clear differentiation from siblings like 'contract_triage' or 't54_structured_query'. It's vague about what 'screening' entails operationally.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'contract_audit' or 't54_research_brief'. The mention of 'Paid routes may return HTTP 402' hints at payment requirements but doesn't specify use cases or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_constitution_audit_lite: Heuristic constitution / ethics review [Grade C]
Idempotent

Heuristic constitution / ethics review

HTTP GET /x402/v1/constitution-audit (operationId constitutionAuditLite). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.

Parameters (JSON Schema)

- prompt_snippet (optional): Query parameter `prompt_snippet` for operation `constitutionAuditLite` (GET /x402/v1/constitution-audit). Max length 4000.

Output Schema (JSON Schema)

- result (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide good coverage (readOnlyHint=false, openWorldHint=true, idempotentHint=true, destructiveHint=false), so the description's burden is lower. It adds some behavioral context about potential HTTP 402 responses for paid routes, which isn't covered by annotations. However, it doesn't elaborate on what 'heuristic constitution/ethics review' entails, rate limits, or authentication needs. The description doesn't contradict annotations but adds minimal extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief but poorly structured. The first line repeats the title without adding value, and the technical details (HTTP endpoint, operationId) are included but not explained in a user-friendly way. While concise, it lacks front-loaded clarity—the most important information (what the tool does) is missing, making the brevity more under-specification than efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (constitution/ethics review), annotations provide good behavioral hints, and an output schema exists (so return values are documented elsewhere). However, the description fails to explain the core functionality, leaving gaps in understanding what 'review' entails. It's minimally adequate due to structured field support but incomplete for guiding effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'prompt_snippet' well-documented in the schema (max length 4000, optional string/null). The description doesn't add any meaning beyond this—it merely repeats the parameter name without explaining its role in the audit/review process. With high schema coverage, the baseline is 3, and the description doesn't enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description restates the tool's title ('Heuristic constitution / ethics review') without specifying what the tool actually does. It mentions an HTTP endpoint and operation ID but doesn't explain the action being performed (e.g., 'audit a prompt for constitutional compliance' or 'review content against ethical guidelines'). The purpose remains vague rather than clearly stating a verb+resource combination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus its siblings. The description mentions 'Paid routes may return HTTP 402' which hints at potential payment requirements but doesn't clarify use cases, prerequisites, or alternatives. Without explicit when/when-not instructions or named alternatives, users lack context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_get_health: Seller health and LLM probe [Grade B]
Idempotent

Seller health and LLM probe

Returns JSON including llm probe; does not require payment.

HTTP GET /health (operationId getHealth). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

- result (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond what annotations provide. While annotations indicate readOnlyHint=false (contradicting the GET operation), openWorldHint=true, idempotentHint=true, and destructiveHint=false, the description clarifies this is an HTTP GET endpoint, specifies it doesn't require payment, and warns about potential HTTP 402 responses on paid routes. This adds practical implementation details that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise but not optimally structured. The first sentence 'Seller health and LLM probe' repeats the title without adding value. The subsequent sentences provide useful information about the HTTP method, payment status, and potential error responses, but the information isn't front-loaded effectively. Some redundancy exists between the title and opening phrase.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 0 parameters, comprehensive annotations, and an output schema exists, the description provides adequate context. It explains the core functionality (health check with LLM probe), payment status, and potential HTTP behavior. While it could better differentiate from sibling tools, the combination of structured data and description covers the essential information needed for this simple endpoint.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since there are none, which is correct. No additional parameter information is needed or provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool returns JSON including an 'llm probe' and mentions seller health, but it's vague about what specific action it performs. It doesn't clearly distinguish this health-check tool from siblings like 't54_hello_ping' or 't54_list_operations', which might also provide system status information. The phrase 'Seller health and LLM probe' is somewhat tautological with the title, lacking a specific verb+resource combination.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives. It mentions that it 'does not require payment' and contrasts with 'paid routes', but this is behavioral information rather than usage context. There's no indication of when this health check should be performed versus other tools like 't54_hello_ping' or 'contract_audit'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_hello_ping: Micropayment ping (cheapest SKU) [Grade A]
Idempotent

Micropayment ping (cheapest SKU)

Confirms HTTP 402 + T54 facilitator path; minimal JSON body.

HTTP GET /hello (operationId helloPing). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.
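The GET /hello plus 402-until-settlement behavior described above can be sketched in plain Python. This is a minimal illustration, assuming a placeholder `base_url` rather than the server's real host:

```python
import urllib.request
import urllib.error


def needs_settlement(status: int) -> bool:
    """True when a route is still gated behind an x402 payment."""
    return status == 402


def hello_ping(base_url: str) -> dict:
    """GET /hello on a seller host; flag whether settlement is required."""
    try:
        with urllib.request.urlopen(f"{base_url}/hello") as resp:
            return {"status": resp.status, "needs_payment": False}
    except urllib.error.HTTPError as err:
        if needs_settlement(err.code):
            # Payment required: the x402 broker must settle on the
            # configured rail before the route answers normally.
            return {"status": 402, "needs_payment": True}
        raise
```

In practice the broker handles the 402 and retries; this sketch only surfaces the status so a caller can see which branch was taken.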

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

Name | Required | Description
result | Yes | —
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read/write status (readOnlyHint=false), idempotency (idempotentHint=true), and safety (destructiveHint=false). The description adds valuable context beyond this: it discloses that the tool triggers HTTP 402 responses under certain conditions ('paid routes may return HTTP 402 until settlement'), which is a key behavioral trait not captured in annotations. It also specifies the HTTP method (GET) and endpoint (/hello), enhancing transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: it starts with a clear title-like phrase, followed by specific details about the operation (HTTP GET, endpoint, response behavior). Every sentence adds value without redundancy, making it efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple ping with potential HTTP 402 responses), rich annotations, 0 parameters, and presence of an output schema, the description is largely complete. It explains the core functionality and key behavioral nuance (HTTP 402 conditions). However, it could be slightly improved by clarifying its exact role versus t54_get_health, but the output schema likely handles return values, so this is minor.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, maintaining focus on the tool's purpose and behavior. This meets the baseline of 4 for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as confirming HTTP 402 + T54 facilitator path with a minimal JSON body, which is specific (verb+resource). It distinguishes from most siblings by focusing on a ping/health-check operation, though not explicitly contrasting with t54_get_health or t54_x402_request.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives. It mentions that 'paid routes may return HTTP 402 until settlement,' which hints at a context for HTTP 402 responses, but does not specify when to choose this over t54_get_health (general health check) or t54_x402_request (payment-related request).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_list_operations — List T54 OpenAPI operations — Grade: A
Read-only · Idempotent

Returns operationIds, HTTP methods, paths, and query parameter names from the bundled OpenAPI spec (no network). Use before t54_x402_request or per-SKU tools.
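The no-network listing behavior described above amounts to walking an OpenAPI document's `paths` object. A minimal sketch, using an illustrative toy spec fragment (the server's actual bundled spec is not reproduced here):

```python
def list_operations(spec: dict) -> list[dict]:
    """Collect operationIds, methods, paths, and query parameter names
    from an OpenAPI 'paths' object — no network access needed."""
    ops = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            query_params = [p["name"] for p in op.get("parameters", [])
                            if p.get("in") == "query"]
            ops.append({"operationId": op.get("operationId"),
                        "method": method.upper(),
                        "path": path,
                        "query_params": query_params})
    return ops


# Toy fragment mirroring two of the seller's documented routes.
spec = {
    "paths": {
        "/hello": {"get": {"operationId": "helloPing", "parameters": []}},
        "/x402/v1/query": {"get": {
            "operationId": "structuredQuery",
            "parameters": [{"name": "q", "in": "query"}]}},
    }
}
print(list_operations(spec))
```

An agent would call the real tool for this, of course; the sketch just shows why the operation can be answered entirely from the bundled spec.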

Parameters (JSON Schema)

No parameters

Output Schema (JSON Schema)

Name | Required | Description
result | Yes | —
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), but the description adds valuable context by specifying that it works 'from the bundled OpenAPI spec (no network)', which clarifies data source and operational constraints beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds usage guidance in the second, with no wasted words. Every sentence earns its place by providing essential information efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, rich annotations, and an output schema), the description is complete. It explains what the tool does, when to use it, and key behavioral aspects, leaving no gaps for an AI agent to understand its role.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline is 4. The description appropriately does not discuss parameters, as none exist, and focuses on the tool's function without redundancy.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Returns') and resource ('operationIds, HTTP methods, paths, and query parameter names from the bundled OpenAPI spec'), distinguishing it from siblings like 't54_x402_request' or 't54_structured_query' by specifying it's for listing operations without network calls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides explicit guidance on when to use this tool ('Use before t54_x402_request or per-SKU tools') and when not to use it ('no network'), offering clear alternatives and context for its application.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_research_brief — Multi-section research brief — Grade: C
Idempotent

Multi-section research brief

HTTP GET /x402/v1/research-brief (operationId researchBrief). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.

Parameters (JSON Schema)

Name | Required | Description | Default
topic | No | Query parameter `topic` for operation `researchBrief` (GET /x402/v1/research-brief). max length 256. | —
context_note | No | Query parameter `context` for operation `researchBrief` (GET /x402/v1/research-brief). max length 2000. | —

Output Schema (JSON Schema)

Name | Required | Description
result | Yes | —
Behavior — 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds some behavioral context beyond what annotations provide: it mentions the HTTP 402 payment status possibility, which isn't captured in the annotations. However, it doesn't explain what a 'multi-section research brief' actually contains, how comprehensive it is, what sources it uses, or any quality/accuracy limitations. With annotations covering idempotency, non-destructive nature, and open-world characteristics, the description adds marginal but incomplete behavioral context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise (two sentences) but poorly structured. The first sentence is just the tool title repeated, which adds no value. The second sentence mixes technical implementation details (HTTP GET, operationId) with payment behavior information. While brief, it's not effectively front-loaded with the most important information for an AI agent trying to understand when and how to use this tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has annotations covering key behavioral aspects (idempotent, non-destructive, open-world) and an output schema exists (so return values are documented elsewhere), the description's gaps are somewhat mitigated. However, for a tool that presumably generates research content, the description should at minimum explain what kind of research brief it produces and its basic characteristics. The current description leaves too much undefined about the tool's core functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both parameters (topic and context_note) including their types, defaults, and length constraints. The description adds no additional parameter semantics - it doesn't explain what constitutes a good 'topic' versus 'context_note,' how they affect the research brief output, or provide examples of effective usage. The baseline of 3 is appropriate when the schema does all the parameter documentation work.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description is a tautology that essentially restates the tool's title ('Multi-section research brief') without specifying what action it performs. It mentions an HTTP GET operation but doesn't clarify what the tool actually does (e.g., 'generates a research brief on a topic' or 'retrieves research brief content'). The description fails to distinguish this tool from its many siblings on the server.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it mentions that 'paid routes may return HTTP 402,' this is technical implementation detail rather than usage guidance. There's no indication of what problem this tool solves, what context it's designed for, or how it differs from sibling tools like 't54_structured_query' or 't54_x402_request.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_structured_query — Constitution-safe short LLM answer — Grade: B
Idempotent

Constitution-safe short LLM answer

Listing-friendly GET with q query parameter.

HTTP GET /x402/v1/query (operationId structuredQuery). Paid routes may return HTTP 402 until the x402 broker settles on the configured rail.
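A sketch of the listing-friendly GET, assuming a placeholder base URL. The 512-character cap and the default question come from the schema for `q`; the helper name is hypothetical:

```python
from urllib.parse import urlencode

# Default taken from the tool's schema for `q`.
DEFAULT_Q = "What is ethical agent commerce?"


def structured_query_url(base: str, q: str = DEFAULT_Q) -> str:
    """Build GET /x402/v1/query, capping `q` at the documented 512 chars."""
    if len(q) > 512:
        raise ValueError("q exceeds 512 characters")
    return f"{base}/x402/v1/query?{urlencode({'q': q})}"
```

Because `q` has a default, calling the route with no arguments still produces a well-formed paid request.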

Parameters (JSON Schema)

Name | Required | Description | Default
q | No | Query parameter `q` for operation `structuredQuery` (GET /x402/v1/query). max length 512. | 'What is ethical agent commerce?'

Output Schema (JSON Schema)

Name | Required | Description
result | Yes | —
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already provide good behavioral information (not read-only, open world, idempotent, non-destructive). The description adds valuable context about payment requirements ('Paid routes may return HTTP 402 until the x402 broker settles on the configured rail') which isn't captured in annotations. However, it doesn't mention rate limits, authentication needs, or what 'constitution-safe' means in practice. The description doesn't contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively concise but poorly structured. It front-loads the purpose but then mixes implementation details (HTTP GET, operationId) with behavioral information. The sentences about HTTP 402 and payment rails could be more clearly integrated. While not verbose, the structure doesn't optimally serve an AI agent trying to understand when and how to use this tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has comprehensive annotations and an output schema exists, the description doesn't need to cover return values. However, for a tool with payment requirements and the vague 'constitution-safe' concept, the description should provide more context about what this means operationally. The description leaves key questions unanswered about what makes responses 'constitution-safe' and how payment affects functionality.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single parameter 'q', the schema already documents it thoroughly including default value and max length. The description adds minimal value by mentioning 'q' query parameter but doesn't provide additional semantic context about what types of queries work best, what 'constitution-safe' means for query formulation, or examples of effective queries.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Constitution-safe short LLM answer' which provides a general purpose (safe LLM responses), but it's vague about what 'constitution-safe' means and doesn't specify what resource it operates on. The title repeats the same phrase, offering no additional clarity. It doesn't distinguish this from sibling tools like 't54_research_brief' or 't54_agent_commerce_data' which might also provide LLM-generated content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'Listing-friendly GET with `q` query parameter' which implies this is for simple queries, but provides no explicit guidance on when to use this versus alternatives. There's no mention of when not to use it or what specific scenarios it's designed for compared to sibling tools like 't54_research_brief' or 't54_x402_request'. The HTTP method and endpoint information doesn't constitute usage guidance for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

t54_x402_request — T54 x402 request (generic) — Grade: A
Idempotent

Execute any T54 seller operation by operationId with an optional query map. Prefer per-operation t54_* tools when available for clearer arguments. On HTTP 402, x402_broker_client pays then retries.
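The "on HTTP 402, pay then retry" contract can be sketched as a small control loop. The fetch and pay callables below are simulated stand-ins, not the broker's real API:

```python
def x402_request(do_get, pay):
    """Issue the request; if the seller answers 402, settle once and retry."""
    status, body = do_get()
    if status == 402:
        pay()                         # broker settles on the configured rail
        status, body = do_get()
    return status, body


# Simulated seller that answers 402 until a settlement is recorded.
state = {"settled": False}


def fake_get():
    return (200, {"ok": True}) if state["settled"] else (402, None)


def fake_pay():
    state["settled"] = True


print(x402_request(fake_get, fake_pay))  # → (200, {'ok': True})
```

The real broker client presumably carries payment proof on the retry; the loop above only captures the settle-then-retry shape.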

Parameters (JSON Schema)

Name | Required | Description | Default
query | No | Query string parameters as a JSON object. Keys must match OpenAPI query names for this operation (see resource openapi://agentic-swarm-t54-skus). | —
operation_id | Yes | Exact OpenAPI operationId. Call t54_list_operations first. Known: agentCommerceData, airdropIntelligence, constitutionAuditLite, getHealth, helloPing, researchBrief, structuredQuery | —

Output Schema (JSON Schema)

Name | Required | Description
result | Yes | —
Behavior — 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the HTTP 402 handling behavior ('x402_broker_client pays then retries'), which is not covered by the annotations. The annotations (readOnlyHint=false, openWorldHint=true, idempotentHint=true, destructiveHint=false) provide safety and idempotency info, but the description complements this with payment/retry logic for a specific error case. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded: two sentences that each earn their place. The first sentence states the core purpose, the second provides critical usage guidance and error handling. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (generic operation executor with open-world hint), the description is complete enough. It covers purpose, usage vs. alternatives, and key behavioral nuance (402 handling). With annotations covering safety/idempotency, schema covering parameters 100%, and an output schema existing, no major gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description mentions 'optional query map' and 'operationId', but doesn't add semantic meaning beyond what the schema provides (e.g., the schema already describes query as 'Query string parameters as a JSON object' and operation_id with examples). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute any T54 seller operation by operationId with an optional query map.' It specifies the verb ('Execute'), resource ('T54 seller operation'), and mechanism ('by operationId'). However, it doesn't explicitly differentiate from all siblings like 't54_list_operations' (which lists operations rather than executing them), though it does mention preferring per-operation tools when available.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Prefer per-operation t54_* tools when available for clearer arguments.' It names alternatives (the per-operation tools) and gives a clear when-not-to-use rule (when clearer arguments are needed). This directly addresses when to use this generic tool vs. the more specific sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
