
Boundary Guard x402

Server Details

MCP tools for x402 readiness, paid-path probes, launch packs, and trust receipts.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: B)

Average 3.7/5 across 6 of 6 tools scored. Lowest: 2.9/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct operation: checkpoint creation, readiness checks, receipt generation, launch pack generation, endpoint probing, and resource scanning. No functional overlap.

Naming Consistency: 5/5

All names follow a consistent verb_noun snake_case pattern (e.g., check_agent_tool_readiness, generate_trust_receipt). Verbs clearly indicate the action.

Tool Count: 5/5

Six tools is a well-scoped number for a specialized server covering core x402 boundary guard operations without excess.

Completeness: 4/5

Covers major workflows like check, probe, scan, and generation. Minor gaps like a 'verify' tool are not essential for the stated purpose, so completeness is high.

Available Tools

6 tools
boundary_guard_check: Boundary Guard Check (Grade: C)
Read-only · Idempotent

Create a Boundary Guard pre-action checkpoint receipt for an agent request, policy decision, and result summary. Suggested xpay price: $0.03/call.

Parameters (JSON Schema)
- policy (optional): Decision object, e.g. allow/retry/review/block and reason.
- result (optional): Optional result or dry-run summary to hash into evidence.
- request (required): Action metadata the agent is about to perform.
- nextStep (optional): Optional guidance stored in the receipt.

Output Schema (JSON Schema)
- decision (required)
- nextStep (optional)
- createdAt (required)
- receiptId (required)
- evidenceHash (required)
- claimBoundary (required)
- marketplacePositioning (optional)
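Per the input schema above, only `request` is required. A minimal sketch of assembling a call payload for this tool (the field contents inside `request` and `policy` are illustrative assumptions, not taken from the server's spec):

```python
def build_checkpoint_input(request, policy=None, result=None, next_step=None):
    """Assemble the boundary_guard_check input; only `request` is required."""
    payload = {"request": request}
    if policy is not None:
        payload["policy"] = policy
    if result is not None:
        payload["result"] = result
    if next_step is not None:
        payload["nextStep"] = next_step
    return payload

# Hypothetical action metadata and policy decision:
payload = build_checkpoint_input(
    request={"action": "http.post", "target": "https://api.example.com/orders"},
    policy={"decision": "allow", "reason": "within spend limit"},
)
```

Optional fields are omitted entirely rather than sent as null, which keeps the payload minimal and matches how optional JSON Schema properties are usually handled.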
Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states 'Create a ... receipt', implying a write operation, while annotations declare readOnlyHint=true, creating a direct contradiction. Additionally, idempotentHint=true conflicts with the idea of creating a new receipt on each call. No further behavioral context is provided beyond the contradictory annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two sentences: one for purpose and one listing a suggested price. It is efficient with no wasted words, though the price information may be extraneous.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the contradiction with the annotations and the lack of usage guidance, the description is incomplete. It fails to clarify the tool's read-only nature or idempotency, leaving the agent with misleading information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter documented in the input schema. The description does not add additional meaning beyond summarizing the purpose, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool creates a Boundary Guard pre-action checkpoint receipt, specifying it includes agent request, policy decision, and result summary. This is a specific verb-resource combination that distinguishes it from siblings like generate_trust_receipt.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as generate_trust_receipt or check_agent_tool_readiness. The description lacks explicit usage context or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_agent_tool_readiness: Agent Tool Readiness Checker (Grade: A)
Read-only · Idempotent

Check whether an x402/agent-facing tool is ready for agent routing, marketplace listing, and paid-path monitoring, including public agent discovery surfaces (/llms.txt, /agents.txt, /.well-known/mcp.json, /mcp). Tiers: quick $1, deep $5, report $10.

Parameters (JSON Schema)
- tier (optional): Readiness depth. quick=$1, deep=$5, report=$10. Defaults to quick.
- method (optional): Safe unpaid probe method when paid_path is supplied. Defaults to GET.
- target (required): Target API/provider base URL to scan.
- expected (optional): Optional expected x402 network/asset/price metadata for paid_path probes.
- paid_path (optional): Optional specific paid endpoint to probe without payment for deep/report tiers.
- marketplace_url (optional): Optional marketplace/listing URL to compare against public metadata.
- expected_resources (optional): Optional expected resource count.

Output Schema (JSON Schema)
- scan (optional)
- tier (required)
- ready (required)
- score (required)
- checks (required)
- issues (required)
- report (optional)
- target (required)
- product (required)
- priceUsd (required)
- healthProbe (optional)
- recommendedFixes (required)
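The description names four public agent discovery surfaces. A sketch of resolving those paths against a target base URL (how the checker actually fetches and scores them is not documented, so this only shows URL construction):

```python
from urllib.parse import urljoin

# The four discovery surfaces named in the tool description.
DISCOVERY_SURFACES = ["/llms.txt", "/agents.txt", "/.well-known/mcp.json", "/mcp"]

def discovery_urls(target_base):
    """Resolve each discovery path against the target base URL."""
    return [urljoin(target_base, path) for path in DISCOVERY_SURFACES]

urls = discovery_urls("https://api.example.com/")
```

Because each path is absolute (leading slash), `urljoin` anchors it at the host root regardless of any path component in the base URL.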
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context about the specific surfaces checked and pricing tiers, enhancing transparency beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences, front-loading the purpose and then listing tiers. No unnecessary words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters, output schema exists), the description covers the main purpose, tiers, and discovery surfaces. It could mention output or when to use each tier, but overall it is reasonably complete for an agent to understand.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so parameters are well-documented in the schema. The description does not add significant meaning beyond what the schema provides, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks readiness for agent routing, marketplace listing, paid-path monitoring, and public discovery surfaces (specific files). It differentiates from sibling tools which focus on other aspects of x402, like boundary guarding or receipt generation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for readiness checking and mentions pricing tiers, but does not explicitly guide when to choose this tool over siblings like probe_x402_paid_path or scan_x402_resource. No exclusions or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_trust_receipt: Generate Trust Receipt (Grade: A)
Read-only · Idempotent

Generate a deterministic trust receipt from sanitized request/policy/result/payment summaries. Do not submit raw auth headers, cookies, API keys, private keys, payment signatures, payment response headers, customer prompts, customer documents, or payer-identifying evidence. Suggested xpay price: $0.05/call.

Parameters (JSON Schema)
- policy (optional): Sanitized policy or decision summary to hash.
- result (optional): Sanitized outcome/result summary to hash; omit customer data and secret-like values.
- payment (optional): Optional sanitized payment summary or caller-provided hashes only; do not include raw payment signatures, raw payment response headers, private keys, API keys, cookies, payer-identifying evidence, or wallet secrets.
- request (required): Sanitized request/action summary to hash; omit raw prompts, documents, credentials, cookies, auth headers, signatures, and secrets.
- nextStep (optional): Optional receipt next-step guidance.

Output Schema (JSON Schema)
- decision (required)
- nextStep (optional)
- createdAt (required)
- receiptId (required)
- evidenceHash (required)
- claimBoundary (required)
- marketplacePositioning (optional)
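The description promises a deterministic receipt, and the output includes an `evidenceHash`. One plausible way to derive such a hash, canonical JSON (sorted keys, fixed separators) over the sanitized summaries followed by SHA-256; the server's actual hashing scheme is not documented here, so treat this as a sketch of the determinism property only:

```python
import hashlib
import json

def evidence_hash(request, policy=None, result=None, payment=None):
    """Hash sanitized summaries deterministically: canonical JSON then SHA-256."""
    canonical = json.dumps(
        {"request": request, "policy": policy, "result": result, "payment": payment},
        sort_keys=True,           # key order never affects the hash
        separators=(",", ":"),    # no whitespace variation
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

With sorted keys and fixed separators, two calls with the same logical inputs always produce the same 64-character hex digest, which is what makes a receipt independently re-verifiable from the same sanitized summaries.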
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and non-destructive. The description adds context about determinism and sensitive data restrictions, which are not in annotations. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first defines purpose, second lists prohibitions and price. No filler, front-loaded, and every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high schema coverage, existing annotations, and an output schema, the description covers purpose, constraints, and a pricing suggestion. It is complete for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with each parameter described. The description provides a general caution about data sanitization but does not add specific meaning beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Generate a deterministic trust receipt from sanitized request/policy/result/payment summaries,' which is a specific verb ('generate') and resource ('trust receipt'). It clearly distinguishes from sibling tools like boundary_guard_check or generate_x402_launch_pack, which have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists what not to include (raw auth headers, keys, etc.) but lacks explicit guidance on when to use this tool versus alternatives. No 'use when' or 'do not use if' conditions are stated, leaving usage context implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_x402_launch_pack: x402 Launch Pack Generator (Grade: B)
Read-only · Idempotent

Generate marketplace-safe listing copy, buyer FAQ, launch checklist, approval packet, and claim boundaries for x402/MCP sellers from readiness evidence. Tiers: single $9, service $29, premium $49.

Parameters (JSON Schema)
- tier (optional): Launch pack depth. single=$9, service=$29, premium=$49. Defaults to single.
- method (optional): Safe unpaid probe method when paid_path is supplied. Defaults to GET.
- target (required): Target API/provider base URL to package for launch.
- audience (optional): Primary buyer/audience for listing copy.
- expected (optional): Optional expected x402 network/asset/price metadata for paid_path probes.
- paid_path (optional): Optional paid endpoint to validate via unpaid 402 challenge for service/premium packs.
- product_name (optional): Buyer-facing product title.
- marketplace_url (optional): Optional marketplace/listing URL to compare against public metadata.
- primary_use_case (optional): Primary buyer outcome/use case.
- expected_resources (optional): Optional expected resource count.
- desired_marketplaces (optional): Optional marketplace names to include in launch planning.

Output Schema (JSON Schema)
- tier (required)
- report (optional)
- target (required)
- product (required)
- priceUsd (required)
- readiness (optional)
- launchPack (required)
- productName (required)
- claimBoundary (optional)
- readinessScore (required)
- readyForDistribution (optional)
- approvalRequiredBeforeDistribution (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, openWorldHint, destructiveHint false. Description adds that the tool 'generates' documents from 'readiness evidence', which is consistent but adds limited behavioral context beyond what annotations provide. No mention of safety, rate limits, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the main action and output types. No redundant phrasing. Tiers and pricing efficiently packed into second sentence. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite 11 parameters and an existing output schema, the description does not map inputs to outputs or explain how 'readiness evidence' relates to the parameters. Guidance on parameter usage, dependencies, and expected behaviors is missing. For a tool this complex, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3 applies. Description adds pricing context for the 'tier' enum (single $9, service $29, premium $49) but does not explain other parameters like target, audience, or paid_path. Value added is marginal beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates specific documents (listing copy, FAQ, checklist, packet, boundaries) for x402/MCP sellers, with verb 'Generate' and resource types, distinguishing it from siblings like probe_x402_paid_path or scan_x402_resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus its siblings. The description only lists output types and tiers, lacking context about prerequisites, scenarios, or alternatives. Use for launch pack generation is implied, but there is no 'when not to use' guidance and no reference to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

probe_x402_paid_path: x402 Paid-Path Health Probe (Grade: A)
Read-only · Idempotent

Probe a public x402 endpoint without payment, parse the 402 challenge, compare expected network/asset/price, and return a deterministic health receipt. Suggested xpay price: $0.50/call.

Parameters (JSON Schema)
- mode (optional): Probe mode for v1. Defaults to unpaid_402.
- method (optional): Safe unpaid probe method. Defaults to GET.
- target (required): Specific paid endpoint URL to probe without payment.
- expected (optional): Optional expected x402 metadata such as network, asset, and priceUsd.

Output Schema (JSON Schema)
- checks (required)
- issues (required)
- target (required)
- healthy (required)
- receipt (required)
- observed (required)
- recommendedFixes (required)
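The probe's comparison step, matching observed 402-challenge metadata against the caller's `expected` values, can be sketched as below. The field names (network, asset, priceUsd) come from the `expected` parameter description; the shape of the observed dict is an assumption about what the server parses out of the challenge:

```python
def compare_challenge(observed, expected):
    """Compare observed challenge metadata against expected values.

    Returns (checks, issues) in the spirit of the tool's output schema:
    checks maps each compared field to pass/fail, issues lists mismatches.
    """
    checks, issues = {}, []
    for field in ("network", "asset", "priceUsd"):
        if field not in expected:
            continue  # only compare fields the caller actually supplied
        ok = observed.get(field) == expected[field]
        checks[field] = ok
        if not ok:
            issues.append(
                f"{field}: expected {expected[field]!r}, got {observed.get(field)!r}"
            )
    return checks, issues
```

Skipping fields the caller did not supply keeps `expected` fully optional, matching its schema entry, while still failing loudly on any field that was supplied and does not match.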
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond annotations by detailing the probe behavior: it parses the 402 challenge, compares expected metadata, and issues a deterministic health receipt. Annotations already confirm safety, and the description enriches this with the probe's specific workflow and a suggested cost. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long with no filler. The first sentence front-loads the action and outcome; the second adds a cost suggestion. Every part is essential and efficiently conveys the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having 4 parameters, nested objects, and enums, the description covers the core probe flow and return value. An output schema exists to document return details. It does not explain mode differences, but the schema covers those. Overall, it is adequate for the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description indirectly explains the 'expected' parameter through the phrase 'compare expected network/asset/price', but does not elaborate on 'mode' or 'method' enums beyond what the schema provides. Its contribution over schema is marginal.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('probe') and defines the resource ('public x402 endpoint') and outcome ('return a deterministic health receipt'). It clearly differentiates from sibling tools by emphasizing 'without payment' and the comparison of expected values.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for health checking x402 endpoints but does not explicitly state when to use it versus alternatives like scan_x402_resource. The cost suggestion is a guideline but not about usage context. No exclusions or comparisons provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

scan_x402_resource: x402 Resource Scan (Grade: A)
Read-only · Idempotent

Read-only scan of public x402/OpenAPI metadata and optional marketplace listing staleness. Suggested xpay price: $0.10/call.

Parameters (JSON Schema)
- url (required): Target API/provider base URL to scan.
- marketplace_url (optional): Optional marketplace/listing URL to compare against public metadata.
- expected_resources (optional): Optional expected resource count.

Output Schema (JSON Schema)
- score (required)
- issues (required)
- prices (optional)
- target (required)
- nextSteps (required)
- marketplacePositioning (optional)
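The optional `expected_resources` hint supports the staleness check the description mentions. A minimal sketch of how such a hint might be evaluated (the server's real scoring logic is not public; this only illustrates the parameter's role):

```python
def resource_count_issue(observed_count, expected_resources=None):
    """Flag a mismatch between the observed resource count and the caller's
    optional expected_resources hint; returns None when there is nothing to flag."""
    if expected_resources is None:
        return None  # hint not supplied, nothing to compare
    if observed_count == expected_resources:
        return None
    return f"expected {expected_resources} resources, found {observed_count}"
```

A mismatch here would feed the scan's `issues` list, e.g. when a marketplace listing advertises more resources than the live metadata exposes.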
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false. The description adds 'Read-only scan' and notes a suggested price, but does not provide additional behavioral context beyond what annotations convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loading purpose and including a relevant pricing note, with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema (stated in context signals) and full annotations, the description adequately covers the tool's purpose and pricing. It could mention prerequisites or return value hints, but is mostly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with clear parameter descriptions. The description does not add meaning beyond what the schema already provides, so a baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Read-only scan of public x402/OpenAPI metadata' with a specific verb and resource, and includes optional marketplace staleness checking, distinguishing it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a use case for scanning x402 metadata and suggests a price, but does not explicitly state when to use this tool versus alternatives like boundary_guard_check or check_agent_tool_readiness.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
