Boundary Guard x402
Server Details
MCP tools for x402 readiness, paid-path probes, launch packs, and trust receipts.
- Status: Healthy
- Transport: Streamable HTTP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 6 of 6 tools scored. Lowest: 2.9/5.
Each tool targets a distinct operation: checkpoint creation, readiness checks, receipt generation, launch pack generation, endpoint probing, and resource scanning. No functional overlap.
All names follow a consistent verb_noun snake_case pattern (e.g., check_agent_tool_readiness, generate_trust_receipt). Verbs clearly indicate the action.
Six tools is a well-scoped count for a specialized server, covering the core x402 boundary-guard operations without excess.
Covers the major workflows: checking, probing, scanning, and generation. Minor gaps, such as the absence of a dedicated 'verify' tool, are not essential to the stated purpose, so completeness is high.
Available Tools
6 tools

boundary_guard_check (Boundary Guard Check)
Grade: C · Read-only · Idempotent
Create a Boundary Guard pre-action checkpoint receipt for an agent request, policy decision, and result summary. Suggested xpay price: $0.03/call.
| Name | Required | Description | Default |
|---|---|---|---|
| policy | No | Decision object, e.g. allow/retry/review/block and reason. | |
| result | No | Optional result or dry-run summary to hash into evidence. | |
| request | Yes | Action metadata the agent is about to perform. | |
| nextStep | No | Optional guidance stored in the receipt. | |
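A hypothetical arguments payload for this tool, to make the table above concrete. The inner shapes of `request` and `policy`, and all values below, are illustrative assumptions rather than anything documented by the server:

```jsonc
// Hypothetical example; field shapes and values are assumptions, not server docs.
{
  "request": {
    "action": "http.post",
    "target": "https://api.example.com/orders",
    "summary": "agent is about to submit a draft order"
  },
  "policy": { "decision": "allow", "reason": "target domain is on the allowlist" },
  "result": "dry-run only: payload validated, no funds moved",
  "nextStep": "store the receiptId before executing the live call"
}
```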
Output Schema
| Name | Required | Description |
|---|---|---|
| decision | Yes | |
| nextStep | No | |
| createdAt | Yes | |
| receiptId | Yes | |
| evidenceHash | Yes | |
| claimBoundary | Yes | |
| marketplacePositioning | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states 'Create a ... receipt', implying a write operation, while annotations declare readOnlyHint=true, creating a direct contradiction. Additionally, idempotentHint=true conflicts with the idea of creating a new receipt on each call. No further behavioral context is provided beyond the contradictory annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two sentences: one for purpose and one listing a suggested price. It is efficient with no wasted words, though the price information may be extraneous.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations contradiction and lack of usage guidance, the description is incomplete. It fails to clarify the tool's read-only nature or idempotency, leaving the agent with misleading information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with each parameter documented in the input schema. The description does not add additional meaning beyond summarizing the purpose, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a Boundary Guard pre-action checkpoint receipt, specifying it includes agent request, policy decision, and result summary. This is a specific verb-resource combination that distinguishes it from siblings like generate_trust_receipt.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as generate_trust_receipt or check_agent_tool_readiness. The description lacks explicit usage context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_agent_tool_readiness (Agent Tool Readiness Checker)
Grade: A · Read-only · Idempotent
Check whether an x402/agent-facing tool is ready for agent routing, marketplace listing, and paid-path monitoring, including public agent discovery surfaces (/llms.txt, /agents.txt, /.well-known/mcp.json, /mcp). Tiers: quick $1, deep $5, report $10.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Readiness depth. quick=$1, deep=$5, report=$10. Defaults to quick. | |
| method | No | Safe unpaid probe method when paid_path is supplied. Defaults to GET. | |
| target | Yes | Target API/provider base URL to scan. | |
| expected | No | Optional expected x402 network/asset/price metadata for paid_path probes. | |
| paid_path | No | Optional specific paid endpoint to probe without payment for deep/report tiers. | |
| marketplace_url | No | Optional marketplace/listing URL to compare against public metadata. | |
| expected_resources | No | Optional expected resource count. | |
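A sketch of a deep-tier call, assuming the `expected` object uses the network/asset/priceUsd field names that the sibling probe tool documents; all URLs and values are placeholders:

```jsonc
// Hypothetical example; URLs and expected metadata are illustrative.
{
  "target": "https://api.example.com",
  "tier": "deep",
  "paid_path": "/v1/summarize",
  "method": "GET",
  "expected": { "network": "base", "asset": "USDC", "priceUsd": 0.05 },
  "marketplace_url": "https://marketplace.example.com/listings/summarizer",
  "expected_resources": 3
}
```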
Output Schema
| Name | Required | Description |
|---|---|---|
| scan | No | |
| tier | Yes | |
| ready | Yes | |
| score | Yes | |
| checks | Yes | |
| issues | Yes | |
| report | No | |
| target | Yes | |
| product | Yes | |
| priceUsd | Yes | |
| healthProbe | No | |
| recommendedFixes | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds valuable context about the specific surfaces checked and pricing tiers, enhancing transparency beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, front-loading the purpose and then listing tiers. No unnecessary words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, output schema exists), the description covers the main purpose, tiers, and discovery surfaces. It could mention output or when to use each tier, but overall it is reasonably complete for an agent to understand.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so parameters are well-documented in the schema. The description does not add significant meaning beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool checks readiness for agent routing, marketplace listing, paid-path monitoring, and public discovery surfaces (specific files). It differentiates from sibling tools which focus on other aspects of x402, like boundary guarding or receipt generation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for readiness checking and mentions pricing tiers, but does not explicitly guide when to choose this tool over siblings like probe_x402_paid_path or scan_x402_resource. No exclusions or alternative recommendations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_trust_receipt (Generate Trust Receipt)
Grade: A · Read-only · Idempotent
Generate a deterministic trust receipt from sanitized request/policy/result/payment summaries. Do not submit raw auth headers, cookies, API keys, private keys, payment signatures, payment response headers, customer prompts, customer documents, or payer-identifying evidence. Suggested xpay price: $0.05/call.
| Name | Required | Description | Default |
|---|---|---|---|
| policy | No | Sanitized policy or decision summary to hash. | |
| result | No | Sanitized outcome/result summary to hash; omit customer data and secret-like values. | |
| payment | No | Optional sanitized payment summary or caller-provided hashes only; do not include raw payment signatures, raw payment response headers, private keys, API keys, cookies, payer-identifying evidence, or wallet secrets. | |
| request | Yes | Sanitized request/action summary to hash; omit raw prompts, documents, credentials, cookies, auth headers, signatures, and secrets. | |
| nextStep | No | Optional receipt next-step guidance. | |
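An illustrative sanitized payload. The schema does not say whether `request`, `policy`, and `result` are strings or objects, so the string form below, like every value shown, is an assumption:

```jsonc
// Hypothetical example; field shapes and values are assumptions.
{
  "request": "POST /v1/summarize for session s_123; prompts and documents omitted",
  "policy": "allow: request matched the paid-tier policy",
  "result": "200 OK; summary returned; no customer text retained",
  "payment": { "network": "base", "asset": "USDC", "amountUsd": 0.05, "receiptHash": "sha256:..." },
  "nextStep": "attach the receiptId to the order record"
}
```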
Output Schema
| Name | Required | Description |
|---|---|---|
| decision | Yes | |
| nextStep | No | |
| createdAt | Yes | |
| receiptId | Yes | |
| evidenceHash | Yes | |
| claimBoundary | Yes | |
| marketplacePositioning | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, openWorldHint, idempotentHint, and non-destructive. The description adds context about determinism and sensitive data restrictions, which are not in annotations. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first defines purpose, second lists prohibitions and price. No filler, front-loaded, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given high schema coverage, existing annotations, and an output schema, the description covers purpose, constraints, and a pricing suggestion. It is complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter described. The description provides a general caution about data sanitization but does not add specific meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Generate a deterministic trust receipt from sanitized request/policy/result/payment summaries,' which is a specific verb ('generate') and resource ('trust receipt'). It clearly distinguishes from sibling tools like boundary_guard_check or generate_x402_launch_pack, which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists what not to include (raw auth headers, keys, etc.) but lacks explicit guidance on when to use this tool versus alternatives. No 'use when' or 'do not use if' conditions are stated, leaving usage context implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_x402_launch_pack (x402 Launch Pack Generator)
Grade: B · Read-only · Idempotent
Generate marketplace-safe listing copy, buyer FAQ, launch checklist, approval packet, and claim boundaries for x402/MCP sellers from readiness evidence. Tiers: single $9, service $29, premium $49.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Launch pack depth. single=$9, service=$29, premium=$49. Defaults to single. | |
| method | No | Safe unpaid probe method when paid_path is supplied. Defaults to GET. | |
| target | Yes | Target API/provider base URL to package for launch. | |
| audience | No | Primary buyer/audience for listing copy. | |
| expected | No | Optional expected x402 network/asset/price metadata for paid_path probes. | |
| paid_path | No | Optional paid endpoint to validate via unpaid 402 challenge for service/premium packs. | |
| product_name | No | Buyer-facing product title. | |
| marketplace_url | No | Optional marketplace/listing URL to compare against public metadata. | |
| primary_use_case | No | Primary buyer outcome/use case. | |
| expected_resources | No | Optional expected resource count. | |
| desired_marketplaces | No | Optional marketplace names to include in launch planning. | |
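A hypothetical premium-tier request showing how the optional listing fields might combine; the product name, audience, and marketplace name are invented for illustration:

```jsonc
// Hypothetical example; every value below is a placeholder.
{
  "target": "https://api.example.com",
  "tier": "premium",
  "product_name": "Example Summarizer API",
  "audience": "agent developers building research pipelines",
  "primary_use_case": "pay-per-call document summarization",
  "paid_path": "/v1/summarize",
  "method": "GET",
  "expected": { "network": "base", "asset": "USDC", "priceUsd": 0.05 },
  "desired_marketplaces": ["ExampleMarket"],
  "expected_resources": 3
}
```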
Output Schema
| Name | Required | Description |
|---|---|---|
| tier | Yes | |
| report | No | |
| target | Yes | |
| product | Yes | |
| priceUsd | Yes | |
| readiness | No | |
| launchPack | Yes | |
| productName | Yes | |
| claimBoundary | No | |
| readinessScore | Yes | |
| readyForDistribution | No | |
| approvalRequiredBeforeDistribution | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and openWorldHint, with destructiveHint=false. The description adds that the tool 'generates' documents from 'readiness evidence', which is consistent but adds little behavioral context beyond the annotations. No mention of safety, rate limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main action and output types. No redundant phrasing. Tiers and pricing efficiently packed into second sentence. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite 11 parameters and existing output schema, the description does not map inputs to outputs or explain how 'readiness evidence' relates to parameters. Missing guidance on parameter usage, dependencies, or expected behaviors. For a complex tool, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3 applies. Description adds pricing context for the 'tier' enum (single $9, service $29, premium $49) but does not explain other parameters like target, audience, or paid_path. Value added is marginal beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates specific documents (listing copy, FAQ, checklist, packet, boundaries) for x402/MCP sellers, with verb 'Generate' and resource types, distinguishing it from siblings like probe_x402_paid_path or scan_x402_resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs siblings. The description only lists output types and tiers, lacking context about prerequisites, scenarios, or alternatives. It is implied for launch pack generation but no 'when not to use' or references to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
probe_x402_paid_path (x402 Paid-Path Health Probe)
Grade: A · Read-only · Idempotent
Probe a public x402 endpoint without payment, parse the 402 challenge, compare expected network/asset/price, and return a deterministic health receipt. Suggested xpay price: $0.50/call.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Probe mode for v1. Defaults to unpaid_402. | |
| method | No | Safe unpaid probe method. Defaults to GET. | |
| target | Yes | Specific paid endpoint URL to probe without payment. | |
| expected | No | Optional expected x402 metadata such as network, asset, and priceUsd. | |
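Example arguments for an unpaid 402 probe. The `mode` and `method` values are the documented defaults, and the `expected` field names (network, asset, priceUsd) follow the schema's own wording; the endpoint and metadata values are placeholders:

```jsonc
// Hypothetical example; target URL and expected metadata are illustrative.
{
  "target": "https://api.example.com/v1/summarize",
  "mode": "unpaid_402",
  "method": "GET",
  "expected": { "network": "base", "asset": "USDC", "priceUsd": 0.05 }
}
```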
Output Schema
| Name | Required | Description |
|---|---|---|
| checks | Yes | |
| issues | Yes | |
| target | Yes | |
| healthy | Yes | |
| receipt | Yes | |
| observed | Yes | |
| recommendedFixes | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value beyond annotations by detailing the probe behavior: it parses the 402 challenge, compares expected metadata, and issues a deterministic health receipt. Annotations already confirm safety, and the description enriches this with the probe's specific workflow and a suggested cost. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long with no filler. The first sentence front-loads the action and outcome; the second adds a cost suggestion. Every part is essential and efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having four parameters, nested objects, and enums, the description covers the core probe flow and return value. An output schema exists to document return details. It does not explain mode differences, but the schema covers those. Overall, it is adequate for the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description indirectly explains the 'expected' parameter through the phrase 'compare expected network/asset/price', but does not elaborate on 'mode' or 'method' enums beyond what the schema provides. Its contribution over schema is marginal.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('probe') and defines the resource ('public x402 endpoint') and outcome ('return a deterministic health receipt'). It clearly differentiates from sibling tools by emphasizing 'without payment' and the comparison of expected values.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for health checking x402 endpoints but does not explicitly state when to use it versus alternatives like scan_x402_resource. The cost suggestion is a guideline but not about usage context. No exclusions or comparisons provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
scan_x402_resource (x402 Resource Scan)
Grade: A · Read-only · Idempotent
Read-only scan of public x402/OpenAPI metadata and optional marketplace listing staleness. Suggested xpay price: $0.10/call.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Target API/provider base URL to scan. | |
| marketplace_url | No | Optional marketplace/listing URL to compare against public metadata. | |
| expected_resources | No | Optional expected resource count. | |
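A minimal example call; both URLs and the resource count below are placeholders:

```jsonc
// Hypothetical example; all values are illustrative.
{
  "url": "https://api.example.com",
  "marketplace_url": "https://marketplace.example.com/listings/summarizer",
  "expected_resources": 3
}
```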
Output Schema
| Name | Required | Description |
|---|---|---|
| score | Yes | |
| issues | Yes | |
| prices | No | |
| target | Yes | |
| nextSteps | Yes | |
| marketplacePositioning | No | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, idempotentHint, and destructiveHint=false. The description adds 'Read-only scan' and notes a suggested price, but does not provide additional behavioral context beyond what annotations convey.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loading purpose and including a relevant pricing note, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (stated in context signals) and full annotations, the description adequately covers the tool's purpose and pricing. It could mention prerequisites or return value hints, but is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with clear parameter descriptions. The description does not add meaning beyond what the schema already provides, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Read-only scan of public x402/OpenAPI metadata' with a specific verb and resource, and includes optional marketplace staleness checking, distinguishing it from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies a use case for scanning x402 metadata and suggests a price, but does not explicitly state when to use this tool versus alternatives like boundary_guard_check or check_agent_tool_readiness.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.