x402watch
Server Details
Wash-filtered intelligence for x402: categories, services, wash analysis, trends.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: printmoneylab/x402watch
- GitHub Stars: 0
- Server Listing: x402watch
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4.2/5 across all 5 tools.
Each tool addresses a distinct aspect: wash analysis, category overview, service detail, trends, and search. No overlapping purposes.
All tools follow a consistent verb_noun pattern with the 'x402_' prefix. Verbs are imperative and nouns describe the resource clearly.
Five tools is well-scoped for a monitoring server, covering the key operations without redundancy or missing essential functionality.
The surface covers aggregate wash, categories, service detail, trends, and search. The only minor gap is a dedicated per-address real-time wash tool, which is referenced as a separate paid endpoint.
Available Tools
5 tools

x402_check_wash
Get the aggregate wash-report dataset: 30-day total active buyers, real-volume %, suspected_wash and self_test counts, full 8-label distribution, 14-day wash percentage time series, and five anonymized case studies (Service A through E) with pattern signals.
For per-address real-time wash analysis with full signal breakdown, use the paid POST /api/v1/wash/check HTTP endpoint ($0.05 USDC) — that endpoint speaks x402, agents pay and receive data in a single HTTP round-trip.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Optional wallet or seller address. When provided, the response includes a hint about the paid per-address endpoint. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
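The paid per-address endpoint described above speaks x402 over plain HTTP: the first request receives a 402 response carrying payment requirements, and the client retries with a signed payment attached. A minimal sketch of building that retry, assuming the x402 convention of a base64-encoded JSON `X-PAYMENT` header, with `signature` standing in for a real wallet-signed payload:

```python
import base64
import json


def build_payment_retry(requirements: dict, signature: str) -> dict:
    """Given the payment-requirements object from a 402 response,
    build the headers for the paid retry. `signature` is a stand-in
    for a wallet-signed payment payload (hypothetical)."""
    payment = {
        "x402Version": 1,
        "scheme": requirements["scheme"],
        "network": requirements["network"],
        "payload": {"signature": signature},
    }
    # x402 sends the payment as base64-encoded JSON in the X-PAYMENT header.
    encoded = base64.b64encode(json.dumps(payment).encode()).decode()
    return {"X-PAYMENT": encoded, "Content-Type": "application/json"}
```

An agent would POST to /api/v1/wash/check once, read the requirements out of the 402 body, then repeat the POST with these headers to receive the $0.05 USDC analysis in the second round-trip.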
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description fully explains the tool's behavior (returns aggregate dataset, hints when address given). Could mention it's read-only, but not essential.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is informative but slightly long; the first sentence lists many items. Could be more concise without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Since an output schema exists, the description adequately covers the return data and the optional-parameter behavior, making the tool fully understandable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; description adds value by explaining that providing an address triggers a hint about the paid endpoint, beyond the schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get the aggregate wash-report dataset' and enumerates specific components, which distinguishes it from siblings like x402_get_categories and x402_search_services.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use this tool (for aggregate report) and when to use the paid per-address endpoint, providing clear decision guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_get_categories
List all 33 x402 service categories with aggregate stats: services count, 24h volume, transaction count, real-volume %, and label distribution. Use this to understand the shape of the x402 ecosystem before drilling into specific services or wallets.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must carry full burden. It only mentions output content but not behavioral traits like read-only nature, auth requirements, or rate limits. Minimal disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences front-loading the main action, followed by details and usage guidance. No extraneous words; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With zero parameters and an output schema, the description covers purpose, output contents (stats), and usage context. Sufficient for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so baseline 4 applies. Description doesn't need to add param info but provides useful output context, which is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all 33 categories with aggregate stats, using specific verbs and resource. It also distinguishes from sibling tools by indicating it's for ecosystem overview before drilling into specifics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance: 'Use this to understand the shape of the x402 ecosystem before drilling into specific services or wallets.' Implies when not to use (for specific services/wallets) and suggests alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_get_service
Get the full detail record for one x402 service: name, description, seller address, chain, price, 24h and total transaction stats, 30-day daily volume time series, buyer-label distribution, and top buyers. Use this to evaluate a single service's traffic composition.
| Name | Required | Description | Default |
|---|---|---|---|
| service_id | Yes | Numeric x402 service id (visible in /services list and detail URLs). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It describes what the tool returns but does not disclose behavioral traits like auth requirements, rate limits, or whether it is read-only. Lacks needed transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with a clear list of returned fields and a usage recommendation. Every sentence adds value, no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (many return fields) and presence of an output schema, the description lists key data points and purpose. Lacks some context like usage boundaries, but covers the essential information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for service_id. The tool description adds no extra meaning beyond the schema, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get the full detail record for one x402 service' and lists specific fields. Distinguishes from siblings like x402_search_services which likely return lists or different data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this to evaluate a single service's traffic composition,' providing clear context. No explicit when-not or alternatives, but it's implied that this is for detailed analysis of one service.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_get_trends
Get the last-24-hour trends snapshot: new services count vs the previous 24h, total transaction count, total USDC volume, active buyer count, daily new-services bar (14 days), recent new services (top 10), category volume movers, and hot services with traffic surges (at least 100 transactions in the last 24h and at least +50% growth). Refreshed every 5 minutes.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
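The hot-services cutoff described above (at least 100 transactions in the last 24h and at least +50% growth) can be reproduced client-side when post-filtering results. The field names `tx_24h` and `tx_prev_24h` are illustrative assumptions, not the server's actual response keys:

```python
def hot_services(services, min_tx=100, min_growth=0.5):
    """Return services with a traffic surge: at least min_tx transactions
    in the last 24h and growth of at least min_growth vs the prior 24h."""
    hot = []
    for s in services:
        if s.get("tx_24h", 0) < min_tx:
            continue
        prev = s.get("tx_prev_24h", 0)
        # Growth is undefined from a zero baseline; treat any traffic
        # appearing from nothing as a surge.
        growth = float("inf") if prev == 0 else (s["tx_24h"] - prev) / prev
        if growth >= min_growth:
            hot.append(s)
    return hot
```

The zero-baseline branch matters for brand-new services, which this tool is explicitly designed to surface.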
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: snapshot of last 24 hours with specific metrics and refresh interval (every 5 min). It implies a read-only operation with no destructive effects. Minor missing: no mention of error states or rate limits, but overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence listing many items but remains readable and front-loaded with 'Get the last-24-hour trends snapshot'. Could be more structured (e.g., bullet points), but it's sufficiently concise for a no-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite listing many output fields, the description is complete: it covers all returned data categories, includes the refresh interval, and an output schema is present. No gaps for the agent to guess.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters (schema coverage 100%), so description need not explain param semantics. Baseline 4 applied. Description instead details the output fields, which is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get the last-24-hour trends snapshot', specifying a read operation on aggregate data. It lists specific metrics, distinguishing it from siblings like x402_get_service (single service) and x402_search_services (search), indicating a unique purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for broad trend overview but does not explicitly state when to use this tool versus alternatives (e.g., for detailed service info would use x402_get_service). No exclusions or when-not conditions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_search_services
Search the index of 36k+ x402 services with filters. Returns a paginated list of matching services with their stats and label mix. Use this to find services by topic, chain, or seller wallet.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-indexed page number. | |
| sort | No | Sort key: tx_24h \| volume_24h \| tx_total \| price \| real_pct \| wash_pct \| first_seen \| alpha. | tx_24h |
| chain | No | Filter to one chain: 'base', 'solana', 'arbitrum', 'base-sepolia'. | |
| search | No | Free-text match against name, description, or seller address. | |
| category | No | Filter to a single category slug (e.g. 'ai_inference', 'wallet_analytics'). | |
| page_size | No | Page size (max 200). | 24 |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
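As a sketch of how an MCP client would drive this tool, the helper below builds a JSON-RPC 2.0 `tools/call` request using the parameter names from the table above, clamping `page_size` to the documented maximum of 200 and omitting unused filters:

```python
def build_search_call(search=None, chain=None, category=None,
                      sort="tx_24h", page=1, page_size=24, request_id=1):
    """Build a JSON-RPC 2.0 tools/call request for x402_search_services.
    Only filters that are set are included; page_size is capped at 200."""
    args = {"sort": sort, "page": page, "page_size": min(page_size, 200)}
    for key, value in (("search", search), ("chain", chain),
                       ("category", category)):
        if value is not None:
            args[key] = value
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "x402_search_services", "arguments": args},
    }
```

For example, `build_search_call(search="wallet", chain="base")` produces a request an agent could send over the Streamable HTTP transport listed in the server details.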
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions pagination and returned fields, but does not disclose rate limits, authorization requirements, or side effects. For a search tool, basic but incomplete.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences: first states function and output, second clarifies usage. No redundant or verbose language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich input schema (6 params, all described) and presence of an output schema, the description adequately covers purpose and context. Could mention pagination size limit, but that is in schema. Overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description adds minimal value by summarizing filter capabilities (topic, chain, wallet), but does not provide new parameter-level details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb ('Search'), the resource ('36k+ x402 services'), and the return value ('paginated list with stats and label mix'). Distinguishes from siblings by mentioning filters and specific returned fields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this to find services by topic, chain, or seller wallet.' Provides clear context but does not explicitly mention when not to use or differentiate from sibling tools like x402_get_service or x402_get_categories.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
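Before publishing, the file can be sanity-checked locally. The validator below is a hypothetical sketch keyed to the structure shown above, not an official Glama tool:

```python
import json


def validate_glama_json(text: str) -> list[str]:
    """Return a list of problems with a candidate /.well-known/glama.json;
    an empty list means the basic structure looks right."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    else:
        for i, m in enumerate(maintainers):
            if not isinstance(m, dict) or "@" not in str(m.get("email", "")):
                problems.append(f"maintainers[{i}] needs a valid email")
    return problems
```

Remember that the email checked here must also match the one on your Glama account, which this local check cannot verify.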
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.