Server Details

Wash-filtered intelligence for x402: categories, services, wash analysis, trends.

Status: Healthy
Transport: Streamable HTTP
Repository: printmoneylab/x402watch
GitHub Stars: 0
Server Listing
x402watch

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool addresses a distinct aspect: wash analysis, category overview, service detail, trends, and search. No overlapping purposes.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with the 'x402_' prefix. Verbs are imperative and nouns describe the resource clearly.

Tool Count: 5/5

Five tools is well-scoped for a monitoring server, covering the key operations without redundancy or missing essential functionality.

Completeness: 4/5

The surface covers aggregate wash, categories, service detail, trends, and search. The only minor gap is a dedicated per-address real-time wash tool, which is referenced as a separate paid endpoint.

Available Tools

5 tools
x402_check_wash: A

Get the aggregate wash-report dataset: 30-day total active buyers, real-volume %, suspected_wash and self_test counts, full 8-label distribution, 14-day wash percentage time series, and five anonymized case studies (Service A through E) with pattern signals.

For per-address real-time wash analysis with full signal breakdown, use the paid POST /api/v1/wash/check HTTP endpoint ($0.05 USDC) — that endpoint speaks x402, agents pay and receive data in a single HTTP round-trip.
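
As a rough sketch of that single round-trip, the snippet below walks one HTTP 402 flow in TypeScript. The host, the payment-requirements shape, and the signPayment helper are assumptions for illustration; only the path and the $0.05 USDC price come from the description above.

```typescript
// Hypothetical sketch of the paid per-address flow. The host and payload shapes
// are placeholders; only POST /api/v1/wash/check and the $0.05 USDC price are
// documented in the listing above.
const HOST = "https://x402watch.example"; // placeholder host

async function checkWashPaid(address: string): Promise<unknown> {
  const init = {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ address }),
  };

  // First request: an x402 server answers 402 with its payment requirements.
  let res = await fetch(`${HOST}/api/v1/wash/check`, init);
  if (res.status === 402) {
    const requirements = await res.json(); // amount, asset (USDC), pay-to address, ...
    const payment = await signPayment(requirements); // wallet-side signing, left abstract
    // Retry with the signed payment attached; the data comes back in this response.
    res = await fetch(`${HOST}/api/v1/wash/check`, {
      ...init,
      headers: { ...init.headers, "X-PAYMENT": payment },
    });
  }
  if (!res.ok) throw new Error(`wash check failed: HTTP ${res.status}`);
  return res.json();
}

// Stand-in for an x402 client library's payment signer (not a real API).
async function signPayment(requirements: unknown): Promise<string> {
  throw new Error("plug in a real x402 payment signer here");
}
```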

Parameters (JSON Schema)

address (optional): Wallet or seller address. When provided, the response includes a hint about the paid per-address endpoint.

Output Schema

No output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided; description fully explains the tool's behavior (returns aggregate dataset, hints when address given). Could mention it's read-only, but not essential.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is informative but slightly long; the first sentence lists many items. Could be more concise without losing clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given output schema exists, description adequately covers the return data and optional parameter behavior, making the tool fully understandable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%; description adds value by explaining that providing an address triggers a hint about the paid endpoint, beyond the schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get the aggregate wash-report dataset' and enumerates specific components, which distinguishes it from siblings like x402_get_categories and x402_search_services.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly tells when to use this tool (for aggregate report) and when to use the paid per-address endpoint, providing clear decision guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x402_get_categories: A

List all 33 x402 service categories with aggregate stats: services count, 24h volume, transaction count, real-volume %, and label distribution. Use this to understand the shape of the x402 ecosystem before drilling into specific services or wallets.
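
Because the transport is Streamable HTTP, calling this tool from the official TypeScript MCP SDK might look like the minimal sketch below; the endpoint URL is a placeholder, since the listing does not reproduce it.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint; substitute the server's real URL.
const transport = new StreamableHTTPClientTransport(new URL("https://x402watch.example/mcp"));
const client = new Client({ name: "x402watch-demo", version: "0.1.0" });
await client.connect(transport);

// No arguments: returns all 33 categories with their aggregate stats.
const categories = await client.callTool({ name: "x402_get_categories", arguments: {} });
console.log(categories.content);
```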

Parameters (JSON Schema)

No parameters.

Output Schema

No output parameters.

Behavior: 2/5

No annotations provided, so description must carry full burden. It only mentions output content but not behavioral traits like read-only nature, auth requirements, or rate limits. Minimal disclosure.

Conciseness: 5/5

Three sentences front-loading the main action, followed by details and usage guidance. No extraneous words; every sentence earns its place.

Completeness: 5/5

With zero parameters and an output schema, the description covers purpose, output contents (stats), and usage context. Sufficient for a simple list tool.

Parameters: 4/5

No parameters exist, so baseline 4 applies. Description doesn't need to add param info but provides useful output context, which is appropriate.

Purpose: 5/5

The description clearly states the tool lists all 33 categories with aggregate stats, using a specific verb and resource. It also distinguishes from sibling tools by indicating it's for ecosystem overview before drilling into specifics.

Usage Guidelines: 5/5

Explicit guidance: 'Use this to understand the shape of the x402 ecosystem before drilling into specific services or wallets.' Implies when not to use (for specific services/wallets) and suggests alternatives.

x402_get_service: A

Get the full detail record for one x402 service: name, description, seller address, chain, price, 24h and total transaction stats, 30-day daily volume time series, buyer-label distribution, and top buyers. Use this to evaluate a single service's traffic composition.
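
Reusing the connected client from the sketch above, the detail lookup is a single call; the id here is made up for illustration.

```typescript
// service_id is the numeric id from /services list or detail URLs (42 is illustrative).
const detail = await client.callTool({
  name: "x402_get_service",
  arguments: { service_id: 42 },
});
console.log(detail.content); // name, seller, price, 30-day volume series, top buyers, ...
```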

Parameters (JSON Schema)

service_id (required): Numeric x402 service id (visible in /services list and detail URLs).

Output Schema

No output parameters.

Behavior: 2/5

No annotations provided, so description carries full burden. It describes what the tool returns but does not disclose behavioral traits like auth requirements, rate limits, or whether it is read-only. Lacks needed transparency.

Conciseness: 5/5

Single sentence with a clear list of returned fields and a usage recommendation. Every sentence adds value, no fluff.

Completeness: 4/5

Given the tool's complexity (many return fields) and presence of an output schema, the description lists key data points and purpose. Lacks some context like usage boundaries, but covers the essential information.

Parameters: 3/5

Schema coverage is 100% with a clear description for service_id. The tool description adds no extra meaning beyond the schema, so baseline 3 applies.

Purpose: 5/5

Clearly states 'Get the full detail record for one x402 service' and lists specific fields. Distinguishes from siblings like x402_search_services which likely return lists or different data.

Usage Guidelines: 4/5

Explicitly says 'Use this to evaluate a single service's traffic composition,' providing clear context. No explicit when-not or alternatives, but it's implied that this is for detailed analysis of one service.

x402_search_services: A

Search the index of 36k+ x402 services with filters. Returns a paginated list of matching services with their stats and label mix. Use this to find services by topic, chain, or seller wallet.
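
As a sketch of a filtered query, again reusing the connected client from the sketches above (argument values are drawn from the parameter table below):

```typescript
// Find wallet-analytics services on Base, sorted on the real_pct key.
const hits = await client.callTool({
  name: "x402_search_services",
  arguments: {
    category: "wallet_analytics",
    chain: "base",
    sort: "real_pct",
    page_size: 5,
  },
});
console.log(hits.content);
```

Ids from the results can then be passed to x402_get_service for per-service detail.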

Parameters (JSON Schema)

page (optional): 1-indexed page number.
sort (optional): Sort key, one of tx_24h | volume_24h | tx_total | price | real_pct | wash_pct | first_seen | alpha. Default: tx_24h.
chain (optional): Filter to one chain: 'base', 'solana', 'arbitrum', 'base-sepolia'.
search (optional): Free-text match against name, description, or seller address.
category (optional): Filter to a single category slug (e.g. 'ai_inference', 'wallet_analytics').
page_size (optional): Page size (max 200; default 24).

Output Schema

No output parameters.

Behavior: 3/5

No annotations provided, so description carries full burden. Mentions pagination and returned fields, but does not disclose rate limits, authorization requirements, or side effects. For a search tool, basic but incomplete.

Conciseness: 5/5

Two succinct sentences: first states function and output, second clarifies usage. No redundant or verbose language.

Completeness: 4/5

Given the rich input schema (6 params, all described) and presence of an output schema, the description adequately covers purpose and context. Could mention pagination size limit, but that is in schema. Overall sufficient.

Parameters: 3/5

Input schema has 100% description coverage, so baseline is 3. The description adds minimal value by summarizing filter capabilities (topic, chain, wallet), but does not provide new parameter-level details beyond the schema.

Purpose: 5/5

Clearly states the verb ('Search'), the resource ('36k+ x402 services'), and the return value ('paginated list with stats and label mix'). Distinguishes from siblings by mentioning filters and specific returned fields.

Usage Guidelines: 4/5

Explicitly states 'Use this to find services by topic, chain, or seller wallet.' Provides clear context but does not explicitly mention when not to use or differentiate from sibling tools like x402_get_service or x402_get_categories.

