x402watch
Server Details
Wash-filtered intelligence layer for x402 — read-only access to category stats, service details, wash analysis aggregate, service search, and 24h trends.
- Status: Healthy
- Transport: Streamable HTTP
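For direct (non-gateway) access, a minimal connection sketch with the official TypeScript MCP SDK might look like the following. The endpoint URL is a placeholder — substitute the URL shown in the listing or your Glama Gateway URL; this assumes an ESM module with top-level await.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint — not the real server URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp")
);
const client = new Client({ name: "x402watch-demo", version: "1.0.0" });

await client.connect(transport);
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name)); // expect the five x402_* tools
```

The per-tool sketches further down this page reuse this connected `client`.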
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.3/5 across all 5 tools.
Each tool targets a distinct aspect of x402 analytics: wash reports, categories, service details, trends, and search. No two tools have overlapping purposes, making it easy for an agent to select the correct one.
All tools follow the pattern `x402_<verb>_<noun>` with consistent snake_case and lowercase verbs (check, get, search). Even with three 'get' tools, the nouns differentiate them clearly.
With 5 tools, the server is well-scoped for its purpose (x402 ecosystem analytics). The count falls comfortably within the ideal 3-15 range, and each tool serves a necessary function.
The toolset covers core querying and analytics needs: wash analysis, category overview, service details, trends, and search. A minor gap exists for per-address wash analysis, which is relegated to a paid endpoint, but overall the surface is solid.
Available Tools (5)

x402_check_wash
Get the aggregate wash-report dataset: 30-day total active buyers, real-volume %, suspected_wash and self_test counts, full 8-label distribution, 14-day wash percentage time series, and five anonymized case studies (Service A through E) with pattern signals.
For per-address real-time wash analysis with full signal breakdown, use the paid POST /api/v1/wash/check HTTP endpoint ($0.05 USDC) — that endpoint speaks x402, agents pay and receive data in a single HTTP round-trip.
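As a hedged illustration of that paid flow: the host below is a placeholder and the request body shape is an assumption; only the `/api/v1/wash/check` path and the $0.05 USDC price come from this listing. The 402-then-retry pattern follows the x402 spec as generally described.

```typescript
// Sketch of the paid per-address check (placeholder host, hypothetical body).
const res = await fetch("https://x402watch.example/api/v1/wash/check", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ address: "0x..." }), // hypothetical body shape
});

if (res.status === 402) {
  // Per the x402 pattern, a 402 response carries the payment
  // requirements (asset, amount, pay-to address) in its JSON body.
  const requirements = await res.json();
  // An x402-capable client would sign a USDC payment authorization and
  // retry the same request with an X-PAYMENT header; the retried
  // response then contains the per-address wash analysis.
  console.log(requirements);
}
```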
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | Optional wallet or seller address. When provided, the response includes a hint about the paid per-address endpoint. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
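For the free aggregate report, a sketch of the MCP call, assuming the connected `client` from the connection example near the top of this page:

```typescript
// Fetch the aggregate wash report over MCP.
const report = await client.callTool({
  name: "x402_check_wash",
  arguments: {}, // address is optional; omit it for the pure aggregate view
});
console.log(report.content); // label distribution, 14-day series, case studies
```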
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the data returned and the behavior when an address is provided. It does not mention side effects or auth, but for a read-only aggregate report this is reasonable. It stops slightly short of stating idempotence or cost explicitly, yet remains transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that front-load the core purpose, with no redundant words. Efficient and well organized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple data components) and the presence of an output schema, the description lists all major data elements and provides a clear alternative for per-address analysis. It is complete and self-contained.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only one optional parameter (address) with 100% schema description coverage. The description adds value by explaining that providing an address includes a hint about the paid endpoint, which goes beyond the schema's description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns an 'aggregate wash-report dataset' and enumerates its components. It distinguishes itself from the paid per-address endpoint by name and purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool (for aggregate report) and when to use the paid POST endpoint (per-address real-time analysis). Also clarifies that the optional address parameter only provides a hint about the paid endpoint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_get_categories
List all 33 x402 service categories with aggregate stats: services count, 24h volume, transaction count, real-volume %, and label distribution. Use this to understand the shape of the x402 ecosystem before drilling into specific services or wallets.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
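A sketch of a typical first call, again assuming the connected `client` from the earlier example; a common next step is to feed a category slug from the result into x402_search_services:

```typescript
// Pull the 33-category overview; the tool accepts no arguments.
const categories = await client.callTool({
  name: "x402_get_categories",
  arguments: {},
});
// Each category entry carries the aggregate stats listed above; a slug
// from here can be passed as the `category` filter of x402_search_services.
console.log(categories.content);
```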
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description lists the aggregate stats returned (services count, volume, transactions, real-volume %, label distribution). It implies a read-only operation and does not hide any behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each earning its place: the first specifies the output, the second provides usage guidance. No waste, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and the presence of an output schema, the description provides sufficient context for a list tool. It includes a usage recommendation, making it complete enough for an agent to decide when to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has no parameters, so the description adds no parameter info. With zero parameters, the baseline is 4, and the description is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all 33 x402 service categories with aggregate stats. It differentiates from siblings like x402_get_service and x402_search_services by implying this is an overview tool before drilling into specifics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: to understand the ecosystem before drilling into specific services or wallets. This provides clear context, though it does not explicitly mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_get_service
Get the full detail record for one x402 service: name, description, seller address, chain, price, 24h and total transaction stats, 30-day daily volume time series, buyer-label distribution, and top buyers. Use this to evaluate a single service's traffic composition.
| Name | Required | Description | Default |
|---|---|---|---|
| service_id | Yes | Numeric x402 service id (visible in /services list and detail URLs). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
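A sketch of the call, assuming the connected `client`; the id 1234 is fabricated for illustration — real ids come from /services URLs or x402_search_services results:

```typescript
// Inspect one service in depth. service_id 1234 is a made-up example.
const service = await client.callTool({
  name: "x402_get_service",
  arguments: { service_id: 1234 },
});
console.log(service.content); // price, 30-day volume series, buyer labels, top buyers
```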
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states the operation is a 'get', implying a read-only, non-destructive action. However, it does not explicitly state safety, authorization needs, or error handling; doing so would raise the score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no fluff. The first sentence lists the data fields efficiently, and the second provides usage guidance. Every sentence serves a clear purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has a single parameter, clear description, and an output schema exists (as indicated by context signals). The description lists key data fields, which aligns with expected output, making it complete for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter (service_id) with 100% schema description coverage. The description adds value by explaining the parameter is numeric and where to find it ('visible in /services list and detail URLs'), going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the full detail record for one x402 service, listing many specific data fields like name, description, seller address, chain, price, stats, time series, buyer-label distribution, and top buyers. This distinguishes it from siblings such as x402_search_services (which searches multiple services) and others.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit usage guidance: 'Use this to evaluate a single service's traffic composition.' While it does not mention when not to use it or alternatives, it provides clear context for when to invoke this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_get_trends
Get the last-24-hour trends snapshot: new services count vs the previous 24h, total transaction count, total USDC volume, active buyer count, daily new-services bar (14 days), recent new services (top 10), category volume movers, and hot services with traffic surges (>= 100 24h tx and >= +50% growth). Refreshed every 5 min.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
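A sketch of the call, assuming the connected `client` from earlier; since the snapshot is refreshed server-side every 5 minutes, polling more often just returns the same data:

```typescript
// Grab the 24h trends snapshot; the server recomputes it every 5 minutes.
const trends = await client.callTool({
  name: "x402_get_trends",
  arguments: {},
});
// "Hot services" arrive pre-filtered server-side (>= 100 tx in 24h and
// >= +50% growth), so no threshold math is needed on the client.
console.log(trends.content);
```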
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses that the snapshot is for the last 24 hours and is refreshed every 5 minutes. For a read-only, parameterless tool, this is sufficient transparency; it does not hide any behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one dense sentence plus a short refresh note; it front-loads the purpose and lists all returned metrics. While dense, it is concise and contains no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and an output schema, the description provides a comprehensive overview of all returned data (counts, volumes, bar chart data, top lists, thresholds). It fully defines the tool's scope and refresh behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters with 100% coverage. The description adds meaning by detailing the output contents, compensating for the lack of parameters. Baseline for 0 params is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving a last-24-hour trends snapshot. It enumerates specific data points (new services count, transaction count, USDC volume, etc.), which distinguishes it from sibling tools like x402_check_wash, x402_get_categories, x402_get_service, and x402_search_services.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for getting an overall trends overview but does not explicitly state when to use this tool versus siblings, nor does it provide when-not or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_search_services
Search the index of 36k+ x402 services with filters. Returns a paginated list of matching services with their stats and label mix. Use this to find services by topic, chain, or seller wallet.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | 1-indexed page number. | |
| sort | No | Sort key: one of `tx_24h`, `volume_24h`, `tx_total`, `price`, `real_pct`, `wash_pct`, `first_seen`, `alpha`. | tx_24h |
| chain | No | Filter to one chain: 'base', 'solana', 'arbitrum', 'base-sepolia'. | |
| search | No | Free-text match against name, description, or seller address. | |
| category | No | Filter to a single category slug (e.g. 'ai_inference', 'wallet_analytics'). | |
| page_size | No | Page size (max 200). | 24 |
Output Schema
| Name | Required | Description |
|---|---|---|
| No output parameters | | |
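A sketch combining several filters, assuming the connected `client`; the 'ai_inference' slug is taken from the parameter docs above:

```typescript
// Find Base-chain AI-inference services, busiest first.
const results = await client.callTool({
  name: "x402_search_services",
  arguments: {
    chain: "base",
    category: "ai_inference", // slug format per the parameter table above
    sort: "tx_24h",
    page: 1,
    page_size: 50, // capped at 200 server-side; defaults to 24
  },
});
console.log(results.content); // paginated services with stats and label mix
```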
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It notes pagination and returned fields (stats, label mix) but does not disclose potential side effects, rate limits, or idempotency. For a read-like search tool, this is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no wasted words. The first sentence states purpose and key features (filters, paginated list, stats). The second gives concrete usage examples. Efficiently front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists, so return values are already documented. The description mentions paginated list with stats and label mix, which aligns with typical search results. Missing error handling or performance notes, but acceptable for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema, merely restating that filters exist. It does not explain parameter interactions or constraints beyond what the schema already covers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches an index of 36k+ services with filters and returns a paginated list. It specifies the resource (services), action (search), and scope (index with filters). Siblings like x402_get_service suggest this is for listing/filtering vs. single retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use this to find services by topic, chain, or seller wallet,' providing clear guidance on when to use it. However, it does not explicitly mention when not to use it or point to alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.