
Server Details

Wash-filtered intelligence layer for x402 — read-only access to category stats, service details, wash analysis aggregate, service search, and 24h trends.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server
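
As a rough sketch of what a connection looks like, the snippet below uses the official MCP TypeScript SDK over Streamable HTTP. The gateway URL is a placeholder (Glama issues the real endpoint when you create a connector), and the client name and version are illustrative.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder: substitute the per-connector endpoint issued by Glama.
const transport = new StreamableHTTPClientTransport(
  new URL("https://<your-glama-gateway-endpoint>"),
);

const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// Enumerate the x402 tools exposed through the gateway.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```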

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.3/5 across 5 of 5 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool targets a distinct aspect of x402 analytics: wash reports, categories, service details, trends, and search. No two tools have overlapping purposes, making it easy for an agent to select the correct one.

Naming Consistency: 5/5

All tools follow the pattern `x402_<verb>_<noun>` with consistent snake_case and lowercase verbs (check, get, search). Even with two 'get' verbs, the nouns differentiate them clearly.

Tool Count: 5/5

With 5 tools, the server is well-scoped for its purpose (x402 ecosystem analytics). The count falls comfortably within the ideal 3-15 range, and each tool serves a necessary function.

Completeness: 4/5

The toolset covers core querying and analytics needs: wash analysis, category overview, service details, trends, and search. A minor gap exists for per-address wash analysis, which is relegated to a paid endpoint, but overall the surface is solid.

Available Tools

5 tools
x402_check_wash (Grade: A)

Get the aggregate wash-report dataset: 30-day total active buyers, real-volume %, suspected_wash and self_test counts, full 8-label distribution, 14-day wash percentage time series, and five anonymized case studies (Service A through E) with pattern signals.

For per-address real-time wash analysis with full signal breakdown, use the paid POST /api/v1/wash/check HTTP endpoint ($0.05 USDC); that endpoint speaks x402, so agents pay and receive data in a single HTTP round-trip.
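
As a rough illustration of that single round-trip, here is a hedged TypeScript sketch of the x402 handshake: the first request should come back 402 Payment Required with payment requirements, and the retry carries a signed payment. The host and request body shape are not shown on this page and are assumptions, and signPaymentPayload is a hypothetical stand-in for a real x402 client library.

```typescript
// Placeholder host; only the /api/v1/wash/check path comes from the description above.
const url = "https://<host>/api/v1/wash/check";

// Hypothetical helper: a real x402 client library would sign a USDC payment
// authorization ($0.05 per call here) matching the server's requirements.
async function signPaymentPayload(requirements: unknown): Promise<string> {
  throw new Error("wire up an x402 client library here");
}

async function checkWash(address: string): Promise<unknown> {
  // First attempt: an x402 server replies 402 Payment Required, with the
  // payment requirements it accepts in the response body.
  const first = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ address }), // body shape is an assumption
  });
  if (first.status !== 402) return first.json();

  const requirements = await first.json();
  const payment = await signPaymentPayload(requirements);

  // Retry with the signed payment attached; the server verifies it and
  // returns the per-address wash analysis in the same round-trip.
  const paid = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "X-PAYMENT": payment },
    body: JSON.stringify({ address }),
  });
  return paid.json();
}
```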

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| address | No | Optional wallet or seller address. When provided, the response includes a hint about the paid per-address endpoint. | |

Output Schema

No output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the data returned and the behavior when an address is provided. It does not mention side effects or auth, but for a read-only aggregate report this is reasonable. It is slightly lacking in explicitly stating idempotence or cost, but still transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with front-loading of core purpose. No redundant words. Efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple data components) and the presence of an output schema, the description lists all major data elements and provides a clear alternative for per-address analysis. It is complete and self-contained.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Only one optional parameter (address) with 100% schema description coverage. The description adds value by explaining that providing an address includes a hint about the paid endpoint, which goes beyond the schema's description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns an 'aggregate wash-report dataset' and enumerates its components. It distinguishes itself from the paid per-address endpoint by name and purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool (for aggregate report) and when to use the paid POST endpoint (per-address real-time analysis). Also clarifies that the optional address parameter only provides a hint about the paid endpoint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x402_get_categories (Grade: A)

List all 33 x402 service categories with aggregate stats: services count, 24h volume, transaction count, real-volume %, and label distribution. Use this to understand the shape of the x402 ecosystem before drilling into specific services or wallets.

Parameters (JSON Schema)

No parameters.

Output Schema

No output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description lists the aggregate stats returned (services count, volume, transactions, real-volume %, label distribution). It implies a read-only operation and does not hide any behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, each earning its place: the first specifies the output, the second provides usage guidance. No waste, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool takes no parameters and an output schema is present, the description provides sufficient context for a list tool. It includes a usage recommendation, making it complete enough for an agent to decide when to invoke it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters, so the description adds no parameter info. With zero parameters, the baseline is 4, and the description is adequate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all 33 x402 service categories with aggregate stats. It differentiates from siblings like x402_get_service and x402_search_services by implying this is an overview tool before drilling into specifics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly tells when to use this tool: to understand the ecosystem before drilling into specific services or wallets. This provides clear context, though it does not explicitly mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x402_get_service (Grade: A)

Get the full detail record for one x402 service: name, description, seller address, chain, price, 24h and total transaction stats, 30-day daily volume time series, buyer-label distribution, and top buyers. Use this to evaluate a single service's traffic composition.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| service_id | Yes | Numeric x402 service id (visible in /services list and detail URLs). | |
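
To illustrate the overview-then-drill-down flow the descriptions recommend, a short sketch follows, assuming the client object from the connection example near the top of this page; the service id is hypothetical.

```typescript
// Ecosystem overview: category-level stats across all x402 services.
const categories = await client.callTool({
  name: "x402_get_categories",
  arguments: {},
});

// Then drill into one service by its numeric id (visible in /services URLs).
const service = await client.callTool({
  name: "x402_get_service",
  arguments: { service_id: 123 }, // hypothetical id
});
```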

Output Schema

No output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavioral traits. It states the operation is a 'get', implying a read-only, non-destructive action. However, it does not explicitly state safety, authorization needs, or error handling, which would raise the score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no fluff. The first sentence lists the data fields efficiently, and the second provides usage guidance. Every sentence serves a clear purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has a single, clearly described parameter, and an output schema exists (as indicated by context signals). The description lists the key data fields, which aligns with the expected output, making it complete enough for an agent to select and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter (service_id) with 100% schema description coverage. The description adds value by explaining the parameter is numeric and where to find it ('visible in /services list and detail URLs'), going beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves the full detail record for one x402 service, listing many specific data fields like name, description, seller address, chain, price, stats, time series, buyer-label distribution, and top buyers. This distinguishes it from siblings such as x402_search_services (which searches multiple services) and others.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes explicit usage guidance: 'Use this to evaluate a single service's traffic composition.' While it does not mention when not to use it or alternatives, it provides clear context for when to invoke this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x402_search_services (Grade: A)

Search the index of 36k+ x402 services with filters. Returns a paginated list of matching services with their stats and label mix. Use this to find services by topic, chain, or seller wallet.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| page | No | 1-indexed page number. | |
| sort | No | Sort key, one of: tx_24h, volume_24h, tx_total, price, real_pct, wash_pct, first_seen, alpha. | tx_24h |
| chain | No | Filter to one chain: 'base', 'solana', 'arbitrum', 'base-sepolia'. | |
| search | No | Free-text match against name, description, or seller address. | |
| category | No | Filter to a single category slug (e.g. 'ai_inference', 'wallet_analytics'). | |
| page_size | No | Page size (max 200; default 24). | 24 |
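
For example, a filtered search might look like the sketch below, again assuming the client from the connection example; the filter values are illustrative, drawn from the schema descriptions above.

```typescript
// Find AI-inference services on Base, highest 24h volume first.
const results = await client.callTool({
  name: "x402_search_services",
  arguments: {
    search: "inference",
    chain: "base",
    category: "ai_inference",
    sort: "volume_24h",
    page: 1,
    page_size: 50, // max 200; defaults to 24
  },
});
```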

Output Schema

No output parameters.

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It notes pagination and the returned fields (stats, label mix) but does not disclose potential side effects, rate limits, or idempotency. For a read-like search tool this is adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words. The first sentence states purpose and key features (filters, paginated list, stats). The second gives concrete usage examples. Efficiently front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

An output schema exists, so return values are already documented. The description mentions paginated list with stats and label mix, which aligns with typical search results. Missing error handling or performance notes, but acceptable for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema, merely restating that filters exist. It does not explain parameter interactions or constraints beyond what the schema already covers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches an index of 36k+ services with filters and returns a paginated list. It specifies the resource (services), action (search), and scope (index with filters). Siblings like x402_get_service suggest this is for listing/filtering vs. single retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Use this to find services by topic, chain, or seller wallet,' providing clear guidance on when to use it. However, it does not explicitly mention when not to use it or point to alternatives among siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
