agentic-payments
Server Details
Public MCP discovery for Blocksize market data, pricing, and docs.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4/5 across all 5 tools scored. Lowest: 3.1/5.
Each tool has a clearly distinct purpose: fetching document contents, checking pricing, listing instruments, searching metadata, and discovering symbols. No overlap or ambiguity.
All tool names follow a consistent snake_case pattern with clear verb_noun structure (fetch, get, list, search, search_pairs). No mixing of conventions.
With 5 tools, the set is well-scoped for a payments-related MCP server. Each tool earns its place without being too few or too many for the apparent purpose.
Despite the server name 'agentic-payments', the tools are almost entirely informational/read-only (fetch docs, list instruments, search). There are no tools for actually initiating payments, managing transactions, or handling accounts, which is a significant gap for a payments domain.
Available Tools
5 tools

fetch (Catalog Fetch) · Grade A · Read-only · Idempotent
Use this to fetch the full contents of a document or instrument guide returned by search. This is read-only.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Result id returned by the search tool, for example doc:pricing, doc:quickstart, or instrument:crypto:BTCUSD. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds 'read-only', which aligns with the readOnlyHint annotation. It does not expand beyond the annotations, but there is no contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, no superfluous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with output schema, description sufficiently covers purpose and parameter context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Description explains that the 'id' parameter comes from search results, adding meaning beyond the bare schema. Schema coverage is 0%, so description compensates well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it fetches full contents of a document or instrument guide returned by search, distinguishing it from siblings like search (which returns IDs) and list_instruments.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use after search, implying when to use. Does not mention alternatives or when not to use, but context makes it clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
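The search-then-fetch workflow described above can be sketched as JSON-RPC 2.0 requests using the standard MCP `tools/call` method. The `doc:pricing` id comes from the parameter examples above; the `tools_call` helper name is illustrative, not part of any SDK:

```python
import json


def tools_call(name: str, arguments: dict, req_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request body for the MCP tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }


# Step 1: search the docs and free catalog metadata.
search_req = tools_call("search", {"query": "pricing"}, req_id=1)

# Step 2: fetch the full contents of a result id the search returned,
# e.g. the doc:pricing id shown in the parameter table above.
fetch_req = tools_call("fetch", {"id": "doc:pricing"}, req_id=2)

print(json.dumps(fetch_req, indent=2))
```

The two-step shape mirrors the tool description: `fetch` only accepts ids that `search` has already surfaced.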
get_pricing_info (Pricing Information) · Grade A · Read-only · Idempotent
Use this to inspect the current free and paid tiers, supported settlement networks, and credit options before using Blocksize's paid HTTP API.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, openWorldHint. Description adds that the info is 'current' and for pre-use planning, but no contradiction or significant additional behavioral disclosure beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff, front-loaded with purpose. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool is simple (0 params, good annotations, output schema present). Description suffices to tell an agent what it returns and when to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so baseline is 4. Description does not need to add parameter details, and it does not misrepresent anything.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool inspects pricing tiers, settlement networks, and credit options. It uses an imperative verb and differs from siblings (fetch, list_instruments, search, search_pairs) by focusing on pricing context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says to use 'before using Blocksize's paid HTTP API,' providing clear context for when it is applicable but not explicitly excluding alternatives or stating when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_instruments (Instrument List) · Grade A · Read-only · Idempotent
Use this to list supported instruments for one service such as vwap, bidask, fx, or metal. This is free and read-only.
| Name | Required | Description | Default |
|---|---|---|---|
| service | No | Blocksize service namespace to list: vwap for crypto VWAP pairs, bidask for shared bid/ask symbols, fx for FX pairs, or metal for metals. | vwap |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint, openWorldHint, and idempotentHint. The description adds the behavioral detail that it is 'free' and 'read-only' (reinforcing) and that it lists instruments for one service. This adds context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with the purpose. Every word adds value: lists purpose, gives examples, notes it's free and read-only. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one optional parameter and an output schema, the description covers the essential aspects: what it does, how to use it (specify service), and its safety profile. The output schema exists, so not describing the return format is acceptable. Minor gap: does not explicitly say 'returns a list', but 'list instruments' implies that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has one parameter 'service' with no description (0% coverage). The description provides example values ('vwap, bidask, fx, or metal'), adding meaning that the schema lacks. This compensates for the missing parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'list', the resource 'supported instruments', and the scope 'for one service', with examples (vwap, bidask, fx, metal). It distinguishes this tool from sibling tools (fetch, get_pricing_info, search, search_pairs) which perform different actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this to list...' and notes it is free and read-only, indicating safe usage. It provides context that a single service must be specified, but does not explicitly state when not to use it or mention alternatives. Given the distinct purpose vs siblings, this is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
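Per the parameter table above, a list_instruments call needs only the optional `service` value, which defaults to vwap. A minimal sketch, assuming the standard MCP `tools/call` request shape (the builder function and its validation are illustrative):

```python
import json


def list_instruments_request(service: str = "vwap", req_id: int = 1) -> dict:
    """Build an MCP tools/call request for list_instruments.

    service defaults to "vwap", matching the default in the parameter table;
    the other documented namespaces are "bidask", "fx", and "metal".
    """
    if service not in {"vwap", "bidask", "fx", "metal"}:
        raise ValueError(f"unknown service namespace: {service}")
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": "list_instruments", "arguments": {"service": service}},
    }


# List the supported FX pairs.
print(json.dumps(list_instruments_request("fx")))
```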
search (Catalog Search) · Grade A · Read-only · Idempotent
Use this when a remote MCP client needs OpenAI-style search across Blocksize docs and free catalog metadata. This does not return live market prices.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Documentation or catalog search query, such as pricing, quickstart, credits, x402, Solana, Base, BTC, or VWAP. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly and idempotent behavior. The description adds value by clarifying it does not return live market prices, which is beyond the annotations. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (two sentences), and the first sentence is action-oriented ('Use this when...'). Every word adds value, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (one required parameter) and has an output schema, the description is mostly adequate. However, it lacks any guidance on query formatting or what the response contains, which is needed given the 0% schema description coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter 'query' with no description, and schema coverage is 0%. The description does not add any semantics about the query parameter (e.g., format, expected content, length limits). It fails to compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'OpenAI-style search across Blocksize docs and free catalog metadata', using a specific verb and resource. It distinguishes itself from siblings by specifying it does not return live market prices, contrasting with pricing tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Use this when a remote MCP client needs...', providing a clear usage context. It also states what it does not do ('not return live market prices'), but does not explicitly name alternative tools or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_pairs (Instrument Search) · Grade B · Read-only · Idempotent
Use this to discover supported crypto, FX, or metal symbols before using Blocksize's paid HTTP API. This is free and read-only.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Symbol, ticker, asset, or pair to search for, such as BTC, BTC-USD, ETH, EURUSD, or XAUUSD. | |
| asset_class | No | Optional asset-class filter. Use all for the full catalog, crypto for digital assets, fx for currency pairs, or metal for metals. | all |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, openWorldHint, idempotentHint; description adds 'free and read-only' which aligns. No additional behavioral traits disclosed (e.g., pagination, rate limits), but with annotations the bar is lower.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, each adding value. No wasted words; purpose and key constraint are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description lacks parameter explanations, which is a significant gap for a search tool with 2 parameters. Context of 'free' and 'before paid API' is helpful but insufficient for complete invocation guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description does not explain the two parameters (query, asset_class). Since there are no param docs in schema and description adds nothing, the agent receives no guidance on what values to provide.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool discovers supported crypto, FX, or metal symbols, specifying the resource and context (free, read-only). However, it doesn't explicitly differentiate itself from sibling tools like 'list_instruments' or 'search', so it does not earn a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Suggests using before the paid API, but lacks explicit when-not-to-use or comparison to alternatives. No mention of when 'search' or 'list_instruments' might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
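The search_pairs parameter table above documents a required `query` and an optional `asset_class` filter defaulting to all. A sketch of the request, again assuming the standard MCP `tools/call` shape (the builder and its value check are illustrative):

```python
import json


def search_pairs_request(query: str, asset_class: str = "all", req_id: int = 1) -> dict:
    """Build an MCP tools/call request for search_pairs.

    asset_class defaults to "all" per the parameter table; the other
    documented filter values are "crypto", "fx", and "metal".
    """
    if asset_class not in {"all", "crypto", "fx", "metal"}:
        raise ValueError(f"unknown asset_class filter: {asset_class}")
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {
            "name": "search_pairs",
            "arguments": {"query": query, "asset_class": asset_class},
        },
    }


# Look up BTC-USD among the digital-asset symbols.
print(json.dumps(search_pairs_request("BTC-USD", "crypto")))
```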
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
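Before publishing, the file can be sanity-checked locally. This sketch only checks the two fields shown in the structure above; it is illustrative and is not Glama's actual verifier:

```python
import json


def check_glama_json(raw: str, account_email: str) -> list:
    """Return a list of problems with a /.well-known/glama.json payload."""
    problems = []
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if data.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("$schema must point at the Glama connector schema")
    maintainers = data.get("maintainers") or []
    emails = [m.get("email") for m in maintainers if isinstance(m, dict)]
    if account_email not in emails:
        problems.append("no maintainer email matches the Glama account email")
    return problems


raw = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(check_glama_json(raw, "your-email@example.com"))  # []
```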
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server successfully. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.