Server Details

Marketplace of pay-per-call APIs for AI agents. Top up once with USDC, then call any listed API - gasless, walletless, metered per request. Includes onboarded providers and hundreds of x402-native APIs discovered on-chain.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.5/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct purpose: balance for account info, call/call_external for API calls of different types, get_service/list_services/search/search_external for discovery, read_content for content access, and topup for purchasing. No overlaps.

Naming Consistency: 4/5

All tools share the 'apihub_' prefix. Most use verb_noun pattern (get_service, list_services, read_content) but some are verb-only (call, search, topup) and two have adjective suffixes (call_external, search_external). Mostly consistent with minor deviations.

Tool Count: 5/5

9 tools is well-scoped for an API marketplace server, covering browsing, searching, purchasing credits, checking balance, and making both onboarded and external API calls. Each tool earns its place.

Completeness: 4/5

Core operations (discover, get details, call, pay, check balance) are covered. Minor gaps like usage history or service ratings are absent but not critical for the primary use case.

Available Tools

9 tools
apihub_balance: A

Read-only. Returns your current APIHub credit balance (in microdollars and USD), total lifetime spending (microdollars and USD), and total completed request count. Requires a valid API key. Use before apihub_call or apihub_call_external to confirm sufficient funds for a paid request, or periodically to audit usage. Does not modify state, send payments, or call upstream APIs; for top-ups use apihub_topup.

Parameters (JSON Schema)

No parameters
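Since the balance, lifetime spending, and per-endpoint prices above are all reported in microdollars alongside USD, the conversion between the two units is a fixed 10^6 scaling. A minimal sketch (the helper names are my own, not part of the APIHub API):

```python
# APIHub reports money in both microdollars and USD.
# 1 USD = 1,000,000 microdollars.
MICRO_PER_USD = 1_000_000

def micro_to_usd(microdollars: int) -> float:
    """Convert a microdollar amount to USD."""
    return microdollars / MICRO_PER_USD

def usd_to_micro(usd: float) -> int:
    """Convert a USD amount to whole microdollars."""
    return int(round(usd * MICRO_PER_USD))

print(micro_to_usd(2_500_000))  # 2.5
print(usd_to_micro(0.25))       # 250000
```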

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations present; description fully discloses read-only nature, no state modification, no payment sending, no upstream API calls, and requires a valid API key.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no redundancy. First sentence gives purpose and output, second gives usage, third clarifies behavior and sibling contrast. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully covers what the tool returns, when to use it, and its safety profile. No gaps given zero parameters and no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters; description adds value by detailing the returned fields (balance in microdollars and USD, lifetime spending, request count), which is beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it returns current balance, lifetime spending, and request count. It distinguishes itself from siblings by explicitly being read-only and used for fund checks before call tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says to use before apihub_call or apihub_call_external to confirm funds, and for periodic auditing. Also clarifies not for top-ups (use apihub_topup).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_call: A

Sends payment. Calls a paid endpoint on an onboarded APIHub service. Debits the endpoint's price from your credit balance and forwards the request to the upstream provider. Returns an object with the upstream response body, HTTP status, and credits_charged_microdollars. Requires a valid API key and sufficient credit balance; if balance is insufficient the call returns a 402 with payment requirements (use apihub_topup to add credits, apihub_balance to check). Use this for services already onboarded to APIHub (find slugs via apihub_search or apihub_list_services); use apihub_call_external for arbitrary x402 URLs not onboarded here, or apihub_read_content for content gateways.

Parameters (JSON Schema)
- service_slug (required): The service slug as returned by apihub_search or apihub_list_services, e.g. 'exchange-rates' or 'weather'.
- endpoint_path (required): The endpoint path including any leading slash, e.g. '/latest/USD' or '/v1/forecast'. Get valid paths from apihub_get_service.
- method (optional): HTTP method, default GET. Must match the method declared on the endpoint or the request will fail.
- body (optional): Request body as a JSON string for POST/PUT. Ignored for GET/DELETE. The proxy forwards this verbatim with Content-Type: application/json.
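The parameter semantics above can be sketched as a small client-side helper that assembles the arguments object. This is illustrative convenience logic that mirrors the schema notes, not the server's actual validation:

```python
# Assemble an arguments object for apihub_call (sketch; the checks
# mirror the documented schema, not the server's validation logic).
def build_call_args(service_slug, endpoint_path, method="GET", body=None):
    if not service_slug or not endpoint_path:
        raise ValueError("service_slug and endpoint_path are required")
    args = {
        "service_slug": service_slug,    # e.g. 'exchange-rates'
        "endpoint_path": endpoint_path,  # e.g. '/latest/USD'
        "method": method,                # must match the declared method
    }
    # body is a JSON string, meaningful only for POST/PUT;
    # the proxy ignores it for GET/DELETE.
    if body is not None and method in ("POST", "PUT"):
        args["body"] = body
    return args

print(build_call_args("exchange-rates", "/latest/USD"))
```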
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must fully disclose behavior. It discloses the debit and the 402 insufficient-balance path, but says nothing about rate limits, retries, or possible destructive effects of upstream endpoints, leaving gaps for an agent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no wasted words, front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, cost, return fields, and the insufficient-balance path; other error scenarios and upstream failure behavior are not described, leaving some completeness gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema covers 100% of parameters with descriptions; the description adds no extra meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Calls a paid endpoint on an onboarded APIHub service') and the mechanism (debiting credits and forwarding to the upstream provider), distinguishing it from siblings like apihub_balance or apihub_get_service.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions that it sends payment and requires sufficient credit balance, and points to apihub_call_external for non-onboarded x402 URLs, though the conditions for choosing between the siblings could be more explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_call_external: A

Call an external x402-protected URL (any provider in the marketplace or any x402 API). APIHub pays the provider on your behalf using the platform wallet and debits your credit balance for the exact amount. No wallet or gas required.

Parameters (JSON Schema)
- url (required): The full URL to call (e.g. https://hub.atxp.ai/...)
- body (optional): Request body (object or string). Omit for GET.
- method (optional): HTTP method (default POST)
- headers (optional): Additional request headers. Do not set Authorization or X-PAYMENT - handled automatically.
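Because Authorization and X-PAYMENT are set automatically, a defensive client can strip them from user-supplied headers before invoking the tool. A small sketch (the function name is an assumption, not part of any APIHub SDK):

```python
# Headers that APIHub manages automatically; user-supplied values
# for these must not be forwarded.
RESERVED_HEADERS = {"authorization", "x-payment"}

def safe_headers(headers: dict) -> dict:
    """Return headers with the auto-managed ones removed (case-insensitive)."""
    return {k: v for k, v in headers.items()
            if k.lower() not in RESERVED_HEADERS}

print(safe_headers({"Accept": "application/json",
                    "Authorization": "Bearer x"}))
```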
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses payment behavior (pays provider, debits credit) and 'no wallet or gas required', but lacks details on rate limits, failure modes, or insufficient credit handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and contains no fluff or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, and the description does not explain what the tool returns (e.g., response format). For an API call tool, this is a gap, but the input semantics are well-covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so description adds little beyond schema. The only extra is the headers note about not setting Authorization headers, but that is part of the parameter description in the input schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it calls an external x402-protected URL, pays on behalf, and debits credit balance. It distinguishes from sibling 'apihub_call' by specifying 'external' and the x402 protocol.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'any provider in the marketplace or any x402 API', providing clear use context. However, it does not mention when to avoid this tool or alternatives like 'apihub_call' for internal URLs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_get_service: A

Get full details for a specific API service including all endpoints, schemas, and pricing.

Parameters (JSON Schema)
- service_slug (required): The service slug (e.g., 'exchange-rates')
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It clearly states that the tool returns full details including endpoints, schemas, and pricing, indicating a read operation with no mention of destructive behavior or prerequisites. Some additional context (e.g., authorization) would be helpful but is not critical.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose and scope without any fluff or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description informs the agent about the return content (endpoints, schemas, pricing). For a simple lookup tool with one parameter, this is largely sufficient, though mentioning any size limits or pagination would make it complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a description for service_slug, so the baseline is 3. The description adds no further meaning beyond confirming the parameter's purpose, which is adequate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get full details') and resource ('specific API service'), and explicitly lists what is included (endpoints, schemas, pricing), clearly distinguishing it from siblings like apihub_list_services which lists all services.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when you need detailed information about a particular service, but does not explicitly state when to avoid using it (e.g., if a summary suffices) or compare with alternatives like apihub_list_services.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_list_services: A

Read-only. Lists onboarded APIHub services alphabetically, returning each service's slug, name, description, category, provider, endpoint count, and lowest per-endpoint price in microdollars. No authentication required. Use this to browse the full onboarded catalog when you don't have a specific capability in mind; prefer apihub_search when filtering by query, category, or price. Does not include external x402 APIs (use apihub_search_external for those) and does not return endpoint-level details (use apihub_get_service for that).

Parameters (JSON Schema)
- limit (optional): Max number of services to return. Default 20, minimum 1, hard cap 100. Values above 100 are clamped.
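The limit behavior described in the schema can be sketched as follows. Treating values below the minimum as clamped to 1 is an assumption here; the server may reject them instead:

```python
# Resolve the effective limit for apihub_list_services (sketch).
def effective_limit(limit=None):
    if limit is None:
        return 20                    # documented default
    return max(1, min(limit, 100))   # clamp into [1, 100]

print(effective_limit())     # 20
print(effective_limit(500))  # 100
```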
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Given no annotations, the description fully carries the transparency burden. It declares read-only behavior, no authentication required, and explicitly states what the tool does not include (external APIs, endpoint details). No contradictions or omissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first sentence states purpose and output, second gives usage context, third clarifies exclusions. No redundant words, front-loaded with key info, and every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description enumerates returned fields. It covers purpose, usage guidance, exclusions, and auth. Complete for an agent to decide and invoke correctly given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description already covers the single parameter (limit) with default, min, max, and clamping behavior. The description does not add additional semantic meaning beyond what the schema provides, so baseline score for high coverage (100%) is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists onboarded APIHub services alphabetically with specific fields returned. It distinguishes from siblings by naming alternatives for filtering (apihub_search), external APIs (apihub_search_external), and endpoint details (apihub_get_service), making its purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when you don't have a specific capability in mind' and gives clear alternatives for filtering, external APIs, and endpoint details. This provides comprehensive guidance on tool selection relative to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_read_content: A

Read web content through a paid content gateway. Returns clean, structured text extracted from the URL. Use this for content services (service_type = 'content').

Parameters (JSON Schema)
- url (required): The full URL to read (must match a verified domain on the service)
- service_slug (required): The content service slug
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description must fully disclose behavior. It mentions 'paid content gateway' and 'verified domain' but lacks details on payment mechanics, rate limits, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with no fluff, front-loading purpose and output, then usage hint.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, so description should detail return format more. 'Clean, structured text' is vague. Missing details on payment verification, failure modes, or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds slight context for 'url' ('must match a verified domain') but does not enhance 'service_slug' beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Read web content through a paid content gateway' with a specific verb and resource, and distinguishes from sibling tools like apihub_call or apihub_search which have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Use this for content services (service_type = "content")', providing a clear condition, but does not include when to avoid or consider alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_search_external: A

Search external x402-protected APIs (not operated by APIHub, but callable via credits). Returns listings with endpoint counts, prices, and on-chain activity.

Parameters (JSON Schema)
- query (optional): Search text matched against name/description
- category (optional): Filter by category (ai, search, finance, media, other)
- limit (optional): Max results (default 10, max 50)
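A sketch of assembling the search arguments with the documented category values and the 50-result cap. This is client-side convenience only; the helper name is illustrative, and whether the server clamps or rejects an over-limit value is an assumption:

```python
# Documented category filter values for apihub_search_external.
VALID_CATEGORIES = {"ai", "search", "finance", "media", "other"}

def build_search_args(query=None, category=None, limit=None):
    args = {}
    if query:
        args["query"] = query
    if category is not None:
        if category not in VALID_CATEGORIES:
            raise ValueError(f"category must be one of {sorted(VALID_CATEGORIES)}")
        args["category"] = category
    if limit is not None:
        args["limit"] = min(limit, 50)  # schema max is 50
    return args

print(build_search_args(query="weather", category="search", limit=100))
```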
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must carry the burden. It discloses that these are external and credit-based, but lacks details on side effects, authentication, rate limits, or errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Purpose is front-loaded, output description follows. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description explains return values (endpoint counts, prices, activity). It also explains the nature of the APIs. Missing pagination or sorting info, but sufficient for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline 3 applies. The description does not add extra meaning beyond the schema; it only names the parameters implicitly via the tool's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search external x402-protected APIs', specifies that they are not operated by APIHub, and lists what is returned. This distinguishes from sibling tools like apihub_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance is provided; the description implies use when targeting external APIs, but does not discuss alternatives or when not to use. The note that these APIs are callable via credits implies a funded balance as a precondition.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

apihub_topup: A

Purchase APIHub credits via x402 (USDC on Base). Returns payment instructions including a web URL for browser-based payment, a CLI command, and raw x402 requirements for agents with wallet support. Credits are added to your account instantly once payment confirms on-chain.

Parameters (JSON Schema)
- amount_dollars (required): Amount to top up in USD. Minimum $5.00, maximum $10,000 per call. Example: 10 for $10.
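The amount bounds from the schema can be pre-checked client-side before invoking the tool. A sketch (illustrative, not the server's validation logic):

```python
# Documented bounds for apihub_topup's amount_dollars parameter.
MIN_TOPUP = 5.00
MAX_TOPUP = 10_000.00

def validate_topup(amount_dollars):
    """Raise if the requested top-up amount is outside the allowed range."""
    if not (MIN_TOPUP <= amount_dollars <= MAX_TOPUP):
        raise ValueError(
            f"amount_dollars must be between {MIN_TOPUP} and {MAX_TOPUP}")
    return amount_dollars

print(validate_topup(10))
```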
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description reveals that credits are added instantly upon payment confirmation and that the tool returns multiple payment instruction formats. It does not disclose failure modes or rate limits, but for a straightforward purchase, the behavioral context is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no redundant information. It front-loads the main action and then explains the output and effect. Every sentence adds value, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter top-up tool with no output schema or annotations, the description covers the key aspects: what it does, what it returns, and the outcome. It does not delve into prerequisites like wallet setup, but is sufficiently complete for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already fully describes the single parameter (amount in USD, min $5, max $10000, example). The tool description does not add additional parameter semantics beyond what the schema provides. With 100% schema coverage, baseline score is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Purchase' and the resource 'APIHub credits', and specifies the method (x402, USDC on Base). It distinguishes itself from siblings by being the only tool for credit purchase, making its purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when credits are needed, and mentions wallet/agent support for the payment method, but does not explicitly state when not to use or mention alternatives (though none exist). It provides adequate context for typical use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
