APIHub
Server Details
Marketplace of pay-per-call APIs for AI agents. Top up once with USDC, then call any listed API: gasless, walletless, and metered per request. Includes onboarded providers and hundreds of x402-native APIs discovered on-chain.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.5/5.
Each tool targets a distinct purpose: balance for account info, call/call_external for API calls of different types, get_service/list_services/search/search_external for discovery, read_content for content access, and topup for purchasing. No overlaps.
All tools share the 'apihub_' prefix. Most use verb_noun pattern (get_service, list_services, read_content) but some are verb-only (call, search, topup) and two have adjective suffixes (call_external, search_external). Mostly consistent with minor deviations.
9 tools is well-scoped for an API marketplace server, covering browsing, searching, purchasing credits, checking balance, and making both onboarded and external API calls. Each tool earns its place.
Core operations (discover, get details, call, pay, check balance) are covered. Minor gaps like usage history or service ratings are absent but not critical for the primary use case.
Available Tools
9 tools
apihub_balance
Read-only. Returns your current APIHub credit balance (in microdollars and USD), total lifetime spending (microdollars and USD), and total completed request count. Requires a valid API key. Use before apihub_call or apihub_call_external to confirm sufficient funds for a paid request, or periodically to audit usage. Does not modify state, send payments, or call upstream APIs; for top-ups use apihub_topup.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
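As the description notes, amounts are reported in both microdollars and USD. A minimal sketch of the conversion, assuming the 1 USD = 1,000,000 microdollars ratio stated for the sibling search tool (the helper name is illustrative):

```python
def microdollars_to_usd(amount_microdollars: int) -> float:
    """Convert an APIHub microdollar amount to USD (1 USD = 1,000,000 microdollars)."""
    return amount_microdollars / 1_000_000

# e.g. a balance of 2,500,000 microdollars is $2.50
```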
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations present; description fully discloses read-only nature, no state modification, no payment sending, no upstream API calls, and requires a valid API key.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five short sentences, no redundancy. The first two give purpose and output, the rest cover auth, usage timing, and sibling contrast. Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Fully covers what the tool returns, when to use it, and its safety profile. No gaps given zero parameters and no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters; description adds value by detailing the returned fields (balance in microdollars and USD, lifetime spending, request count), which is beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns current balance, lifetime spending, and request count. It distinguishes itself from siblings by explicitly being read-only and used for fund checks before call tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use before apihub_call or apihub_call_external to confirm funds, and for periodic auditing. Also clarifies not for top-ups (use apihub_topup).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_call
Sends payment. Calls a paid endpoint on an onboarded APIHub service. Debits the endpoint's price from your credit balance and forwards the request to the upstream provider. Returns an object with the upstream response body, HTTP status, and credits_charged_microdollars. Requires a valid API key and sufficient credit balance; if balance is insufficient the call returns a 402 with payment requirements (use apihub_topup to add credits, apihub_balance to check). Use this for services already onboarded to APIHub (find slugs via apihub_search or apihub_list_services); use apihub_call_external for arbitrary x402 URLs not onboarded here, or apihub_read_content for content gateways.
| Name | Required | Description | Default |
|---|---|---|---|
| body | No | Optional. Request body as a JSON string for POST/PUT. Ignored for GET/DELETE. The proxy forwards this verbatim with Content-Type: application/json. | |
| method | No | Optional HTTP method, default GET. Must match the method declared on the endpoint or the request will fail. | |
| service_slug | Yes | Required. The service slug as returned by apihub_search or apihub_list_services, e.g. 'exchange-rates' or 'weather'. | |
| endpoint_path | Yes | Required. The endpoint path including any leading slash, e.g. '/latest/USD' or '/v1/forecast'. Get valid paths from apihub_get_service. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry the full disclosure burden, and it largely does: it leads with 'Sends payment', explains the debit and forwarding behavior, and spells out the 402 insufficient-balance path. Rate limits are the main omission.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Six sentences, but each carries distinct information: side effect, action, return shape, auth and failure path, and sibling guidance. Front-loaded with the payment warning and free of redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, cost, the returned fields (response body, HTTP status, credits_charged_microdollars), and the insufficient-balance error path, compensating for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers 100% of parameters with descriptions; the description adds routing guidance on top (find slugs via apihub_search or apihub_list_services), going modestly beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Calls a paid endpoint on an onboarded APIHub service'), distinguishing it from siblings like apihub_call_external (arbitrary x402 URLs) and apihub_read_content (content gateways).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly routes between siblings: this tool for onboarded services, apihub_call_external for arbitrary x402 URLs, apihub_read_content for content gateways, and apihub_topup or apihub_balance when funds are short.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_call_external
Call an external x402-protected URL (any provider in the marketplace or any x402 API). APIHub pays the provider on your behalf using the platform wallet and debits your credit balance for the exact amount. No wallet or gas required.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full URL to call (e.g. https://hub.atxp.ai/...) | |
| body | No | Request body (object or string). Omit for GET. | |
| method | No | HTTP method (default POST) | |
| headers | No | Additional request headers. Do not set Authorization or X-PAYMENT - handled automatically. |
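The headers note above can be enforced client-side before invoking the tool. A small hypothetical sketch that strips the platform-managed headers (names and approach are illustrative, not part of the API):

```python
# Per the parameter note, Authorization and X-PAYMENT are set automatically
# by APIHub and must not be supplied by the caller.
RESERVED_HEADERS = {"authorization", "x-payment"}

def safe_external_headers(headers: dict[str, str]) -> dict[str, str]:
    """Drop headers the platform manages itself (case-insensitive match)."""
    return {k: v for k, v in headers.items() if k.lower() not in RESERVED_HEADERS}
```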
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses payment behavior (pays provider, debits credit) and 'no wallet or gas required', but lacks details on rate limits, failure modes, or insufficient credit handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, front-loaded with the core purpose, and contains no fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, and the description does not explain what the tool returns (e.g., response format). For an API call tool, this is a gap, but the input semantics are well-covered.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so description adds little beyond schema. The only extra is the headers note about not setting Authorization headers, but that is part of the parameter description in the input schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it calls an external x402-protected URL, pays on behalf, and debits credit balance. It distinguishes from sibling 'apihub_call' by specifying 'external' and the x402 protocol.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'any provider in the marketplace or any x402 API', providing clear use context. However, it does not mention when to avoid this tool or alternatives like 'apihub_call' for internal URLs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_get_service
Get full details for a specific API service including all endpoints, schemas, and pricing.
| Name | Required | Description | Default |
|---|---|---|---|
| service_slug | Yes | The service slug (e.g., 'exchange-rates') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It clearly states that the tool returns full details including endpoints, schemas, and pricing, indicating a read operation with no mention of destructive behavior or prerequisites. Some additional context (e.g., authorization) would be helpful but is not critical.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose and scope without any fluff or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite lacking an output schema, the description informs the agent about the return content (endpoints, schemas, pricing). For a simple lookup tool with one parameter, this is largely sufficient, though mentioning any size limits or pagination would make it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a description for service_slug, so the baseline is 3. The description adds no further meaning beyond confirming the parameter's purpose, which is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get full details') and resource ('specific API service'), and explicitly lists what is included (endpoints, schemas, pricing), clearly distinguishing it from siblings like apihub_list_services which lists all services.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need detailed information about a particular service, but does not explicitly state when to avoid using it (e.g., if a summary suffices) or compare with alternatives like apihub_list_services.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_list_services
Read-only. Lists onboarded APIHub services alphabetically, returning each service's slug, name, description, category, provider, endpoint count, and lowest per-endpoint price in microdollars. No authentication required. Use this to browse the full onboarded catalog when you don't have a specific capability in mind; prefer apihub_search when filtering by query, category, or price. Does not include external x402 APIs (use apihub_search_external for those) and does not return endpoint-level details (use apihub_get_service for that).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optional max number of services to return. Default 20, minimum 1, hard cap 100. Values above 100 are clamped. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description fully carries the transparency burden. It declares read-only behavior, no authentication required, and explicitly states what the tool does not include (external APIs, endpoint details). No contradictions or omissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences: purpose and returned fields first, then auth, usage context, and explicit exclusions with alternatives. No redundant words, front-loaded with key info, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description enumerates returned fields. It covers purpose, usage guidance, exclusions, and auth. Complete for an agent to decide and invoke correctly given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description already covers the single parameter (limit) with default, min, max, and clamping behavior. The description does not add additional semantic meaning beyond what the schema provides, so baseline score for high coverage (100%) is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists onboarded APIHub services alphabetically with specific fields returned. It distinguishes from siblings by naming alternatives for filtering (apihub_search), external APIs (apihub_search_external), and endpoint details (apihub_get_service), making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: 'when you don't have a specific capability in mind' and gives clear alternatives for filtering, external APIs, and endpoint details. This provides comprehensive guidance on tool selection relative to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_read_content
Read web content through a paid content gateway. Returns clean, structured text extracted from the URL. Use this for content services (service_type = 'content').
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full URL to read (must match a verified domain on the service) | |
| service_slug | Yes | The content service slug |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, description must fully disclose behavior. It mentions 'paid content gateway' and 'verified domain' but lacks details on payment mechanics, rate limits, or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with no fluff, front-loading purpose and output, then the usage hint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, so description should detail return format more. 'Clean, structured text' is vague. Missing details on payment verification, failure modes, or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. Description adds slight context for 'url' ('must match a verified domain') but does not enhance 'service_slug' beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Read web content through a paid content gateway' with a specific verb and resource, and distinguishes from sibling tools like apihub_call or apihub_search which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this for content services (service_type = "content")', providing a clear condition, but does not include when to avoid or consider alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_search
Read-only. Searches onboarded APIHub services by free-text query, with optional category, price, and type filters. Returns up to 10 matches ranked by uptime and endpoint count, each with slug, description, endpoints array, min price in microdollars, provider name, and quality score. No authentication required. Use this when you need to find an API by capability; use apihub_list_services to browse without a query, apihub_search_external to include the external x402 catalog, or apihub_get_service when you already know a slug. Does not call any upstream API or debit credits.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Optional filter. 'api' = standard REST APIs, 'content' = content gateways that proxy a fixed upstream URL. | |
| query | Yes | Required free-text query matched against service name and description (case-insensitive substring match). Use 1-3 keywords describing the capability you want, e.g. 'weather' or 'stock price'. | |
| category | No | Optional exact-match filter. Valid values: ai, data, search, finance, media, infra, communication, content, travel. | |
| max_price_microdollars | No | Optional upper bound on price per request in microdollars (1 USD = 1,000,000 microdollars, so 10000 = $0.01). Services whose cheapest endpoint exceeds this are excluded. |
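The max_price_microdollars note implies a conversion step when the caller thinks in USD. A hypothetical sketch of building search arguments with a price cap, using the documented ratio (1 USD = 1,000,000 microdollars; the helper name is an assumption):

```python
def usd_to_microdollars(usd: float) -> int:
    """Convert a USD price cap to microdollars (1 USD = 1,000,000 microdollars)."""
    return round(usd * 1_000_000)

# per the parameter note, a cap of $0.01 per request is 10000 microdollars
search_args = {"query": "weather",
               "max_price_microdollars": usd_to_microdollars(0.01)}
```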
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description fully discloses behavioral traits: read-only, returns up to 10 results ranked by uptime and endpoint count, result fields enumerated, no upstream API calls, no credit debits. All this information is beyond the schema and annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single paragraph with no redundancy. Every sentence adds value: purpose, result format, usage guidance, and behavioral notes. Information density is high without being verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description enumerates return fields (slug, description, endpoints array, price, provider, quality). All 4 parameters are clearly defined with examples. Covers ranking, authentication, and side effects. Completely explains tool behavior for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, but the description adds substantial meaning: explains query as case-insensitive substring match with keyword recommendation, clarifies type filter values, lists valid categories, and explains microdollar conversion and price filter semantics. This goes beyond the schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool searches onboarded APIHub services by free-text query with optional filters. Explicitly distinguishes from sibling tools by naming apihub_list_services, apihub_search_external, and apihub_get_service, giving specific guidance on when to use each.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Directly states when to use this tool ('when you need to find an API by capability') and lists alternatives for different scenarios. Also notes that no authentication is required.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_search_external
Search external x402-protected APIs (not operated by APIHub, but callable via credits). Returns listings with endpoint counts, prices, and on-chain activity.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 10, max 50) | |
| query | No | Search text matched against name/description | |
| category | No | Filter by category (ai, search, finance, media, other) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry the burden. It discloses that these are external and credit-based, but lacks details on side effects, authentication, rate limits, or errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Purpose is front-loaded, output description follows. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains return values (endpoint counts, prices, activity). It also explains the nature of the APIs. Missing pagination or sorting info, but sufficient for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3 applies. The description does not add extra meaning beyond the schema; it only names the parameters implicitly via the tool's purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search external x402-protected APIs', specifies that they are not operated by APIHub, and lists what is returned. This distinguishes from sibling tools like apihub_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance is provided; the description implies use when targeting external APIs, but does not discuss alternatives (e.g. apihub_search for onboarded services) or when not to use it. The note that these APIs are 'callable via credits' is the only prerequisite mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_topup
Purchase APIHub credits via x402 (USDC on Base). Returns payment instructions including a web URL for browser-based payment, a CLI command, and raw x402 requirements for agents with wallet support. Credits are added to your account instantly once payment confirms on-chain.
| Name | Required | Description | Default |
|---|---|---|---|
| amount_dollars | Yes | Amount to top up in USD. Minimum $5.00, maximum $10,000 per call. Example: 10 for $10. |
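The amount_dollars bounds (minimum $5.00, maximum $10,000 per call) can be validated before invoking the tool. A minimal, hypothetical sketch:

```python
def validate_topup_amount(amount_dollars: float) -> float:
    """Enforce the documented per-call bounds for apihub_topup:
    minimum $5.00, maximum $10,000."""
    if not 5.00 <= amount_dollars <= 10_000:
        raise ValueError("amount_dollars must be between 5.00 and 10000 per call")
    return amount_dollars
```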
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description reveals that credits are added instantly upon payment confirmation and that the tool returns multiple payment instruction formats. It does not disclose failure modes or rate limits, but for a straightforward purchase, the behavioral context is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences with no redundant information. It front-loads the main action, then explains the output and the effect. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter top-up tool with no output schema or annotations, the description covers the key aspects: what it does, what it returns, and the outcome. It does not delve into prerequisites like wallet setup, but is sufficiently complete for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already fully describes the single parameter (amount in USD, min $5, max $10000, example). The tool description does not add additional parameter semantics beyond what the schema provides. With 100% schema coverage, baseline score is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Purchase' and the resource 'APIHub credits', and specifies the method (x402, USDC on Base). It distinguishes itself from siblings by being the only tool for credit purchase, making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when credits are needed, and mentions wallet/agent support for the payment method, but does not explicitly state when not to use or mention alternatives (though none exist). It provides adequate context for typical use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.