mcp-server
Server Details
Discover and pay for APIs with USDC credits. No wallet, no gas, MCP-native marketplace.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: apihubio/mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 9 of 9 tools scored. Lowest: 3.5/5.
Each tool has a clearly distinct purpose: balance checking, calling onboarded/external services, searching/list services, getting service details, reading content, and topping up. No overlap or ambiguity.
All tools follow the 'apihub_' prefix with a verb_noun pattern (e.g., apihub_call, apihub_search, apihub_topup). The naming is consistent and predictable, with minor exceptions like apihub_balance being a noun but still clear in context.
9 tools is well-scoped for an API marketplace server, covering essential operations: browsing, searching, calling services, checking balance, and topping up. Not too many or too few.
The tool surface covers the full lifecycle: discover services (list/search), get details, call them (onboarded/external), read content, check balance, and top up. No obvious gaps for the stated purpose.
Available Tools
9 tools

apihub_balance
Read-only. Returns your current APIHub credit balance (in microdollars and USD), total lifetime spending (microdollars and USD), and total completed request count. Requires a valid API key. Use before apihub_call or apihub_call_external to confirm sufficient funds for a paid request, or periodically to audit usage. Does not modify state, send payments, or call upstream APIs; for top-ups use apihub_topup.
No parameters.
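Balances are reported in microdollars alongside USD. A minimal Python sketch of the conversion; the response field names below are illustrative, since this listing does not document the tool's output schema:

```python
# 1 USD = 1,000,000 microdollars
MICRODOLLARS_PER_USD = 1_000_000

def microdollars_to_usd(micro: int) -> float:
    """Convert a microdollar amount to USD."""
    return micro / MICRODOLLARS_PER_USD

# Hypothetical apihub_balance response shape (field names assumed)
balance = {"balance_microdollars": 2_500_000, "total_spent_microdollars": 7_500_000}
usd = microdollars_to_usd(balance["balance_microdollars"])  # 2.5
```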
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses read-only nature, no state modification, no payment execution, no upstream calls, and requires a valid API key. No annotations exist, so description carries full burden and does so thoroughly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no waste. Front-loaded with 'Read-only' and main purpose. Each sentence adds value: first defines output, second gives usage guidance. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no params and no output schema, the description fully explains behavior, return data, authorization, and usage context. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema (0 params), so baseline is 4. The description adds context about return values but no parameter details, which a zero-parameter tool does not need. Schema coverage is vacuously high, but the 0-param rule takes precedence.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the APIHub balance, spending, and request count, using specific verbs like 'Returns' and defining the resource. It distinguishes from siblings by explicitly noting it does not modify state or send payments.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: before apihub_call or apihub_call_external to confirm funds, or periodically for audit. Also provides alternative: for top-ups use apihub_topup. No other guidance needed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_call
Sends payment. Calls a paid endpoint on an onboarded APIHub service. Debits the endpoint's price from your credit balance and forwards the request to the upstream provider. Returns an object with the upstream response body, HTTP status, and credits_charged_microdollars. Requires a valid API key and sufficient credit balance; if balance is insufficient the call returns a 402 with payment requirements (use apihub_topup to add credits, apihub_balance to check). Use this for services already onboarded to APIHub (find slugs via apihub_search or apihub_list_services); use apihub_call_external for arbitrary x402 URLs not onboarded here, or apihub_read_content for content gateways.
| Name | Required | Description | Default |
|---|---|---|---|
| body | No | Request body as a JSON string for POST/PUT. Ignored for GET/DELETE. The proxy forwards this verbatim with Content-Type: application/json. | |
| method | No | HTTP method; default GET. Must match the method declared on the endpoint or the request will fail. | |
| service_slug | Yes | The service slug as returned by apihub_search or apihub_list_services, e.g. 'exchange-rates' or 'weather'. | |
| endpoint_path | Yes | The endpoint path including any leading slash, e.g. '/latest/USD' or '/v1/forecast'. Get valid paths from apihub_get_service. |
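Putting the parameter table together, a hedged sketch of assembling an apihub_call argument payload. The helper is hypothetical (not part of any official SDK); the slug and path come from the examples in the table above:

```python
import json

def build_call_args(service_slug, endpoint_path, method="GET", body=None):
    """Assemble an apihub_call arguments dict per the parameter table."""
    args = {"service_slug": service_slug,
            "endpoint_path": endpoint_path,
            "method": method}
    # body must be a JSON string and is ignored for GET/DELETE,
    # so only attach it for POST/PUT
    if body is not None and method in ("POST", "PUT"):
        args["body"] = json.dumps(body)
    return args

args = build_call_args("exchange-rates", "/latest/USD")
```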
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes the debit-forward-return flow, failure case with 402, and suggests related tools. No annotations provided, so description covers the behavioral aspects adequately, though rate limits are not mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Information-dense single paragraph without redundancy. Could be slightly more structured, but every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers prerequisites, return object fields, and error handling. Without output schema, it explains the response. Provides enough context for selection and initial use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds context on body handling (ignored for GET/DELETE) and method default, exceeding schema explanations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool sends payment and calls a paid endpoint on an onboarded APIHub service. Distinguishes from siblings by naming apihub_call_external and apihub_read_content as alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly provides when to use this tool (onboarded services) and when not (arbitrary x402 URLs or content gateways), plus prerequisites like valid API key and sufficient balance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_call_external
Call an external x402-protected URL (any provider in the marketplace or any x402 API). APIHub pays the provider on your behalf using the platform wallet and debits your credit balance for the exact amount. No wallet or gas required.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full URL to call (e.g. https://hub.atxp.ai/...) | |
| body | No | Request body (object or string). Omit for GET. | |
| method | No | HTTP method (default POST) | |
| headers | No | Additional request headers. Do not set Authorization or X-PAYMENT - handled automatically. |
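Since Authorization and X-PAYMENT are injected automatically, a client may want to guard against setting them before calling the tool. A hypothetical pre-flight check (not part of the server; a sketch of the rule stated in the headers row):

```python
# Headers the platform sets automatically, per the schema note above
RESERVED_HEADERS = {"authorization", "x-payment"}

def safe_headers(headers: dict) -> dict:
    """Reject headers that apihub_call_external manages automatically."""
    clashes = RESERVED_HEADERS & {k.lower() for k in headers}
    if clashes:
        raise ValueError(f"reserved headers are set automatically: {sorted(clashes)}")
    return headers
```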
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses key behaviors: payment is handled automatically, debits credit balance, no wallet/gas required. Misses details like error handling or rate limits, but covers the essential payment mechanism well.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences that are front-loaded with purpose, no redundant information. Every sentence adds value: first states action and scope, second explains payment mechanism. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic usage given 4 parameters, no output schema, and no annotations. Explains payment flow but omits response format, error handling, URL constraints, or examples. Could be more complete for a tool that makes external API calls.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. Description does not add parameter-specific meaning beyond the schema; it adds broader context about payment. No further elaboration on URL format, body structure, or header constraints beyond what schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Describes a clear action ('call') and specific resource ('external x402-protected URL'). Indicates it works for marketplace providers and x402 APIs, distinguishing it from sibling apihub_call by mentioning 'external'. However, it could explicitly contrast with siblings for stronger clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context that it pays on behalf and requires no wallet/gas, but lacks explicit guidance on when to use this tool vs alternatives like apihub_call or apihub_search_external. No 'when not to use' or differentiation from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_get_service
Get full details for a specific API service including all endpoints, schemas, and pricing.
| Name | Required | Description | Default |
|---|---|---|---|
| service_slug | Yes | The service slug (e.g., 'exchange-rates') |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It mentions returning details but fails to disclose side effects, authentication needs, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
One concise sentence with all key information (what is retrieved and included), no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple retrieval tool with one parameter and no output schema, the description adequately states the content returned, but lacks usage guidance and behavioral details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context that the slug identifies a specific service, but this is already implied by the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves full details for a specific API service, including endpoints, schemas, and pricing, which distinguishes it from listing tools like apihub_list_services.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage when full details of a specific service are needed, but no explicit when-not-to-use or alternative comparisons are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_list_services
Read-only. Lists onboarded APIHub services alphabetically, returning each service's slug, name, description, category, provider, endpoint count, and lowest per-endpoint price in microdollars. No authentication required. Use this to browse the full onboarded catalog when you don't have a specific capability in mind; prefer apihub_search when filtering by query, category, or price. Does not include external x402 APIs (use apihub_search_external for those) and does not return endpoint-level details (use apihub_get_service for that).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Optional max number of services to return. Default 20, minimum 1, hard cap 100. Values above 100 are clamped. |
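The clamping behavior described for limit can be mirrored client-side. A small sketch of the documented rules (default 20, minimum 1, hard cap 100); the server enforces these itself, so this is only a convenience:

```python
def clamp_limit(limit=None):
    """Mirror apihub_list_services limit handling: default 20, clamp to [1, 100]."""
    if limit is None:
        return 20
    return max(1, min(limit, 100))
```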
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Clearly states 'Read-only' and 'No authentication required' and explains clamping behavior for limit values above 100. Sufficient for a simple listing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: first states purpose and output, second gives usage guidance, third clarifies exclusions. No redundant words, front-loaded with key info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description compensates by listing exact return fields. Mentions clamping behavior. Missing pagination details, but limit parameter covers that. Adequate for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Only parameter 'limit' has full schema coverage (100%). Description repeats schema info (default, min, cap, clamping) but adds no new semantic meaning beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it is read-only, lists onboarded APIHub services alphabetically, and specifies exact return fields (slug, name, description, category, provider, endpoint count, lowest per-endpoint price). Distinguishes from siblings by excluding external APIs and endpoint details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells when to use ('when you don't have a specific capability in mind') and when to prefer alternatives: apihub_search for filtering, apihub_search_external for external APIs, apihub_get_service for endpoint details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_read_content
Read web content through a paid content gateway. Returns clean, structured text extracted from the URL. Use this for content services (service_type = 'content').
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The full URL to read (must match a verified domain on the service) | |
| service_slug | Yes | The content service slug |
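Because the url must match a verified domain on the service, a client can pre-check before spending credits. A hypothetical validator; where the verified-domain list comes from is an assumption (plausibly apihub_get_service, though this listing does not say):

```python
from urllib.parse import urlparse

def url_on_verified_domain(url: str, verified_domains: set) -> bool:
    """Check that the url's host is among the service's verified domains."""
    return urlparse(url).hostname in verified_domains
```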
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Mentions it is a paid content gateway (implies cost) and returns clean structured text. No annotations provided, so description carries full burden. Lacks details on authentication, rate limits, or potential side effects, but read operation is generally safe.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with action ('Read web content...'). Each sentence adds value without redundancy. Efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description partially explains return value ('clean, structured text'). Parameters are fully covered. With sibling tools, usage context is clear. Could mention subscription requirement, but overall adequate for a simple read tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. Description adds the constraint that the URL 'must match a verified domain on the service', providing additional guidance beyond the schema. For service_slug, it echoes the schema but is sufficiently clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool reads web content through a paid content gateway and returns clean, structured text. Specifies use for content services (service_type='content'), distinguishing it from sibling tools like apihub_balance, apihub_call, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use for content services with service_type='content', providing clear context for when to use. However, does not explicitly mention when not to use or name alternative tools for other service types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_search
Read-only. Searches onboarded APIHub services by free-text query, with optional category, price, and type filters. Returns up to 10 matches ranked by uptime and endpoint count, each with slug, description, endpoints array, min price in microdollars, provider name, and quality score. No authentication required. Use this when you need to find an API by capability; use apihub_list_services to browse without a query, apihub_search_external to include the external x402 catalog, or apihub_get_service when you already know a slug. Does not call any upstream API or debit credits.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | Optional filter. 'api' = standard REST APIs, 'content' = content gateways that proxy a fixed upstream URL. | |
| query | Yes | Free-text query matched against service name and description (case-insensitive substring match). Use 1-3 keywords describing the capability you want, e.g. 'weather' or 'stock price'. | |
| category | No | Optional exact-match filter. Valid values: ai, data, search, finance, media, infra, communication, content, travel. | |
| max_price_microdollars | No | Optional upper bound on price per request in microdollars (1 USD = 1,000,000 microdollars, so 10000 = $0.01). Services whose cheapest endpoint exceeds this are excluded. |
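The unit of max_price_microdollars is easy to get wrong. A sketch of building apihub_search arguments with the documented conversion (1 USD = 1,000,000 microdollars, so $0.01 = 10,000); the argument dict simply mirrors the parameter table:

```python
def usd_to_microdollars(usd: float) -> int:
    """Convert USD to microdollars (1 USD = 1,000,000 microdollars)."""
    return round(usd * 1_000_000)

search_args = {
    "query": "weather",                                   # 1-3 keywords, substring match
    "category": "data",                                   # one of the listed valid values
    "max_price_microdollars": usd_to_microdollars(0.01),  # exclude services above $0.01/request
}
```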
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description fully discloses behavioral traits: read-only, no authentication, no upstream call, no credit debit, returns up to 10 matches ranked by uptime and endpoint count, and lists return fields (slug, description, endpoints, min price, provider, quality score).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with 'Read-only'; structured into purpose, return details, usage guidance, exclusions. Every sentence adds value, though slightly lengthy; still appropriate for the information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers purpose, parameters, usage, return data, and behavioral traits (no auth, no upstream, no credit). Complete for a search tool with 4 params.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds significant value: query is case-insensitive substring match with keyword suggestion; max_price_microdollars includes conversion example; category lists valid values; type explains filters. Enhances schema meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it is read-only, searches APIHub services by free-text query with optional filters, returns up to 10 matches, and distinguishes from sibling tools like apihub_list_services, apihub_search_external, and apihub_get_service.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says when to use this tool (find API by capability) and when to use alternatives (browse without query, include external catalog, already know slug). Mentions no authentication required and no upstream call or credit debit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_search_external
Search external x402-protected APIs (not operated by APIHub, but callable via credits). Returns listings with endpoint counts, prices, and on-chain activity.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results (default 10, max 50) | |
| query | No | Search text matched against name/description | |
| category | No | Filter by category (ai, search, finance, media, other) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description lacks disclosure of safety, rate limits, or auth requirements. Only mentions externality and credit cost, but not read-only vs destructive nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two-sentence description is concise, front-loaded with key purpose, and contains no redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given optional parameters and no output schema, description adequately covers purpose, differentiation, and return values; could mention pagination but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3; description adds no additional parameter meaning beyond what schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states tool searches external x402-protected APIs, distinguishing it from sibling apihub_search. It specifies verb 'search', resource 'external APIs', and includes return values.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use for external APIs ('not operated by APIHub') and mentions callable via credits, but does not explicitly state when to use vs alternatives or provide exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
apihub_topup
Purchase APIHub credits via x402 (USDC on Base). Returns payment instructions including a web URL for browser-based payment, a CLI command, and raw x402 requirements for agents with wallet support. Credits are added to your account instantly once payment confirms on-chain.
| Name | Required | Description | Default |
|---|---|---|---|
| amount_dollars | Yes | Amount to top up in USD. Minimum $5.00, maximum $10,000 per call. Example: 10 for $10. |
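The documented bounds on amount_dollars can be checked client-side before issuing the call. A minimal guard mirroring the stated limits (the server presumably enforces the same):

```python
def validate_topup(amount_dollars):
    """Enforce the documented apihub_topup bounds: min $5.00, max $10,000 per call."""
    if not 5.00 <= amount_dollars <= 10_000:
        raise ValueError("amount_dollars must be between 5.00 and 10000 per call")
    return amount_dollars
```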
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Describes return type (payment instructions: web URL, CLI, x402) and behavior (credits added instantly on confirmation). Lacks disclosure of potential failures or auth requirements, but overall adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with action and method. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, description provides sufficient context about purpose, return values, and execution outcome. Could mention case of payment failure but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with description for 'amount_dollars'. Description adds payment method context but no additional parameter semantics beyond schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Purchase APIHub credits' with specific method 'via x402 (USDC on Base)'. Distinguishes from sibling tools like apihub_balance (check balance) and apihub_call (make calls).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use or alternatives, but context and uniqueness of 'topup' among siblings imply its purpose. Could specify not to use for balance checks or payment troubleshooting.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!