accounting-mcp
Server Details
8 EU accounting tools (paid via x402 USDC on Base): reconciliation, VAT, invoicing. Free health check.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging – every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control – enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials – Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics – see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 9 of 9 tools scored. Lowest: 2.9/5.
Each tool has a clearly distinct purpose: VAT calculation, expense categorization, reconciliation matching/confirmation, invoice drafting/sending, and report generation. No overlap that would confuse an agent.
Most tools follow a verb_noun pattern (e.g., apply_categories, calculate_vat). The only exception is 'health', which is a single noun but common for health checks. Overall consistent.
9 tools is well-scoped for an accounting MCP covering invoicing, VAT, reconciliation, and reporting. Each tool earns its place without being overwhelming.
Core accounting workflows (invoicing, VAT, reconciliation, reporting) are covered. Minor gaps exist, such as no tool for direct transaction management or contact handling, but the set is functional for the intended domain.
Available Tools
9 tools

apply_categories (B)
Apply confirmed Portuguese tax categories to Xero transactions. Updates AccountCode and TaxType.
| Name | Required | Description | Default |
|---|---|---|---|
| categories | Yes | Array of transaction-to-category assignments | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
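The table above lists the required parameters but not the shape of each assignment inside `categories`. A hypothetical payload sketch, in which the inner field names (`transaction_id`, `account_code`, `tax_type`) are assumptions rather than documented names:

```python
# Hypothetical apply_categories payload; inner field names are assumed.
apply_categories_args = {
    "api_key_hash": "sha256:…",  # placeholder customer key hash
    "categories": [
        {
            "transaction_id": "TX-001",     # assumed: which Xero transaction to update
            "category": "office_supplies",  # Portuguese tax category code
            "account_code": "6221",         # assumed: new Xero AccountCode
            "tax_type": "INPUT2",           # assumed: new Xero TaxType
        }
    ],
}
```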
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool updates fields, but omits important details like idempotency, error handling, permissions, or reversibility. For a mutation tool, more transparency is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the primary action, and contains no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, the description covers purpose and effect but lacks guidance on failure modes, prerequisites, or output. With no output schema and rich sibling tools, more operational context would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds domain context (Portuguese tax categories, Xero) but does not explain parameter behavior beyond what is in the schema. No additional semantic value for parameters like api_key_hash or the structure of categories.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (apply), the resource (confirmed Portuguese tax categories to Xero transactions), and the effect (updates AccountCode and TaxType). It is specific about the domain (Portuguese tax) but does not explicitly differentiate from siblings like categorise_expenses.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, exclusions, or conditions for use. The context is implied (e.g., after confirmation) but not stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
calculate_vat (A)
Calculate Portuguese VAT for a given amount, category, and region. Supports Mainland, Azores, and Madeira rates. Detects intra-community B2B reverse charge.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Net amount (ex-VAT) to calculate VAT on | |
| is_b2b | No | Whether the transaction is business-to-business | |
| region | No | Portuguese tax region | mainland |
| category | No | Portuguese tax category code (e.g. office_supplies, food_restaurant) | |
| counterpart_country | No | ISO 3166-1 alpha-2 country code of the counterpart | |
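As a rough illustration of the behaviour described above, here is a minimal sketch of the rate selection. The standard rates used (mainland 23%, Madeira 22%, Azores 16%) are the current Portuguese standard VAT rates, but the EU country list is abbreviated and the mapping from `category` to reduced tiers is omitted, so this is not the tool's actual implementation:

```python
# Hedged sketch of calculate_vat's rate logic; EU list abbreviated,
# reduced-tier categories omitted.
STANDARD_RATES = {"mainland": 0.23, "madeira": 0.22, "azores": 0.16}
EU_COUNTRIES = {"DE", "FR", "ES", "IT", "NL", "BE", "AT", "IE"}  # abbreviated, excludes PT

def calculate_vat(amount, region="mainland", is_b2b=False, counterpart_country=None):
    """Return (vat_amount, rate_note) for a net (ex-VAT) amount."""
    if is_b2b and counterpart_country in EU_COUNTRIES:
        # Intra-community B2B supply: reverse charge, no VAT added here.
        return 0.0, "reverse_charge"
    rate = STANDARD_RATES[region]
    return round(amount * rate, 2), f"standard_{region}"
```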
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description adds some behavioral context (e.g., 'detects intra-community B2B reverse charge') but does not confirm read-only nature, side effects, or error handling. It relies on the input schema to imply behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences with no filler. It front-loads the action and resource, then adds specific details. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 5 parameters and no output schema, but the description does not explain the return value (e.g., computed VAT amount or breakdown). It adequately covers input semantics and special cases but leaves the output ambiguous.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds value by explaining the region parameter's supported values ('Mainland, Azores, and Madeira rates') and the automatic detection logic for reverse charge, which the schema alone does not convey.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states a specific verb 'calculate' and resource 'Portuguese VAT', with qualifiers for amount, category, and region. It directly distinguishes this tool from siblings like apply_categories or draft_invoice, which handle different tasks.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for Portuguese VAT calculations (Mainland, Azores, Madeira) but does not explicitly state when not to use it or compare it to alternatives like a general tax calculator. No exclusion or fallback guidance is provided.
categorise_expenses (B)
Suggest Portuguese tax categories for uncategorised Xero expenses. Returns category, VAT tier, deductibility, and VAT breakdown per transaction.
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | Portuguese tax region | mainland |
| to_date | No | End date filter (YYYY-MM-DD) | |
| from_date | No | Start date filter (YYYY-MM-DD) | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses what the tool returns but does not state if it is read-only, required permissions, rate limits, or any side effects. The 'Suggest' verb implies no mutation, but this is not explicit.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence description is concise and front-loaded with the verb 'Suggest'. It could be slightly more structured (e.g., listing the returned fields) but remains clear and efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately describes purpose and returns, but lacks context on prerequisites (e.g., Xero connection), workflow integration with siblings like 'apply_categories', and absence of output schema limits completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional parameter-level detail beyond the schema, so it meets the baseline without improvement.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Suggest', resource 'Portuguese tax categories for uncategorised Xero expenses', and enumerates returned fields (category, VAT tier, deductibility, VAT breakdown). Distinguishes from siblings like 'apply_categories' which applies categories rather than suggesting them.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., 'apply_categories'). Implies use for uncategorised expenses but does not provide exclusions or context for sibling tools.
confirm_reconciliation (C)
Apply confirmed reconciliation matches to Xero. Marks matched bank transactions as reconciled.
| Name | Required | Description | Default |
|---|---|---|---|
| matches | Yes | Array of confirmed statement-to-transaction matches | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must carry the full burden of behavioral disclosure. It indicates a mutation (marking as reconciled) but does not disclose side effects (e.g., irreversibility, rate limits, authorization requirements beyond the API key). More details are needed for safe agent invocation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at two sentences with no wasted words. However, it sacrifices completeness for brevity. A score of 5 would require front-loading critical context without making it verbose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lacks essential context such as the result of the operation (e.g., confirmation message, error handling) and potential side effects. For a tool with no output schema, explaining return values or outcomes is crucial. The current description is insufficient for safe usage.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds no additional detail beyond what the schema already provides; it does not explain the structure of 'matches' or constraints on 'api_key_hash'. It is adequate but not enriching.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool applies confirmed reconciliation matches and marks them as reconciled. It uses specific verbs ('Apply', 'Marks') and identifies the resource ('Xero'). However, it does not explicitly differentiate from sibling tools like 'reconcile_transactions', although the name implies a distinct step.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or consequences. The context of 'confirmed' matches is implied, but there is no explicit 'when to use' or 'when not to use'.
draft_invoice (A)
Create a DRAFT invoice in Xero with VAT preview. Returns line totals and tax amounts for review before sending.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | Notes for internal use | |
| currency | No | Currency code (defaults to EUR) | EUR |
| due_date | Yes | Due date (YYYY-MM-DD) | |
| reference | No | Invoice reference number | |
| line_items | Yes | Invoice line items | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
| contact_name | Yes | Invoice recipient name | |
| contact_email | Yes | Invoice recipient email address | |
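A hypothetical call payload covering the five required parameters from the table above; the shape of each line item is an assumption, since the listing does not show it:

```python
# Hypothetical draft_invoice payload; line-item field names are assumed.
draft_invoice_args = {
    "api_key_hash": "sha256:…",  # placeholder customer key hash
    "contact_name": "Acme Unipessoal Lda",
    "contact_email": "billing@acme.example",
    "due_date": "2025-07-31",
    "line_items": [
        {"description": "Consulting", "quantity": 10, "unit_amount": 95.0}
    ],
    # optional: "currency" (defaults to EUR), "reference", "notes"
}
```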
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the draft nature, the return of line totals and tax amounts, and the review-before-sending workflow. It does not contradict any annotations. However, it could mention the need for the api_key_hash for authentication.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two front-loaded sentences with no extraneous words. Every sentence adds value and is efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters, 5 required, no output schema, and no annotations, the description is fairly complete. It explains the purpose and output but could elaborate on how VAT is determined (reference to category/tax_type) and that the invoice is stored as a draft in Xero.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds 'VAT preview' and 'line totals and tax amounts' but does not provide additional meaning beyond the schema for individual parameters.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (Create), the resource (DRAFT invoice), and adds specificity with 'VAT preview' and 'Returns line totals and tax amounts for review before sending'. This distinguishes it from siblings like 'send_invoice'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives (e.g., 'send_invoice'). The description implies it's for draft creation before sending, but doesn't state exclusions or alternative scenarios.
generate_report (A)
Generate a financial report: P&L, Balance Sheet, Cash Flow (indirect method from balance sheet deltas), or VAT Summary with Portuguese SAF-T filing hints.
| Name | Required | Description | Default |
|---|---|---|---|
| to_date | No | End date (YYYY-MM-DD, defaults to today) | |
| from_date | No | Start date (YYYY-MM-DD, defaults to 1st of current month) | |
| report_type | Yes | Report type to generate | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
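The date defaults documented in the table can be sketched as follows; the `REPORT_TYPES` values are guesses, since the actual `report_type` enum is not shown in the listing:

```python
import datetime

# Assumed enum values; the listing does not enumerate report_type.
REPORT_TYPES = {"profit_and_loss", "balance_sheet", "cash_flow", "vat_summary"}

def default_period(today=None):
    """from_date defaults to the 1st of the current month, to_date to today."""
    today = today or datetime.date.today()
    return today.replace(day=1), today
```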
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It mentions the cash flow method ('indirect method from balance sheet deltas') and VAT filing hints, but fails to clarify rate limits, authentication requirements (though api_key_hash is in schema), or whether the operation is read-only or mutative. Partial transparency.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that lists all report types and key details. No redundant words; every component serves a purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description lacks details on return format, side effects, or prerequisites beyond the parameters. It provides specific insights into cash flow and VAT but omits broader usage context, such as error handling or data dependencies.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining the cash flow method ('indirect method from balance sheet deltas') and VAT filing hints, which go beyond the schema's enum labels. Dates are not elaborated, but overall adds meaningful context.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Generate' and the resource 'financial report', listing specific types (P&L, Balance Sheet, Cash Flow, VAT Summary) with additional details about the indirect method and SAF-T hints, making it highly specific and distinct from sibling tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies when to use this tool (for generating financial reports), it provides no explicit guidance on when not to use it or alternatives. Given the sibling tools (e.g., calculate_vat, confirm_reconciliation), some context exists but no direct usage recommendations.
health (A)
Health check. Returns server status and optional echo.
| Name | Required | Description | Default |
|---|---|---|---|
| echo | No | Optional string to echo back | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states it returns status and echo, but does not disclose any behavioral traits like side effects, permissions needed, or rate limits. For a read-only diagnostic tool, this is minimal disclosure.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short, front-loaded sentences with no unnecessary words. It efficiently communicates the core functionality.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one optional parameter and no output schema, the description covers the essentials. It could be improved by noting it is a safe, read-only operation, but overall it is adequate.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description simply repeats 'optional echo' from the schema. It adds no additional meaning beyond what the parameter description already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is a 'Health check' that 'Returns server status and optional echo', which is a specific verb and resource. It distinguishes well from sibling tools which are business operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for checking server health, but does not explicitly state when to use or not use it versus alternatives. No mention of prerequisites or when to avoid.
reconcile_transactions (A)
Match bank statement lines against unreconciled Xero transactions. Returns confidence-scored matches (>=0.8 matched, 0.5-0.8 suggested, <0.5 unmatched).
| Name | Required | Description | Default |
|---|---|---|---|
| to_date | No | End date filter (YYYY-MM-DD) | |
| from_date | No | Start date filter (YYYY-MM-DD) | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
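The confidence thresholds in the tool description map results into three buckets; a one-function sketch of that classification:

```python
def classify_match(score):
    """Bucket a confidence score using the thresholds the tool documents:
    >=0.8 matched, 0.5-0.8 suggested, <0.5 unmatched."""
    if score >= 0.8:
        return "matched"
    if score >= 0.5:
        return "suggested"
    return "unmatched"
```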
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the matching process and output categories (confidence-scored matches with thresholds), which adds behavioral insight beyond the basic purpose. However, it does not mention side effects, authentication needs, or rate limits, but these are partially covered by the required api_key_hash parameter.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, each providing essential information: purpose and return value. No unnecessary words or repetition.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description explains the return format (confidence thresholds) adequately. It covers the core functionality, though it could mention prerequisites like the need for an api_key_hash, which is already in the schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all three parameters, so the description does not need to add parameter details. It provides no additional meaning beyond the schema, meeting the baseline of 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool matches bank statement lines against unreconciled Xero transactions, specifying a specific verb and resource. However, it does not distinguish itself from sibling tools like 'confirm_reconciliation', which could be used after this step.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'confirm_reconciliation' or when not to use it. The description implies usage for reconciliation but lacks explicit context or exclusions.
send_invoice (A)
Authorise a DRAFT invoice and email it to the contact via Xero. Two-step: sets status to AUTHORISED then triggers email.
| Name | Required | Description | Default |
|---|---|---|---|
| invoice_id | Yes | Xero InvoiceID to authorise and send | |
| api_key_hash | Yes | Customer API key hash identifying Xero token | |
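The two-step behaviour the description documents can be sketched as below; `update_status` and `email_invoice` are hypothetical callables standing in for the underlying Xero API calls:

```python
def send_invoice(invoice_id, update_status, email_invoice):
    # Step 1: move the DRAFT invoice to AUTHORISED.
    update_status(invoice_id, "AUTHORISED")
    # Step 2: trigger the email to the invoice contact.
    return email_invoice(invoice_id)
```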
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the two-step behaviour (authorise, then email) and makes clear that the tool mutates state. With no annotations, the description carries the full burden; it covers the key side effects but could note that sending the email is irreversible.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loading action and steps. No unnecessary words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately explains purpose and two-step process for a simple tool with two params and no output schema. Lacks error handling or status checks but sufficient for agent understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both parameters. Description adds no extra meaning beyond the schema, so baseline score applies.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it authorises a DRAFT invoice and emails it via Xero. Distinguishes from siblings like 'draft_invoice' by specifying the two-step process.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies use case for draft invoices needing authorisation and emailing, but lacks explicit when-not-to-use or alternative tools. Sibling 'draft_invoice' suggests context but not stated.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.