BoomTax - 1099, W-2, ACA Filing
Server Details
Connect AI agents to BoomTax for IRS information return filing. Query filings (1099, W-2, 1095, etc.), check e-file status and errors, look up payers, and get filing summaries across tax years.

**Tools:**

- Search and filter filings by tax year, form type, and status
- Get filing details with payer info and e-file status
- View e-file errors with IRS error codes and messages
- Look up payers/issuers with filing counts
- List all supported filing types and e-file availability
- **Status:** Unhealthy
- **Last Tested:**
- **Transport:** Streamable HTTP
- **URL:**
Tool Definition Quality
Average score: 3.4/5, with all 10 of 10 tools scored.
Tools are clearly distinguished by resource type (filing, form, payer, efile) and operation scope. Minor overlap exists between get_filing_details (which includes latest e-file status) and get_efile_status (full timeline), but descriptions clarify the distinction between summary and detailed audit trail.
Exemplary consistency with strict verb_noun snake_case convention. List operations use 'list_' prefix while retrieval operations use 'get_' prefix, making the API surface predictable and scannable.
Ten tools is an ideal scope for this domain: sufficient to cover listings, entity details, status monitoring, and error retrieval without bloat. Each tool serves a distinct purpose in the information retrieval workflow.
The toolset is strictly read-only, lacking creation, submission, update, or deletion capabilities for filings, forms, or payers. While complete for status monitoring and reporting, it represents a significant gap for end-to-end 'Filing' workflows implied by the server name.
Available Tools
10 tools

get_efile_errors (Grade: B)
Get e-file errors for a filing including header-level and per-form errors with error codes and messages.
| Name | Required | Description | Default |
|---|---|---|---|
| filingId | Yes | Filing ID (GUID) | |
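A call to this tool travels as an MCP `tools/call` request with a single GUID argument. The helper and the GUID below are placeholders for illustration; the envelope shape follows the MCP convention.

```python
import json

def call_tool(name: str, arguments: dict, req_id: int = 1) -> dict:
    """Build an MCP tools/call request for one tool invocation."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The filing ID is a GUID, typically obtained from a prior list_filings call.
# This value is a placeholder, not a real filing.
req = call_tool("get_efile_errors",
                {"filingId": "00000000-0000-0000-0000-000000000000"})
print(json.dumps(req, indent=2))
```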
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about the response structure (distinguishing header-level from per-form errors and noting the presence of codes/messages), but it does not clarify operational aspects like idempotency, read-only safety, or error handling when invalid filing IDs are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the action ('Get e-file errors') and immediately follows with the specific content details. Every word contributes to understanding the tool's output, with no redundant or wasted phrases.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single required parameter) and lack of output schema, the description adequately compensates by detailing the expected return content (header vs. per-form errors, codes, messages). It could be improved by noting the parameter source or linking to 'list_filings,' but it is sufficient for a simple retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with the 'filingId' parameter already described as 'Filing ID (GUID).' The description does not add additional semantic context about the parameter (such as where to obtain the ID or format constraints), so it meets the baseline score for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'e-file errors for a filing' and specifies the content includes 'header-level and per-form errors with error codes and messages.' However, it does not explicitly differentiate from the sibling tool 'get_efile_status,' which likely returns higher-level status information without detailed error breakdowns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_efile_status' or 'get_filing_details.' It does not indicate prerequisites (e.g., whether to check status first) or conditions where this tool would return empty results.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_efile_status (Grade: B)
Get the e-file status for a filing including the full request/response timeline.
| Name | Required | Description | Default |
|---|---|---|---|
| filingId | Yes | Filing ID (GUID) | |
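The summary-versus-audit-trail split between get_efile_status and get_efile_errors suggests a natural two-step pattern: check the high-level status first, then drill into errors only when the filing was not accepted. Since no output schema is published, the top-level `status` field below is an assumption.

```python
def needs_error_details(status_response: dict) -> bool:
    """Decide whether to follow up with get_efile_errors.

    Assumes the status payload carries a top-level 'status' field;
    the real response shape is not documented here.
    """
    return status_response.get("status") in {"Rejected", "Failed"}

# Only rejected or failed filings warrant the detailed error lookup.
assert needs_error_details({"status": "Rejected"})
assert not needs_error_details({"status": "Accepted"})
```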
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable context about the return data structure ('full request/response timeline'), indicating temporal, detailed logging content. However, it lacks disclosure on safety (read-only vs. side effects), rate limits, or error conditions that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. It front-loads the core action ('Get the e-file status') and appends the distinctive scope ('full request/response timeline'), making every clause earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter lookup tool without an output schema, the description adequately conveys what the agent will receive (status + timeline). It misses output format details, but given the low complexity and clear resource identification, the description is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (filingId is documented as 'Filing ID (GUID)'), establishing a baseline of 3. The description does not add supplementary context about the parameter (e.g., where to obtain the ID or format constraints beyond GUID), so it remains at the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('e-file status') and clarifies scope with 'including the full request/response timeline.' This distinguishes it from siblings like get_efile_errors (likely error-only) and get_filing_summary (summary view), though it does not explicitly name those alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings such as get_filing_details or get_efile_errors. It does not specify prerequisites (e.g., having a filingId from list_filings) or when this level of detail (full timeline) is necessary versus a summary.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_filing_details (Grade: B)
Get detailed information about a specific filing including payer summary and latest e-file status.
| Name | Required | Description | Default |
|---|---|---|---|
| filingId | Yes | Filing ID (GUID) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses return content (payer summary, e-file status) but omits safety indicators (read-only), error behaviors (invalid filingId), or cost/rate-limit characteristics. Adequate but minimal behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 12 words. Front-loaded with action verb. Every clause earns its place by specifying exact data returned (payer summary, e-file status). No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter retrieval tool with no output schema, mentioning the specific data categories returned is minimally sufficient. However, lacking annotations, it should ideally indicate safety (read-only) or error handling to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with filingId described as 'Filing ID (GUID)'. The description implies the parameter identifies the specific filing but adds no syntax, format, or semantic details beyond the schema. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('filing'), with specific scope ('detailed information', 'payer summary', 'latest e-file status'). Implicitly distinguishes from sibling get_filing_summary by emphasizing 'detailed' vs 'summary', and from get_efile_status by including additional payer data, though explicit differentiation would strengthen this.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this tool versus siblings like get_filing_summary (brief overview) or get_efile_status (status only). No prerequisites or conditions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_filing_summary (Grade: B)
Get aggregate filing counts grouped by status and form type for a tax year.
| Name | Required | Description | Default |
|---|---|---|---|
| taxYear | No | Tax year to summarize (e.g. 2025). If omitted, summarizes all years. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. While 'Get' implies read-only access, the description fails to disclose error handling (e.g., invalid tax years), pagination behavior, or the structure of the returned aggregation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no redundant words. It front-loads the action verb ('Get') and immediately specifies the resource and grouping dimensions, earning full marks for economy of language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter aggregation tool, the description is minimally adequate. However, given the absence of an output schema, it should ideally hint at the return structure (e.g., that it returns counts by category) rather than just the grouping dimensions. It leaves gaps regarding what 'status' and 'form type' values are possible.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, documenting the taxYear parameter and its default behavior. The description references 'tax year' contextually but adds no syntax, format details, or semantic meaning beyond what the schema already provides, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'aggregate filing counts' grouped by 'status and form type' for a specific scope (tax year). The term 'aggregate' effectively distinguishes it from sibling tools like get_filing_details or list_filings, though it doesn't explicitly name those alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this aggregation tool versus retrieving individual filings via list_filings or get_filing_details. There are no prerequisites, exclusion criteria, or explicit comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_form (Grade: B)
Get a specific form's metadata including type, status, and dates. TINs are always masked.
| Name | Required | Description | Default |
|---|---|---|---|
| formId | Yes | Form ID (GUID) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable behavioral context about data masking ('TINs are always masked'). However, lacks disclosure on error behavior (e.g., missing formId), authentication requirements, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. First sentence states purpose and return value; second provides security context. Well front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by listing return fields (type, status, dates) and security behavior (TIN masking). However, as a retrieval tool with no annotations, it should specify error behavior for invalid form IDs to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage ('Form ID (GUID)'). Description mentions no parameters, relying entirely on schema documentation. Baseline 3 appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get') and resource ('form's metadata'). Specifies returned fields (type, status, dates) which distinguishes it from sibling 'get_filing_details' or 'get_filing_summary'. Implicitly differentiates from list operations via 'specific form'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to use this versus siblings like 'get_filing_details' or 'list_filing_forms'. No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_payer (Grade: A)
Get payer (issuer) details for a specific filing including masked TIN and contact information.
| Name | Required | Description | Default |
|---|---|---|---|
| filingId | Yes | Filing ID (GUID) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses 'masked TIN' (important privacy behavior) and implies read-only access via 'Get,' but fails to declare the safety profile explicitly, error handling behavior, or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is front-loaded and information-dense. Every element earns its place: action ('Get'), resource ('payer/issuer'), scope ('for a specific filing'), and specific return values ('masked TIN and contact information'). Zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter tool with 100% schema coverage, the description is reasonably complete. It compensates for the missing output schema by previewing return content (masked TIN, contact info). Minor gap: no mention of error cases or 'not found' behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with 'filingId' already documented as 'Filing ID (GUID)'. The description reinforces that this ID represents a 'specific filing' but adds no additional syntax, format constraints, or examples beyond the schema's existing documentation. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') with clear resource ('payer/issuer details') and distinguishes from siblings like 'list_payers' by specifying 'for a specific filing'. It also differentiates from 'get_filing_details' by highlighting specific data types returned (masked TIN, contact info).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for a specific filing,' suggesting a filingId prerequisite, but lacks explicit guidance on when to use this versus 'list_payers' or 'get_filing_details'. No alternatives or exclusion criteria are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_filing_forms (Grade: B)
List all forms belonging to a specific filing with their status and metadata.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| filingId | Yes | Filing ID (GUID) | |
| pageSize | No | Page size (default 100, max 200) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds value by specifying that returned forms include 'status and metadata', hinting at the payload structure. However, it lacks disclosure of read-only safety, pagination behavior (despite page/pageSize parameters), or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence that is front-loaded with the action verb. Every clause earns its place: 'List all forms' establishes the operation, 'belonging to a specific filing' scopes the query, and 'with their status and metadata' previews the return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple parameter structure (3 flat parameters) and lack of output schema, the description adequately covers the core purpose and hints at return content via 'status and metadata'. However, it misses opportunity to mention pagination behavior implied by the schema parameters or read-only safety given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, establishing a baseline of 3. The description reinforces the 'filingId' parameter by mentioning 'specific filing' but adds no further semantic detail about pagination parameters or parameter relationships beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'List' with clear resource 'forms belonging to a specific filing' and mentions returned data 'status and metadata'. It implicitly distinguishes from sibling 'list_filings' (which lists filings, not forms within them) and 'get_form' (which gets a single form), though it doesn't explicitly name these alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus siblings like 'get_form' (for single form retrieval) or 'list_filings' (to find the filingId initially). It does not mention prerequisites such as obtaining the filingId from another tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_filings (Grade: A)
List tax filings with optional filters. Returns paginated results with filing name, status, form type, and dates.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| status | No | Filter by status (e.g. 'Accepted', 'Rejected') | |
| taxYear | No | Filter by tax year (e.g. 2025) | |
| formType | No | Filter by form type name (e.g. '1099-NEC', '1099-MISC', 'W-2') | |
| pageSize | No | Page size (default 100, max 200) | |
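Since every filter is optional and pageSize is capped at 200, a client can assemble the arguments object by dropping unused keys and clamping the page size. The helper name is hypothetical; the parameter names and limits come from the table above.

```python
def list_filings_args(tax_year=None, status=None, form_type=None,
                      page=1, page_size=100) -> dict:
    """Assemble list_filings arguments, dropping unused filters.

    page_size is clamped to the documented maximum of 200.
    """
    args = {"page": page, "pageSize": min(page_size, 200)}
    if tax_year is not None:
        args["taxYear"] = tax_year
    if status is not None:
        args["status"] = status
    if form_type is not None:
        args["formType"] = form_type
    return args

# All rejected 1099-NEC filings for 2025, using the largest allowed pages.
args = list_filings_args(tax_year=2025, status="Rejected",
                         form_type="1099-NEC", page_size=500)
print(args)
```

To walk the full result set, an agent would increment `page` until a response comes back with fewer than `pageSize` items (the exact pagination signal is not documented).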
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It successfully discloses pagination behavior and describes the return fields (filing name, status, form type, dates) compensating for the missing output schema. However, it lacks information on error handling, authentication requirements, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with no wasted words. It is front-loaded with the action (List tax filings) and immediately follows with behavioral details (filters, pagination, return values). Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description appropriately compensates by describing the returned data structure and pagination. For a tool with 5 optional parameters and no annotations, this provides sufficient context for invocation, though explicit mention of safety (read-only) would strengthen it further.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline score is 3. The description acknowledges the 'optional filters' capability, which aligns with the status, taxYear, and formType parameters, but does not add additional semantic context, examples, or syntax guidance beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (List) and resource (tax filings), and mentions the optional filters capability. However, it does not explicitly differentiate from sibling retrieval tools like 'get_filing_details' to clarify when to use the list operation versus fetching a specific record.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'with optional filters' provides implicit guidance on when to use the tool (when filtering is needed), but there are no explicit 'when-not-to-use' statements or mentions of alternatives like 'get_filing_details' for single-record retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_filing_types (Grade: A)
List all filing types supported by BoomTax with their tax year and e-file availability.
| Name | Required | Description | Default |
|---|---|---|---|
| *No parameters* | | | |
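A parameterless tool still receives an explicit empty `arguments` object in the MCP `tools/call` envelope; a minimal sketch:

```python
# No arguments are needed, so the arguments object is simply empty.
req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "list_filing_types", "arguments": {}},
}
print(req)
```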
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It implies a read-only operation via 'List' and mentions comprehensiveness ('all'), but does not explicitly confirm safety, idempotency, caching behavior, or pagination. It compensates slightly by specifying returned data fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of 12 words. It is front-loaded with the action verb 'List' and every phrase contributes value—defining scope ('all'), provider ('BoomTax'), and return data content without filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless list operation without an output schema, the description adequately compensates by describing what data is returned (tax year, e-file availability). However, it could be strengthened by clarifying the relationship between 'filing types' and the sibling resources (forms, filings).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. The description appropriately makes no mention of parameters since none exist, requiring no additional semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List'), resource ('filing types supported by BoomTax'), and key returned attributes ('tax year and e-file availability'). However, it does not explicitly differentiate from similar sibling tools like 'list_filing_forms' or 'list_filings', which could cause selection ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. Given the sibling tools with similar names (list_filing_forms, list_filings), explicit context about when to query filing types versus forms or actual filings would be necessary for correct agent selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_payers (Grade: B)
List payers (issuers) across your filings. TINs are not shown in the list view for security.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based, default 1) | |
| search | No | Search by payer name | |
| pageSize | No | Page size (default 100, max 200) | |
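Over the Streamable HTTP transport listed above, a call to this tool travels as a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the payload, assuming the parameter names documented in the table (`page`, `search`, `pageSize`); the argument values are illustrative:

```python
import json

# JSON-RPC 2.0 request body for an MCP tools/call invocation of
# list_payers. Parameter names follow the table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_payers",
        "arguments": {
            "page": 1,          # 1-based; defaults to 1 if omitted
            "search": "Acme",   # partial vs. exact matching is undocumented
            "pageSize": 100,    # default 100, max 200 per the schema
        },
    },
}

body = json.dumps(request)  # POSTed to the server's MCP endpoint
```

All three arguments are optional, so an empty `"arguments": {}` object would also be a valid first call.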
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full disclosure burden. The note that 'TINs are not shown... for security' adds valuable behavioral context explaining the data omission. However, the description lacks other operational details such as pagination behavior, rate limits, and authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes purpose and scope immediately; second sentence provides critical security context. Every word earns its place with no filler or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple list operation with documented parameters, but gaps remain given the absence of an output schema and annotations. The security note about TINs partially compensates, though the description omits which fields ARE returned and how pagination behaves.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. Description does not mention parameters explicitly, but doesn't need to compensate given complete schema documentation. No additional semantic context provided for 'search' (e.g., partial vs exact match) beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States the specific action 'List' with resource 'payers (issuers)' and scope 'across your filings'. The plural form and 'list view' phrasing distinguish it from the sibling 'get_payer' (singular), though it doesn't explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use guidance or comparison to alternatives. While 'list view' implies contrast with detail retrieval, it fails to mention sibling 'get_payer' for individual record access or when to prefer listing versus direct retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
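Because the description leaves pagination behavior undocumented, a client has to infer the stopping condition. One defensive sketch, assuming a result set is exhausted when a page returns fewer than `pageSize` records (a guess, since the actual contract is unstated); `call_tool` is a stand-in for a real MCP client invocation:

```python
def list_all_payers(call_tool, page_size=100):
    """Collect every payer by walking pages until a short page appears.

    call_tool(name, arguments) is a hypothetical stand-in for an MCP
    client call that returns the page's records as a list. The
    fewer-than-pageSize stopping rule is an assumption: the tool
    description does not document pagination behavior.
    """
    payers, page = [], 1
    while True:
        batch = call_tool("list_payers", {"page": page, "pageSize": page_size})
        payers.extend(batch)
        if len(batch) < page_size:  # short page: assume no more results
            return payers
        page += 1
```

Documenting the real termination signal (a total count, a next-page token, or the short-page convention above) in the description would remove this guesswork.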
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
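Before publishing, a local sanity check that the file matches the structure above can save a round trip. A minimal sketch that validates only the two documented fields; the authoritative verification is performed server-side by Glama:

```python
import json

def validate_glama_json(text, account_email):
    """Check a /.well-known/glama.json payload against the documented shape.

    Verifies only the $schema URL and that account_email appears among
    the maintainers; Glama may enforce additional rules not shown here.
    """
    doc = json.loads(text)
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        return False
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)
```

Remember that the maintainer email must match the one on your Glama account, or verification will fail even if the file itself is well-formed.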
Control your server's listing on Glama, including description and metadata
Receive usage reports showing how your server is being used
Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.