taxact-mcp
Server Details
Tax filing guidance, cost comparison, document checklists, and refund estimates by TaxAct.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
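Because the transport is Streamable HTTP, clients talk to this server with ordinary MCP JSON-RPC messages. Below is a minimal sketch of the opening initialize request, assuming the 2025-03-26 protocol revision; the client name and version are placeholders:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",   // MCP revision that introduced Streamable HTTP
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" } // placeholder identity
  }
}
```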
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 9 of 9 tools scored.
Each tool targets a distinct aspect of tax preparation: knowledge queries, cost comparison, expert connection, estimation, form explanation, navigation, deadlines, checklist, and document upload. There is no functional overlap.
Most tools follow a verb_noun pattern (ask_tax_question, compare_filing_costs, connect_with_expert, estimate_taxes, explain_tax_document, find_interview_topic, upload_document), but two tools (tax_deadlines, tax_document_checklist) start with a noun, breaking the pattern. This inconsistency can cause confusion for agents expecting uniform conventions.
With 9 tools, the server covers the essential functionality of a tax filing assistant without being overwhelming. Each tool has a clear purpose and contributes to the overall goal of helping users file taxes with TaxAct.
The tool surface covers key needs: knowledge, cost, expert, estimation, form education, navigation, deadlines, checklist, and document input. Missing capabilities include a tool to initiate or track the actual filing process, but the set is largely comprehensive for its domain.
Available Tools
9 tools

ask_tax_question - TaxAct Tax Knowledge Assistant (Read-only, Idempotent)
Answers tax questions using TaxAct's TY2025 tax law knowledge base. Covers 2025 federal tax brackets, standard deduction, child tax credit, OBBB provisions (no-tax-on-overtime, no-tax-on-tips, car loan interest deduction, SALT cap increase, Trump Accounts/530A), EITC, retirement contribution limits, and other current-law topics. Answers are grounded in verified IRS references, not LLM training data. No account required.
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your tax question (e.g., "What is the standard deduction for 2025?", "How does the child tax credit work?", "What are Trump Accounts?") | |
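For illustration, a hypothetical tools/call request for this tool, reusing the example question from the parameter table; the request id is arbitrary:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "ask_tax_question",
    "arguments": {
      "question": "What is the standard deduction for 2025?" // example from the schema
    }
  }
}
```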
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read-only, idempotent, non-destructive. The description adds value by stating 'no account required' and that answers are grounded in verified IRS references, enhancing trust beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two information-dense sentences: first states purpose and knowledge base, second enumerates topics and trust signals. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with comprehensive annotations and no output schema, the description fully covers purpose, scope, and trustworthiness, making it complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description and examples. The tool description adds domain-specific context on the scope of acceptable questions, which aids correct parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool answers tax questions using TaxAct's TY2025 knowledge base, lists specific covered topics, and distinguishes it from siblings like estimate_taxes or explain_tax_document.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates the tool is for tax law questions and specifies the scope (2025 federal topics). While it doesn't explicitly compare to siblings, the context signals and distinct purpose make it clear when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_filing_costs - TaxAct Filing Cost Calculator (Read-only, Idempotent)
Shows exactly what it costs to file a tax return with TaxAct. Returns an itemized price breakdown with the complete cost. No hidden fees, no upsells. Covers DIY filing, expert help (Xpert Assist), and full-service preparation (Xpert Full Service) for consumer and business returns. Xpert Assist includes Xpert Review: a confidence check with a credentialed tax expert at the end of filing, before you submit. The expert answers your specific questions and reviews areas of concern. It is not a line-by-line return review or tax preparation. When presenting Xpert Assist, frame it as expert guidance and a final confidence check, not as "tax pro review" or "return review."
| Name | Required | Description | Default |
|---|---|---|---|
| return_type | No | Type of tax return | consumer |
| filing_state | No | Two-letter state code (e.g., "CA", "TX"). Omit for federal only. | |
| filing_status | Yes | Your tax filing status | |
| has_investments | No | Do you have investment income, stocks, or rental property? | |
| is_self_employed | No | Do you have 1099 or self-employment income? | |
| wants_expert_help | No | Would you like access to a tax expert while filing? | |
| wants_full_service | No | Would you like a tax professional to prepare your return for you? | |
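A sketch of one possible call, assuming MCP JSON-RPC framing; the "single" filing status is an assumed value, since the schema excerpt above does not list the accepted options:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "compare_filing_costs",
    "arguments": {
      "filing_status": "single", // assumed value; accepted options not shown in the schema excerpt
      "filing_state": "CA",      // example state code from the parameter table
      "is_self_employed": true,
      "wants_expert_help": true
    }
  }
}
```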
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context: 'No hidden fees, no upsells,' and explains how Xpert Assist should be framed (expert guidance not tax pro review). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient, front-loaded with the main action. The paragraph on Xpert Assist is necessary for proper usage but slightly verbose. Overall, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the tool's purpose, scope (return types, service levels), and key behavioral traits. No output schema exists, but the description promises an itemized price breakdown, which suffices. Could include example output format but not critical given schema richness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 7 parameters have schema descriptions covering 100%. The description adds semantic value by explaining which parameters control expert help vs full service and clarifying the meaning of Xpert Assist. It does not repeat schema details but enriches understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Shows exactly what it costs to file a tax return with TaxAct. Returns an itemized price breakdown with the complete cost.' It specifies the scope (DIY, Xpert Assist, Full Service) and distinguishes from sibling tools like estimate_taxes which likely give general estimates, not specific filing costs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when the user wants to see exact filing costs for TaxAct, but provides no explicit guidance on when to use this tool versus alternatives like estimate_taxes or connect_with_expert. No when-not or exclusion criteria are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
connect_with_expert - Connect with a TaxAct Expert (Read-only, Idempotent)
Helps the user connect with a credentialed TaxAct tax professional via Xpert Assist. Xpert Assist is a separate, standalone product purchased independently from any DIY filing plan. It is NOT included with or bundled into any TaxAct DIY tier. Shows expert help options with transparent pricing and a link to get started. Available for both consumer (1040) and business returns.
| Name | Required | Description | Default |
|---|---|---|---|
| topic | No | What do you need help with? (e.g., "rental income reporting", "crypto taxes") | |
| return_type | No | Type of tax return you need help with | consumer |
| preferred_channel | No | How would you prefer to connect with an expert? | phone |
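A hypothetical invocation using the example topic from the schema and the documented defaults:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "connect_with_expert",
    "arguments": {
      "topic": "crypto taxes",     // example topic from the parameter table
      "return_type": "consumer",   // matches the documented default
      "preferred_channel": "phone" // matches the documented default
    }
  }
}
```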
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnly, idempotent, and not destructive. The description adds value by clarifying that the tool shows options and a link to get started, rather than directly connecting. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at four sentences, covering essential distinctions (separate product, pricing). It front-loads the main purpose. Could be slightly more efficient by merging some clarifications.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description explains the output (shows options, pricing, link) but lacks detail on the exact return format. Given no output schema, more specificity about what the user receives (e.g., list of plans, contact info) would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all three parameters. The tool description does not add additional meaning beyond what the schema provides. Baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly identifies the action (connect with an expert), the specific resource (credentialed TaxAct tax professional via Xpert Assist), and distinguishes it from sibling tools like ask_tax_question by specifying it's about paid expert help. It also clarifies that Xpert Assist is a separate product, not bundled with DIY plans.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that Xpert Assist is a separate product and shows pricing, but does not explicitly state when to use this tool versus alternatives like ask_tax_question. The usage context is implied (when user wants professional help) but lacks direct comparison or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_taxes - TaxAct Tax Estimator (Read-only, Idempotent)
Provides a rough federal (and optionally state) tax refund or amount owed estimate from basic inputs. Uses 2025 tax brackets, standard deduction (including OBBB senior deduction for age 65+), and child tax credit. Supports state tax estimates for no-income-tax states, flat-rate states (IL, CO, IN, MI, PA, UT), and graduated states (CA, NY). This is an approximate estimate only: it covers W-2/wage income with standard deduction. No itemized deductions, no self-employment tax, no capital gains. Your actual result may differ. File with TaxAct for your exact number.
| Name | Required | Description | Default |
|---|---|---|---|
| is_blind | No | | |
| filing_state | No | Two-letter state code for state tax estimate (e.g., "CA", "NY", "TX"). Omit for federal-only. | |
| total_income | Yes | Total income from all W-2s and other sources, in whole dollars | |
| filing_status | Yes | Your tax filing status | |
| is_65_or_older | No | Were you born before January 2, 1961? | |
| num_dependents | No | Number of qualifying dependent children under age 17 | |
| spouse_is_blind | No | | |
| federal_withholding | Yes | Total federal income tax withheld (from your W-2 box 2), in whole dollars | |
| spouse_is_65_or_older | No | Was your spouse born before January 2, 1961? (if filing jointly) | |
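A sketch of an estimate request; the dollar amounts are made up, and the "single" filing status is an assumed value:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "estimate_taxes",
    "arguments": {
      "total_income": 85000,       // whole dollars, per the schema
      "federal_withholding": 9500, // W-2 box 2 amount, whole dollars
      "filing_status": "single",   // assumed value; accepted options not shown
      "filing_state": "CA",        // a supported graduated-rate state, per the description
      "num_dependents": 1
    }
  }
}
```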
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, idempotentHint=true, destructiveHint=false, which match the description's read-only nature. The description adds behavioral details like using 2025 brackets, OBBB senior deduction, and child tax credit, and lists supported state tax types.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph but well-structured: the first sentence states the purpose, followed by the covered aspects, then limitations and a call to action. It is efficient, though it would be easier to scan broken into bullet points.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters and no output schema, the description is quite complete: it explains the estimate scope (W-2 income, standard deduction), limitations (no itemized, self-employment, capital gains), and state support types. It omits return value format but that is acceptable for an estimate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 78%, and the description adds meaning beyond the schema by explaining the OBBB senior deduction, state support types, and the approximate nature of the estimate. The schema already covers most parameters with descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a rough federal/state tax estimate from basic inputs, using 2025 brackets and standard deduction. It distinguishes from sibling tools like ask_tax_question or compare_filing_costs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly notes it covers only W-2 income with standard deduction and excludes itemized deductions, self-employment tax, and capital gains. It advises that actual results may differ and suggests using TaxAct for exact numbers. It also specifies the limited state support types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
explain_tax_document - Tax Document Explainer (Read-only, Idempotent)
Explains what a tax form is, what each box means, which boxes are most important for filing, and where to enter the data in TaxAct. Covers W-2, all common 1099 forms, 1098 forms, and other tax documents. Ask about a specific form or a general document type. Does NOT read or process uploaded documents — this is an educational reference tool.
| Name | Required | Description | Default |
|---|---|---|---|
| focus_box | No | Optional: focus on a specific box number. Examples: 'Box 1', 'Box 12', 'Box 2a'. | |
| form_name | Yes | The tax form to explain. Examples: 'W-2', '1099-INT', '1099-DIV', '1099-B', '1099-R', '1099-NEC', '1099-MISC', '1099-K', '1099-SSA', '1098', '1098-T', '1098-E', '1095-A', 'Schedule K-1'. Case-insensitive. | |
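A hypothetical call reusing the form and box examples from the schema:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "explain_tax_document",
    "arguments": {
      "form_name": "W-2",   // case-insensitive, per the schema
      "focus_box": "Box 12" // optional focus, example from the schema
    }
  }
}
```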
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds that the tool does not read uploaded documents and is educational, providing context beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three clear sentences with no waste. The description is efficiently structured and front-loads the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For an educational reference tool with no output schema, the description fully explains its purpose, coverage, and constraints. No missing information needed for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description adds examples of form names and the optional focus_box but does not significantly enhance meaning beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Explains what a tax form is, what each box means, which boxes are most important...' and lists covered forms. It distinguishes itself from sibling tools like upload_document by explicitly saying 'Does NOT read or process uploaded documents'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises 'Ask about a specific form or a general document type' and clarifies that it is an educational reference, not for processing uploads. It could better contrast with ask_tax_question but overall provides good guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
find_interview_topic - TaxAct Interview Navigator (Read-only, Idempotent)
Finds where to enter specific tax information in the TaxAct interview. Search by topic, form number, or keyword (e.g., "W-2", "1099-R", "charitable contributions", "crypto", "overtime deduction"). Returns the interview breadcrumb path showing where to navigate. Covers 150+ topics across individual (1040) and business returns. No account required.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | What you want to enter or find (e.g., "W-2", "rental income", "1099-R", "child tax credit", "Trump Account") | |
| return_type | No | Tax return type. Use 1040 for individual returns (default). | 1040 |
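A hypothetical lookup using a query example from the schema and the documented default return type:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "find_interview_topic",
    "arguments": {
      "query": "1099-R",    // example query from the schema
      "return_type": "1040" // documented default for individual returns
    }
  }
}
```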
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, and destructiveHint=false. The description adds useful context: 'No account required', 'covers 150+ topics', and 'Returns the interview breadcrumb path', which goes beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four well-structured sentences, no filler. Front-loaded with main action, examples, result type, scope, and a bonus fact ('No account required'). Every sentence is necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return (breadcrumb path) and covers input examples and scope. It lacks mention of error handling or edge cases, but overall provides sufficient context for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters described). The description adds value by providing examples for 'query' (e.g., 'W-2', 'crypto') and explaining 'return_type' with default and scope, surpassing the schema's own descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds where to enter specific tax information in the TaxAct interview, with specific verb ('Finds') and resource ('where to enter'). It provides examples ('W-2', '1099-R') and distinguishes itself from siblings like ask_tax_question.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use (search by topic, form number, or keyword) and hints at what it returns (breadcrumb path). It does not explicitly state when not to use or name alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tax_deadlines - Tax Filing Deadlines (Read-only, Idempotent)
Returns key tax deadlines for the 2025 tax year (filing in 2026). Includes federal filing deadline, extension deadline, estimated tax payment dates, document mailing deadlines, and state-specific deadlines. Optionally provide a state code for state-specific dates.
| Name | Required | Description | Default |
|---|---|---|---|
| state | No | Two-letter state code (e.g., 'CA', 'NY'). Omit for federal-only deadlines. | |
| include_document_deadlines | No | Include W-2 and 1099 mailing deadlines. | |
| include_estimated_payments | No | Include quarterly estimated tax payment deadlines. | |
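A sketch of a state-specific deadlines request, using the example state code from the schema:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "tax_deadlines",
    "arguments": {
      "state": "CA",                      // example state code from the schema
      "include_estimated_payments": true,
      "include_document_deadlines": false
    }
  }
}
```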
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint, idempotentHint, destructiveHint false. The description adds useful behavioral context: includes specific deadline types (federal, extension, estimated, document, state) and clarifies the tax year scope. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. The most important information (what it returns, tax year, optional state) is front-loaded. Every clause adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description covers the main return types (federal, extension, estimated, document, state). It could mention the format (e.g., list of dates) but is adequate for a read-only info tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions. The description adds some context (e.g., 'optional for state-specific', 'federal-only'), but largely reinforces what's in the schema. With full coverage, baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns key tax deadlines for the 2025 tax year, listing specific types (federal, extension, estimated, document, state). The verb 'Returns' and resource 'tax deadlines' are precise, and it distinguishes from sibling tools like 'ask_tax_question' or 'estimate_taxes'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool: for tax deadlines in the 2025 tax year, with optional state code. It does not explicitly state when not to use it or list alternatives, but the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tax_document_checklist - Tax Document Checklist (Read-only, Idempotent)
Returns a personalized list of tax documents and forms you need to gather before filing your tax return. Based on your income sources, deductions, and life events. Covers W-2s, 1099s, receipts, and other IRS forms. This is for DIY filers preparing their own return.
| Name | Required | Description | Default |
|---|---|---|---|
| filing_state | No | Two-letter state code (e.g., "CA", "TX"). Omit for federal only. | |
| filing_status | Yes | Your tax filing status | |
| income_sources | No | Income types that apply to your situation | |
| num_dependents | No | Number of dependents (if you selected "dependents" above) | |
| deductions_and_events | No | Deductions, credits, and life events that apply | |
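A sketch of a checklist request; the filing_status value and the string-array shape of income_sources are assumptions, as the schema excerpt does not show accepted values:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "tax_document_checklist",
    "arguments": {
      "filing_status": "single",                   // assumed value; options not shown
      "filing_state": "CA",                        // example from the schema
      "income_sources": ["w2", "self_employment"], // assumed to be an array of string tags
      "num_dependents": 2
    }
  }
}
```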
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds that the list is personalized based on inputs, but does not disclose any behavior beyond what annotations provide. No contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with no unnecessary words. Front-loaded with the key action and result. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description mentions document examples (W-2s, 1099s) which helps. It covers the main input dimensions. Could clarify parameter dependencies (num_dependents requiring dependents in deductions_and_events), but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage, so baseline is 3. The description adds context about how parameters influence the result (income sources, deductions), but does not add new meaning beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns a personalized list of tax documents/forms based on income, deductions, and life events. It is specific about the verb 'returns' and resource 'checklist', and distinguishes from siblings like ask_tax_question or estimate_taxes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'This is for DIY filers preparing their own return,' indicating when to use it. It implies not for expert assistance, but does not explicitly state when not to use or provide alternative tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upload_document - Upload Tax Document to TaxAct
Upload a tax document (W-2, 1099, PDF) directly to your TaxAct return. The document will be automatically classified, data extracted via OCR, and your return populated. Requires a connected TaxAct account.
| Name | Required | Description | Default |
|---|---|---|---|
| tax_year | No | Tax year for this document (default: 2025) | |
| file_name | Yes | Name of the document file (e.g., "w2-2025.pdf", "1099-int.jpg") | |
| mime_type | Yes | MIME type of the document | |
| document_type | No | Optional: expected document type (e.g., "W2", "1099-INT"). If omitted, auto-classified. | |
| file_content_base64 | Yes | Base64-encoded file content | |
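A sketch of an upload call, assuming MCP JSON-RPC framing; the base64 payload is a truncated placeholder, not real file content:

```jsonc
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "upload_document",
    "arguments": {
      "file_name": "w2-2025.pdf",              // example from the schema
      "mime_type": "application/pdf",
      "document_type": "W2",                   // optional hint; auto-classified if omitted
      "tax_year": 2025,
      "file_content_base64": "JVBERi0xLjc..."  // truncated placeholder, not real content
    }
  }
}
```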
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that the tool automatically classifies documents, extracts data via OCR, and populates the return. This adds value beyond the annotations, which do not indicate these behaviors. It does not address idempotency or error handling, but is otherwise adequately transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no redundancy. The first sentence states the primary action and target; the second adds behavioral details and a requirement. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a file upload tool with no output schema, the description covers purpose, behavior, and prerequisites (connected account). It could mention file size limits or supported MIME types, but overall it's sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions. The description adds context: mentions supported document types (W-2, 1099, PDF) and that document_type is optional due to auto-classification. This provides meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (upload a tax document), the target (TaxAct return), and the outcome (automatic classification, OCR extraction, populating return). It distinguishes from siblings like 'explain_tax_document' or 'tax_document_checklist' which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool is for uploading documents when a TaxAct account is connected. It doesn't explicitly state when not to use it or compare to siblings, but the sibling list clarifies alternatives. Adding explicit usage context would improve it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.