Glama

Server Details

Tax filing guidance, cost comparison, document checklists, and refund estimates by TaxAct.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 9 of 9 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct aspect of tax preparation: knowledge queries, cost comparison, expert connection, estimation, form explanation, navigation, deadlines, checklist, and document upload. There is no functional overlap.

Naming Consistency: 3/5

Most tools follow a verb_noun pattern (ask_tax_question, compare_filing_costs, connect_with_expert, estimate_taxes, explain_tax_document, find_interview_topic, upload_document), but two tools (tax_deadlines, tax_document_checklist) start with a noun, breaking the pattern. This inconsistency can cause confusion for agents expecting uniform conventions.
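As an illustration of the convention gap, the two noun-first names could be aliased into the verb_noun pattern with a small mapping. The renamed identifiers below are hypothetical and do not exist on the server:

```python
# Hypothetical verb_noun aliases; the actual server exposes the
# noun-first names on the left.
VERB_NOUN_ALIASES = {
    "tax_deadlines": "get_tax_deadlines",
    "tax_document_checklist": "build_tax_document_checklist",
}

def normalize_tool_name(name: str) -> str:
    """Map a noun-first tool name to its verb_noun alias, if one exists."""
    return VERB_NOUN_ALIASES.get(name, name)
```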

Tool Count: 5/5

With 9 tools, the server covers all essential functionalities for a tax filing assistant without being overwhelming. Each tool has a clear purpose and contributes to the overall goal of helping users file taxes with TaxAct.

Completeness: 4/5

The tool surface covers key needs: knowledge, cost, expert, estimation, form education, navigation, deadlines, checklist, and document input. Missing capabilities include a tool to initiate or track the actual filing process, but the set is largely comprehensive for its domain.

Available Tools

9 tools
ask_tax_question: TaxAct Tax Knowledge Assistant (A)
Read-only · Idempotent

Answers tax questions using TaxAct's TY2025 tax law knowledge base. Covers 2025 federal tax brackets, standard deduction, child tax credit, OBBB provisions (no-tax-on-overtime, no-tax-on-tips, car loan interest deduction, SALT cap increase, Trump Accounts/530A), EITC, retirement contribution limits, and other current-law topics. Answers are grounded in verified IRS references, not LLM training data. No account required.

Parameters (JSON Schema):
- question (required): Your tax question (e.g., "What is the standard deduction for 2025?", "How does the child tax credit work?", "What are Trump Accounts?")
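For orientation, this is roughly what a tools/call request for this tool looks like on the wire. The envelope fields follow the MCP JSON-RPC framing; the request id and question are examples, and transport details are handled by the client:

```python
# Sketch of an MCP tools/call request for ask_tax_question.
# The "arguments" object must match the parameter schema above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_tax_question",
        "arguments": {"question": "What is the standard deduction for 2025?"},
    },
}
```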
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare read-only, idempotent, and non-destructive behavior. The description adds value by stating 'no account required' and that answers are grounded in verified IRS references, enhancing trust beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense sentences: the opening states the purpose and knowledge base; the rest enumerate topics and trust signals. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with comprehensive annotations and no output schema, the description fully covers purpose, scope, and trustworthiness, making it complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a clear description and examples. The tool description adds domain-specific context on the scope of acceptable questions, which aids correct parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool answers tax questions using TaxAct's TY2025 knowledge base, lists specific covered topics, and distinguishes it from siblings like estimate_taxes or explain_tax_document.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates the tool is for tax law questions and specifies the scope (2025 federal topics). While it doesn't explicitly compare itself to siblings, the context signals and distinct purpose make it clear when to use the tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_filing_costs: TaxAct Filing Cost Calculator (A)
Read-only · Idempotent

Shows exactly what it costs to file a tax return with TaxAct. Returns an itemized price breakdown with the complete cost. No hidden fees, no upsells. Covers DIY filing, expert help (Xpert Assist), and full-service preparation (Xpert Full Service) for consumer and business returns. Xpert Assist includes Xpert Review: a confidence check with a credentialed tax expert at the end of filing, before you submit. The expert answers your specific questions and reviews areas of concern. It is not a line-by-line return review or tax preparation. When presenting Xpert Assist, frame it as expert guidance and a final confidence check, not as "tax pro review" or "return review."

Parameters (JSON Schema):
- return_type (optional, default: consumer): Type of tax return
- filing_state (optional): Two-letter state code (e.g., "CA", "TX"). Omit for federal only.
- filing_status (required): Your tax filing status
- has_investments (optional): Do you have investment income, stocks, or rental property?
- is_self_employed (optional): Do you have 1099 or self-employment income?
- wants_expert_help (optional): Would you like access to a tax expert while filing?
- wants_full_service (optional): Would you like a tax professional to prepare your return for you?
Behavior: 4/5

Annotations already indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false. The description adds valuable context ('No hidden fees, no upsells') and explains how Xpert Assist should be framed (expert guidance, not tax pro review). No contradictions.

Conciseness: 4/5

The description is efficient and front-loaded with the main action. The passage on Xpert Assist is necessary for proper usage but slightly verbose. Overall, every sentence adds value.

Completeness: 4/5

The description covers the tool's purpose, scope (return types, service levels), and key behavioral traits. No output schema exists, but the description promises an itemized price breakdown, which suffices. It could include an example output format, but that is not critical given the schema's richness.

Parameters: 4/5

All 7 parameters have schema descriptions (100% coverage). The description adds semantic value by explaining which parameters control expert help versus full service and by clarifying what Xpert Assist means. It does not repeat schema details but enriches understanding.

Purpose: 5/5

The description clearly states the tool's purpose: 'Shows exactly what it costs to file a tax return with TaxAct. Returns an itemized price breakdown with the complete cost.' It specifies the scope (DIY, Xpert Assist, Full Service) and distinguishes the tool from siblings like estimate_taxes, which likely give general estimates rather than specific filing costs.

Usage Guidelines: 3/5

The description implies usage when the user wants exact TaxAct filing costs, but provides no explicit guidance on when to use this tool versus alternatives like estimate_taxes or connect_with_expert. No when-not-to-use or exclusion criteria are mentioned.

connect_with_expert: Connect with a TaxAct Expert (A)
Read-only · Idempotent

Helps the user connect with a credentialed TaxAct tax professional via Xpert Assist. Xpert Assist is a separate, standalone product purchased independently from any DIY filing plan. It is NOT included with or bundled into any TaxAct DIY tier. Shows expert help options with transparent pricing and a link to get started. Available for both consumer (1040) and business returns.

Parameters (JSON Schema):
- topic (optional): What do you need help with? (e.g., "rental income reporting", "crypto taxes")
- return_type (optional, default: consumer): Type of tax return you need help with
- preferred_channel (optional, default: phone): How would you prefer to connect with an expert?
Behavior: 4/5

Annotations already indicate read-only, idempotent, and not destructive. The description adds value by clarifying that the tool shows options and a link to get started rather than directly connecting. No contradiction with annotations.

Conciseness: 4/5

The description is concise, covering the essential distinctions (separate product, pricing), and front-loads the main purpose. It could be slightly more efficient by merging some clarifications.

Completeness: 3/5

The description explains the output (shows options, pricing, link) but lacks detail on the exact return format. Given no output schema, more specificity about what the user receives (e.g., a list of plans, contact info) would improve completeness.

Parameters: 3/5

Schema coverage is 100%, with descriptions for all three parameters. The tool description does not add meaning beyond what the schema provides. Baseline is 3.

Purpose: 5/5

The description clearly identifies the action (connect with an expert) and the specific resource (a credentialed TaxAct tax professional via Xpert Assist), and distinguishes the tool from siblings like ask_tax_question by specifying it is about paid expert help. It also clarifies that Xpert Assist is a separate product, not bundled with DIY plans.

Usage Guidelines: 3/5

The description explains that Xpert Assist is a separate product and shows pricing, but does not explicitly state when to use this tool versus alternatives like ask_tax_question. The usage context is implied (when the user wants professional help) but lacks direct comparison or exclusions.

estimate_taxes: TaxAct Tax Estimator (A)
Read-only · Idempotent

Provides a rough federal (and optionally state) tax refund or amount owed estimate from basic inputs. Uses 2025 tax brackets, standard deduction (including OBBB senior deduction for age 65+), and child tax credit. Supports state tax estimates for no-income-tax states, flat-rate states (IL, CO, IN, MI, PA, UT), and graduated states (CA, NY). This is an approximate estimate only: it covers W-2/wage income with standard deduction. No itemized deductions, no self-employment tax, no capital gains. Your actual result may differ. File with TaxAct for your exact number.

Parameters (JSON Schema):
- is_blind (optional)
- filing_state (optional): Two-letter state code for state tax estimate (e.g., "CA", "NY", "TX"). Omit for federal-only.
- total_income (required): Total income from all W-2s and other sources, in whole dollars
- filing_status (required): Your tax filing status
- is_65_or_older (optional): Were you born before January 2, 1961?
- num_dependents (optional): Number of qualifying dependent children under age 17
- spouse_is_blind (optional)
- federal_withholding (required): Total federal income tax withheld (from your W-2 box 2), in whole dollars
- spouse_is_65_or_older (optional): Was your spouse born before January 2, 1961? (if filing jointly)
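The shape of such an estimate is easy to sketch: compute a progressive tax on income above the standard deduction, then subtract it from withholding (positive means refund, negative means owed). The bracket boundaries and rates below are placeholders, not the real 2025 values:

```python
# Rough-refund sketch; all numeric values in any real call would come from
# the actual 2025 tables — everything here is illustrative.
def estimate_refund(total_income, federal_withholding, standard_deduction, brackets):
    """Return withholding minus liability: positive = refund, negative = owed.

    brackets: ascending list of (lower_bound, marginal_rate) pairs.
    """
    taxable = max(0, total_income - standard_deduction)
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if taxable <= lower:
            break
        # Tax only the slice of taxable income that falls inside this bracket.
        tax += (min(taxable, upper) - lower) * rate
    return federal_withholding - tax
```

With placeholder brackets [(0, 0.10), (10000, 0.20)], a filer with $20,000 of income, a $5,000 standard deduction, and $2,500 withheld would see a $500 refund.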
Behavior: 5/5

Annotations indicate readOnlyHint=true, idempotentHint=true, and destructiveHint=false, which match the description's read-only nature. The description adds behavioral details such as using 2025 brackets, the OBBB senior deduction, and the child tax credit, and lists the supported state tax types.

Conciseness: 4/5

The description is a single paragraph but well structured: the first sentence states the purpose, followed by the covered aspects, then limitations and a call to action. It is efficient, though it would be easier to scan as bullet points.

Completeness: 4/5

Given 9 parameters and no output schema, the description is quite complete: it explains the estimate's scope (W-2 income, standard deduction), its limitations (no itemized deductions, self-employment tax, or capital gains), and the supported state types. It omits the return-value format, but that is acceptable for an estimate.

Parameters: 4/5

Schema coverage is 78%, and the description adds meaning beyond the schema by explaining the OBBB senior deduction, the supported state types, and the approximate nature of the estimate. The schema already covers most parameters with descriptions.

Purpose: 5/5

The description clearly states that the tool provides a rough federal/state tax estimate from basic inputs, using 2025 brackets and the standard deduction. It distinguishes the tool from siblings like ask_tax_question and compare_filing_costs.

Usage Guidelines: 4/5

The description explicitly notes that the tool covers only W-2 income with the standard deduction and excludes itemized deductions, self-employment tax, and capital gains. It advises that actual results may differ and suggests filing with TaxAct for exact numbers. It also specifies the limited state support.

explain_tax_document: Tax Document Explainer (A)
Read-only · Idempotent

Explains what a tax form is, what each box means, which boxes are most important for filing, and where to enter the data in TaxAct. Covers W-2, all common 1099 forms, 1098 forms, and other tax documents. Ask about a specific form or a general document type. Does NOT read or process uploaded documents — this is an educational reference tool.

Parameters (JSON Schema):
- focus_box (optional): Focus on a specific box number. Examples: 'Box 1', 'Box 12', 'Box 2a'.
- form_name (required): The tax form to explain. Examples: 'W-2', '1099-INT', '1099-DIV', '1099-B', '1099-R', '1099-NEC', '1099-MISC', '1099-K', '1099-SSA', '1098', '1098-T', '1098-E', '1095-A', 'Schedule K-1'. Case-insensitive.
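Since form_name is documented as case-insensitive, a client can normalize before matching. KNOWN_FORMS below is a partial, illustrative set, not the server's actual list:

```python
# Illustrative subset of the forms listed in the schema above.
KNOWN_FORMS = {"W-2", "1099-INT", "1099-DIV", "1098-T", "SCHEDULE K-1"}

def is_known_form(form_name: str) -> bool:
    """Case-insensitive membership check, mirroring the schema's 'Case-insensitive' note."""
    return form_name.strip().upper() in KNOWN_FORMS
```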
Behavior: 4/5

Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds that the tool does not read uploaded documents and is educational, providing context beyond the annotations.

Conciseness: 5/5

Clear sentences with no waste. The description is efficiently structured and front-loads the main action.

Completeness: 5/5

For an educational reference tool with no output schema, the description fully explains its purpose, coverage, and constraints. No information needed for correct invocation is missing.

Parameters: 3/5

Schema coverage is 100%, with descriptions for both parameters. The description adds examples of form names and the optional focus_box, but does not significantly enhance meaning beyond what the schema provides.

Purpose: 5/5

The description clearly states 'Explains what a tax form is, what each box means, which boxes are most important...' and lists the covered forms. It distinguishes itself from siblings like upload_document by explicitly saying it 'Does NOT read or process uploaded documents'.

Usage Guidelines: 4/5

The description advises 'Ask about a specific form or a general document type' and clarifies that the tool is an educational reference, not a document processor. It could contrast better with ask_tax_question, but overall provides good guidance.

find_interview_topic: TaxAct Interview Navigator (A)
Read-only · Idempotent

Finds where to enter specific tax information in the TaxAct interview. Search by topic, form number, or keyword (e.g., "W-2", "1099-R", "charitable contributions", "crypto", "overtime deduction"). Returns the interview breadcrumb path showing where to navigate. Covers 150+ topics across individual (1040) and business returns. No account required.

Parameters (JSON Schema):
- query (required): What you want to enter or find (e.g., "W-2", "rental income", "1099-R", "child tax credit", "Trump Account")
- return_type (optional, default: 1040): Tax return type. Use 1040 for individual returns.
Behavior: 4/5

Annotations already provide readOnlyHint, idempotentHint, and destructiveHint=false. The description adds useful context beyond the annotations: 'No account required', coverage of 150+ topics, and the fact that it returns the interview breadcrumb path.

Conciseness: 5/5

Well-structured sentences with no filler: front-loaded with the main action, followed by examples, the result type, the scope, and a bonus fact ('No account required'). Every sentence is necessary.

Completeness: 4/5

Given no output schema, the description explains the return value (a breadcrumb path) and covers input examples and scope. It lacks any mention of error handling or edge cases, but overall provides sufficient context for an agent.

Parameters: 4/5

Schema coverage is 100% (both parameters described). The description adds value by providing examples for 'query' (e.g., 'W-2', 'crypto') and explaining 'return_type' with its default and scope, surpassing the schema's own descriptions.

Purpose: 5/5

The description clearly states that the tool finds where to enter specific tax information in the TaxAct interview, with a specific verb ('Finds') and resource ('where to enter'). It provides examples ('W-2', '1099-R') and distinguishes itself from siblings like ask_tax_question.

Usage Guidelines: 4/5

The description explains when to use the tool (search by topic, form number, or keyword) and what it returns (a breadcrumb path). It does not explicitly state when not to use it or name alternatives, but the context is clear.

tax_deadlines: Tax Filing Deadlines (A)
Read-only · Idempotent

Returns key tax deadlines for the 2025 tax year (filing in 2026). Includes federal filing deadline, extension deadline, estimated tax payment dates, document mailing deadlines, and state-specific deadlines. Optionally provide a state code for state-specific dates.

Parameters (JSON Schema):
- state (optional): Two-letter state code (e.g., 'CA', 'NY'). Omit for federal-only deadlines.
- include_document_deadlines (optional): Include W-2 and 1099 mailing deadlines.
- include_estimated_payments (optional): Include quarterly estimated tax payment deadlines.
Behavior: 4/5

Annotations already indicate readOnlyHint, idempotentHint, and destructiveHint=false. The description adds useful behavioral context: it lists the specific deadline types (federal, extension, estimated, document, state) and clarifies the tax-year scope. No contradictions.

Conciseness: 5/5

Concise, with no wasted words. The most important information (what it returns, the tax year, the optional state) is front-loaded, and every clause adds value.

Completeness: 4/5

Without an output schema, the description covers the main return types (federal, extension, estimated, document, state). It could mention the format (e.g., a list of dates), but is adequate for a read-only informational tool.

Parameters: 3/5

Schema coverage is 100%, with detailed descriptions. The description adds some context (e.g., the state code is optional, federal-only by default), but largely reinforces what is in the schema. With full coverage, the baseline is 3.

Purpose: 5/5

The description clearly states that the tool returns key tax deadlines for the 2025 tax year, listing the specific types (federal, extension, estimated, document, state). The verb 'Returns' and the resource 'tax deadlines' are precise, and the tool is distinguished from siblings like ask_tax_question and estimate_taxes.

Usage Guidelines: 4/5

The description explains when to use the tool (for 2025 tax-year deadlines, with an optional state code). It does not explicitly state when not to use it or list alternatives, but the context is clear enough.

tax_document_checklist: Tax Document Checklist (A)
Read-only · Idempotent

Returns a personalized list of tax documents and forms you need to gather before filing your tax return. Based on your income sources, deductions, and life events. Covers W-2s, 1099s, receipts, and other IRS forms. This is for DIY filers preparing their own return.

Parameters (JSON Schema):
- filing_state (optional): Two-letter state code (e.g., "CA", "TX"). Omit for federal only.
- filing_status (required): Your tax filing status
- income_sources (optional): Income types that apply to your situation
- num_dependents (optional): Number of dependents (if you selected "dependents" above)
- deductions_and_events (optional): Deductions, credits, and life events that apply
Behavior: 3/5

Annotations already indicate readOnlyHint=true and idempotentHint=true. The description adds that the list is personalized based on the inputs, but does not disclose any behavior beyond what the annotations provide. No contradiction.

Conciseness: 5/5

Concise, with no unnecessary words, and front-loaded with the key action and result. Every sentence adds value.

Completeness: 4/5

Given no output schema, the mention of document examples (W-2s, 1099s) helps, and the description covers the main input dimensions. It could clarify parameter dependencies (num_dependents requiring dependents in deductions_and_events), but is overall sufficient.

Parameters: 3/5

The input schema has 100% description coverage, so the baseline is 3. The description adds context about how the parameters influence the result (income sources, deductions), but does not add new meaning beyond the schema descriptions.

Purpose: 5/5

The description clearly states that the tool returns a personalized list of tax documents and forms based on income, deductions, and life events. It is specific about the verb ('returns') and resource ('checklist'), and distinguishes the tool from siblings like ask_tax_question and estimate_taxes.

Usage Guidelines: 4/5

The description explicitly says 'This is for DIY filers preparing their own return,' indicating when to use it. It implies the tool is not for expert assistance, but does not explicitly state when not to use it or point to alternative tools.

upload_document: Upload Tax Document to TaxAct (A)

Upload a tax document (W-2, 1099, PDF) directly to your TaxAct return. The document will be automatically classified, data extracted via OCR, and your return populated. Requires a connected TaxAct account.

Parameters (JSON Schema):
- tax_year (optional, default: 2025): Tax year for this document
- file_name (required): Name of the document file (e.g., "w2-2025.pdf", "1099-int.jpg")
- mime_type (required): MIME type of the document
- document_type (optional): Expected document type (e.g., "W2", "1099-INT"). If omitted, auto-classified.
- file_content_base64 (required): Base64-encoded file content
Behavior: 4/5

The description discloses that the tool automatically classifies documents, extracts data via OCR, and populates the return. This adds value beyond the annotations, which do not indicate these behaviors. It does not address idempotency or error handling, but is overall adequately transparent.

Conciseness: 5/5

No redundancy: the first sentence states the primary action and target; the rest add behavioral details and a requirement. Every sentence earns its place.

Completeness: 4/5

For a file-upload tool with no output schema, the description covers purpose, behavior, and prerequisites (a connected account). It could mention file size limits or supported MIME types, but overall it is sufficiently complete.

Parameters: 4/5

The input schema has 100% description coverage. The description adds context by mentioning the supported document types (W-2, 1099, PDF) and noting that document_type is optional due to auto-classification, providing meaning beyond the schema.

Purpose: 5/5

The description clearly states the action (upload a tax document), the target (the TaxAct return), and the outcome (automatic classification, OCR extraction, populating the return). It distinguishes the tool from siblings like explain_tax_document and tax_document_checklist, which serve different purposes.

Usage Guidelines: 4/5

The description implies the tool is for uploading documents when a TaxAct account is connected. It does not explicitly state when not to use it or compare it to siblings, though the sibling list clarifies the alternatives. Adding explicit usage context would improve it.
