mlp-tax
Server Details
Deterministic MLP tax computation engine. 6 tools: basis projection, estate planning, sell vs hold comparison, MLP vs ETF tax analysis, distribution stress test, and MLP reference data. Returns IRS-cited calculations for K-1 basis tracking, §751 recapture, and §199A QBI.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4/5 across all 6 tools.
Most tools have distinct purposes, but mlp_projection and k1_basis_multi_year overlap: both compute multi-year basis erosion and §751 accumulation, which could cause confusion. However, mlp_projection adds tax-liability and break-even analysis, while k1_basis_multi_year focuses on basis gaps and step-up values, so the descriptions help differentiate them.
Tool names follow a consistent snake_case pattern with clear prefixes (k1_, mlp_), but there is a minor deviation with mlp_info, which uses a noun-only name instead of a verb_noun pattern like the others. Overall, the naming is predictable and readable.
With 6 tools, the count is well-scoped for the MLP tax analysis domain. Each tool serves a specific function, from basis computation and multi-year projections to estate planning and sell/hold comparisons, making the set comprehensive without being overwhelming.
The tool set provides complete coverage for MLP tax analysis, including basis computation (single and multi-year), estate planning, reference data, projections, and sell/hold comparisons. There are no obvious gaps; it supports the full lifecycle from initial investment to inheritance planning.
Available Tools
6 tools

k1_basis_compute
Compute adjusted partner basis from a single year of Schedule K-1 data using the IRS Partner's Basis Worksheet (Lines 1-14). Returns the ending adjusted basis, each worksheet line, any §731 gain (if distributions exceeded basis), and §704(d) suspended losses. Accepts structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| box1 | Yes | K-1 Box 1: ordinary business income (loss) | |
| box2 | No | K-1 Box 2: net rental income | |
| box5 | No | K-1 Box 5: interest income | |
| box11 | No | K-1 Box 11: §179 / other deductions | |
| units | Yes | ||
| box13w | No | K-1 Box 13W: §199A QBI amount | |
| box19a | Yes | K-1 Box 19A: cash distributions | |
| ticker | Yes | ||
| prior_basis | Yes | Beginning-of-year adjusted basis in USD | |
| tax_bracket | No | ||
| liability_decrease | No | §752(b) liability decrease in USD | |
| liability_increase | No | §752(a) liability increase in USD |
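To make the worksheet flow concrete, here is a minimal sketch of the single-year arithmetic the description implies — an illustration only, not the server's implementation. Parameter names mirror the table above; treating the §752(b) liability decrease as a deemed distribution is an assumption of this sketch.

```python
def k1_basis_single_year(prior_basis, box1, box19a, box2=0.0, box5=0.0,
                         box11=0.0, liability_increase=0.0,
                         liability_decrease=0.0):
    # Step 1: income items and §752(a) liability increases raise basis.
    basis = prior_basis + max(box1, 0.0) + box2 + box5 + liability_increase
    # Step 2: cash plus the §752(b) deemed distribution reduce basis;
    # any excess over basis is §731 capital gain.
    distribution = box19a + liability_decrease
    gain_731 = max(0.0, distribution - basis)
    basis = max(0.0, basis - distribution)
    # Step 3: losses and deductions are allowed only up to remaining
    # basis; the excess is suspended under §704(d).
    losses = max(-box1, 0.0) + box11
    allowed = min(losses, basis)
    return {"ending_basis": basis - allowed, "gain_731": gain_731,
            "suspended_704d": losses - allowed}
```

For example, a $10,000 beginning basis with $1,200 of Box 1 income and $3,500 of Box 19A distributions ends the year at $7,700 under this simplification.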
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it returns ending adjusted basis, worksheet lines, §731 gain, and §704(d) suspended losses, which clarifies the output format and tax implications. However, it does not mention error handling, rate limits, or authentication needs, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by output details and input requirements in two efficient sentences. Every sentence adds value without redundancy, making it appropriately sized and structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of tax computation with 12 parameters and no output schema, the description is mostly complete by explaining the purpose, output components, and input context. However, it lacks details on error cases or example usage, which could enhance completeness for such a specialized tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning by specifying that inputs are 'structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.)', which helps interpret the schema parameters. With 75% schema description coverage, the description compensates by providing context, though it does not detail all 12 parameters individually.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Compute adjusted partner basis'), resource ('from a single year of Schedule K-1 data'), and method ('using the IRS Partner's Basis Worksheet (Lines 1-14)'). It distinguishes from sibling tools by specifying single-year computation versus multi-year alternatives like 'k1_basis_multi_year'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for single-year K-1 basis computation but does not explicitly state when to use this tool versus alternatives like 'k1_basis_multi_year' or other MLP-related tools. It mentions the data required ('structured K-1 box values') but lacks guidance on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
k1_basis_multi_year
Compute a running adjusted partner basis across multiple years of K-1 data. Returns year-by-year basis erosion, accumulated §751 recapture, projected zero-basis year, §1014 step-up value if death today, and the critical basis gap — the difference between what a broker typically reports (original cost) and the true IRS-adjusted basis. Uses IRC §705, §731, §751, §1014.
| Name | Required | Description | Default |
|---|---|---|---|
| units | Yes | ||
| ticker | Yes | ||
| k1_years | Yes | Array of annual K-1 data, one per year held (max 50) | |
| tax_bracket | No | ||
| purchase_year | No | ||
| purchase_price | Yes |
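A rough sketch of the multi-year roll-forward, assuming each entry in k1_years carries hypothetical box1 and box19a keys; the §751 figure here is a crude proxy (real recapture comes from the depreciation detail on the K-1 sale schedule), so treat this as illustration only.

```python
def multi_year_basis(purchase_price, units, k1_years):
    basis = purchase_price * units
    accumulated_751 = 0.0
    timeline = []
    for year, k1 in enumerate(k1_years, start=1):
        income = k1.get("box1", 0.0)   # ordinary business income
        roc = k1.get("box19a", 0.0)    # cash distributions
        basis = max(0.0, basis + income - roc)
        # Proxy: distributions in excess of income approximate the
        # depreciation-driven ordinary recapture accruing under §751.
        accumulated_751 += max(0.0, roc - income)
        timeline.append({"year": year, "basis": basis,
                         "accumulated_751": accumulated_751})
    return timeline
```

The "basis gap" the description highlights would then be the original cost (purchase_price times units) minus the final basis entry.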
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the tool's purpose and outputs (basis erosion, recapture, projected zero-basis year, etc.), but doesn't mention behavioral traits like computational limits, error handling, data validation, or whether it's a read-only vs. mutation operation. The description adds value by specifying the tax code sections used, but lacks operational context.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences. The first sentence clearly states the purpose and outputs, while the second provides important context about tax code sections and the basis gap concept. Every element serves a purpose with no wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial calculation tool with 6 parameters, 17% schema coverage, no annotations, and no output schema, the description is moderately complete. It explains the purpose and outputs well, but doesn't provide enough guidance on parameter usage, error conditions, or what the return format looks like. The tax code references add valuable context, but operational details are insufficient.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 17%, so the description must compensate. It doesn't explicitly explain individual parameters, but provides crucial context about what the tool does with the parameters (computing basis across multiple years using K-1 data and purchase information). The mention of specific tax code sections and outputs like '§1014 step-up value' gives semantic meaning to the expected inputs, though parameter-specific details are still lacking.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool computes a running adjusted partner basis across multiple years of K-1 data, specifying the exact verb ('compute') and resource ('partner basis'). It distinguishes from siblings by focusing on multi-year analysis rather than single-year computation (k1_basis_compute) or other MLP-related functions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through the mention of K-1 data and IRS-adjusted basis calculations, suggesting it's for tax/accounting analysis of MLP investments. However, it doesn't explicitly state when to use this tool versus alternatives like k1_basis_compute or mlp_projection, nor does it provide exclusion criteria or prerequisites.
mlp_estate_planning
Compute §1014 stepped-up basis and estate planning analysis for one or more MLP positions. Returns total deferred tax eliminated at death, §751 ordinary income recapture eliminated, per-beneficiary inheritance split, community-property double step-up (if applicable), and hold-vs-sell-today dollar advantage. Uses IRC §1014(a), §1014(b)(6), §751(a).
| Name | Required | Description | Default |
|---|---|---|---|
| positions | Yes | Array of MLP positions to analyze (max 20) | |
| tax_bracket | No | ||
| beneficiaries | No | Number of beneficiaries (default 1, max 20) | |
| community_property | No | Whether positions are in a community property state (doubles step-up) |
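The core §1014 arithmetic can be sketched as follows — a simplification: the community-property double step-up (§1014(b)(6)) and state tax are omitted, and the 0.32/0.20 rates are illustrative defaults, not necessarily the server's.

```python
def step_up_analysis(fmv, adjusted_basis, accumulated_751,
                     ordinary_rate=0.32, capital_rate=0.20,
                     beneficiaries=1):
    # §1014(a): heirs take basis equal to FMV at death, so both the
    # §751(a) ordinary slice and the residual capital gain vanish.
    step_up = max(fmv - adjusted_basis, 0.0)
    ordinary = min(accumulated_751, step_up)
    capital = step_up - ordinary
    tax_eliminated = ordinary * ordinary_rate + capital * capital_rate
    return {"tax_eliminated": tax_eliminated,
            "per_beneficiary": fmv / beneficiaries}
```

For instance, a $50,000 position with $10,000 of remaining basis and $15,000 of accumulated §751 recapture eliminates roughly $9,800 of deferred tax at death under these assumed rates.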
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool's purpose and outputs (e.g., tax eliminated, inheritance split) but lacks details on permissions, rate limits, error handling, or computational constraints beyond the implied analysis scope.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by specific outputs and legal references in a single, efficient sentence with no redundant information, making it highly concise and well-structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers the tool's purpose and outputs, but lacks details on behavioral traits, error cases, or comprehensive parameter guidance, leaving gaps for a tool with 4 parameters and complex tax analysis.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 75%, with some parameters like 'beneficiaries' and 'community_property' having descriptions. The description adds value by explaining the analysis context (e.g., community-property double step-up) and referencing tax codes, but does not fully detail all parameters beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Compute §1014 stepped-up basis and estate planning analysis') and resource ('for one or more MLP positions'), distinguishing it from sibling tools like k1_basis_compute or mlp_projection by focusing on estate planning rather than basis computation or projections.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through references to estate planning and tax elimination at death, but does not explicitly state when to use this tool versus alternatives like mlp_sell_vs_hold or k1_basis_compute, nor does it mention prerequisites or exclusions.
mlp_info
Get reference data for a specific MLP: current distribution rate, distribution growth CAGR, default return-of-capital percentage, K-1 entity count, and operating state count. Useful for understanding an MLP's complexity and expected tax characteristics.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes what data is returned (reference data points) and hints at the tool's purpose (understanding complexity and tax characteristics). However, it doesn't disclose behavioral traits like whether it's a read-only operation, potential rate limits, error conditions, or authentication requirements. The description adds some context but lacks comprehensive behavioral disclosure.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences: the first lists the specific data points retrieved, and the second explains the utility. Every sentence adds value—no wasted words. It's front-loaded with the core purpose immediately stated.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 1 parameter with 0% schema coverage and no output schema, the description is moderately complete. It explains what data is returned and the tool's utility, but lacks details on return format, error handling, or behavioral constraints. For a simple lookup tool, this is adequate but has clear gaps in fully documenting the tool's behavior and output.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't mention the 'ticker' parameter explicitly, but with only 1 parameter and 0% schema description coverage, it compensates by clearly stating the tool is for 'a specific MLP'; the schema's enum lists the exact ticker values. The description adds meaning by explaining what the tool returns for a given MLP, though it doesn't detail parameter format or constraints beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get reference data for a specific MLP' with specific data points listed (distribution rate, growth CAGR, etc.). It distinguishes from siblings by focusing on reference data rather than computations or projections. However, it doesn't explicitly contrast with sibling tools like 'mlp_projection' or 'mlp_estate_planning' in the description text itself.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance: 'Useful for understanding an MLP's complexity and expected tax characteristics.' This suggests when to use it (for reference data and tax understanding) but doesn't explicitly state when NOT to use it or name alternatives. No explicit comparison with sibling tools like 'mlp_projection' or 'k1_basis_compute' is provided.
mlp_projection
Compute a multi-year tax projection for an MLP (Master Limited Partnership) position. Returns year-by-year basis erosion, §751 accumulation, annual tax liability, terminal FMV, §1014 step-up value at death, and the break-even sell price. Uses the IRS Partner's Basis Worksheet methodology (IRC §705, §731, §751, §1014, §199A). Supported tickers: EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN.
| Name | Required | Description | Default |
|---|---|---|---|
| units | Yes | Number of MLP units held | |
| years | No | Projection horizon in years (1-50, default 20) | |
| ticker | Yes | MLP ticker symbol | |
| tax_bracket | No | Federal marginal rate as decimal, e.g. 0.32 (default 0.32) | |
| purchase_price | No | Purchase price per unit in USD (optional — defaults to reasonable estimate) |
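One output the description promises — the projected zero-basis year — can be illustrated with a deliberately simplified loop. Constant return of capital per unit and no distribution growth are assumptions of this sketch, not the engine's model.

```python
def project_zero_basis_year(purchase_price, units, annual_roc_per_unit,
                            years=20):
    # Return-of-capital distributions erode basis each year; once basis
    # hits zero, further distributions become §731 capital gain.
    basis = purchase_price * units
    for year in range(1, years + 1):
        basis -= annual_roc_per_unit * units
        if basis <= 0:
            return year
    return None  # basis survives the projection horizon
```

At a $30 purchase price and $2.50 of annual return of capital per unit, basis reaches zero in year 12 under these assumptions.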
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the methodology (IRS Partner's Basis Worksheet) and legal references (IRC sections), adding useful context. However, it lacks details on behavioral traits like computational limits, error handling, or whether it's a read-only calculation vs. having side effects, leaving gaps for a tool with complex tax logic.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by specific outputs and context (methodology, supported tickers). Every sentence adds value—no fluff or repetition—making it efficiently structured and appropriately sized for the tool's complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (tax projections with multiple outputs) and no annotations or output schema, the description does well by listing key outputs and methodology. However, it could be more complete by briefly mentioning the return format (e.g., structured data per year) or assumptions (e.g., default rates), leaving minor gaps for full agent understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond implying the tool uses these inputs for projections, so it meets the baseline without needing to compensate for, or exceed, the schema's information.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Compute a multi-year tax projection') and resource ('for an MLP position'), distinguishing it from siblings like 'mlp_info' (general info) or 'mlp_sell_vs_hold' (comparison tool). It explicitly lists the comprehensive outputs (basis erosion, §751 accumulation, etc.), making the purpose highly specific and differentiated.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying it's for MLP positions and listing supported tickers, which implicitly guides when to use it (vs. tools for other assets). However, it doesn't explicitly state when not to use it or name alternatives among siblings (e.g., 'mlp_sell_vs_hold' for sell/hold decisions), missing full differentiation.
mlp_sell_vs_hold
Compare selling an MLP position now (triggering §751 recapture + §731 capital gain) versus holding and letting heirs inherit (§1014 step-up eliminates all deferred tax). Returns the break-even sell price: the unit price above which selling becomes better than holding. Uses IRC §731, §751, §1014, §1(h), §199A.
| Name | Required | Description | Default |
|---|---|---|---|
| units | Yes | ||
| ticker | Yes | ||
| years_held | No | Years the position has been held (default 10) | |
| tax_bracket | No | ||
| purchase_price | No | ||
| years_to_project | No | Years to project the hold scenario (default 10) | |
| community_property | No |
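The comparison's skeleton can be sketched under the strong simplification that holding to death yields full FMV untaxed (§1014) while future growth and distributions are ignored; the rates and the §751/§1(h) gain split below are illustrative assumptions, not the server's model.

```python
def sell_vs_hold(units, price, adjusted_basis, accumulated_751,
                 ordinary_rate=0.32, capital_rate=0.20):
    proceeds = units * price
    gain = proceeds - adjusted_basis
    ordinary = min(accumulated_751, max(gain, 0.0))  # §751(a) slice
    capital = max(gain - ordinary, 0.0)              # §731 / §1(h) slice
    tax_now = ordinary * ordinary_rate + capital * capital_rate
    return {"sell_net": proceeds - tax_now,
            "hold_to_death_net": proceeds,  # stepped-up basis: no tax
            "tax_cost_of_selling": tax_now}
```

Under these assumptions holding always wins on taxes alone; the break-even sell price the tool actually reports additionally weighs projected distributions and growth in the hold scenario.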
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden, but it only mentions tax-code calculations without addressing behavioral aspects like required permissions, data sources, computational limitations, or error conditions. It doesn't disclose whether this is a simulation or an actual transaction tool, or what happens with the results.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: first states the comparison purpose and tax mechanisms, second specifies the output (break-even sell price). Every element serves a clear purpose with zero wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tax calculation tool with 7 parameters, no annotations, and no output schema, the description provides adequate purpose and calculation context but lacks details about return format, error handling, assumptions, or limitations. It's minimally viable but leaves significant gaps in understanding tool behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With only 29% schema description coverage (2 of 7 parameters have descriptions), the description compensates by explaining the core calculation logic involving specific tax codes, which provides context for how parameters like purchase_price, tax_bracket, and years_held interact. However, it doesn't explain parameters like community_property or provide format details.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific purpose: comparing selling vs holding an MLP position with detailed tax code references (§751, §731, §1014, §1(h), §199A). It distinguishes from siblings by focusing on break-even analysis rather than basis computation, estate planning, or general info/projection tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for tax optimization decisions between selling now vs inheritance planning, but doesn't explicitly state when to use this tool versus alternatives like mlp_estate_planning or mlp_projection. It provides clear context about the specific tax scenarios being analyzed.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.