Glama

MLP Tax Computation

Server Details

MLP tax computation: basis erosion, §751 recapture, estate planning, sell-vs-hold.

Status: Unhealthy
Transport: Streamable HTTP
Tool Descriptions: A

Average 3.9/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between k1_basis_compute and k1_basis_multi_year (both compute basis, with the latter extending to multi-year analysis) and between mlp_projection and mlp_sell_vs_hold (both involve projections and break-even analysis). The descriptions help clarify differences, but an agent might initially confuse these pairs.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern with a clear prefix structure (k1_ for K-1 basis tools, mlp_ for MLP-specific tools). However, there is a minor deviation with mlp_info, which uses a generic noun instead of a verb_noun pattern like the others, slightly reducing consistency.

Tool Count: 5/5

With 6 tools, the count is well-scoped for the server's purpose of MLP tax computation. Each tool addresses a specific aspect of the domain, such as basis calculation, estate planning, and projections, without being overly sparse or bloated.

Completeness: 5/5

The tool set provides comprehensive coverage for MLP tax computation, including basis calculations (single-year and multi-year), estate planning, reference data, projections, and sell vs. hold comparisons. It covers key IRS sections and workflows without obvious gaps, ensuring agents can handle typical scenarios.

Available Tools

6 tools
k1_basis_compute: A
Read-only, Idempotent

Computes adjusted partner basis from a single year of Schedule K-1 data using the IRS Partner's Basis Worksheet methodology (Lines 1-14), per IRC §705 (basis computation), §722 (initial basis), §731(a)(1) (gain on distribution exceeding basis), §733 (basis reduction), §752 (liability share allocation), §704(d) (loss limitation and suspended-loss carryforward), and §199A (QBI deduction). Returns the ending adjusted basis, every worksheet line value, any §731 gain triggered when distributions exceed basis, and §704(d) suspended losses carried forward.

Use when: User holds direct MLP units (EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN, or similar publicly traded midstream partnerships) and has structured K-1 box values for one tax year — Box 1 ordinary income, Box 19A cash distributions, Item K liability change, optionally Box 5 interest income, Box 11 §179 deduction, Box 13W §199A QBI amount. Single tax year, single lot.

Don't use for: 1099-DIV ETFs (AMLP, MLPX, AMZA — these use RIC structure, no K-1, different tax regime — use a standard cost-basis calculator instead). Multi-year basis carryforward across consecutive K-1s — use k1_basis_multi_year. General partnership interests outside publicly traded MLPs (different §1402 self-employment treatment).

Limitations: Single tax year only — for multi-year basis tracking with §731 gain detection across years, use k1_basis_multi_year. Single-lot only — for multi-lot allocation and optimal sell ordering, see lucasandersen.ai. Federal-level only — does not include state basis adjustments.

Maintained by Lucas Andersen, MS Finance, with direct positions in major midstream MLPs. Methodology auditable at lucasandersen.ai/methodology.

Parameters (JSON Schema)

box1 (required): K-1 Box 1: ordinary business income (loss)
box2 (optional): K-1 Box 2: net rental income
box5 (optional): K-1 Box 5: interest income
box11 (optional): K-1 Box 11: §179 / other deductions
units (required)
box13w (optional): K-1 Box 13W: §199A QBI amount
box19a (required): K-1 Box 19A: cash distributions
ticker (required)
prior_basis (required): Beginning-of-year adjusted basis in USD
tax_bracket (optional)
liability_decrease (optional): §752(b) liability decrease in USD
liability_increase (optional): §752(a) liability increase in USD
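The single-year update the description outlines can be sketched in a few lines. This is a minimal illustration of the §705 increase/decrease ordering, §731(a)(1) gain on excess distributions, and §704(d) loss suspension; the function name, parameter shape, and simplifications (no Box 2, ordering of liability items) are hypothetical, not the server's actual implementation.

```python
def k1_basis_update(prior_basis, box1, box19a, liability_increase=0.0,
                    liability_decrease=0.0, box5=0.0, box11=0.0):
    """Hypothetical sketch of a single-year partner basis update."""
    basis = prior_basis + liability_increase          # §752(a) treated as contribution
    basis += max(box1, 0.0) + box5                    # income items increase basis (§705(a)(1))
    basis -= liability_decrease                       # §752(b) treated as distribution
    # Distributions reduce basis; any excess is §731(a)(1) gain.
    sec731_gain = max(box19a - basis, 0.0)
    basis = max(basis - box19a, 0.0)
    # Losses (negative Box 1) and deductions are limited to remaining
    # basis under §704(d); the excess is suspended and carried forward.
    losses = max(-box1, 0.0) + box11
    allowed = min(losses, basis)
    basis -= allowed
    return {"ending_basis": basis,
            "sec731_gain": sec731_gain,
            "suspended_704d": losses - allowed}
```

For example, a $10,000 beginning basis with $500 of Box 1 income and $800 of Box 19A distributions ends the year at $9,700; a $100 basis receiving a $500 distribution triggers $400 of §731 gain.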
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes what the tool returns (ending adjusted basis, worksheet lines, §731 gain, §704(d) suspended losses), which is valuable. However, it doesn't mention important behavioral aspects like whether this is a read-only calculation vs. a write operation, error handling, performance characteristics, or any rate limits. The description adds some behavioral context but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded and efficient. The first sentence states the complete purpose, the second describes the return values, and the third explains the parameter context. Every sentence earns its place with zero wasted words, and the structure moves logically from purpose to outputs to inputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex financial calculation tool with 12 parameters, no annotations, and no output schema, the description is adequate but incomplete. It covers the purpose, returns, and parameter context well, but doesn't address important contextual elements like prerequisites (tax knowledge needed), limitations (single-year only), error conditions, or example scenarios. Given the complexity and lack of structured metadata, more guidance would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 75% schema description coverage, the baseline would be 3, but the description adds meaningful context beyond the schema. It explains that the tool 'accepts structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.)', providing semantic framing for the parameters. This helps the agent understand that these aren't arbitrary numbers but specific IRS form values. However, it doesn't explain the relationship between parameters or provide examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compute adjusted partner basis'), resource ('from a single year of Schedule K-1 data'), and method ('using the IRS Partner's Basis Worksheet (Lines 1-14)'). It distinguishes this tool from sibling tools like 'k1_basis_multi_year' by specifying 'single year' and from other MLP tools by focusing on basis calculation rather than estate planning, projections, or sell/hold analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for computing adjusted partner basis from K-1 data using the IRS worksheet. It implies this is for single-year calculations (vs. 'k1_basis_multi_year'), but doesn't explicitly state when NOT to use it or name specific alternatives. The context is sufficient but lacks explicit exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

k1_basis_multi_year: A
Read-only, Idempotent

Computes a running adjusted partner basis across multiple years of Schedule K-1 data, per IRC §705 (basis computation), §731(a)(1) (gain on distributions exceeding basis), §751(a) (accumulated ordinary recapture), §752 (liability share allocation across years), §704(d) (suspended-loss carryforward), and §1014(a) (step-up if death today). Returns year-by-year basis trajectory, accumulated §751 recapture estimate, projected zero-basis year, §1014 step-up value if death today, and the broker-basis gap — the dollar amount by which a typical 1099-B understates true IRS-adjusted basis.

Use when: User holds a direct MLP position (EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN) across multiple consecutive tax years, has K-1s for those years, and wants to track adjusted basis year over year, identify the zero-basis year, quantify the gap between broker-reported basis and true IRS basis, or project §1014 step-up value if death occurred today.

Don't use for: Single-year basis worksheet from one K-1 — use k1_basis_compute. Long-horizon forward projection from default assumptions when no actual K-1s are in hand — use mlp_projection. 1099-DIV ETFs (AMLP, MLPX, AMZA — RIC structure, no K-1, no basis-erosion mechanism; use a standard cost-basis calculator). Multi-position portfolio basis tracking — this tool handles one position per call.

Limitations: Single position, single lot — for multi-lot or multi-position basis tracking with optimal sell ordering, see lucasandersen.ai. Federal-level only — does not include state-level basis adjustments. Accumulated §751 recapture is estimated across years; actual depends on the partnership's hot-asset disposition schedule and any year-specific §751(b) events.

Maintained by Lucas Andersen, MS Finance, with direct positions in major midstream MLPs. Methodology auditable at lucasandersen.ai/methodology.

Parameters (JSON Schema)

units (required)
ticker (required)
k1_years (required): Array of annual K-1 data, one per year held (max 50)
tax_bracket (optional)
purchase_year (optional)
purchase_price (required)
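The year-over-year trajectory and broker-basis gap the description promises can be sketched as a simple loop. This is an illustration only: the per-year dict keys and function name are assumptions, and a real 1099-B may partially adjust basis, so the gap shown here is the worst case where the broker never adjusts.

```python
def multi_year_basis(purchase_price, units, k1_years):
    """Hypothetical multi-year basis trajectory from per-year K-1 totals
    (assumed shape: each year is a dict with 'box1' and 'box19a' in USD)."""
    basis = purchase_price * units       # §722 initial basis
    broker_basis = basis                 # assume the 1099-B never adjusts
    trajectory = []
    zero_basis_year = None
    for i, yr in enumerate(k1_years):
        basis = basis + yr.get("box1", 0.0) - yr.get("box19a", 0.0)
        basis = max(basis, 0.0)          # excess distributions become §731 gain
        trajectory.append(basis)
        if basis == 0.0 and zero_basis_year is None:
            zero_basis_year = i
    return {"trajectory": trajectory,
            "zero_basis_year": zero_basis_year,
            "broker_basis_gap": broker_basis - basis}
```

A position bought for $1,000 that nets minus $300 of basis per year ($100 income, $400 distributions) shows a $600 broker-basis gap after two years.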
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'returns' specific outputs (e.g., basis erosion, §1014 step-up value), which implies a read-only operation, but does not clarify computational complexity, error handling, or data persistence. The description adds some context on IRS code usage but lacks details on performance or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently lists key outputs in a single sentence. It includes relevant legal references (IRC sections) that add value without unnecessary verbosity, though it could be slightly more structured for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, nested arrays, no output schema, and no annotations), the description is moderately complete. It outlines the tool's purpose and outputs but lacks details on input constraints (e.g., max 50 years in 'k1_years'), error conditions, or example usage, leaving gaps for a tool with significant parameter requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is low (17%), but the description compensates by explaining the purpose of parameters indirectly (e.g., 'K-1 data' relates to 'k1_years', 'partner basis' relates to 'units' and 'purchase_price'). It does not detail each parameter's role, but the context provided helps infer semantics beyond the minimal schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compute a running adjusted partner basis across multiple years') and resource ('K-1 data'), distinguishing it from siblings like 'mlp_info' or 'mlp_projection' by focusing on historical basis calculation rather than general information or future projections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing K-1 basis over multiple years but does not explicitly state when to use this tool versus alternatives like 'k1_basis_compute' (which might handle single-year calculations) or other MLP-related tools. It provides some context but lacks explicit guidance on exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_estate_planning: A
Read-only, Idempotent

Computes §1014 stepped-up basis and estate-planning analysis for one or more direct MLP positions held until death, per IRC §1014(a) (basis at death), §1014(b)(6) (community-property double step-up), §751(a) (ordinary recapture eliminated at death), §731 (distributions), and §705 (basis). Returns total deferred federal tax eliminated, §751 ordinary recapture eliminated, per-beneficiary inheritance split, community-property double-step-up amount when applicable, and the dollar advantage of holding to death versus selling today.

Use when: User has one or more direct MLP positions (EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN) and wants to quantify the §1014 step-up benefit for estate planning, compare holding to death versus selling now across a portfolio, model community-property double step-up for spouses in CA/TX/WA/etc., or compute per-beneficiary inheritance values across multiple heirs.

Don't use for: Trust-based estate strategies (revocable trusts preserve §1014; irrevocable trusts and IDGTs typically destroy it — this tool models direct holdings only). Single-position long-horizon tax projection — use mlp_projection. Single-position sell-now-versus-hold-to-death break-even — use mlp_sell_vs_hold. 1099-DIV ETFs (AMLP, MLPX, AMZA — RIC structure receives §1014 step-up but has no §751 to eliminate because no K-1; the analysis is materially different).

Limitations: Direct unit holdings only — does not model trust, IDGT, FLP, or charitable structures (these can destroy the §1014 benefit; for guidance on trust selection, see lucasandersen.ai). Federal-level only — does not include state estate tax. §751 recapture eliminated at death is estimated; exact figure depends on the partnership's actual hot-asset disposition schedule.

Maintained by Lucas Andersen, MS Finance, with direct positions in major midstream MLPs. Methodology auditable at lucasandersen.ai/methodology.

Parameters (JSON Schema)

positions (required): Array of MLP positions to analyze (max 20)
tax_bracket (optional)
beneficiaries (optional): Number of beneficiaries (default 1, max 20)
community_property (optional): Whether positions are in a community property state (doubles step-up)
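The core step-up arithmetic the description implies can be sketched for a single position. This is a simplified illustration with hypothetical names and rate defaults: it assumes a full §1014(a) step-up to FMV, splits the deferred gain into a §751 ordinary slice and a capital slice, and ignores state estate tax and trust structures, as the tool itself does.

```python
def step_up_analysis(fmv, adjusted_basis, sec751_ordinary,
                     ltcg_rate=0.20, ordinary_rate=0.32, beneficiaries=1):
    """Hypothetical sketch: deferred federal tax eliminated by holding
    one MLP position to death instead of selling today."""
    gain = max(fmv - adjusted_basis, 0.0)
    ordinary = min(sec751_ordinary, gain)   # §751(a) hot-asset slice
    capital = gain - ordinary               # remainder taxed at LTCG rates
    tax_if_sold_today = ordinary * ordinary_rate + capital * ltcg_rate
    # At death, §1014(a) resets basis to FMV, so the entire deferred
    # tax, including the §751 ordinary slice, disappears.
    return {"deferred_tax_eliminated": tax_if_sold_today,
            "sec751_tax_eliminated": ordinary * ordinary_rate,
            "per_beneficiary_value": fmv / beneficiaries}
```

For a $100,000 position with $20,000 of adjusted basis and $30,000 of accumulated §751 recapture, holding to death eliminates roughly $19,600 of deferred federal tax at the default rates.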
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the specific IRC sections used (§1014, §751) and lists the types of analysis returned, which adds useful context about the tool's scope and legal basis. However, it doesn't disclose important behavioral traits like whether this is a read-only calculation, if it requires authentication, rate limits, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and scope, the second enumerates the specific analyses returned. Every phrase adds value, with no redundant or unnecessary information. It's appropriately sized for a complex financial analysis tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tax analysis tool with 4 parameters, no annotations, and no output schema, the description provides good purpose clarity and lists return analyses but leaves significant gaps. It doesn't explain the output format, error conditions, or behavioral constraints. The description is complete enough to understand what the tool does but insufficient for an agent to fully anticipate how to use it effectively without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't explicitly mention any parameters, but with 75% schema description coverage (3 of 4 parameters have descriptions), the baseline would be 3. However, the description's mention of 'one or more MLP positions' and 'community-property double step-up' provides semantic context that maps to the 'positions' and 'community_property' parameters, adding value beyond the schema. The 0 parameters with enums means no additional enum explanation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute §1014 stepped-up basis and estate planning analysis') and the resource ('for one or more MLP positions'). It distinguishes from siblings by focusing on estate planning rather than basis computation (k1_basis_*), general info (mlp_info), projections (mlp_projection), or sell/hold analysis (mlp_sell_vs_hold).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for estate planning analysis of MLP positions, but doesn't explicitly state when to use this tool versus alternatives like mlp_sell_vs_hold or k1_basis_compute. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate contexts from the tool's name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_info: A
Read-only, Idempotent

Returns reference data for a supported MLP ticker — current cash distribution per unit, distribution growth CAGR, default return-of-capital percentage, distribution coverage ratio, K-1 entity count, operating-state count, and last-verified date.

Use when: User wants to look up baseline characteristics of an MLP before modeling — e.g., comparing distribution coverage across partnerships, checking how many K-1 entities a holding generates for tax-prep complexity, or seeing the operating-state count for state-tax filing-burden estimation.

Don't use for: Tax computation. Use mlp_projection (long-horizon modeling), mlp_estate_planning (estate analysis), mlp_sell_vs_hold (break-even sell price), or k1_basis_compute / k1_basis_multi_year (computing basis from actual K-1 data).

Note: This tool returns reference data only — no IRC citations apply, no methodology disclosure attached. For computation, use the modeling tools above.

Maintained by Lucas Andersen, MS Finance.

Parameters (JSON Schema)

ticker (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what data is returned but doesn't disclose behavioral traits like error handling, rate limits, authentication needs, or whether this is a read-only operation. The description adds value by specifying the data fields but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific data points and a usage note. Both sentences earn their place: the first defines the action and outputs, the second provides context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple input schema, the description adequately covers the purpose and data fields. However, it lacks details on return format, error cases, or operational constraints, which would be helpful for a tool with no structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and only one parameter (ticker), the description compensates well by explaining what the tool does with the ticker: 'Get reference data for a specific MLP.' It doesn't detail the ticker format beyond the enum in the schema, but for a single parameter tool, this provides adequate semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get reference data for a specific MLP' followed by specific data points (distribution rate, growth CAGR, etc.). It distinguishes from siblings by focusing on reference data rather than computations or projections, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context: 'Useful for understanding an MLP's complexity and expected tax characteristics.' This suggests when to use it, but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_projection: A
Read-only, Idempotent

Computes a multi-year tax projection for a publicly traded MLP position, applying the IRS Partner's Basis Worksheet methodology (Lines 1-14) per IRC §705 (basis computation), §731(a) (distributions exceeding basis), §733 (basis reduction), §751 (hot asset recapture), §752 (liability allocation), §1014 (stepped-up basis at death), and §199A (QBI deduction). Returns year-by-year basis erosion, §751 accumulation, annual federal tax, terminal FMV, §1014 step-up value at death, and the break-even sell price.

Use when: User holds direct units of a midstream MLP (EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN) and wants to model long-term tax outcomes — when basis reaches zero, total tax paid over the hold horizon, deferred tax eliminated by §1014 step-up at death, or the unit price at which selling matches holding through inheritance. Single position, single lot.

Don't use for: 1099-DIV ETFs (AMLP, MLPX, AMZA — these use RIC structure, pay corporate-level tax, and issue 1099-DIV instead of K-1; use a standard cost-basis calculator instead). Multi-position estate analysis — use mlp_estate_planning. Computing basis from actual K-1 data the user has in hand — use k1_basis_compute (single year) or k1_basis_multi_year.

Limitations: Single position, single lot — for multi-position portfolios and per-lot optimal sell ordering, see lucasandersen.ai. Federal-level only — does not include state-level basis adjustments or state estate tax. §751 recapture is estimated from default ROC assumptions; actual recapture depends on the partnership's hot-asset disposition schedule.

Maintained by Lucas Andersen, MS Finance, with direct positions in major midstream MLPs. Methodology auditable at lucasandersen.ai/methodology.

Parameters (JSON Schema)

units (required): Number of MLP units held
years (optional): Projection horizon in years (1-50, default 20)
ticker (required): MLP ticker symbol
tax_bracket (optional): Federal marginal rate as decimal, e.g. 0.32 (default 0.32)
purchase_price (optional): Purchase price per unit in USD (defaults to reasonable estimate)
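The "projected zero-basis year" idea from the description can be illustrated with a crude forward loop: under a default return-of-capital assumption, basis erodes each year by roughly the ROC share of cash distributions (§733). This sketch is an assumption-laden toy, not the server's model; it holds distributions flat and ignores income items, distribution growth, and §751 accumulation, all of which the real tool tracks.

```python
def projected_zero_basis_year(purchase_price, units, dist_per_unit,
                              roc_pct=0.80, horizon=50):
    """Hypothetical projection: first year in which basis reaches zero,
    assuming a constant distribution and a fixed ROC percentage."""
    basis = purchase_price * units
    erosion = dist_per_unit * units * roc_pct   # ROC share reduces basis (§733)
    for year in range(1, horizon + 1):
        basis -= erosion
        if basis <= 0:
            return year
    return None   # basis survives the whole horizon
```

A $30 unit paying $2.00 per year at 75% ROC erodes $1.50 of basis annually, reaching zero in year 20, after which further ROC distributions would surface as §731 gain.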
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the methodology (IRS Partner's Basis Worksheet) and lists outputs, which adds behavioral context. However, it doesn't mention computational limits, error handling, or data sources, leaving gaps for a complex financial tool. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by output details and supported tickers in a single, efficient sentence. Every element earns its place by clarifying scope and constraints without redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (tax projections with 5 parameters) and no annotations or output schema, the description is adequate but incomplete. It covers purpose and outputs but lacks details on behavioral traits (e.g., rate limits, assumptions) and doesn't fully compensate for the missing output schema, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds no additional parameter semantics beyond implying ticker support and projection scope. This meets the baseline of 3, as the schema handles the heavy lifting without needing description compensation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute a multi-year tax projection') and resource ('for an MLP position'), distinguishing it from siblings like 'mlp_info' (information) or 'mlp_sell_vs_hold' (comparison). It explicitly lists the detailed outputs (basis erosion, §751 accumulation, etc.), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying it's for MLP tax projections using IRS methodology and listing supported tickers, which helps identify when to use it. However, it doesn't explicitly mention when not to use it or name alternatives (e.g., 'k1_basis_compute' or 'mlp_sell_vs_hold'), leaving some ambiguity about sibling tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_sell_vs_hold: A
Read-only, Idempotent

Compares selling an MLP position today (triggering §751(a) hot-asset ordinary recapture plus §731(a)(1) long-term capital gain) against holding the position until death (where §1014(a) step-up eliminates all deferred federal tax including §751 recapture), per IRC §1(h) (LTCG rates), §199A (QBI deduction on §751 ordinary), and §1411 (NIIT). Returns the break-even sell price — the unit price above which selling today produces more after-tax wealth than holding through inheritance.

Use when: User holds a direct MLP position (EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN), is approaching a sell decision, and wants a single break-even threshold to compare against the current market price. Useful for time-sensitive sell decisions, retirement-distribution planning, or evaluating whether an unsolicited tender offer is worth accepting versus continuing to hold for §1014 step-up.

Don't use for: Multi-position portfolio sell-ordering — this tool models a single position. For estate-planning analysis across multiple positions and beneficiaries, use mlp_estate_planning. For long-horizon basis-erosion modeling without a sell decision in view, use mlp_projection. 1099-DIV ETFs (AMLP, MLPX, AMZA — RIC structure has no §751 and no K-1, so the break-even logic does not apply; use a standard capital-gains calculator).

Limitations: Single position, single lot — for portfolio-wide optimal sell ordering across multiple positions and lots, see lucasandersen.ai. Break-even price assumes the supplied tax bracket persists through the hold horizon. §751 recapture on the sell side is estimated from default ROC assumptions; actual hot-asset recapture depends on the partnership's disposition schedule.

Maintained by Lucas Andersen, MS Finance, with direct positions in major midstream MLPs. Methodology auditable at lucasandersen.ai/methodology.

Parameters (JSON Schema)

units (required)
ticker (required)
years_held (optional): Years the position has been held (default 10)
tax_bracket (optional)
purchase_price (optional)
years_to_project (optional): Years to project the hold scenario (default 10)
community_property (optional)
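The break-even logic the description states, the unit price above which selling today out-earns holding through inheritance, reduces to a closed-form solve once the hold-to-death value is projected. The sketch below is hypothetical (the real tool derives hold_to_death_value from its own projection; here it is taken as an input), but the algebra follows directly from the tax treatment described: selling triggers §751 ordinary tax plus LTCG (with §1411 NIIT folded into the rate), while §1014 makes the hold scenario tax-free.

```python
def break_even_sell_price(units, adjusted_basis, sec751_ordinary,
                          hold_to_death_value,
                          ltcg_rate=0.238, ordinary_rate=0.32):
    """Hypothetical break-even solve.

    after_tax(P) = P*units - sec751_ordinary*ordinary_rate
                   - (P*units - adjusted_basis - sec751_ordinary)*ltcg_rate
    Setting after_tax(P) = hold_to_death_value and solving for P gives
    the unit price at which selling today matches holding to death."""
    numerator = (hold_to_death_value
                 + sec751_ordinary * ordinary_rate
                 - (adjusted_basis + sec751_ordinary) * ltcg_rate)
    return numerator / (units * (1.0 - ltcg_rate))
```

With 100 units, $1,000 of basis, $2,000 of §751 recapture, and a $10,000 projected inheritance value (at 20% LTCG / 30% ordinary for round numbers), the break-even price is $125: selling 100 units at $125 yields $12,500 less $2,500 of tax, exactly matching the hold scenario.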
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the tool's behavioral outcome (returns break-even sell price) and mentions tax implications, but does not cover error handling, computational assumptions, or performance characteristics. It adds some context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific tax codes and output details. Every sentence adds value without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (tax calculations with 7 parameters) and lack of annotations/output schema, the description is moderately complete. It explains the tax logic and output but does not cover all parameters or potential edge cases. It meets minimum viability but has gaps in fully guiding usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (29%), with only 2 of 7 parameters described. The description compensates by explaining the core tax logic (e.g., §751 recapture, §1014 step-up) which informs parameter usage, but does not detail individual parameters like tax_bracket or community_property. It adds meaningful context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific purpose: comparing selling vs holding an MLP position with detailed tax implications (§751 recapture, §731 capital gain, §1014 step-up) and explicitly mentions it returns the break-even sell price. It distinguishes from siblings by focusing on tax comparison rather than basis computation, estate planning, or general info/projection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (tax planning for MLP investments) and mentions specific tax codes, but does not explicitly state when to use this tool versus alternatives like mlp_estate_planning or mlp_projection. It provides clear intent but lacks explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

