Glama

MLP Tax Computation

Server Details

MLP tax computation: basis erosion, §751 recapture, estate planning, sell-vs-hold.

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.9/5 across 6 of 6 tools scored.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but there is some overlap between k1_basis_compute and k1_basis_multi_year (both compute basis, with the latter extending to multi-year analysis) and between mlp_projection and mlp_sell_vs_hold (both involve projections and break-even analysis). The descriptions help clarify differences, but an agent might initially confuse these pairs.

Naming Consistency: 4/5

Tool names follow a consistent snake_case pattern with a clear prefix structure (k1_ for K-1 basis tools, mlp_ for MLP-specific tools). However, there is a minor deviation with mlp_info, which uses a generic noun instead of a verb_noun pattern like the others, slightly reducing consistency.

Tool Count: 5/5

With 6 tools, the count is well-scoped for the server's purpose of MLP tax computation. Each tool addresses a specific aspect of the domain, such as basis calculation, estate planning, and projections, without being overly sparse or bloated.

Completeness: 5/5

The tool set provides comprehensive coverage for MLP tax computation, including basis calculations (single-year and multi-year), estate planning, reference data, projections, and sell vs. hold comparisons. It covers key IRS sections and workflows without obvious gaps, ensuring agents can handle typical scenarios.

Available Tools

6 tools
k1_basis_compute: A

Compute adjusted partner basis from a single year of Schedule K-1 data using the IRS Partner's Basis Worksheet (Lines 1-14). Returns the ending adjusted basis, each worksheet line, any §731 gain (if distributions exceeded basis), and §704(d) suspended losses. Accepts structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.).

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| box1 | Yes | K-1 Box 1: ordinary business income (loss) |
| box2 | No | K-1 Box 2: net rental income |
| box5 | No | K-1 Box 5: interest income |
| box11 | No | K-1 Box 11: §179 / other deductions |
| units | Yes | |
| box13w | No | K-1 Box 13W: §199A QBI amount |
| box19a | Yes | K-1 Box 19A: cash distributions |
| ticker | Yes | |
| prior_basis | Yes | Beginning-of-year adjusted basis in USD |
| tax_bracket | No | |
| liability_decrease | No | §752(b) liability decrease in USD |
| liability_increase | No | §752(a) liability increase in USD |
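The worksheet's ordering is the crux: income items and §752(a) liability increases raise basis first, cash distributions then reduce it (with any excess over basis becoming §731 gain), and losses are allowed last, limited by §704(d). A minimal Python sketch of that ordering, using the parameter names from the table above; this is an illustrative simplification, not the server's actual implementation:

```python
def k1_basis_single_year(prior_basis, box1, box19a, box2=0.0, box5=0.0,
                         box11=0.0, liability_increase=0.0,
                         liability_decrease=0.0):
    """Simplified sketch of the Partner's Basis Worksheet ordering
    (hypothetical helper; omits many real worksheet lines)."""
    # 1. Increases: income items and §752(a) deemed contributions.
    basis = (prior_basis + max(box1, 0.0) + max(box2, 0.0)
             + box5 + liability_increase)
    # 2. Distributions and §752(b) decreases reduce basis, never below zero;
    #    cash received in excess of basis is §731 capital gain.
    reductions = box19a + liability_decrease
    gain_731 = max(reductions - basis, 0.0)
    basis = max(basis - reductions, 0.0)
    # 3. Losses and deductions are allowed only up to remaining basis;
    #    the excess is suspended under §704(d).
    losses = max(-box1, 0.0) + max(-box2, 0.0) + box11
    suspended_704d = max(losses - basis, 0.0)
    basis = max(basis - losses, 0.0)
    return {"ending_basis": basis, "gain_731": gain_731,
            "suspended_704d": suspended_704d}
```

For example, a $10,000 beginning basis with $1,200 of Box 1 income and $3,000 of Box 19A distributions ends the year at $8,200 with no §731 gain.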
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly describes what the tool returns (ending adjusted basis, worksheet lines, §731 gain, §704(d) suspended losses), which is valuable. However, it doesn't mention important behavioral aspects like whether this is a read-only calculation vs. a write operation, error handling, performance characteristics, or any rate limits. The description adds some behavioral context but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded and efficient. The first sentence states the complete purpose, the second describes the return values, and the third explains the parameter context. Every sentence earns its place with zero wasted words, and the structure moves logically from purpose to outputs to inputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex financial calculation tool with 12 parameters, no annotations, and no output schema, the description is adequate but incomplete. It covers the purpose, returns, and parameter context well, but doesn't address important contextual elements like prerequisites (tax knowledge needed), limitations (single-year only), error conditions, or example scenarios. Given the complexity and lack of structured metadata, more guidance would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 75% schema description coverage, the baseline would be 3, but the description adds meaningful context beyond the schema. It explains that the tool 'accepts structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.)', providing semantic framing for the parameters. This helps the agent understand that these aren't arbitrary numbers but specific IRS form values. However, it doesn't explain the relationship between parameters or provide examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compute adjusted partner basis'), resource ('from a single year of Schedule K-1 data'), and method ('using the IRS Partner's Basis Worksheet (Lines 1-14)'). It distinguishes this tool from sibling tools like 'k1_basis_multi_year' by specifying 'single year' and from other MLP tools by focusing on basis calculation rather than estate planning, projections, or sell/hold analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for computing adjusted partner basis from K-1 data using the IRS worksheet. It implies this is for single-year calculations (vs. 'k1_basis_multi_year'), but doesn't explicitly state when NOT to use it or name specific alternatives. The context is sufficient but lacks explicit exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

k1_basis_multi_year: A

Compute a running adjusted partner basis across multiple years of K-1 data. Returns year-by-year basis erosion, accumulated §751 recapture, projected zero-basis year, §1014 step-up value if death today, and the critical basis gap — the difference between what a broker typically reports (original cost) and the true IRS-adjusted basis. Uses IRC §705, §731, §751, §1014.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| units | Yes | |
| ticker | Yes | |
| k1_years | Yes | Array of annual K-1 data, one per year held (max 50) |
| tax_bracket | No | |
| purchase_year | No | |
| purchase_price | Yes | |
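The "basis gap" the description highlights (broker-reported original cost versus true IRS-adjusted basis) falls out of a simple running loop over the k1_years array. A hedged sketch, reduced to income and cash distributions only; the per-year field names are assumptions, and real §705 adjustments involve more lines:

```python
def basis_erosion(purchase_cost, k1_years):
    """Run adjusted basis forward year by year (illustrative only:
    real adjustments include liabilities, §704(d) limits, etc.)."""
    basis = purchase_cost
    history = []
    for year in k1_years:
        basis += year.get("box1", 0.0)  # income increases basis
        # distributions erode basis, never below zero
        basis = max(basis - year.get("box19a", 0.0), 0.0)
        history.append(round(basis, 2))
    # what a broker typically reports (original cost) vs. the true basis
    basis_gap = purchase_cost - basis
    return history, basis_gap
```

Two years of $200 distributions against modest income turn a $1,000 cost into a $690 basis, a $310 gap a broker statement would not show.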
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'returns' specific outputs (e.g., basis erosion, §1014 step-up value), which implies a read-only operation, but does not clarify computational complexity, error handling, or data persistence. The description adds some context on IRS code usage but lacks details on performance or limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently lists key outputs in a single sentence. It includes relevant legal references (IRC sections) that add value without unnecessary verbosity, though it could be slightly more structured for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, nested arrays, no output schema, and no annotations), the description is moderately complete. It outlines the tool's purpose and outputs but lacks details on input constraints (e.g., max 50 years in 'k1_years'), error conditions, or example usage, leaving gaps for a tool with significant parameter requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is low (17%), but the description compensates by explaining the purpose of parameters indirectly (e.g., 'K-1 data' relates to 'k1_years', 'partner basis' relates to 'units' and 'purchase_price'). It does not detail each parameter's role, but the context provided helps infer semantics beyond the minimal schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('compute a running adjusted partner basis across multiple years') and resource ('K-1 data'), distinguishing it from siblings like 'mlp_info' or 'mlp_projection' by focusing on historical basis calculation rather than general information or future projections.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for analyzing K-1 basis over multiple years but does not explicitly state when to use this tool versus alternatives like 'k1_basis_compute' (which might handle single-year calculations) or other MLP-related tools. It provides some context but lacks explicit guidance on exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_estate_planning: A

Compute §1014 stepped-up basis and estate planning analysis for one or more MLP positions. Returns total deferred tax eliminated at death, §751 ordinary income recapture eliminated, per-beneficiary inheritance split, community-property double step-up (if applicable), and hold-vs-sell-today dollar advantage. Uses IRC §1014(a), §1014(b)(6), §751(a).

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| positions | Yes | Array of MLP positions to analyze (max 20) |
| tax_bracket | No | |
| beneficiaries | No | Number of beneficiaries (default 1, max 20) |
| community_property | No | Whether positions are in a community property state (doubles step-up) |
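The analysis reduces to a two-branch comparison: tax owed on a sale today versus zero deferred tax after a §1014(a) step-up to fair market value at death. A simplified sketch; the rates and the flat split between §751 ordinary income and capital gain are illustrative assumptions, not the server's exact model:

```python
def estate_step_up(fmv, adjusted_basis, recapture_751,
                   ordinary_rate=0.32, cap_gains_rate=0.20):
    """Compare selling today against dying with a §1014(a) step-up
    (illustrative flat-rate model; state tax and NIIT ignored)."""
    # Selling today: §751 recapture is ordinary income; the rest of the
    # gain over adjusted basis is capital gain.
    cap_gain = max(fmv - adjusted_basis - recapture_751, 0.0)
    tax_if_sold = recapture_751 * ordinary_rate + cap_gain * cap_gains_rate
    # Holding to death: heirs take basis equal to FMV, so the deferred
    # gain (and the §751 recapture with it) is never taxed.
    return {"tax_if_sold_today": tax_if_sold,
            "tax_if_inherited": 0.0,
            "hold_advantage": tax_if_sold}
```

On a $100,000 position with a $20,000 basis and $30,000 of accumulated recapture, the step-up eliminates roughly $19,600 of tax under these assumed rates.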
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the specific IRC sections used (§1014, §751) and lists the types of analysis returned, which adds useful context about the tool's scope and legal basis. However, it doesn't disclose important behavioral traits like whether this is a read-only calculation, if it requires authentication, rate limits, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and scope, the second enumerates the specific analyses returned. Every phrase adds value, with no redundant or unnecessary information. It's appropriately sized for a complex financial analysis tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tax analysis tool with 4 parameters, no annotations, and no output schema, the description provides good purpose clarity and lists return analyses but leaves significant gaps. It doesn't explain the output format, error conditions, or behavioral constraints. The description is complete enough to understand what the tool does but insufficient for an agent to fully anticipate how to use it effectively without trial and error.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't explicitly mention any parameters, but with 75% schema description coverage (3 of 4 parameters have descriptions), the baseline would be 3. However, the description's mention of 'one or more MLP positions' and 'community-property double step-up' provides semantic context that maps to the 'positions' and 'community_property' parameters, adding value beyond the schema. Since no parameters use enums, no enum explanation is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute §1014 stepped-up basis and estate planning analysis') and the resource ('for one or more MLP positions'). It distinguishes from siblings by focusing on estate planning rather than basis computation (k1_basis_*), general info (mlp_info), projections (mlp_projection), or sell/hold analysis (mlp_sell_vs_hold).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for estate planning analysis of MLP positions, but doesn't explicitly state when to use this tool versus alternatives like mlp_sell_vs_hold or k1_basis_compute. No exclusions or prerequisites are mentioned, leaving the agent to infer appropriate contexts from the tool's name and description.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_info: A

Get reference data for a specific MLP: current distribution rate, distribution growth CAGR, default return-of-capital percentage, K-1 entity count, and operating state count. Useful for understanding an MLP's complexity and expected tax characteristics.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| ticker | Yes | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes what data is returned but doesn't disclose behavioral traits like error handling, rate limits, authentication needs, or whether this is a read-only operation. The description adds value by specifying the data fields but lacks operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by specific data points and a usage note. Both sentences earn their place: the first defines the action and outputs, the second provides context. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a simple input schema, the description adequately covers the purpose and data fields. However, it lacks details on return format, error cases, or operational constraints, which would be helpful for a tool with no structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage and only one parameter (ticker), the description compensates well by explaining what the tool does with the ticker: 'Get reference data for a specific MLP.' It doesn't detail the ticker format beyond the enum in the schema, but for a single parameter tool, this provides adequate semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get reference data for a specific MLP' followed by specific data points (distribution rate, growth CAGR, etc.). It distinguishes from siblings by focusing on reference data rather than computations or projections, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context: 'Useful for understanding an MLP's complexity and expected tax characteristics.' This suggests when to use it, but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_projection: A

Compute a multi-year tax projection for an MLP (Master Limited Partnership) position. Returns year-by-year basis erosion, §751 accumulation, annual tax liability, terminal FMV, §1014 step-up value at death, and the break-even sell price. Uses the IRS Partner's Basis Worksheet methodology (IRC §705, §731, §751, §1014, §199A). Supported tickers: EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| units | Yes | Number of MLP units held |
| years | No | Projection horizon in years (1-50, default 20) |
| ticker | Yes | MLP ticker symbol |
| tax_bracket | No | Federal marginal rate as decimal, e.g. 0.32 (default 0.32) |
| purchase_price | No | Purchase price per unit in USD (optional; defaults to a reasonable estimate) |
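Read together, the schema implies a call payload like the following (all values are made-up illustrations; EPD is one of the tickers the description lists as supported):

```python
# Hypothetical arguments for an mlp_projection call, matching the
# parameter table above. All values are illustrative.
mlp_projection_args = {
    "units": 500,             # required: number of MLP units held
    "ticker": "EPD",          # required: one of the supported tickers
    "years": 20,              # optional: projection horizon (default 20)
    "tax_bracket": 0.32,      # optional: marginal rate as a decimal
    "purchase_price": 26.50,  # optional: per-unit cost in USD (assumed)
}
```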
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the methodology (IRS Partner's Basis Worksheet) and lists outputs, which adds behavioral context. However, it doesn't mention computational limits, error handling, or data sources, leaving gaps for a complex financial tool. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by output details, the methodology, and the supported tickers. Every element earns its place by clarifying scope and constraints without redundancy, making it concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (tax projections with 5 parameters) and no annotations or output schema, the description is adequate but incomplete. It covers purpose and outputs but lacks details on behavioral traits (e.g., rate limits, assumptions) and doesn't fully compensate for the missing output schema, leaving gaps in contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds no additional parameter semantics beyond implying ticker support and projection scope. This meets the baseline of 3, as the schema handles the heavy lifting without needing description compensation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute a multi-year tax projection') and resource ('for an MLP position'), distinguishing it from siblings like 'mlp_info' (information) or 'mlp_sell_vs_hold' (comparison). It explicitly lists the detailed outputs (basis erosion, §751 accumulation, etc.), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying it's for MLP tax projections using IRS methodology and listing supported tickers, which helps identify when to use it. However, it doesn't explicitly mention when not to use it or name alternatives (e.g., 'k1_basis_compute' or 'mlp_sell_vs_hold'), leaving some ambiguity about sibling tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_sell_vs_hold: A

Compare selling an MLP position now (triggering §751 recapture + §731 capital gain) versus holding and letting heirs inherit (§1014 step-up eliminates all deferred tax). Returns the break-even sell price: the unit price above which selling becomes better than holding. Uses IRC §731, §751, §1014, §1(h), §199A.

Parameters (JSON Schema)

| Name | Required | Description |
| --- | --- | --- |
| units | Yes | |
| ticker | Yes | |
| years_held | No | Years the position has been held (default 10) |
| tax_bracket | No | |
| purchase_price | No | |
| years_to_project | No | Years to project the hold scenario (default 10) |
| community_property | No | |
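Under a flat-rate model, the break-even price has a closed form: set after-tax sale proceeds equal to the projected value of the hold scenario and solve for the unit price. A sketch of that algebra; the rates and the single-equation model are assumptions, not necessarily the server's method:

```python
def break_even_sell_price(hold_value, units, adjusted_basis, recapture_751,
                          ordinary_rate=0.32, cap_gains_rate=0.20):
    """Solve units*P - tax(P) = hold_value for the unit price P, where
    tax(P) = recapture_751 * ordinary_rate
           + (units*P - adjusted_basis - recapture_751) * cap_gains_rate."""
    numerator = (hold_value
                 + recapture_751 * ordinary_rate
                 - (adjusted_basis + recapture_751) * cap_gains_rate)
    return numerator / (units * (1.0 - cap_gains_rate))
```

Plugging the result back in, after-tax proceeds at the break-even price match the hold-scenario value exactly; any higher price tips the comparison toward selling.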
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the tool's behavioral outcome (returns break-even sell price) and mentions tax implications, but does not cover error handling, computational assumptions, or performance characteristics. It adds some context but lacks comprehensive behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific tax codes and output details. Every sentence adds value without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (tax calculations with 7 parameters) and lack of annotations/output schema, the description is moderately complete. It explains the tax logic and output but does not cover all parameters or potential edge cases. It meets minimum viability but has gaps in fully guiding usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (29%), with only 2 of 7 parameters described. The description compensates by explaining the core tax logic (e.g., §751 recapture, §1014 step-up) which informs parameter usage, but does not detail individual parameters like tax_bracket or community_property. It adds meaningful context beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific purpose: comparing selling vs holding an MLP position with detailed tax implications (§751 recapture, §731 capital gain, §1014 step-up) and explicitly mentions it returns the break-even sell price. It distinguishes from siblings by focusing on tax comparison rather than basis computation, estate planning, or general info/projection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (tax planning for MLP investments) and mentions specific tax codes, but does not explicitly state when to use this tool versus alternatives like mlp_estate_planning or mlp_projection. It provides clear intent but lacks explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

