
MLP Tax Computation Engine

Server Details

Deterministic MLP tax engine with IRS citations. 6 tools: basis, §751, estate, projections.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4/5 across 6 of 6 tools scored.

Server Coherence (Grade: A)
Disambiguation: 4/5

Most tools have distinct purposes, such as k1_basis_compute for single-year basis, k1_basis_multi_year for multi-year basis, and mlp_estate_planning for estate analysis. However, mlp_projection and mlp_sell_vs_hold could be confused as both involve tax projections and break-even analyses, though mlp_projection is broader and mlp_sell_vs_hold is specifically comparative.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with clear prefixes (k1_ for K-1 basis tools and mlp_ for MLP-specific tools) and descriptive verbs like compute, get, and compare. This uniformity makes the tool set predictable and easy to navigate.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a tax computation engine focused on MLPs and K-1 basis. Each tool addresses a specific aspect of the domain, such as basis calculation, estate planning, and projections, without being overly sparse or bloated.

Completeness: 4/5

The tool set covers key areas like basis computation (single and multi-year), estate planning, reference data, and projections, providing a comprehensive surface for MLP tax analysis. A minor gap is the lack of tools for updating or deleting data, but this is reasonable given the computational focus.

Available Tools

6 tools
k1_basis_compute (Grade: A)

Compute adjusted partner basis from a single year of Schedule K-1 data using the IRS Partner's Basis Worksheet (Lines 1-14). Returns the ending adjusted basis, each worksheet line, any §731 gain (if distributions exceeded basis), and §704(d) suspended losses. Accepts structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.).

Parameters (JSON Schema)
- box1 (required): K-1 Box 1: ordinary business income (loss)
- box2 (optional): K-1 Box 2: net rental income
- box5 (optional): K-1 Box 5: interest income
- box11 (optional): K-1 Box 11: §179 / other deductions
- units (required)
- box13w (optional): K-1 Box 13W: §199A QBI amount
- box19a (required): K-1 Box 19A: cash distributions
- ticker (required)
- prior_basis (required): Beginning-of-year adjusted basis in USD
- tax_bracket (optional)
- liability_decrease (optional): §752(b) liability decrease in USD
- liability_increase (optional): §752(a) liability increase in USD
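The worksheet ordering the description refers to (income and liability increases first, then distributions, then the loss limitation) can be sketched roughly as follows. This is an illustrative reconstruction, not the server's actual implementation; the function name and return shape are hypothetical, and box13w (§199A QBI) is omitted since it does not adjust basis in this simplified model.

```python
def k1_basis_single_year(prior_basis, box1, box19a,
                         liability_increase=0.0, liability_decrease=0.0,
                         box2=0.0, box5=0.0, box11=0.0):
    """Simplified sketch of the IRS Partner's Basis Worksheet ordering."""
    # Step 1: increases — income items and §752(a) liability increases
    basis = prior_basis + liability_increase
    basis += max(box1, 0.0) + max(box2, 0.0) + box5

    # Step 2: distributions and §752(b) liability decreases
    reductions = box19a + liability_decrease
    gain_731 = max(reductions - basis, 0.0)    # §731(a) gain if distributions exceed basis
    basis = max(basis - reductions, 0.0)

    # Step 3: losses and deductions, limited to remaining basis (§704(d))
    losses = max(-box1, 0.0) + max(-box2, 0.0) + box11
    suspended_704d = max(losses - basis, 0.0)  # carried forward, not deducted
    basis = max(basis - losses, 0.0)

    return {"ending_basis": basis,
            "sec_731_gain": gain_731,
            "sec_704d_suspended": suspended_704d}
```

The ordering matters: because distributions are applied before losses, a year with both a large distribution and a loss can produce §731 gain and §704(d) suspension at the same time.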
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by specifying what the tool returns ('ending adjusted basis, each worksheet line, any §731 gain, and §704(d) suspended losses') and the input format ('structured K-1 box values'). However, it doesn't mention computational limitations, error handling, or authentication requirements that might be relevant for a tax calculation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first explains the computation purpose and returns, the second specifies input format. Every element adds value with zero waste, making it appropriately sized and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of tax basis calculations with 12 parameters and no output schema, the description does well by specifying what values are returned. However, for a tool with no annotations and significant computational complexity, additional context about calculation methodology limitations or edge cases would enhance completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context about parameter semantics by specifying that inputs should be 'structured K-1 box values (Box 1, Box 19A, Item K liabilities, etc.)' and mentioning the IRS worksheet context. With 75% schema description coverage, the schema already documents most parameters well, but the description provides valuable framing for how these parameters relate to K-1 forms and basis calculations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute adjusted partner basis'), the resource ('from a single year of Schedule K-1 data'), and the methodology ('using the IRS Partner's Basis Worksheet (Lines 1-14)'). It distinguishes from the sibling tool 'k1_basis_multi_year' by specifying 'single year' versus multi-year computation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('from a single year of Schedule K-1 data') and implicitly distinguishes it from 'k1_basis_multi_year' by specifying single-year versus multi-year. However, it doesn't explicitly state when NOT to use this tool or mention alternatives beyond the sibling distinction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

k1_basis_multi_year (Grade: A)

Compute a running adjusted partner basis across multiple years of K-1 data. Returns year-by-year basis erosion, accumulated §751 recapture, projected zero-basis year, §1014 step-up value if death today, and the critical basis gap — the difference between what a broker typically reports (original cost) and the true IRS-adjusted basis. Uses IRC §705, §731, §751, §1014.

Parameters (JSON Schema)
- units (required)
- ticker (required)
- k1_years (required): Array of annual K-1 data, one per year held (max 50)
- tax_bracket (optional)
- purchase_year (optional)
- purchase_price (required)
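The multi-year chaining and the "basis gap" the description highlights can be sketched as a simple running total. This is a simplified illustration, not the engine's implementation: only Box 1 income and Box 19A distributions are modeled, and the function name and per-year dict keys are hypothetical.

```python
def k1_basis_multi_year(purchase_price, units, k1_years):
    """Sketch: chain yearly basis adjustments across the holding period."""
    broker_cost = purchase_price * units       # what a broker typically reports
    basis = broker_cost
    history = []
    for year in k1_years:
        basis += year.get("box1", 0.0)         # income increases basis (§705)
        basis -= year.get("box19a", 0.0)       # cash distributions reduce it (§733)
        basis = max(basis, 0.0)                # distributions beyond basis → §731 gain
        history.append(basis)
    return {"ending_basis": basis,
            "basis_gap": broker_cost - basis,  # broker-reported cost vs. true adjusted basis
            "history": history}
```

The basis gap is the quantity the description calls "critical": brokers typically report the original cost, while the IRS-adjusted basis erodes each year distributions exceed allocated income.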
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the tool performs complex tax calculations using specific IRC sections (§705, §731, §751, §1014) and returns multiple computed values, which is helpful. However, it doesn't mention computational limitations (e.g., max 50 years from schema), error conditions, performance characteristics, or whether it's read-only vs. mutative.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the computation and outputs, the second cites relevant IRC sections. Every element adds value without redundancy, and it's front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 6 parameters, no annotations, and no output schema, the description provides good purpose clarity and parameter context but lacks details on behavioral traits (e.g., computational limits, error handling) and doesn't describe output structure. Given the complexity, it should do more to compensate for missing structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low (17%), with only k1_years having a description. The description compensates by explaining the tool's purpose involves K-1 data, purchase details, and tax calculations, which helps interpret parameters like ticker (MLP symbols), units, purchase_price, and k1_years. However, it doesn't explicitly define tax_bracket or purchase_year usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool computes 'running adjusted partner basis across multiple years of K-1 data' and lists specific outputs (basis erosion, §751 recapture, zero-basis year projection, §1014 step-up, basis gap). It distinguishes from siblings by focusing on multi-year basis computation rather than single-year calculations (k1_basis_compute) or other MLP-related functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through references to K-1 data, IRS-adjusted basis, and broker-reported cost differences, suggesting it's for tax/compliance analysis of MLP investments. However, it doesn't explicitly state when to use this tool versus alternatives like k1_basis_compute (single-year) or mlp_projection (future projections), nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_estate_planning (Grade: A)

Compute §1014 stepped-up basis and estate planning analysis for one or more MLP positions. Returns total deferred tax eliminated at death, §751 ordinary income recapture eliminated, per-beneficiary inheritance split, community-property double step-up (if applicable), and hold-vs-sell-today dollar advantage. Uses IRC §1014(a), §1014(b)(6), §751(a).

Parameters (JSON Schema)
- positions (required): Array of MLP positions to analyze (max 20)
- tax_bracket (optional)
- beneficiaries (optional): Number of beneficiaries (default 1, max 20)
- community_property (optional): Whether positions are in a community property state (doubles step-up)
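The step-up arithmetic can be illustrated with a minimal sketch, assuming illustrative tax rates and a hypothetical function name and output shape (the actual engine models more, e.g. the §1014(b)(6) community-property double step-up):

```python
def estate_step_up(fmv, adjusted_basis, recapture_751,
                   tax_bracket=0.32, cap_gains_rate=0.20, beneficiaries=1):
    """Sketch: the deferred tax a §1014 step-up to FMV eliminates at death."""
    total_gain = max(fmv - adjusted_basis, 0.0)
    ordinary = min(recapture_751, total_gain)  # §751(a) portion, taxed at ordinary rates
    capital = total_gain - ordinary            # remainder is capital gain
    tax_if_sold = ordinary * tax_bracket + capital * cap_gains_rate
    per_beneficiary = fmv / beneficiaries      # heirs inherit at stepped-up FMV basis
    return {"deferred_tax_eliminated": tax_if_sold,
            "per_beneficiary_value": per_beneficiary}
```

The key point the tool's outputs capture: at death, both the capital gain and the §751 ordinary-income recapture disappear, because heirs take a fresh FMV basis.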
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the tool's purpose (estate planning analysis) and legal basis (IRC sections), but lacks details on behavioral traits like rate limits, error handling, computational complexity, or what happens with invalid inputs. It mentions outputs but not how they're formatted or returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and scope, the second lists specific outputs and legal references. Every sentence adds value with zero wasted words, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tax analysis tool with 4 parameters, 75% schema coverage, and no output schema, the description is moderately complete. It outlines the analysis scope and outputs but lacks details on return format, error conditions, or limitations (e.g., the 20-position maximum mentioned in the schema). With no annotations, it should provide more behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 75%, so the description must compensate. It adds context by specifying the analysis covers 'one or more MLP positions' (aligning with the positions array) and mentions community-property considerations (relevant to the community_property parameter). However, it doesn't explain tax_bracket or beneficiaries parameters beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute §1014 stepped-up basis and estate planning analysis') and resource ('for one or more MLP positions'). It distinguishes from siblings by focusing on estate planning rather than basis computation, projections, or general info tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through its specialized focus on estate planning and tax elimination at death, suggesting it's for inheritance scenarios. However, it doesn't explicitly state when to use this tool versus alternatives like mlp_sell_vs_hold or k1_basis_compute, nor does it mention prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_info (Grade: A)

Get reference data for a specific MLP: current distribution rate, distribution growth CAGR, default return-of-capital percentage, K-1 entity count, and operating state count. Useful for understanding an MLP's complexity and expected tax characteristics.

Parameters (JSON Schema)
- ticker (required)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what data the tool returns (reference data points) and hints at its purpose (understanding complexity and tax characteristics), but lacks details on error handling, rate limits, authentication needs, or response format. This is adequate but leaves gaps for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the first sentence stating the core purpose and data points, followed by a second sentence explaining utility. Every sentence adds value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and low schema description coverage, the description is moderately complete. It covers the tool's purpose and data scope adequately but lacks details on return values, error conditions, or operational constraints, which are important for a tool with such minimal structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage, so the description must compensate. It adds meaning by specifying that the 'ticker' parameter is for a 'specific MLP' and implies it retrieves data for that entity. However, it does not explain the enum values or ticker format, leaving some parameter semantics undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get reference data') and resources ('for a specific MLP'), listing exact data points like distribution rate and K-1 entity count. It distinguishes from siblings by focusing on reference data retrieval rather than computations, projections, or planning tasks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Useful for understanding an MLP's complexity and expected tax characteristics'), which implicitly differentiates it from siblings focused on basis calculations, projections, or estate planning. However, it does not explicitly state when not to use it or name specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_projection (Grade: A)

Compute a multi-year tax projection for an MLP (Master Limited Partnership) position. Returns year-by-year basis erosion, §751 accumulation, annual tax liability, terminal FMV, §1014 step-up value at death, and the break-even sell price. Uses the IRS Partner's Basis Worksheet methodology (IRC §705, §731, §751, §1014, §199A). Supported tickers: EPD, ET, MPLX, WES, PAA, NRP, USAC, SUN.

Parameters (JSON Schema)
- units (required): Number of MLP units held
- years (optional): Projection horizon in years (1-50, default 20)
- ticker (required): MLP ticker symbol
- tax_bracket (optional): Federal marginal rate as decimal, e.g. 0.32 (default 0.32)
- purchase_price (optional): Purchase price per unit in USD (defaults to a reasonable estimate)
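The basis-erosion projection and zero-basis year can be sketched as a forward loop. The distribution parameters here (dist_rate, roc_pct, dist_growth) are hypothetical stand-ins for the reference data a tool like mlp_info would supply; this is not the engine's actual model.

```python
def project_basis(units, purchase_price, years=20,
                  dist_rate=0.07, roc_pct=0.80, dist_growth=0.03):
    """Sketch: project basis erosion from return-of-capital distributions."""
    basis = purchase_price * units
    dist = basis * dist_rate                   # first-year cash distribution
    zero_basis_year = None
    for year in range(1, years + 1):
        # The return-of-capital portion of each distribution reduces basis.
        basis = max(basis - dist * roc_pct, 0.0)
        if basis == 0.0 and zero_basis_year is None:
            zero_basis_year = year             # beyond this, ROC distributions → §731 gain
        dist *= 1 + dist_growth                # distributions grow at an assumed CAGR
    return {"ending_basis": basis, "zero_basis_year": zero_basis_year}
```

The zero-basis year is the headline output: once basis is exhausted, further return-of-capital distributions are taxed immediately as §731 gain instead of being deferred.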
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the methodology (IRS Partner's Basis Worksheet) and lists the outputs, which adds useful context about the tool's behavior. However, it lacks details on potential limitations (e.g., accuracy of defaults, assumptions in calculations), error handling, or performance aspects like rate limits, leaving some behavioral traits unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and outputs, and the second adds methodology and ticker support. Every sentence provides essential information without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (tax projections with multiple outputs) and lack of annotations or output schema, the description does a good job by listing outputs and methodology. However, it could be more complete by briefly mentioning the return format (e.g., structured data per year) or any assumptions in the projection, which would help an agent understand the result better.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain interactions between parameters like how purchase_price affects projections). With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Compute a multi-year tax projection') and resource ('for an MLP position'), distinguishing it from siblings like mlp_info (general info) or mlp_sell_vs_hold (comparison). It explicitly lists the detailed outputs (basis erosion, §751 accumulation, etc.) and methodology (IRS Partner's Basis Worksheet), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context by specifying supported tickers and the projection's focus on tax calculations, which implicitly suggests usage for MLP tax planning scenarios. However, it does not explicitly state when to use this tool versus alternatives like mlp_sell_vs_hold or k1_basis_compute, nor does it mention exclusions or prerequisites beyond the ticker list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

mlp_sell_vs_hold (Grade: A)

Compare selling an MLP position now (triggering §751 recapture + §731 capital gain) versus holding and letting heirs inherit (§1014 step-up eliminates all deferred tax). Returns the break-even sell price: the unit price above which selling becomes better than holding. Uses IRC §731, §751, §1014, §1(h), §199A.

Parameters (JSON Schema)
- units (required)
- ticker (required)
- years_held (optional): Years the position has been held (default 10)
- tax_bracket (optional)
- purchase_price (optional)
- years_to_project (optional): Years to project the hold scenario (default 10)
- community_property (optional)
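At its core, the comparison is after-tax proceeds from selling today versus the untaxed stepped-up value heirs would receive. A minimal sketch, assuming illustrative tax rates and hypothetical per-unit inputs (the real tool also projects the hold scenario forward and solves for the break-even price):

```python
def sell_vs_hold(units, price, adjusted_basis_per_unit, recapture_751_per_unit,
                 tax_bracket=0.32, cap_gains_rate=0.20):
    """Sketch: selling now (§751 + §731 tax) vs. holding to a §1014 step-up."""
    gain = max(price - adjusted_basis_per_unit, 0.0)
    ordinary = min(recapture_751_per_unit, gain)   # §751(a) recapture, ordinary rates
    capital = gain - ordinary                      # remainder taxed as capital gain (§1(h))
    tax = (ordinary * tax_bracket + capital * cap_gains_rate) * units
    sell_now = price * units - tax
    hold_to_death = price * units                  # step-up eliminates the deferred tax
    return {"sell_now_after_tax": sell_now,
            "hold_value_at_death": hold_to_death,
            "hold_advantage": hold_to_death - sell_now}
```

In this simplified static picture holding always wins by the amount of the deferred tax; the break-even price the tool reports comes from adding growth and distribution assumptions to the hold scenario.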
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. While it mentions tax implications and break-even calculation, it doesn't disclose critical behavioral traits: whether this is a read-only calculation or has side effects, what assumptions are made in the calculation, whether it requires authentication, or any rate limits. The description provides some context but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by the specific output and tax code references. Every sentence earns its place with zero wasted words, making it highly efficient for an AI agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of tax calculations, 7 parameters, no annotations, and no output schema, the description is incomplete. While it explains the tax concepts and purpose well, it doesn't describe the return format, calculation assumptions, or error conditions. For a tool with this complexity and no structured output documentation, more completeness would be expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With only 29% schema description coverage, the description compensates well by explaining the core tax concepts (§751, §731, §1014) that drive the calculation. It clarifies that parameters relate to MLP position details and tax scenarios, though it doesn't explicitly map individual parameters to these concepts. The description adds substantial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific purpose: comparing selling vs holding an MLP position with detailed tax implications (§751 recapture, §731 capital gain, §1014 step-up). It distinguishes from siblings by focusing on break-even analysis rather than basis computation, estate planning, or general info/projection tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (tax planning decisions for MLP investments) but doesn't explicitly state when to use this tool versus alternatives like mlp_estate_planning or mlp_projection. It provides the specific tax code sections involved, which helps identify appropriate scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
