
petropt/petro-mcp

by petropt

decline_sensitivity

Analyze how changes in decline curve parameters affect estimated ultimate recovery (EUR) to identify key variables for petroleum engineering decisions.

Instructions

Sensitivity analysis on decline parameters for tornado chart data.

Varies each parameter (qi, Di, b, economic limit) independently and computes EUR at low/high values. Returns data sorted by impact for tornado chart visualization.

Args:
  qi: Base initial rate (bbl/day or Mcf/day).
  di: Base initial decline rate (1/month).
  b: Base Arps b-factor.
  economic_limit: Minimum economic rate (default 5.0).
  parameter_ranges: Optional dict mapping parameter name to [low, high]. Defaults to +/-20% of base values.
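The procedure described above can be sketched as follows, assuming a hyperbolic Arps decline model and simple numerical integration. The function names, step sizes, and unit conversion are illustrative assumptions, not the server's actual implementation.

```python
import math

def arps_rate(qi, di, b, t):
    """Hyperbolic Arps rate at month t (falls back to exponential when b == 0)."""
    if b == 0:
        return qi * math.exp(-di * t)
    return qi / (1.0 + b * di * t) ** (1.0 / b)

def eur(qi, di, b, economic_limit=5.0, dt=0.25, max_months=1200):
    """Numerically accumulate production until the rate drops below the economic limit."""
    total, t = 0.0, 0.0
    while t < max_months:
        q = arps_rate(qi, di, b, t)
        if q < economic_limit:
            break
        # rate is per day; convert a dt-month step to volume (~30.4 days/month)
        total += q * 30.4 * dt
        t += dt
    return total

def decline_sensitivity(qi, di, b, economic_limit=5.0, parameter_ranges=None):
    """Vary each parameter independently between [low, high] and sort by EUR impact."""
    base = dict(qi=qi, di=di, b=b, economic_limit=economic_limit)
    ranges = parameter_ranges or {
        name: [value * 0.8, value * 1.2] for name, value in base.items()
    }
    rows = []
    for name, (low, high) in ranges.items():
        eur_low = eur(**{**base, name: low})
        eur_high = eur(**{**base, name: high})
        rows.append({"parameter": name, "eur_low": eur_low,
                     "eur_high": eur_high, "impact": abs(eur_high - eur_low)})
    # descending impact order is what a tornado chart plots top-to-bottom
    return sorted(rows, key=lambda r: r["impact"], reverse=True)
```

The default +/-20% bands mirror the documented behavior; passing `parameter_ranges` restricts the analysis to only the parameters named in the dict.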

Input Schema

Name              Required  Description                                  Default
qi                Yes       Base initial rate (bbl/day or Mcf/day)
di                Yes       Base initial decline rate (1/month)
b                 Yes       Base Arps b-factor
economic_limit    No        Minimum economic rate                        5.0
parameter_ranges  No        Dict mapping parameter name to [low, high]   +/-20% of base values
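A hypothetical invocation payload conforming to this schema might look like the following; all values are illustrative.

```json
{
  "qi": 500.0,
  "di": 0.10,
  "b": 0.8,
  "economic_limit": 5.0,
  "parameter_ranges": {
    "qi": [400.0, 600.0],
    "b": [0.5, 1.1]
  }
}
```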

Output Schema

Name    Required
result  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behavioral traits: it varies parameters independently, computes EUR at low/high values, returns sorted data for visualization, and uses default ranges. However, it doesn't mention computational limits, error handling, or output format details (though an output schema exists). The description adds useful context but misses some operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured, with a purpose statement, a method explanation, and parameter details in a clear 'Args:' section. It is appropriately sized and front-loaded with the core purpose; aside from slightly wordy parameter explanations, every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (sensitivity analysis with 5 parameters), no annotations, and an output schema present, the description is fairly complete. It covers purpose, method, and parameter semantics adequately. The output schema handles return values, so the description doesn't need to explain them. It could improve by mentioning error cases or performance hints, but it's sufficient for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It effectively explains all parameters: qi (base initial rate with units), di (base initial decline rate with units), b (base Arps b-factor), economic_limit (minimum economic rate with default), and parameter_ranges (optional dict with default behavior). This adds significant meaning beyond the bare schema, covering units, defaults, and optionality, though it could detail range constraints more.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Sensitivity analysis on decline parameters for tornado chart data.' It specifies the exact action (sensitivity analysis), the target (decline parameters), and the output format (tornado chart data). This distinguishes it from sibling tools like 'calculate_eur' or 'fit_decline' by focusing on sensitivity visualization rather than direct calculation or fitting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use the tool by mentioning 'tornado chart visualization' and the parameters involved, but it never explicitly states when to prefer it over alternatives. For example, it does not compare itself to 'price_sensitivity' or state prerequisites such as having fitted base decline parameters first. The context is clear, but explicit guidance on alternatives or exclusions is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/petropt/petro-mcp'
