OilPriceAPI

Official, by OilpriceAPI

Server Quality Checklist

Profile completion: 50%

A complete profile improves this server's visibility in search results.
  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v2.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 14 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It adds that the tool 'Returns current inventory levels with changes,' which hints at the output structure. However, it omits details about data freshness, potential rate limits, or explicit read-only safety guarantees that would be helpful without annotation coverage.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three well-structured sentences with zero waste: purpose declaration, usage trigger, and return value description. Information is front-loaded and immediately actionable.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter tool with complete schema coverage and no output schema, the description adequately covers the tool's function, usage context, and return value. It could enhance completeness by describing the return structure in more detail, but given the low complexity, it is sufficient.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is appropriately met. The description mentions Cushing and SPR in the first sentence, reinforcing the parameter semantics, but does not add syntax details, validation rules, or examples beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific resource (oil storage/inventory levels) and entities (Cushing, SPR) using the verb 'Get'. While it doesn't explicitly name siblings to differentiate, the resource type clearly distinguishes it from price, futures, and production tools in the sibling list.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit positive guidance ('Use when the user asks about oil inventories...') covering the key use cases. Lacks explicit negative guidance or named alternatives, but the when-to-use context is clear and specific.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    No annotations are provided, so the description carries the full burden. It adds value by disclosing the return structure ('totals and regional breakdown') but omits other behavioral traits such as rate limits, data freshness, and geographic coverage scope.

    Conciseness: 5/5

    Three sentences with distinct purposes: capability definition, usage trigger, and output description. No redundancy; information is front-loaded and efficiently structured.

    Completeness: 4/5

    Given zero parameters and no output schema, the description adequately covers purpose, invocation conditions, and return value structure. It could be strengthened by specifying geographic and temporal data coverage.

    Parameters: 4/5

    Zero parameters are present; the rubric assigns a baseline of 4 to parameter-less tools, and schema coverage is trivially complete.

    Purpose: 4/5

    States the specific verb 'Get' and resource 'drilling intelligence data' with concrete details (active wells, permits issued, completions by region). Implicitly differentiates from the sibling opa_get_rig_counts by specifying well/permit data, though it lacks an explicit sibling comparison.

    Usage Guidelines: 4/5

    Provides explicit positive triggers ('Use when the user asks about drilling activity, well permits, or upstream operations'). Lacks explicit 'when not to use' guidance or named alternatives, but a clear invocation context is provided.

  • Behavior: 3/5

    No annotations are provided, so the description carries the full burden. It compensates by describing the return structure ('Returns a table of contract months...plus market structure analysis'), which is valuable given the missing output schema, but it lacks operational details such as rate limits, authentication requirements, or an explicit read-only safety confirmation.

    Conciseness: 5/5

    Three efficiently structured sentences: purpose, usage trigger, and return value. Zero waste, front-loaded with an action verb, and an appropriate length for the tool's complexity.

    Completeness: 4/5

    Given a single optional parameter and no output schema, the description adequately covers the essential aspects: functionality, usage context, and return format. Minor gap: it does not mention the default value (BZ) or explicitly reference the contract parameter in the description text, though the schema covers this.

    Parameters: 3/5

    Schema coverage is 100% with clear enum descriptions ('BZ = Brent crude, CL = WTI crude'). The description implies the scope ('all contract months') but does not add syntax or semantic details beyond the schema. The baseline of 3 is appropriate when the schema is self-documenting.

    Purpose: 5/5

    Specific verb 'Get' + resource 'futures forward curve' + scope 'full...across all contract months'. Clearly distinguishes from siblings like opa_get_futures (likely spot/single contract) and opa_get_history via the forward curve/term structure focus.

    Usage Guidelines: 4/5

    Explicit when-to-use guidance: 'Use when the user asks about the forward curve, contango/backwardation, or term structure.' Lacks explicit when-not-to-use guidance or named alternatives, but the specific domain language provides clear contextual boundaries.

  • Behavior: 3/5

    With no annotations provided, the description carries the full disclosure burden. It adds value by stating data freshness ('latest') and output format ('Returns a table'), which helps an agent predict the response shape. However, it lacks other behavioral traits: no mention of the default behavior when unfiltered, update frequency, rate limits, or whether data is real-time versus delayed.

    Conciseness: 5/5

    Four well-structured sentences with zero waste: purpose, usage trigger, parameters, return value. Front-loaded with a specific action, domain terminology placed where needed for sibling differentiation, and the output format specified at the end. Appropriate length for the tool's complexity.

    Completeness: 4/5

    For a simple two-parameter retrieval tool without an output schema or annotations, the description adequately covers the essential gaps: purpose, usage triggers, parameter examples, and return type. It is missing only minor details, such as explicit confirmation that both filters are optional (though this is implied by 'Can filter') or an explanation of the fuel type acronyms.

    Parameters: 3/5

    The schema has 100% description coverage with examples (SINGAPORE, VLSFO, etc.), establishing a baseline of 3. The description repeats these same examples without adding supplementary semantic context (e.g., explaining that VLSFO = Very Low Sulfur Fuel Oil, or noting that port names must be uppercase). It does minimally reinforce optionality via the 'Can filter by' phrasing.

    Purpose: 5/5

    Excellent specificity: verb 'Get' + resource 'marine fuel (bunker) prices' + scope 'major shipping ports'. Explicitly distinguishes from siblings like opa_get_price (generic) and opa_get_diesel_by_state (automotive vs marine) by specifying 'marine fuel', 'bunker', and shipping-specific fuel codes (VLSFO, MGO, IFO380).

    Usage Guidelines: 4/5

    An explicit 'Use when...' sentence listing specific trigger terms (bunker fuel, VLSFO, MGO, IFO380, shipping fuel costs) provides clear positive guidance. While it lacks explicit 'when not to use' guidance or named alternatives, the domain-specific terminology effectively differentiates it from sibling tools like opa_get_futures or opa_get_diesel_by_state.

  • Behavior: 3/5

    No annotations are provided, so the description carries the full burden. It adds critical context with 'Requires a paid plan with energy intelligence access' (an auth/pricing constraint). However, it omits operational details such as forecast horizons (how far out), update frequency, data volume/structure, and error handling when access is denied.

    Conciseness: 5/5

    Four sentences, each earning its place: (1) core functionality and source, (2) usage trigger, (3) return value, (4) access requirements. No redundancy or tautology; information is front-loaded with the specific action and source.

    Completeness: 3/5

    Given that no output schema exists and no annotations are present, the description provides conceptual coverage (what it returns generally) but lacks structural details about the forecast data format, commodities covered, or time granularity. Sufficient for tool selection but incomplete for result handling.

    Parameters: 4/5

    The input schema has zero parameters, which per the calibration guidelines establishes a baseline of 4. The description correctly does not fabricate parameters, and the absence of parameters is implicitly clear from its focus on retrieval without filtering.

    Purpose: 5/5

    The description explicitly states 'Get energy price forecasts from EIA Short-Term Energy Outlook (STEO)': a specific verb, resource, and authoritative source. It clearly distinguishes itself from siblings like opa_get_history (past data) and opa_get_futures (market contracts) by emphasizing analytical forecasts and outlooks.

    Usage Guidelines: 4/5

    Provides an explicit trigger: 'Use when the user asks about price predictions, outlooks, or where oil/gas prices are heading.' This clearly signals when to select it over alternatives. Lacks explicit 'when not to use' guidance or named sibling alternatives, though the forecast/prediction framing implicitly contrasts with the current-price tools.

  • Behavior: 3/5

    No annotations are provided, so the description carries the full disclosure burden. It clarifies that the tool returns 'latest' (current) data and notes the 'front-month' limitation, but omits operational details such as rate limits, data freshness guarantees, and error behaviors. Adequate for a simple read operation but not comprehensive.

    Conciseness: 5/5

    Four efficient sentences: purpose, usage trigger, supported contracts, and the sibling alternative. Zero redundancy; information is front-loaded with the core action, followed by disambiguation logic.

    Completeness: 4/5

    Complete for a single-parameter price retrieval tool; the sibling distinction handles the primary scope ambiguity. Minor gap: no indication of the return value structure (scalar, object, units), though this is partially implied by 'price'.

    Parameters: 3/5

    Schema coverage is 100% with complete enum descriptions ('BZ = Brent crude, CL = WTI crude'). The description reinforces these mappings ('Supports Brent (BZ) and WTI (CL) futures') but adds no syntax guidance beyond the schema. The baseline of 3 is appropriate for high-coverage schemas.

    Purpose: 5/5

    Excellent specificity: verb 'Get' + resource 'front-month futures contract price' + domain 'crude oil'. The term 'front-month' precisely scopes the data to the nearest-expiry contract, distinguishing it from the sibling curve tool.

    Usage Guidelines: 5/5

    An explicit trigger condition ('Use when the user asks about futures, forward prices, or contract prices') plus a clear alternative path naming the specific sibling tool for 'full forward curve' scenarios. This directly prevents confusion between opa_get_futures and opa_get_futures_curve.

  • Behavior: 4/5

    No annotations are provided, so the description carries the full burden. It discloses the data source (AAA), geographic coverage (50 states plus DC), and input format flexibility. It is missing error handling behavior and currency units, but covers the essential behavioral traits for a read-only data tool.

    Conciseness: 5/5

    Four sentences with zero waste: purpose (sentence 1), usage trigger (sentence 2), input format (sentence 3), source and coverage (sentence 4). Front-loaded and efficiently structured.

    Completeness: 5/5

    For a single-parameter retrieval tool without an output schema, the description adequately covers the return value ('Returns the AAA-sourced state average diesel price'), data provenance, and geographic scope. No gaps given the tool's simplicity.

    Parameters: 3/5

    Schema coverage is 100%, providing complete parameter documentation. The description reinforces the input options ('California' or 'CA') but adds no semantic meaning beyond the schema's examples. The baseline of 3 is appropriate when the schema does the heavy lifting.

    Purpose: 5/5

    The description opens with a specific verb and resource: 'Get the current average retail diesel price for a US state.' It clearly distinguishes itself from siblings like opa_get_marine_fuels and opa_get_futures by specifying the 'retail diesel' and 'US state' scope.

    Usage Guidelines: 4/5

    Provides explicit when-to-use guidance: 'Use when the user asks about diesel prices in a specific state, diesel fuel costs by state, or state-level fuel prices.' Lacks explicit when-not-to-use guidance or named alternatives, but the specific scope implies a distinction from broader tools like opa_get_price or opa_compare_prices.

  • Behavior: 4/5

    No annotations are provided, so the description carries the full disclosure burden. It successfully adds critical operational constraints ('Requires a paid plan with energy intelligence access') and the return structure ('Returns country-level production figures'), neither of which is inferable from the schema. It could be enhanced with data frequency or rate limits.

    Conciseness: 5/5

    Four sentences, each earning its place: purpose, usage trigger, return format, and access requirements. Information is front-loaded with no redundancy or filler.

    Completeness: 4/5

    Adequately covers the zero-parameter tool's scope despite the lack of an output schema, specifying country-level granularity and access restrictions. Minor gap: data update frequency and the specific member countries covered.

    Parameters: 4/5

    The input schema contains zero parameters (an empty object), triggering the rubric's baseline score of 4. No parameter clarification is needed or provided.

    Purpose: 5/5

    The description opens with a specific verb-resource pair ('Get the latest OPEC oil production data') and clearly distinguishes itself from siblings by specifying the OPEC/OPEC+ scope versus the general price, futures, or drilling data available in other tools.

    Usage Guidelines: 4/5

    Provides explicit positive guidance ('Use when the user asks about OPEC output, production quotas, supply cuts, or OPEC+ compliance') covering multiple query variants. Lacks explicit 'when not to use' guidance or sibling alternatives, but the specificity effectively guides selection.

  • Behavior: 4/5

    With no annotations, the description carries the full burden. It discloses the return structure, 'grouped by category (oil, gas, coal, refined products, metals, forex) with 24h changes', which explains what the tool produces beyond the schema. It lacks data freshness and source details, but covers the essential behavioral output.

    Conciseness: 5/5

    Five sentences, zero waste: front-loaded with purpose, followed by usage context, return behavior, a parameter hint, and the sibling alternative. Each sentence earns its place with no redundancy.

    Completeness: 4/5

    Adequately covers the single optional parameter and explains the return values (grouping and 24h changes) despite the lack of an output schema. It could be improved with data source or latency information, but is sufficient for a simple overview tool.

    Parameters: 3/5

    Schema coverage is 100% (the category parameter is fully documented with an enum and description). The description states 'Supports filtering by category,' which confirms the functionality but adds minimal semantic value beyond the schema. The baseline score is appropriate for a high-coverage schema.

    Purpose: 5/5

    Specific verb 'Get' + resource 'current prices for all tracked energy commodities' + scope 'in one call'. Explicitly distinguishes from the sibling opa_get_price: 'For a single commodity, use opa_get_price instead.' Clear and actionable.

    Usage Guidelines: 5/5

    Explicit when-to-use guidance: 'Use when the user wants a broad market snapshot or asks about overall energy prices.' Explicit named alternative: 'For a single commodity, use opa_get_price instead.' Provides clear decision criteria between broad and specific queries.

  • Behavior: 4/5

    With no annotations provided, the description carries the full burden. It effectively discloses the return values ('price with 24h changes, plus the spread') and input flexibility ('Accepts natural language or codes'). It lacks any mention of rate limits, data freshness, or whether the comparison is real-time or cached.

    Conciseness: 5/5

    Four sentences, each earning its place: purpose, usage trigger, return values, input format. Front-loaded with the core action and scope. No redundancy or filler.

    Completeness: 5/5

    Given that no output schema exists, the description adequately documents the return values (prices, 24h changes, spread calculation logic). With 100% schema coverage for the single parameter and clear sibling differentiation, the description is complete for this tool's complexity.

    Parameters: 4/5

    The schema has 100% coverage with good examples. The description adds valuable semantic context in noting that the parameter 'Accepts natural language or codes', clarifying that informal names (like 'brent') are valid inputs alongside formal codes and enhancing understanding beyond the schema alone.

    Purpose: 5/5

    The description states the specific action ('Compare current prices'), the resource (commodities), and the scope ('between 2-5 commodities side by side'). The mention of 'side by side' and 'spread' clearly distinguishes this multi-commodity tool from siblings like opa_get_price, which likely returns single prices.

    Usage Guidelines: 4/5

    Provides explicit when-to-use guidance with concrete examples ('Use when the user asks to compare commodities (e.g., 'Brent vs WTI', 'US gas vs EU gas')'). However, it lacks explicit 'when not to use' guidance or named alternatives (e.g., opa_get_price for single commodities), which prevents a 5.

  • Behavior: 4/5

    No annotations are provided, but the description compensates well by disclosing the return structure ('Returns high, low, average, change, and data point count') and period semantics ('day (24h), week (7d)...'). It lacks operational details like data freshness or rate limits, but covers the core behavioral traits.

    Conciseness: 5/5

    Four efficient sentences covering purpose, usage, return values, and parameter semantics. Zero redundancy; information is front-loaded and logically ordered.

    Completeness: 5/5

    For a two-parameter tool without an output schema, the description adequately covers the return values (compensating for the missing output schema), parameter meanings, and usage context. Complete for the tool's complexity level.

    Parameters: 4/5

    The schema has 100% coverage (baseline 3), but the description adds valuable semantic context beyond the schema: it maps period values to specific durations (24h, 7d, 30d, 365d) and reinforces the commodity parameter's usage through context.

    Purpose: 5/5

    The description uses the specific verb 'Get' with the resource 'historical price data' and scope 'over a time period'. It clearly distinguishes itself from siblings like opa_get_price (current) and opa_get_forecasts (future) by emphasizing historical trends and time-based performance.

    Usage Guidelines: 4/5

    Provides explicit when-to-use guidance: 'Use when the user asks about price trends, historical prices, or how a commodity has performed over time.' Lacks explicit 'when not to use' guidance or named sibling alternatives, but clear context is provided.

  • Behavior: 4/5

    No annotations are provided, so the description carries the full burden. It discloses the specific return fields (oil rigs, gas rigs, total count, week-over-week change) and notes 'No parameters needed', which adequately covers behavior for a simple retrieval tool. It could mention data freshness or update frequency.

    Conciseness: 5/5

    Four sentences, front-loaded with the core action, each serving a distinct purpose: data source, usage trigger, return values, and parameter requirements. Zero redundancy and excellent information density.

    Completeness: 5/5

    Despite lacking an output schema, the description explicitly documents the return structure (the specific fields returned). Combined with usage guidance and parameter confirmation, this is complete for a simple retrieval tool with no inputs.

    Parameters: 4/5

    Zero parameters are required, and schema coverage is trivially 100%. The description explicitly states 'No parameters needed', confirming the schema structure, so the rubric's baseline of 4 for zero-parameter tools is achieved.

    Purpose: 5/5

    The description clearly states the specific action ('Get'), resource ('US oil and gas rig count data'), and authoritative source (Baker Hughes). It distinguishes itself from the sibling opa_get_drilling by specifying the exact data provider and return structure.

    Usage Guidelines: 4/5

    Provides explicit positive guidance ('Use when the user asks about drilling activity, rig counts, or oil field operations'). Lacks explicit named alternatives or negative constraints (e.g., 'do not use for historical trends; use opa_get_history instead'), but the context is clear.

  • Behavior: 4/5

    No annotations are provided, so the description carries the full burden. It successfully discloses the key behavioral traits: the real-time nature of the data, input format flexibility (natural language vs API codes), and output structure (price, currency, 24h change, timestamp). It is missing only operational details like caching or rate limits.

    Conciseness: 5/5

    Four well-structured sentences: purpose first, usage condition second, input semantics third, alternatives fourth. No redundant words; every sentence earns its place by conveying distinct information.

    Completeness: 5/5

    Excellent completeness for a single-parameter tool. The description compensates for the missing output schema by detailing the return values and explicitly differentiates itself from its siblings (particularly opa_market_overview and opa_get_history). With only one required parameter and a clear scope, no additional context is needed.

    Parameters: 4/5

    Schema coverage is 100% (one parameter fully documented), establishing a baseline of 3. The description adds valuable semantic context beyond the schema by emphasizing the dual input format support (natural language vs codes) with clear examples, helping agents formulate the commodity parameter.

    Purpose: 5/5

    The description opens with a specific verb ('Get') and resource ('current real-time spot price of an energy commodity'), explicitly scopes to single commodities, and distinguishes itself from siblings like opa_market_overview by noting it handles a 'single commodity' rather than several.

    Usage Guidelines: 5/5

    Explicitly states when to use the tool ('when the user asks about a single commodity's current price') and names two specific alternatives for other use cases ('For multiple commodities at once, use opa_market_overview. For price trends, use opa_get_history').

  • Behavior: 4/5

    With no annotations provided, the description carries the full burden. It successfully discloses 'fetched live from the API' (data freshness) and 'grouped by category' (return structure). However, it lacks any mention of read-only safety, rate limits, or pagination behavior, all of which would be helpful given the absence of annotations.

    Conciseness: 5/5

    Four sentences covering purpose, usage conditions, return structure, and parameter requirements. Every sentence earns its place with zero redundancy; information is front-loaded with the action verb and follows logically.

    Completeness: 5/5

    For a simple discovery tool with no parameters and no output schema, the description adequately compensates by describing the return value structure ('full catalog... grouped by category'). The usage guidelines covering error recovery scenarios provide sufficient operational context.

    Parameters: 4/5

    The tool has zero parameters, which establishes a baseline of 4. The description confirms this with 'No parameters needed,' providing explicit validation that the empty schema is intentional.

    Purpose: 5/5

    The description uses the specific verb 'List' with the clear resource 'available commodities' and scope 'that can be queried for prices.' It effectively distinguishes itself from sibling tools like opa_get_price by positioning this as a catalog/discovery tool rather than a data retrieval tool.

    Usage Guidelines: 5/5

    Provides explicit when-to-use scenarios: when users ask what commodities are available, what codes to use, or when encountering 'commodity not recognized' errors. This creates clear decision boundaries against sibling tools that require known commodity codes.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

oil-price-api MCP server

Copy to your README.md:

Score Badge

oil-price-api MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
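
The published weights are simple enough to sketch in code. Assuming per-dimension scores and the coherence score are all on the same 1-5 scale, a minimal illustration (function and variable names here are hypothetical, not Glama's implementation):

# Hypothetical sketch of the published scoring formula; not Glama's code.
TDQS_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    # Weighted 1-5 Tool Definition Quality Score for a single tool.
    return sum(scores[dim] * w for dim, w in TDQS_WEIGHTS.items())

def overall_score(tool_scores: list, coherence: float) -> float:
    # Definition quality is 60% mean + 40% minimum TDQS; the minimum term
    # is why a single poorly described tool pulls the whole score down.
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    # A (>=3.5), B (>=3.0), C (>=2.0), D (>=1.0), F otherwise.
    for cutoff, letter in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return letter
    return "F"

For example, the first tool scored above (Purpose 4, Usage Guidelines 4, Behavior 3, Parameters 3, Conciseness 5, Completeness 4) works out to a TDQS of 0.25×4 + 0.20×4 + 0.20×3 + 0.15×3 + 0.10×5 + 0.10×4 = 3.75.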

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/OilpriceAPI/oil-price-api'
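
The same lookup can be scripted. A minimal Python sketch using only the standard library (the response is assumed to be JSON; its exact fields are not documented here):

# Fetch this server's directory entry from the Glama MCP API.
import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/OilpriceAPI/oil-price-api"

with urllib.request.urlopen(URL) as resp:
    server = json.load(resp)

print(json.dumps(server, indent=2))  # inspect whatever metadata is returned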

If you have feedback or need assistance with the MCP directory API, please join our Discord server.