
Energetica — Argentine Oil & Gas Data

Server Details

Argentine oil & gas data — 40+ curated tables: production, wells, prices, investments, trade.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (A)

Average 3.8/5 across 8 of 8 tools scored.

Server Coherence (A)
Disambiguation: 5/5

Each tool targets a distinct aspect of Argentine oil & gas data: schema exploration, data freshness, and specific queries for wells, production, investments, prices, and trade. No overlap in purposes.

Naming Consistency: 5/5

All tools follow a consistent snake_case verb_noun pattern (e.g., get_schema, query_production), making them predictable and easy to distinguish.

Tool Count: 5/5

With 8 tools, the server covers the core data areas without being overwhelming or too sparse. Each tool earns its place.

Completeness: 5/5

The tool set covers schema discovery, data freshness, and key queryable domains (wells, production, investments, prices, trade). The generic execute_sql tool fills any gaps, making the surface complete.

Available Tools

8 tools
execute_sql (A)

Execute arbitrary read-only SQL against the DuckDB database. Only SELECT and WITH statements are allowed. Use get_schema first to understand available tables and columns. Available on Professional tier and above.

Parameters (JSON Schema)
- sql (required): SQL query (SELECT or WITH only)
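The SELECT/WITH restriction stated in the description can also be mirrored client-side before spending a call. A minimal sketch of such a pre-check; this is illustrative only, and the server's actual validator is not published and may enforce more (e.g. rejecting multi-statement input):

```python
import re

def looks_read_only(sql: str) -> bool:
    """Client-side pre-check mirroring the documented rule: only
    statements beginning with SELECT or WITH are accepted.
    Illustrative only -- not the server's actual implementation."""
    # Strip leading whitespace and `--` line comments before inspecting the verb.
    stripped = re.sub(r"^\s*(--[^\n]*\n?\s*)*", "", sql)
    return re.match(r"(?i)(select|with)\b", stripped) is not None
```

For example, a `WITH ... SELECT` query passes the check, while a `DELETE` statement is rejected before any call is made.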
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It states the tool is read-only and restricts statements to SELECT and WITH, which adds transparency. However, it does not describe error handling, performance limits, or the result format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally concise: four short sentences that front-load the purpose and then state constraints, guidance, and tier availability. Every sentence adds value with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter and no output schema, the description is fairly complete. It covers the essential behavioral aspects (read-only, allowed statements) and provides usage guidance. It could mention result format or error handling, but it is still adequate for a simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The parameter 'sql' has 100% schema coverage with description 'SQL query (SELECT or WITH only)'. The tool description adds that it is against DuckDB and read-only. While this adds context, it largely overlaps with the schema, so the added value is moderate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool's function: 'Execute arbitrary read-only SQL against the DuckDB database.' It specifies that only SELECT and WITH statements are allowed and suggests using get_schema first, distinguishing it from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear when-to-use guidance (arbitrary read-only SQL) and explicit constraints (only SELECT and WITH). It also recommends using get_schema first. However, it does not directly compare with other query tools or specify when not to use this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_data_freshness (A)

Get the latest data available in each table — last period, date range, and total records. Use this to understand how current the data is.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It accurately describes a read-only operation returning freshness metadata, but does not mention any potential side effects, caching, or authorization requirements. It is adequate for a simple read tool, but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, front-loaded with the action and output, and the second sentence provides usage guidance. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description sufficiently explains what the tool returns (last period, date range, total records). It could be improved by mentioning the format or scope, but it is complete enough for an agent to understand the tool's purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters (schema_coverage 100%), so the baseline is 4. The description does not need to add parameter details. It effectively explains what the tool returns without needing param documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves the latest data available in each table, including last period, date range, and total records. It distinguishes itself from siblings like execute_sql or query_investments by focusing on data freshness rather than data content.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says 'Use this to understand how current the data is,' providing a clear usage context. While it doesn't specify when not to use it, the purpose is distinct enough from sibling tools that an agent can infer appropriate use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_schema (A)

Get the database schema and metadata. Returns table names, columns, types, descriptions, units, and example values. Use this to understand what data is available before querying.

Parameters (JSON Schema)
- tabla (optional): Specific table name. Omit to get all tables.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the return content but does not state that the tool is read-only or confirm the absence of side effects. Safety is implied by the name but never made explicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences conveying purpose, return content, and usage guidance. No filler; front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple tool with one optional parameter and no output schema. Explains return value and use case. Could mention idempotency or performance, but not required.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single optional parameter 'tabla' (description: 'Specific table name. Omit to get all tables.'). Description adds usage context but no additional parameter details beyond schema. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it gets database schema and metadata, listing returned fields (table names, columns, types, etc.). Distinguishes from sibling tools like execute_sql and query_* by focusing on structural discovery rather than data retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using the tool before querying to understand the available data. It does not list when not to use it or name alternatives, but the separation from sibling tools is clear from its purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_investments (A)

Query upstream oil & gas investments in Argentina by company, basin, and year. Data includes drilling, completion, and infrastructure expenditure in millions USD.

Parameters (JSON Schema)
- cuenca (optional): Basin filter
- empresa (optional): Company filter
- concepto (optional): Investment concept (e.g. perforacion, terminacion, infraestructura)
- anio_desde (optional): Start year
- anio_hasta (optional): End year
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It states the data is in millions USD and implies a read-only query, but does not disclose auth requirements, rate limits, or behavior when no results are found. Adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no fluff. The first sentence states the action and dimensions, the second adds detail on data specifics. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description hints at return data (drilling, completion, infrastructure in millions USD). With 5 optional parameters, it covers the scope well but could mention the return format explicitly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds value by explaining the investment concept (drilling, completion, infrastructure) and specifying the unit (millions USD), enhancing understanding beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb and resource: 'Query upstream oil & gas investments in Argentina by company, basin, and year.' It clearly distinguishes from siblings like query_prices, query_production, etc., by specifying the investment domain and geographic focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for investment data but provides no explicit guidance on when to use this tool vs alternatives like query_prices or query_production. No when-not-to-use or alternative tool mentions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_prices (B)

Query international and local energy prices. Includes WTI, Brent, Henry Hub (daily), Argentine crude prices (Escalante, Medanito), and exchange rates (official, blue, MEP, CCL).

Parameters (JSON Schema)
- serie (required): Price series to query
- frecuencia (optional): Frequency aggregation. Default: diario for international prices, mensual for local
- fecha_desde (optional): Start date, YYYY-MM-DD
- fecha_hasta (optional): End date, YYYY-MM-DD
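The documented frecuencia default differs by series type: diario for international series, mensual for local ones. A sketch of that rule; the series identifiers below are assumptions inferred from the description, since the actual enum values are not shown on this page:

```python
# Assumed identifiers for the international series named in the description
# (WTI, Brent, Henry Hub); the server's actual enum values may differ.
INTL_SERIES = {"wti", "brent", "henry_hub"}

def default_frequency(serie: str) -> str:
    """Apply the documented default: 'diario' for international price
    series, 'mensual' for local ones. Illustrative only."""
    return "diario" if serie.lower() in INTL_SERIES else "mensual"
```

So an agent omitting frecuencia should expect daily rows for a series like WTI and monthly rows for a local series like Escalante, under these assumed identifiers.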
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden, yet it only lists data points without disclosing behavioral traits such as idempotency, rate limits, or whether data is cached.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two brief sentences with no redundant information; effectively communicates the scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a query tool but lacks details on output format or behavior when no parameters specified; schema covers parameters well.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions; the description adds no extra meaning beyond listing series examples, which are already in the enum.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool queries energy prices, listing specific series (WTI, Brent, etc.), and distinguishes it from sibling tools focusing on other domains like investments or production.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives; no mention of when not to use it or what scenarios it is best suited for.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_production (A)

Query monthly oil, gas, and water production data from Argentine hydrocarbon wells. Data covers 2006-present with ~18M records. Can be filtered and grouped by basin, company, geological formation, and time period.

Parameters (JSON Schema)
- cuenca (optional): Basin name filter (e.g. NEUQUINA, GOLFO SAN JORGE, CUYANA, AUSTRAL, NOROESTE)
- empresa (optional): Operating company filter (e.g. YPF, PAN AMERICAN ENERGY, VISTA, TECPETROL)
- recurso (optional): Resource type filter. Default: todos
- formacion (optional): Geological formation filter (e.g. VACA MUERTA, D-129, CENTENARIO)
- agrupar_por (optional): Group results by dimension. Default: mes
- fecha_desde (optional): Start date, YYYY-MM (e.g. 2020-01)
- fecha_hasta (optional): End date, YYYY-MM (e.g. 2025-12)
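All seven filters are optional, so a call is just the subset of fields an agent needs. A hypothetical argument set, serialized the way a generic MCP tool call would carry it; the filter values follow the examples in the parameter table, and `gas` as a recurso value is an assumption:

```python
import json

# Hypothetical query_production arguments: gas production from the
# NEUQUINA basin, grouped by company, January 2020 through December 2024.
args = {
    "cuenca": "NEUQUINA",
    "recurso": "gas",          # assumed resource-type value
    "agrupar_por": "empresa",  # group results by operating company
    "fecha_desde": "2020-01",  # YYYY-MM, per the schema
    "fecha_hasta": "2024-12",
}
payload = json.dumps({"name": "query_production", "arguments": args})
```

Omitting agrupar_por would fall back to the documented default of mes (monthly grouping).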
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses data scope and filter capabilities but omits behavioral details such as response format, aggregation behavior (e.g., monthly totals), pagination, and performance implications. The query nature implies a read-only operation, but this is never stated explicitly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences, front-loaded with purpose and followed by data context. No unnecessary words; every sentence contributes meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should clarify return structure. It mentions 'data' but not columns or aggregation (e.g., results grouped by 'agrupar_por'). With 7 optional parameters, it lacks guidance on typical usage patterns or required combinations. Moderate completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description repeats high-level info (filters and grouping) that is already clear from schema descriptions. It adds no new semantics, examples, or constraints beyond summarizing the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the verb (query), the resource (monthly oil, gas, and water production data), and the geographic scope (Argentine wells). It includes coverage (2006-present, ~18M records) and distinguishes from sibling tools like query_wells or query_prices by mentioning specific data type and filters.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on what the tool does and its filtering/grouping capabilities. However, it does not explicitly state when not to use it or suggest alternatives (e.g., for metadata use query_wells). The intent is implied but lacks explicit exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_trade (A)

Query Argentina hydrocarbon trade balance (exports/imports). Covers crude oil (HS 2709), refined products (HS 2710), and natural gas (HS 2711). Values in FOB USD.

Parameters (JSON Schema)
- anio (optional): Filter by year
- flow (optional): Trade flow direction
- producto (optional): Product category
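The description pins each product category to an HS code, which is useful when cross-checking results against customs data. A lookup sketch; the Spanish category keys are assumptions, since the actual producto enum values are not shown on this page:

```python
# Mapping stated in the description; the keys are assumed category labels.
HS_CODES = {
    "crudo": "2709",        # crude oil
    "refinados": "2710",    # refined products
    "gas_natural": "2711",  # natural gas
}

def hs_code(producto: str) -> str:
    """Return the HS code for a product category. Illustrative helper,
    not part of the server."""
    return HS_CODES[producto]
```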
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden for behavioral traits. It says 'Query', implying a read-only operation, but lacks details on potential side effects, authentication needs, rate limits, or data freshness. The description does not contradict annotations (none exist), but minimal behavioral context is disclosed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three short sentences, front-loaded with the main purpose, and contains no extraneous words. Every element adds value, achieving ideal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has 3 optional parameters with high schema coverage but no output schema. The description explains product coverage and value format but omits details on return structure (e.g., time series, columns), data range, or pagination. It's adequate for a simple query but not fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, but the description adds meaningful context beyond enum labels: it maps product categories to HS codes (crude=2709, refined=2710, gas=2711) and clarifies values are in FOB USD. This enriches parameter understanding for an AI agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool queries Argentina hydrocarbon trade balance, specifies exports/imports, covers specific products with HS codes, and mentions value format (FOB USD). The verb 'query' and resource 'trade balance' are precise, and it distinguishes from sibling tools like query_investments or query_production.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for hydrocarbon trade data but provides no explicit guidance on when to use this tool versus alternatives like query_prices or query_wells. No when-not-to or alternative suggestions are given, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_wells (A)

Search and filter Argentine oil & gas wells. Returns well location, type, status, cumulative production, and technical data. ~50,000 wells available.

Parameters (JSON Schema)
- limit (optional): Max results. Default: 100
- cuenca (optional): Basin filter (NEUQUINA, GOLFO SAN JORGE, etc.)
- empresa (optional): Operating company filter
- formacion (optional): Geological formation filter
- provincia (optional): Province filter (NEUQUEN, CHUBUT, MENDOZA, etc.)
- tipo_estado (optional): Well status (e.g. Activo, Inactivo, Abandonado)
- tipo_recurso (optional): Resource type filter
- con_produccion (optional): Only return wells with cumulative production > 0
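With eight optional filters, it helps to drop unset ones before calling. A sketch that assembles the arguments and applies the documented default limit of 100; the helper name and approach are illustrative, not part of the server:

```python
def wells_args(limit: int = 100, **filters) -> dict:
    """Assemble query_wells arguments, omitting filters left as None
    and applying the documented default limit of 100. Illustrative."""
    args = {k: v for k, v in filters.items() if v is not None}
    args["limit"] = limit
    return args

# Example: active wells in the NEUQUINA basin with recorded production.
request = wells_args(cuenca="NEUQUINA", tipo_estado="Activo", con_produccion=True)
```

A `None`-valued filter is simply dropped, so callers can thread optional user input through without building the dict by hand.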
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description notes the dataset size (~50,000 wells) and the read-only nature, but lacks details on pagination, default limits, or rate limits; adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no filler, purpose front-loaded; every sentence adds essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes dataset size and return fields; missing output schema but sufficient for a filter/search tool; could mention default limit and ordering for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with descriptions for all 8 parameters; the description adds value by hinting at return fields but doesn't explain parameter interactions or defaults beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Search and filter Argentine oil & gas wells' and lists return fields (location, type, status, cumulative production, technical data), making the purpose specific and distinguishable from sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs. siblings like query_production or query_investments; missing context about filtering strategies or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
