DOE Energy Information
Server Details
Energy data from EIA: electricity, fuel prices, and renewables
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 4 of 4 tools scored.
Each tool has a clearly distinct purpose targeting different energy data domains: electricity, natural gas, petroleum, and comprehensive state profiles. There is no overlap in functionality, and the descriptions clearly differentiate what data each tool provides.
All tool names follow a consistent 'get_[resource]_data' pattern except for 'get_state_energy_profile', which maintains the same verb-first structure and clarity. The naming convention is predictable and readable throughout the set.
Four tools is a reasonable count for an energy data server, covering the major fuel types and state profiles. However, the set feels slightly thin: it lacks tools for other energy sources such as coal, renewables, or nuclear, which could be expected in this domain.
The tools cover electricity, natural gas, petroleum, and state profiles, providing solid data retrieval capabilities. Minor gaps remain: there are no tools for updating or managing data (though these are likely unneeded for a read-only data source), and coverage of other energy sources such as coal and renewables is missing, which agents may need to work around.
Available Tools
4 tools
get_electricity_data
Get electricity generation, consumption, or price data from the EIA.
Returns data on electricity production, retail sales, prices, and fuel
consumption for power generation across US states and sectors.
Args:
state: Two-letter US state abbreviation (e.g. 'CA', 'TX'). Omit for national data.
sector: Sector filter. Common values: 'RES' (residential), 'COM' (commercial),
'IND' (industrial), 'TRA' (transportation), 'ALL' (all sectors).
frequency: Data frequency: 'monthly', 'quarterly', or 'annual'. Default is 'monthly'.
limit: Maximum number of records to return (default 100, max 5000).
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | No | | |
| sector | No | | |
| frequency | No | | monthly |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
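For illustration, an MCP tools/call request for this tool might look like the sketch below. The argument values ('CA', 'RES', 'monthly', 12) are examples drawn from the Args documentation above, not required values; per the description, omitting state would return national data instead.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_electricity_data",
    "arguments": {
      "state": "CA",
      "sector": "RES",
      "frequency": "monthly",
      "limit": 12
    }
  }
}
```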
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses defaults (monthly, 100), max limits (5000), and omit behavior for national data; no annotations exist to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded; Args section is necessary given schema gaps, though slightly verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete coverage of all parameters and behavior; output schema exists so brief return value description is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Comprehensively compensates for 0% schema coverage with detailed Args section explaining all 4 parameters, including examples (CA, TX) and enum mappings (RES=residential).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb+resource ('Get electricity...data') and distinguishes from siblings (natural gas/petroleum) by specifying electricity domain and EIA source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through data types returned (production, retail sales) but lacks explicit when/when-not guidance versus siblings like get_state_energy_profile.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_natural_gas_data
Get natural gas production, consumption, or price data from the EIA.
Returns data on natural gas wellhead prices, marketed production, consumption
by sector, and interstate pipeline flows.
Args:
state: Two-letter US state abbreviation (e.g. 'TX', 'PA'). Omit for national data.
frequency: Data frequency: 'monthly' or 'annual'. Default is 'monthly'.
limit: Maximum number of records to return (default 100, max 5000).
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| state | No | | |
| frequency | No | | monthly |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
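As a sketch of the national-data behavior noted in the Args section, a call can omit 'state' entirely; the frequency and limit values below are illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "get_natural_gas_data",
    "arguments": {
      "frequency": "annual",
      "limit": 50
    }
  }
}
```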
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Provides behavioral defaults and constraints in the Args section (default 100, max 5000, omit state for national data), but with no annotations present the description would also need to disclose auth requirements, rate limits, or data latency, all of which are absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with purpose front-loaded, followed by return value elaboration, then structured Args block; no redundant or filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for the tool's complexity given output schema exists; covers data scope and all parameters, though could note geographic/temporal coverage limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema description coverage: Args section provides precise semantics, examples ('TX', 'PA'), valid values ('monthly'/'annual'), and constraints for all three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Get') and resource ('natural gas production, consumption, or price data from the EIA'), explicitly distinguishing from sibling electricity/petroleum tools by fuel type and data categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through specific data types listed (wellhead prices, interstate flows), but lacks explicit when-to-use guidance or comparisons to get_state_energy_profile.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_petroleum_data
Get petroleum and oil price or production data from the EIA.
Returns time series data for petroleum products including crude oil prices,
production volumes, imports, exports, and refinery operations.
Args:
series: EIA series ID for the petroleum data. Common series:
'PET.RWTC.D' (WTI crude oil daily price),
'PET.RBRTE.D' (Brent crude oil daily price),
'PET.EMM_EPMR_PTE_NUS_DPG.W' (US regular gasoline weekly price),
'PET.MCRFPUS2.M' (US crude oil production monthly).
Default is 'PET.RWTC.D'.
limit: Maximum number of records to return (default 100, max 5000).
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| series | No | | PET.RWTC.D |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
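A hypothetical call requesting Brent crude daily prices would pass one of the series IDs documented above; omitting 'series' falls back to the default 'PET.RWTC.D'. The limit value here is illustrative.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "get_petroleum_data",
    "arguments": {
      "series": "PET.RBRTE.D",
      "limit": 30
    }
  }
}
```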
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key traits absent from annotations: external EIA source, time series return format, and hard limit constraint (max 5000).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose, efficient Args section, and every sentence earns its place (especially the critical series ID examples).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a 2-parameter tool; acknowledges output schema existence by briefly noting return type without redundant detail.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellently compensates for 0% schema coverage by explaining the opaque 'series' parameter with 4 concrete, labeled examples and limit constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states specific action (get petroleum/oil data from EIA) and distinguishes domain from electricity/natural gas siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through domain-specific content (petroleum products) but lacks explicit when/when-not guidance versus sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_state_energy_profile
Get a comprehensive energy profile for a US state from the EIA.
Returns an overview of energy production, consumption, prices, and
expenditures across all fuel types for the specified state. Useful for
understanding a state's full energy landscape.
Args:
state: Two-letter US state abbreviation (e.g. 'CA', 'TX', 'NY').
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| state | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
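Since 'state' is the tool's only (and required) parameter, a minimal call passes just the state abbreviation; 'NY' below is an illustrative value taken from the examples above.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "get_state_energy_profile",
    "arguments": {
      "state": "NY"
    }
  }
}
```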
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses return content (production, consumption, prices, expenditures) but lacks annotations and omits other behavioral details like rate limits, authentication requirements, or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with action statement, return value description, use case, and parameter documentation; every sentence adds distinct value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately complete for a single-parameter tool; summarizes output schema contents sufficiently given the presence of a formal output schema, though could note data source limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the Args section excellently compensates by specifying the format (two-letter abbreviation) and providing concrete examples (CA, TX, NY).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it retrieves a 'comprehensive energy profile' covering 'all fuel types' from EIA, explicitly distinguishing it from siblings that handle specific fuel types (electricity, natural gas, petroleum).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Indicates utility for 'understanding a state's full energy landscape' implying use for holistic analysis vs. sibling tools for specific fuels, though lacks explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!