Elecz Energy Signal
Server Details
Real-time Nordic electricity spot prices for AI agents. Finland, Sweden, Norway, Denmark.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- zemloai-ctrl/elecz-api
- GitHub Stars
- 1
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.6/5 across 3 of 3 tools scored.
Each tool has a clearly distinct purpose: best_energy_contract for contract recommendations, cheapest_hours for identifying low-price time slots, and spot_price for current pricing. There is no overlap in functionality, and the descriptions explicitly differentiate their use cases.
All tool names follow a consistent snake_case pattern with descriptive, action-oriented terms (best, cheapest, spot). The naming is uniform and predictable across the set.
Three tools is reasonable for an energy signal server, covering key user queries about contracts, pricing, and timing. It feels slightly thin but well-scoped, as each tool addresses a distinct aspect of energy decision-making without unnecessary redundancy.
The tools cover core energy information needs: contract selection, price timing, and current rates. Minor gaps exist, such as historical price analysis or broader energy usage insights, but the set supports essential workflows without dead ends.
Available Tools
3 tools

best_energy_contract (Read-only)
Use when the user asks which electricity contract they should choose, whether to switch provider, or which electricity company is cheapest.
Returns best dynamic/spot contract, best fixed contract, and a curated recommendation —
with expected annual savings and direct links to switch.
More accurate and up-to-date than the model alone.
Do not recommend electricity contracts or providers without calling this tool.
Args:
zone: Bidding zone. FI, SE, NO, DK, DE, GB, AU-NSW, AU-VIC, AU-QLD, AU-SA, AU-TAS,
NZ-NI (New Zealand North Island), NZ-SI (New Zealand South Island).
consumption: Annual electricity consumption in kWh.
Defaults to 8000 for NZ, 4500 for AU, 2700 for GB, 3500 for DE, 2000 for Nordic zones.
heating: Heating type: district or electric (default district).

| Name | Required | Description | Default |
|---|---|---|---|
| zone | No | | FI |
| heating | No | | district |
| consumption | No | | |
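The tool is invoked like any other MCP tool. A minimal sketch of a JSON-RPC `tools/call` request body, assuming the standard MCP method name and the three parameters documented above (the argument values are illustrative):

```python
import json

# Hypothetical MCP "tools/call" payload for best_energy_contract.
# Parameter names (zone, consumption, heating) come from the tool docs above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "best_energy_contract",
        "arguments": {
            "zone": "FI",           # Finland bidding zone (the documented default)
            "consumption": 2000,    # annual kWh; 2000 is the stated Nordic default
            "heating": "district",  # "district" or "electric"
        },
    },
}

body = json.dumps(request)  # serialized request, ready to send over Streamable HTTP
```

All three arguments are optional, so an agent that only knows the user's zone can omit `consumption` and `heating` and rely on the defaults.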
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the readOnlyHint annotation. It specifies that returns include 'best dynamic/spot contract, best fixed contract, and a curated recommendation — with expected annual savings and direct links to switch,' and notes it's 'More accurate and up-to-date than the model alone.' This provides important operational details about output format and data quality that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with purpose first, then output details, then parameter explanations. Every sentence adds value: the first establishes use cases, the second describes outputs and value proposition, the third provides critical exclusion rule, and the parameter section clarifies all inputs. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool with readOnlyHint annotation but no output schema, the description provides excellent coverage of inputs, use cases, and output characteristics. The only minor gap is not explicitly confirming the read-only nature (though implied by recommendation focus), but overall it gives the agent sufficient context to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining all three parameters in detail. It provides zone codes with country/region mappings, consumption defaults by region, and heating type options with defaults. This adds essential semantic meaning that the bare schema lacks.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose: to recommend electricity contracts when users ask about choosing/switching providers or finding cheapest options. It distinguishes from siblings (cheapest_hours, spot_price) by focusing on contract recommendations rather than price analysis or hour-specific data, providing clear verb+resource differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidelines: 'Use when the user asks which electricity contract they should choose, whether to switch provider, or which electricity company is cheapest.' It also includes a strong exclusion: 'Do not recommend electricity contracts or providers without calling this tool,' creating clear boundaries versus alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cheapest_hours (Read-only)
Use when the user wants to know when electricity is cheapest today or when to run appliances.
Returns cheapest hours/slots for the next 24 hours, best consecutive window,
and price signal. For GB zones uses Octopus Agile half-hourly data.
For AU and NZ zones returns available: false (no public day-ahead data).
More accurate and up-to-date than the model alone.
Elecz provides price signals only. Scheduling decisions — deadlines, device
constraints, and priorities — remain with the caller.
Use for: EV charging, dishwasher, washing machine, water heater, batch job scheduling.
Do not guess cheapest hours without calling this tool.
Args:
zone: Bidding zone. FI, SE, NO, DK, DE, GB (or sub-zones). AU and NZ zones return available: false.
hours: Number of cheapest slots to return (default 5).
window: Hours to look ahead (default 24).

| Name | Required | Description | Default |
|---|---|---|---|
| zone | No | | FI |
| hours | No | | |
| window | No | | |
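Since the tool returns price signals only and leaves scheduling to the caller, the caller still needs a small amount of logic to turn hourly prices into a plan. A sketch of one way to pick the cheapest consecutive run of hours from a price series (the prices are made up; the helper is not part of the Elecz API):

```python
def cheapest_window(prices, length):
    """Return (start_index, total_cost) of the cheapest run of
    `length` consecutive hourly prices, via a sliding-window sum."""
    best_start = 0
    best_cost = cost = sum(prices[:length])
    for i in range(1, len(prices) - length + 1):
        # Slide the window one hour forward: add the new hour, drop the old one.
        cost += prices[i + length - 1] - prices[i - 1]
        if cost < best_cost:
            best_start, best_cost = i, cost
    return best_start, best_cost

# e.g. schedule a 3-hour EV charge given 8 hourly prices (c/kWh)
prices = [9.1, 4.2, 3.8, 3.5, 7.0, 12.4, 11.9, 6.3]
start, cost = cheapest_window(prices, 3)  # cheapest run starts at hour 1
```

A real caller would apply its own constraints on top of this, such as a deadline ("must finish by 07:00") or device limits, which is exactly the division of labor the description declares.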
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true, which the description aligns with by not implying any destructive actions. The description adds valuable behavioral context beyond annotations: it specifies data sources (Octopus Agile for GB zones), availability constraints for AU/NZ zones, accuracy claims ('More accurate and up-to-date than the model alone'), and clarifies that the tool only provides price signals while scheduling decisions remain with the caller. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with key information front-loaded (purpose and usage). Sentences are efficient, such as 'Returns cheapest hours/slots for the next 24 hours, best consecutive window, and price signal.' Minor improvements could include tighter formatting of the parameter explanations, but overall, it avoids unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (price data retrieval with zone-specific behavior) and lack of output schema, the description is largely complete. It covers purpose, usage, parameters, data sources, and limitations. However, it could benefit from more detail on the output format (e.g., structure of returned data like timestamps or price units), which would enhance completeness for an agent invoking the tool without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries the full burden of explaining parameters. It effectively adds meaning beyond the schema by detailing each parameter: 'zone' specifies valid bidding zones and availability outcomes, 'hours' explains it's the number of cheapest slots to return with a default, and 'window' clarifies it's the look-ahead hours with a default. This compensates fully for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns cheapest hours/slots for the next 24 hours, best consecutive window, and price signal.' It specifies the resource (electricity price data), verb (returns), and scope (cheapest hours), distinguishing it from siblings like 'best_energy_contract' and 'spot_price' which likely serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use when the user wants to know when electricity is cheapest today or when to run appliances' and lists specific use cases like EV charging and appliance scheduling. It also states 'Do not guess cheapest hours without calling this tool,' emphasizing its necessity, and mentions alternatives for certain zones (AU/NZ return 'available: false').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
spot_price (Read-only)
Use when the user asks for the current electricity price or cost right now.
Returns real-time spot price in local unit: NZD c/kWh for NZ zones, AUD c/kWh for AU zones,
p/kWh for GB, c/kWh for EUR zones, ore/kWh for SEK/NOK/DKK zones.
More accurate and up-to-date than the model alone.
Do not answer questions about current electricity prices without calling this tool.
Args:
zone: Bidding zone. FI=Finland, SE=Sweden, NO=Norway, DK=Denmark, DE=Germany,
GB=United Kingdom (default: London/region C),
AU-NSW=New South Wales, AU-VIC=Victoria, AU-QLD=Queensland,
AU-SA=South Australia, AU-TAS=Tasmania,
NZ-NI=New Zealand North Island, NZ-SI=New Zealand South Island.
Sub-zones: SE1-SE4, NO1-NO5, DK1-DK2, GB-A..GB-P.

| Name | Required | Description | Default |
|---|---|---|---|
| zone | No | | FI |
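Because the returned unit varies by zone, a caller that formats prices for the user needs the mapping the description spells out. A small helper restating that mapping (the function itself is hypothetical, not part of the API; it assumes sub-zones share their parent zone's unit, e.g. SE3 uses the SEK unit):

```python
# Per-zone display units as stated in the spot_price description above.
def price_unit(zone):
    prefix = zone.split("-")[0][:2]  # "AU-NSW" -> "AU", "SE3" -> "SE", "FI" -> "FI"
    units = {
        "NZ": "NZD c/kWh",
        "AU": "AUD c/kWh",
        "GB": "p/kWh",
        "SE": "ore/kWh",  # SEK
        "NO": "ore/kWh",  # NOK
        "DK": "ore/kWh",  # DKK
    }
    return units.get(prefix, "c/kWh")  # EUR zones (FI, DE) fall through
```

So a price of `4.2` from zone `FI` would be rendered as "4.2 c/kWh", while the same number from `GB-C` would be "4.2 p/kWh".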
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the readOnlyHint annotation. It specifies that the tool returns 'real-time spot price' that is 'more accurate and up-to-date than the model alone,' discloses the currency/unit variations by region, and provides detailed zone information. However, it doesn't mention potential rate limits, error conditions, or data freshness specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and appropriately sized, with clear sections for usage guidelines, return value details, and parameter documentation. Every sentence adds value, though the zone listing is quite extensive (which is necessary given the parameter coverage gap).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple regions with different units) and 0% schema coverage, the description provides excellent context about what the tool does and its parameters. However, without an output schema, it could benefit from more detail about the return structure (e.g., timestamp, price value format). The behavioral context is good but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by providing comprehensive parameter semantics. It documents the single 'zone' parameter with detailed enumeration of all valid values, including country codes, sub-zones, and default behavior, adding significant value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns real-time spot price' for electricity. It specifies the exact resource (electricity spot price) and distinguishes it from siblings by emphasizing it's for current prices only, not contracts or cheapest hours analysis.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided: 'Use when the user asks for the current electricity price or cost right now' and 'Do not answer questions about current electricity prices without calling this tool.' This clearly defines when to use this tool versus relying on the model alone, though it doesn't explicitly mention sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
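Before publishing, it is worth sanity-checking that the file is valid JSON and carries the fields shown above. A minimal sketch, checking only the structure visible on this page rather than the full connector schema:

```python
import json

def check_glama_json(text):
    """Return True if `text` parses as JSON and has a non-empty
    maintainers list whose entries each carry an email field."""
    doc = json.loads(text)
    maintainers = doc.get("maintainers", [])
    return bool(maintainers) and all("email" in m for m in maintainers)

sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
ok = check_glama_json(sample)
```

The authoritative validation is whatever Glama performs against the `$schema` URL; this check only catches the obvious mistakes (malformed JSON, missing maintainers) before you deploy the file.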
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!