
VoltPlan Wiring Diagrams

Server Details

Generate wiring diagrams and run electrical calculators for campers, boats, and off-grid setups.

Status: Healthy
Transport: Streamable HTTP
Repository: YUZU-Hub/wiring-diagram-mcp
GitHub Stars: 0


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

9 tools
calculate_battery_bank (Battery Bank Sizing Calculator), grade B

Calculate the recommended battery bank size based on daily energy consumption. Accounts for days of autonomy (how many days without charging) and depth of discharge. Returns required capacity, number of batteries, and wiring configuration.

Parameters (JSON Schema):
- systemVoltage (required): Target system voltage in volts (e.g. 12, 24, 48)
- daysOfAutonomy (optional): Days the system should run without any charging (default: 2)
- singleBatteryAh (required): Capacity of a single battery in amp-hours (e.g. 100, 200)
- dailyConsumptionWh (required): Daily energy consumption in watt-hours (from calculate_power_budget)
- singleBatteryVoltage (required): Voltage of a single battery (e.g. 12.8 for LiFePO4, 12 for lead-acid)
- depthOfDischargePercent (optional): Usable percentage of battery capacity. LiFePO4: 80-90%, AGM: 50%, Gel: 50% (default: 80)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It effectively communicates the calculation methodology (accounts for autonomy and DOD) and return values (capacity, battery count, wiring configuration), but omits constraints, accuracy limitations, or state change implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose statement, calculation factors, and return values. Every sentence earns its place with no redundancy or filler. Information is appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description compensates effectively by enumerating return values (required capacity, number of batteries, wiring configuration). With 100% schema coverage explaining the 6 input parameters, the description provides sufficient context for a calculation tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the structured fields adequately document parameters. The description mentions 'daily energy consumption' and other concepts that map to parameters, but adds minimal semantic detail beyond the schema definitions, warranting the baseline 3 score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a clear verb+resource ('Calculate the recommended battery bank size') and identifies key calculation inputs (daily energy consumption, autonomy, depth of discharge). However, it does not explicitly distinguish from the sibling 'calculate_battery_config' tool, which could create selection ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit guidance on when to select this tool versus siblings like 'calculate_battery_config' or 'calculate_power_budget', despite the latter being referenced in parameter schema descriptions. No prerequisites or workflow sequencing is indicated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_battery_config (Battery Configuration Calculator), grade B

Determine how to arrange batteries in series and/or parallel to achieve a target voltage and capacity. Returns the number of batteries needed, the wiring configuration (e.g. 2S3P), and step-by-step wiring instructions.

Parameters (JSON Schema):
- targetVoltage (required): Desired system voltage (e.g. 12, 24, 48)
- singleBatteryAh (required): Capacity of one battery in amp-hours
- targetCapacityAh (required): Desired total capacity in amp-hours
- singleBatteryVoltage (required): Nominal voltage of one battery (e.g. 12.8 for LiFePO4, 12 for lead-acid, 3.2 for LiFePO4 cells)
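The series/parallel arithmetic behind the 2S3P-style output can be sketched as follows; the exact rounding and validation rules of the server are not documented, so treat this as an assumed model:

```python
import math

def battery_config(target_v, target_ah, single_v, single_ah):
    series = max(1, round(target_v / single_v))  # batteries wired in series
    parallel = math.ceil(target_ah / single_ah)  # series strings in parallel
    return series * parallel, f"{series}S{parallel}P"

# 24 V / 300 Ah bank built from 12 V, 100 Ah batteries
print(battery_config(24, 300, 12, 100))  # (6, '2S3P')
```

Two batteries in series reach 24 V, and three such strings in parallel reach 300 Ah, for six batteries total.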
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately describes the return values (battery count, configuration string, instructions) but omits error handling behavior, validation of target achievability, rounding logic, or physical safety constraints regarding battery wiring.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. Front-loaded with the core action (determine arrangement), followed by specific outputs. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description appropriately specifies return values (count, configuration code, instructions). With 100% parameter coverage and clear purpose, the description is complete for a calculation tool, though it could benefit from mentioning validation behavior or constraint limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions for all 4 parameters. The description references 'target voltage and capacity' which aligns with parameters, but adds no additional semantic context (e.g., valid ranges, relationships between parameters) beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool determines battery arrangements in series/parallel to achieve target specs, and specifies unique outputs (2S3P notation, step-by-step instructions) that implicitly distinguish it from sibling tools like calculate_battery_bank. However, it lacks explicit differentiation from generate_wiring_diagram.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus siblings like calculate_battery_bank (which may sound similar) or generate_wiring_diagram (which also deals with wiring). No mention of prerequisites or when the configuration might be physically impossible.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_charging_time (Charging Time Calculator), grade A

Estimate how long it takes to charge a battery bank from a given state of charge to a target level. Accounts for the bulk charging phase (constant current, up to ~80% SoC) and the slower absorption phase (tapering current, 80-100% SoC). Works for any charging source: solar, shore power, alternator.

Parameters (JSON Schema):
- batteryVoltage (required): Battery bank voltage (e.g. 12, 24, 48)
- chargePowerWatts (required): Charger output power in watts
- batteryCapacityAh (required): Total battery bank capacity in amp-hours
- chargeCurrentAmps (optional): Maximum charge current in amps (if limited by the charger or battery BMS). If omitted, calculated from power and voltage.
- targetStateOfChargePercent (optional): Target state of charge in percent (default: 100)
- currentStateOfChargePercent (required): Current state of charge in percent (e.g. 20 for 20%)
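The two-phase model the description names (constant-current bulk to ~80% SoC, tapering absorption above that) can be sketched like this. The taper rate is not documented; the "half average current" factor below is an assumption for illustration:

```python
def charging_time_h(capacity_ah, battery_v, charge_w,
                    soc_now, soc_target=100, max_amps=None):
    amps = charge_w / battery_v        # bulk current derived from charger power
    if max_amps is not None:
        amps = min(amps, max_amps)     # respect a charger/BMS current limit
    bulk_pct = max(0, min(soc_target, 80) - soc_now)
    absorb_pct = max(0, soc_target - max(soc_now, 80))
    bulk_h = bulk_pct / 100 * capacity_ah / amps
    # Assumption: absorption current tapers, averaging ~half the bulk current.
    absorb_h = absorb_pct / 100 * capacity_ah / (amps / 2)
    return bulk_h + absorb_h

# 200 Ah / 12 V bank, 240 W charger, charging from 20% to 100%
print(charging_time_h(200, 12, 240, 20))  # 10.0
```

Here the bulk phase (20% to 80%) takes 6 h at 20 A, and the slower absorption phase (80% to 100%) adds another 4 h.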
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It reveals critical behavioral details: the calculation accounts for 'bulk charging phase (constant current, up to ~80% SoC)' and 'slower absorption phase (tapering current, 80-100% SoC).' This explains the internal calculation methodology and expected non-linear charging behavior that schema cannot capture.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: sentence 1 states core purpose, sentence 2 reveals calculation methodology, sentence 3 defines applicability. Technical terms (SoC) are used precisely without unnecessary exposition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and the description's explanation of charging physics, the definition is complete for a calculation utility. No output schema exists, though the title and purpose imply a temporal return value (hours/time). Could benefit from noting return format, but adequately complete given input richness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description adds conceptual context linking parameters to charging phases (e.g., 'target level' maps to targetStateOfChargePercent, 'charging source' implies chargePowerWatts), but does not add syntax, format constraints, or examples beyond the well-documented schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Estimate' with clear resource 'charging time for a battery bank' and defines scope 'from given state of charge to target level.' Distinct from siblings like calculate_battery_bank (sizing) and calculate_solar_size (sizing) by focusing specifically on temporal duration calculations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context about applicability: 'Works for any charging source: solar, shore power, alternator.' This signals when the tool is appropriate. However, lacks explicit 'when-not-to-use' guidance or comparison to siblings like calculate_power_budget that might also touch on energy calculations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_inverter_size (Inverter Sizing Calculator), grade A

Calculate the recommended inverter size for running AC loads from a DC battery system. Accounts for continuous power, startup surge power (motors typically surge 2-3x), and includes a 25% headroom for the continuous rating. Returns the recommended inverter wattage and the DC current draw at system voltage.

Parameters (JSON Schema):
- loads (required): List of AC loads that will run through the inverter
- systemVoltage (required): DC system voltage (e.g. 12, 24, 48)
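The continuous-plus-surge logic (25% headroom on the continuous rating, surge dominated by motor startup) can be sketched as below. The load field names `watts`, `surgeWatts`, and `quantity` are assumptions based on the review's mention of `surgeWatts`; the actual nested schema is not shown on this page:

```python
def inverter_size(loads, system_v):
    # loads: list of dicts; field names here are assumed, not confirmed
    continuous = sum(l["watts"] * l.get("quantity", 1) for l in loads)
    surge = sum(l.get("surgeWatts", l["watts"]) * l.get("quantity", 1)
                for l in loads)
    recommended = max(continuous * 1.25, surge)  # 25% continuous headroom
    dc_amps = recommended / system_v             # worst-case DC draw
    return recommended, dc_amps

loads = [{"watts": 600, "surgeWatts": 1500},  # compressor (2-3x surge)
         {"watts": 100}]                      # resistive load, no surge
print(inverter_size(loads, 12))
```

With 700 W continuous and a 1600 W combined surge, the surge requirement governs: a 1600 W inverter drawing up to ~133 A from a 12 V bank.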
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden and succeeds excellently. It explains the calculation methodology (handles startup surge with 2-3x multiplier for motors, adds 25% headroom) and explicitly states return values (recommended wattage and DC current draw), fully compensating for missing annotations and output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, perfectly structured: sentence 1 states purpose, sentence 2 explains calculation methodology (surge/headroom), sentence 3 discloses return values. Every sentence earns its place with zero redundancy. The description is front-loaded with the core action and maintains optimal information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (nested load objects with 4 fields) and absence of output schema/annotations, the description achieves completeness by detailing the calculation logic, safety margins, and return values. It provides sufficient context for an agent to understand both what inputs are needed and what outputs to expect, despite no structured output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description focuses on calculation behavior rather than parameter documentation, which is appropriate given the schema already fully documents systemVoltage and loads array structure. The description adds conceptual context (e.g., explaining 'surgeWatts' via 'motors typically surge 2-3x') but does not need to repeat schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a precise action statement: 'Calculate the recommended inverter size for running AC loads from a DC battery system.' It specifies the verb (calculate), resource (inverter size), and domain (AC loads from DC battery), clearly distinguishing it from sibling tools like calculate_battery_bank or calculate_wire_gauge.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly guides usage by specifying the target domain (AC loads from DC battery) and detailing what factors it accounts for (continuous power, surge power, headroom). While it doesn't explicitly name alternatives, the specificity of 'inverter size' versus siblings like 'power_budget' or 'battery_bank' provides clear contextual separation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_power_budget (Power Budget / Energy Audit), grade A

Calculate total daily energy consumption from a list of electrical loads. Each load specifies its power draw, how many hours per day it runs, and quantity. Returns total daily energy (Wh and Ah), peak power draw, and average power. This is typically the first step in sizing a battery bank and solar system.

Parameters (JSON Schema):
- loads (required): List of electrical loads to include in the budget
- systemVoltage (required): System voltage in volts (e.g. 12, 24, 48)
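The audit arithmetic the description outlines (watts x hours x quantity, summed, then converted to Ah at system voltage) can be sketched as follows. The load field names are assumed for illustration, since the nested schema is not shown here:

```python
def power_budget(loads, system_v):
    # 'watts', 'hoursPerDay', 'quantity' are assumed field names
    daily_wh = sum(l["watts"] * l["hoursPerDay"] * l.get("quantity", 1)
                   for l in loads)
    peak_w = sum(l["watts"] * l.get("quantity", 1) for l in loads)
    return {"dailyWh": daily_wh,
            "dailyAh": daily_wh / system_v,  # Ah at system voltage
            "peakW": peak_w,                 # everything on at once
            "averageW": daily_wh / 24}

loads = [{"watts": 50, "hoursPerDay": 24},                # fridge
         {"watts": 30, "hoursPerDay": 4, "quantity": 2}]  # two lights
print(power_budget(loads, 12))
```

The 1440 Wh/day result (120 Ah at 12 V) is exactly the `dailyConsumptionWh` figure the battery-bank and solar-sizing tools expect as input, matching the "first step" workflow the description claims.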
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Compensates by disclosing return values explicitly: 'Returns total daily energy (Wh and Ah), peak power draw, and average power.' Describes the mathematical operation performed. Lacks error handling or input validation limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: (1) core calculation logic, (2) return values and units, (3) workflow context. Logical flow from operation to output to usage context. Appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Annotations and an output schema are lacking, but the description compensates well by detailing the return structure (Wh/Ah/peak/average) and the workflow relationship to sibling tools. The complex nested array input is fully documented. The missing error-handling specs prevent a 5.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed property descriptions ('System voltage in volts', 'Power consumption... in watts'). Description maps schema concepts ('power draw', 'hours per day', 'quantity') to the calculation purpose but doesn't add syntax beyond the well-documented schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Calculate') + resource ('total daily energy consumption') + scope (daily, from electrical loads). Distinguishes from sizing siblings (battery_bank, solar_size) by positioning as the consumption analysis step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states workflow position: 'typically the first step in sizing a battery bank and solar system.' This signals it should be used before sibling sizing tools. Lacks explicit 'when not to use' or negative constraints.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_solar_size (Solar Panel Sizing Calculator), grade A

Calculate the required solar panel wattage to cover daily energy consumption. Accounts for peak sun hours at the location and system efficiency losses (MPPT conversion, wiring, temperature derating). Returns required wattage and common panel configurations.

Parameters (JSON Schema):
- peakSunHours (required): Average daily peak sun hours for the location. Examples: Northern Europe winter 1-2h, summer 4-6h. Southern US 5-6h. Equatorial regions 5-7h.
- systemEfficiency (optional): Overall system efficiency factor (default: 0.85). Accounts for MPPT losses, wiring losses, temperature derating, and panel soiling.
- dailyConsumptionWh (required): Daily energy consumption in watt-hours (from calculate_power_budget)
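The core relationship (required array wattage = daily consumption divided by peak sun hours and system efficiency) can be sketched in a few lines. The 200 W panel size used for the configuration suggestion is an assumption, not something the tool documents:

```python
import math

def solar_size(daily_wh, peak_sun_hours, system_efficiency=0.85,
               panel_w=200):
    # Daily harvest per watt of panel = peak_sun_hours * efficiency,
    # so required wattage is consumption divided by that factor.
    required_w = daily_wh / (peak_sun_hours * system_efficiency)
    panels = math.ceil(required_w / panel_w)  # panel_w is an assumed size
    return required_w, panels

# 1700 Wh/day with 5 peak sun hours
print(solar_size(1700, 5))  # (400.0, 2)
```

At 5 sun hours and 85% efficiency, each watt of panel yields 4.25 Wh/day, so 1700 Wh/day needs a 400 W array, e.g. two 200 W panels.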
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It effectively discloses calculation methodology (accounts for peak sun hours, MPPT conversion, wiring, temperature derating) and return values (wattage and panel configurations). Could mention that this is a read-only estimation with no side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences. Front-loaded with the core purpose (calculate wattage), followed by calculation factors and return values. Zero waste—every clause earns its place by conveying technical scope or outputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by explicitly stating what it returns (wattage and configurations). With 100% parameter coverage and clear calculation scope, description is complete for a calculator tool, though mentioning error bounds or assumptions would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed descriptions for all 3 parameters (including examples for peak sun hours and default explanations for efficiency). Per calibration guidelines, baseline is 3 when schema does the heavy lifting. Description adds conceptual context for how parameters interact in the calculation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb (Calculate) + resource (solar panel wattage) + scope (cover daily energy consumption). Clearly distinguishes from siblings like calculate_battery_bank or calculate_inverter_size by focusing specifically on panel sizing requirements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through technical parameters (peak sun hours, efficiency losses) and mentions return values, but lacks explicit guidance on when to select this tool versus siblings. No mention of prerequisite steps (e.g., calculating power budget first) despite the schema referencing calculate_power_budget.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

calculate_wire_gauge (Cable Cross-Section & Resistance Calculator), grade A

Calculate the recommended wire gauge / cable cross-section for a DC circuit. Considers both ampacity (current carrying capacity) and voltage drop to recommend the optimal cable size. Also returns total resistance, power loss, and a fuse recommendation. Supports copper conductors from 0.75 mm² (18 AWG) to 240 mm² (300 MCM).

Parameters (JSON Schema):
- power (optional): Load power in watts. Will be converted to current using the voltage. Provide either current or power.
- current (optional): Load current in amps. Provide either current or power.
- voltage (required): System voltage in volts (e.g. 12, 24, 48)
- isRoundTrip (optional): Account for both positive and negative conductor (default: true). Set to false for chassis-ground returns.
- cableLengthM (required): One-way cable length in meters
- temperatureCelsius (optional): Ambient temperature in °C (default: 20°C). Affects copper resistance.
- maxVoltageDropPercent (optional): Maximum acceptable voltage drop in percent (default: 3%)
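The voltage-drop half of the calculation can be sketched as below, stepping up through the standard metric cross-sections the tool says it supports. This is an assumed model using textbook copper resistivity; it omits the ampacity check the real tool also performs:

```python
STANDARD_MM2 = [0.75, 1.0, 1.5, 2.5, 4, 6, 10, 16, 25,
                35, 50, 70, 95, 120, 150, 185, 240]

def wire_gauge(voltage, cable_length_m, current=None, power=None,
               max_drop_pct=3.0, round_trip=True, temp_c=20.0):
    if current is None:
        current = power / voltage  # derive current when only power is given
    # Copper resistivity ~0.0175 ohm*mm^2/m at 20 C, rising ~0.393%/C
    rho = 0.0175 * (1 + 0.00393 * (temp_c - 20))
    length = cable_length_m * (2 if round_trip else 1)  # out-and-back run
    max_drop_v = voltage * max_drop_pct / 100
    for mm2 in STANDARD_MM2:
        resistance = rho * length / mm2
        if current * resistance <= max_drop_v:
            # cross-section, total resistance, power lost in the cable
            return mm2, resistance, current ** 2 * resistance
    return None  # no listed size satisfies the drop limit

# 20 A load, 12 V system, 5 m one-way run
print(wire_gauge(12, 5, current=20))
```

For this example the 3% limit allows only 0.36 V of drop, which forces a 10 mm² conductor over 10 m of round-trip copper, dissipating about 7 W in the cable.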
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden and successfully discloses calculation methodology (ampacity + voltage drop), output values (resistance, power loss, fuse recommendation), and operational constraints (copper only, 0.75-240 mm² range). Compensates for missing output schema by listing return values.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: purpose statement, calculation methodology, outputs/constraints. Well front-loaded and appropriately sized for complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 well-documented input parameters, no annotations, and no output schema, description adequately compensates by explicitly listing return values (resistance, power loss, fuse) and calculation scope. Missing only edge case handling documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. Description adds value by contextualizing parameters within calculation logic: 'ampacity' relates to current/power, 'voltage drop' explains maxVoltageDropPercent and isRoundTrip, 'copper conductors' justifies temperatureCelsius parameter for resistance calculations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states specific action (calculate), resource (wire gauge/cable cross-section), and domain (DC circuit). Distinguishes from siblings (battery, inverter, solar calculators) by focusing specifically on conductor sizing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context via 'DC circuit' constraint, but lacks explicit guidance on when to select this tool versus sibling electrical calculators (e.g., calculate_power_budget vs. wire gauge). No mention of prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_wiring_diagram (Generate Wiring Diagram), grade A

Generate an electrical wiring diagram for campers, boats, or off-grid setups. Returns a complete schematic with batteries, chargers, protection components, and loads. Protection components (shunt, main switch, low-voltage cutoff) are auto-generated when both batteries and loads are provided.

Parameters (JSON Schema):
- loads (optional): Electrical loads / consumers
- format (optional, default: svg): Output format: svg (recommended, renders inline) or png (base64-encoded image, may not display in all clients)
- chargers (optional): Chargers with their power sources
- batteries (optional): Batteries in the system
- systemName (required): Name of the electrical system (e.g. "My Camper Van")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the conditional auto-generation behavior for protection components and hints at output format implications (PNG display limitations), but omits idempotency, side effects, error conditions, or persistence behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first establishes purpose and domain, second declares return contents, third explains conditional behavior. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a medium-complexity tool (5 params, nested objects) with no output schema, the description adequately covers the return type ('complete schematic') and references the format options. It compensates reasonably for missing annotations, though it could clarify the exact return structure (object vs raw string).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds value by explaining the semantic relationship between parameters—specifically that protection components (shunt, main switch, etc.) are auto-generated only when both 'batteries' and 'loads' parameters are provided together.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Generate' with clear resource 'electrical wiring diagram' and defines the exact domain (campers, boats, off-grid setups). It clearly distinguishes from sibling 'list_component_types' by emphasizing diagram/schematic generation rather than component listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides domain context (campers/boats/off-grid) and hints at usage through the auto-generation condition ('when both batteries and loads are provided'), but lacks explicit 'when not to use' guidance or comparison to the sibling tool 'list_component_types'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_component_types: List Component Types (Grade: A)

List all available component types and example configurations for building wiring diagrams. Use this to understand what parameters are needed before calling generate_wiring_diagram.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the scope ('all available') and return content ('example configurations'), giving insight into what data the tool provides. However, it lacks details on potential rate limits, caching behavior, or response format structure.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence establishes purpose; the second provides workflow context. Information is front-loaded with the action verb, and the description maintains tight focus on the tool's function.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description compensates by indicating the return includes 'example configurations' alongside component types. For a simple discovery tool, this is nearly complete, though explicit mention of the return data structure (e.g., 'returns a catalog...') would improve it further.


Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters with 100% schema coverage (vacuously true). Per the scoring rules, zero-parameter tools receive a baseline score of 4. The description appropriately does not invent parameter semantics where none exist.

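For reference, a zero-parameter tool like this one typically declares a minimal object schema. The snippet below is a generic illustration of that pattern, not the schema this server actually publishes:

```python
import json

# A minimal zero-parameter input schema, as commonly declared for
# JSON-Schema-based tool definitions. Illustrative only; the server's
# actual schema is not shown on this page.
zero_param_schema = {
    "type": "object",
    "properties": {},
    "required": [],
}

# A conforming call simply sends an empty arguments object.
arguments = {}

print(json.dumps(zero_param_schema))
```

With nothing to configure, there is genuinely no parameter semantics for the description to add, which is why the baseline score applies.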

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (List), clear resource (component types and example configurations), and domain context (for building wiring diagrams). It distinguishes from sibling 'generate_wiring_diagram' by clarifying this is for discovery/understanding rather than generation.


Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool ('before calling generate_wiring_diagram') and the specific workflow purpose ('to understand what parameters are needed'). It directly names the sibling alternative, providing clear navigation guidance.

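The discovery-first workflow this guidance encourages can be sketched with a stand-in client. The tool names match this server's tools, but `call_tool` and the returned payloads are hypothetical stand-ins, not a real MCP client API:

```python
# Hypothetical sketch: the agent calls list_component_types for the valid
# vocabulary before building a generate_wiring_diagram request.
def call_tool(name: str, arguments: dict) -> dict:
    # Stand-in for a real MCP client call; returns canned data here.
    if name == "list_component_types":
        return {"component_types": ["battery", "load", "solar_panel"]}
    if name == "generate_wiring_diagram":
        n_loads = len(arguments.get("loads", []))
        return {"diagram": f"schematic with {n_loads} load(s)"}
    raise ValueError(f"unknown tool: {name}")

# Step 1: discover component types and example configurations.
catalog = call_tool("list_component_types", {})

# Step 2: use the discovered vocabulary to build the real request.
diagram = call_tool("generate_wiring_diagram", {
    "batteries": [{"type": "battery"}],
    "loads": [{"name": "fridge"}],
})

print(catalog["component_types"])
print(diagram["diagram"])
```

Naming the sibling tool directly in the description, as this one does, is what makes that ordering discoverable to an agent.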
