eda-mcp
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Add a glama.json file to provide metadata about your server.
This server provides 39 tools.
No known security issues or vulnerabilities reported.
Add related servers to improve discoverability.
Tool Scores
Tool 1: component move
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It fails to disclose critical CAD-specific behavior: coordinate units (mm vs. mils), absolute vs. relative positioning, whether connected traces/wires move with the component or become disconnected, and whether the operation is undoable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is efficient but underspecified for the domain. While not verbose, it omits front-loaded critical information (units, coordinate context) that should take priority over brevity in a CAD context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a PCB/schematic CAD mutation tool with no annotations and undocumented parameters, the description is inadequate. Despite having an output schema (not shown), the description omits coordinate-system context, units, and error handling (e.g., invalid reference designators), all of which are essential for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 1/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage (only the titles 'Ref', 'X', 'Y'). The description fails to compensate: it does not explain that 'ref' is a reference designator (e.g., 'R1') or what coordinate system and units X/Y use. A complete parameter-documentation gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Move') and target ('component') with scope ('to a new position'). It implicitly distinguishes itself from kicad.add_component (creation vs. repositioning), though it doesn't clarify PCB vs. schematic context or coordinate-system details.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives like kicad.place_via or kicad.route_trace, and no mention of prerequisites such as the component needing to exist beforehand (via kicad.add_component) or requiring an open project.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
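The critiques above could be folded back into the tool metadata itself. A minimal sketch, assuming a hypothetical kicad.move_component definition; the units (mm), origin, wire behavior, and undo semantics shown are illustrative assumptions, not the server's actual contract:

```python
# Hypothetical revision of the move tool's metadata; all specifics
# (units, origin, undo behavior) are illustrative assumptions.
move_component_fix = {
    "name": "kicad.move_component",
    "description": (
        "Move a schematic component to a new absolute position in mm "
        "(origin: top-left of the sheet). Connected wires are NOT "
        "rerouted and may become disconnected. Undoable in the editor."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "ref": {"type": "string", "title": "Ref",
                    "description": "Reference designator of an existing component, e.g. 'R1'."},
            "x": {"type": "number", "title": "X",
                  "description": "New X position in mm (absolute)."},
            "y": {"type": "number", "title": "Y",
                  "description": "New Y position in mm (absolute)."},
        },
        "required": ["ref", "x", "y"],
    },
}
```

Note how the description now front-loads units and discloses the disconnect risk, directly answering the Behavior and Parameters gaps.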
Tool 2: footprint search
- Behavior 2/5
No annotations are provided, so the description carries the full burden. It fails to disclose search semantics (fuzzy vs. exact matching, which fields are searched) or performance characteristics. The example 'QFP-48' hints at search capability but doesn't explain behavior.
- Conciseness 4/5
Two sentences with the purpose front-loaded, followed by an example. No redundant text, though brevity comes at the cost of parameter documentation given the empty schema descriptions.
- Completeness 2/5
With three parameters, zero schema descriptions, and no annotations, the description should explain parameter semantics, particularly the library filter. It relies entirely on the output schema for return-value documentation, which is acceptable, but leaves the input parameters under-documented.
- Parameters 2/5
The schema has 0% description coverage. The description provides an example showing query='QFP-48' and limit=10, which partially compensates for those two parameters. However, the 'library' parameter (which filters by library name) is completely undocumented, with no hint about valid values or default behavior.
- Purpose 4/5
States a clear verb-object pair (search footprints) and scope (KiCad libraries). Distinguishes itself from the sibling 'list_footprints' by implying filtered retrieval vs. enumeration, though it doesn't make this distinction explicit.
- Usage Guidelines 2/5
Provides no guidance on when to use this tool versus 'list_footprints' or 'search_symbols'. Does not mention that the library parameter can filter to specific libraries or that an empty string searches all libraries.
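A sketch of a rewrite closing the 'library' parameter and search-semantics gaps. The tool name, matching rule (case-insensitive substring), and defaults are assumptions for illustration:

```python
# Hypothetical rewrite of the footprint-search tool; matching rules
# and defaults are assumptions, not confirmed server behavior.
search_footprints_fix = {
    "name": "kicad.search_footprints",
    "description": (
        "Search KiCad footprint libraries by case-insensitive substring "
        "match on footprint name. Read-only. Use list_footprints to "
        "enumerate a whole library instead. Example: query='QFP-48', "
        "limit=10."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string",
                      "description": "Substring to match, e.g. 'QFP-48'."},
            "library": {"type": "string", "default": "",
                        "description": "Restrict to one library; empty string searches all."},
            "limit": {"type": "integer", "default": 10, "minimum": 1,
                      "description": "Maximum number of results."},
        },
        "required": ["query"],
    },
}
```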
Tool 3: reference designator update
- Behavior 2/5
No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to mention mutation side effects, validation behavior (e.g., handling duplicate references), or whether changes propagate between schematic and PCB. The single sentence provides minimal behavioral context beyond identifying the operation as a write action.
- Conciseness 4/5
The description is a single, efficient sentence with no redundant words. While the brevity contributes to other scoring gaps, the sentence itself earns its place without waste.
- Completeness 2/5
Despite the existence of an output schema (removing the burden of describing return values), the tool remains under-documented due to zero input-schema coverage. For a mutation operation affecting design data, the lack of parameter explanations, format constraints, or behavioral warnings leaves the description incomplete.
- Parameters 2/5
With 0% schema description coverage, the description must compensate by explaining the two parameters ('old_ref' and 'new_ref'). It fails to do so, never stating that 'old_ref' identifies the target component and 'new_ref' specifies the desired identifier, leaving parameter semantics entirely undocumented.
- Purpose 4/5
The description uses specific domain terminology ('reference designator') and a clear verb ('Set'), accurately describing the operation on a component. It does not explicitly differentiate itself from siblings like 'set_value' or 'set_component_field', though the term 'reference designator' implicitly distinguishes it in PCB design contexts.
- Usage Guidelines 2/5
The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites such as component existence or constraints on reference-designator formats (e.g., uniqueness requirements).
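One way to stop the description "carrying the full burden" is to attach structured tool annotations. The field names below follow the MCP ToolAnnotations convention; the tool name and the specific hint values chosen are assumptions about this server's behavior:

```python
# Hypothetical metadata for the reference-rename tool; the name and
# the annotation values are assumptions for illustration.
set_reference_fix = {
    "name": "kicad.set_reference",
    "description": (
        "Set a component's reference designator. old_ref selects the "
        "existing component (e.g. 'R1'); new_ref is the replacement and "
        "must be unique in the schematic. Fails if old_ref is not found."
    ),
    "annotations": {
        "readOnlyHint": False,     # this is a mutation
        "destructiveHint": False,  # renames, does not delete data
        "idempotentHint": True,    # repeating the same rename is a no-op
    },
}
```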
Tool 4: component deletion
- Behavior 2/5
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Delete' implies a destructive mutation, the description fails to specify what happens to connected wires/nets, whether deletion triggers annotation renumbering, or whether the operation is reversible. For a destructive tool, this is insufficient behavioral context.
- Conciseness 4/5
The single-sentence description is efficiently structured with the action verb front-loaded. However, given the lack of usage guidelines and behavioral transparency, the extreme brevity reads as under-specification rather than effective conciseness.
- Completeness 2/5
Despite having an output schema (reducing the need to describe return values), the description is inadequate for a destructive operation with zero schema coverage. It omits critical context about side effects on schematic connectivity and does not sufficiently document the single required parameter.
- Parameters 3/5
With 0% schema description coverage, the description partially compensates by referencing the 'by reference' mechanism, which cues the agent to the purpose of the 'ref' parameter. However, it lacks details on the expected format (e.g., 'R1'), case sensitivity, and validation rules that the empty schema makes necessary.
- Purpose 4/5
The description provides a clear verb ('Delete'), resource ('component from the schematic'), and scope ('by reference'). However, it does not explicitly differentiate itself from sibling tools like 'move_component' or 'set_component_field' that also operate on components.
- Usage Guidelines 2/5
The description offers no guidance on when to use this tool versus alternatives (e.g., when to delete vs. move), nor does it mention prerequisites such as requiring an open project or schematic. No exclusion criteria or side effects are noted.
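For a destructive tool, the fix is mostly disclosure. A sketch under stated assumptions: the tool name, the dangling-wire behavior, and the undo semantics below are hypothetical and would need confirmation against the real server:

```python
# Sketch of a destructive-operation disclosure; wire handling and
# undo details are assumptions the real server would need to confirm.
delete_component_fix = {
    "name": "kicad.delete_component",
    "description": (
        "Delete a component from the schematic by reference designator "
        "(e.g. 'R1', case-sensitive). Attached wires are left in place "
        "as dangling ends; no re-annotation occurs. Undoable in the "
        "editor but not via this API."
    ),
    "annotations": {
        "destructiveHint": True,   # removes design data
        "idempotentHint": False,   # a second call on the same ref fails
    },
}
```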
Tool 5: symbol search
- Behavior 2/5
No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to indicate whether the search supports wildcards or regex, how case sensitivity is handled, what happens when no results are found, or whether the operation is read-only (though 'search' implies it). With an output schema present, return-value documentation is handled elsewhere, but execution behavior remains opaque.
- Conciseness 5/5
Two efficiently structured sentences: the first front-loaded with the core purpose, the second providing a concrete usage example. No redundant or filler text; every word earns its place.
- Completeness 3/5
Given that an output schema exists (handling return-value documentation) and the tool performs a standard search operation, the description provides minimal viable context. However, with three parameters and zero schema coverage, the lack of parameter-semantics documentation leaves a significant gap. Adequate but incomplete.
- Parameters 2/5
Schema description coverage is 0% (titles only, no descriptions). The description includes an example showing query and limit usage but does not explain the 'library' parameter's function (filtering by a specific library name?), the query syntax (substring vs. exact match), or valid ranges for limit. The example alone cannot compensate for the complete lack of schema documentation.
- Purpose 4/5
The description clearly states that the tool 'Search[es] for symbols (components) in KiCad libraries', providing a specific verb (search), resource (symbols/components), and scope (libraries). The parenthetical clarification of 'components' helps distinguish symbols from footprints, though it could more explicitly differentiate this tool from the sibling list_components (which likely searches the current schematic rather than libraries).
- Usage Guidelines 2/5
The description provides a concrete example showing syntax (query='STM32', limit=10) but offers no guidance on when to use this tool versus alternatives like get_symbol_info (for specific symbol details) or search_footprints. No prerequisites or exclusion criteria are mentioned.
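A description-only rewrite can already add the missing when-to-use guidance. The sibling tool names and the matching rule here are assumptions drawn from the critique, not confirmed API facts:

```python
# Hypothetical description rewrite for the symbol-search tool; the
# matching rule and sibling-tool routing advice are assumptions.
search_symbols_fix = (
    "Search for symbols (components) in KiCad libraries by "
    "case-insensitive substring match. Read-only; returns at most "
    "'limit' results, or an empty list if nothing matches. Use "
    "search_footprints for physical footprints, or get_symbol_info "
    "for details on a known symbol. Example: query='STM32', limit=10."
)
```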
Tool 6: global label placement
- Behavior 2/5
With no annotations provided, the description carries the full burden. It discloses the connectivity scope (cross-hierarchical) but fails to mention side effects, idempotency, coordinate units, or what the output schema contains. The mention of hierarchical connectivity provides only minimal behavioral context.
- Conciseness 4/5
The single sentence is front-loaded with the action verb and contains zero waste. Every word earns its place, though given the 0% schema coverage and lack of annotations, the description is arguably too concise and needs expansion.
- Completeness 2/5
Despite having an output schema (reducing the need to describe return values), the description is inadequate for a 5-parameter CAD tool with zero schema documentation. It omits coordinate-system details, valid enum values for 'shape', and placement behavior, all essential for correct invocation.
- Parameters 1/5
Schema description coverage is 0%, and the description completely fails to compensate. It does not explain coordinate units (x, y), angle units, valid shape values (only the default 'bidirectional' appears in the schema), or naming conventions for the label: critical omissions for a CAD placement tool.
- Purpose 5/5
The description provides a specific verb ('Add') and resource ('global label'), and crucially distinguishes this tool from the sibling kicad.add_net_label by specifying that it 'connects across hierarchical sheets', a critical distinction in EDA tools between local and global nets.
- Usage Guidelines 3/5
The parenthetical '(connects across hierarchical sheets)' implies when to use this tool (for cross-sheet connectivity), but it does not explicitly state when NOT to use it, name alternatives like add_net_label for local connections, or mention prerequisites.
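A schema-level sketch of the missing 'shape' and unit documentation. The enum values mirror KiCad's global-label shapes, but both they and the units are assumptions here rather than confirmed server behavior:

```python
# Hypothetical schema for the global-label tool; enum values, units,
# and rotation direction are assumptions for illustration.
add_global_label_fix = {
    "description": (
        "Add a global label (connects across hierarchical sheets) at "
        "(x, y) in mm. For connections within one sheet, prefer "
        "add_net_label."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "name": {"type": "string", "description": "Net name, e.g. 'VCC_3V3'."},
            "x": {"type": "number", "description": "X position in mm."},
            "y": {"type": "number", "description": "Y position in mm."},
            "angle": {"type": "number", "default": 0,
                      "description": "Rotation in degrees, counter-clockwise."},
            "shape": {"type": "string", "default": "bidirectional",
                      "enum": ["input", "output", "bidirectional",
                               "tri_state", "passive"],
                      "description": "Label shape / electrical direction."},
        },
        "required": ["name", "x", "y"],
    },
}
```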
Tool 7: PCB line/shape drawing
- Behavior 3/5
With no annotations provided, the description carries the full burden. It successfully communicates the state requirement (an open project is needed) but fails to disclose mutation effects, error conditions (e.g., invalid layer names), or whether the operation is idempotent. The 'Add' verb implies creation, but safety and reversibility details are absent.
- Conciseness 4/5
The description is appropriately concise at one sentence plus a prerequisite tag. The '[Requires open project]' constraint is correctly front-loaded for immediate visibility. No redundant filler, though 'or shape' adds slight ambiguity without adding value.
- Completeness 3/5
Given the existence of an output schema, the description appropriately omits return-value details. However, with six parameters and 0% schema coverage, it should provide more semantic context for the coordinate system and defaults (e.g., confirming that x1/y1 is the start point). Adequate but with clear gaps.
- Parameters 2/5
Schema description coverage is 0%, so the description must compensate for the undocumented parameters. It mentions 'PCB layer', which maps to the 'layer' parameter, but never explains that x1/y1/x2/y2 are endpoint coordinates or that 'width' controls line thickness. Minimal compensation for the schema deficit.
- Purpose 4/5
The description clearly states the action ('Add') and target resource ('a line or shape to a PCB layer'), distinguishing it from schematic-focused siblings like 'add_wire' by specifying 'PCB layer'. However, the 'or shape' phrasing is slightly ambiguous, since the tool name and coordinate-based parameters clearly indicate a line segment.
- Usage Guidelines 2/5
The description states a prerequisite ('[Requires open project]') but offers no guidance on when to use this tool versus alternatives like 'add_wire' (schematic) or 'create_board_outline', nor does it say whether it draws graphical elements or electrical connections.
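The six-parameter gap could be closed with one description line per parameter. A sketch assuming mm units and KiCad-style layer names; the graphical-vs-electrical claim is an assumption the server would need to confirm:

```python
# Hypothetical per-parameter documentation for the line-drawing tool;
# units and example layer names are assumptions.
draw_line_params = {
    "x1": "Start X in mm.",
    "y1": "Start Y in mm.",
    "x2": "End X in mm.",
    "y2": "End Y in mm.",
    "width": "Line width in mm (e.g. 0.15).",
    "layer": "Target PCB layer name, e.g. 'F.SilkS' or 'Edge.Cuts'. "
             "Draws a graphical segment, not an electrical trace.",
}
```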
Tool 8: net label placement
- Behavior 2/5
No annotations are provided, so the description carries the full disclosure burden. It mentions positional placement but omits coordinate units (mm/mils), angle behavior (degrees vs. radians, default implications), side effects, idempotency, and error conditions (e.g., duplicate names). An output schema exists, but the description doesn't hint at the return structure.
- Conciseness 5/5
Extremely tight: two sentences plus a concrete code example. Information is front-loaded with the core action, followed by functional context, then illustrative syntax. Zero redundant text.
- Completeness 3/5
Adequate for basic operation, given that the output schema handles return documentation. However, with four parameters at 0% schema coverage and no annotations, the description underspecifies behavioral details (angle semantics, units) and the sibling differentiation needed for correct schematic-capture workflows.
- Parameters 2/5
Schema description coverage is 0%, so the description must compensate. The example illustrates name, x, and y usage but leaves the 'angle' parameter entirely undocumented (default 0, rotation direction, units), and gives no explanation of the coordinate-system origin or valid net-name formats (alphanumeric, case sensitivity).
- Purpose 4/5
A clear verb-resource pair ('Add a net label') and scope ('at a position'). The phrase 'Primary wiring mechanism for named nets' provides functional context. However, it doesn't explicitly distinguish this tool from the sibling kicad.add_global_label (global vs. local nets), a critical distinction in KiCad workflows.
- Usage Guidelines 3/5
Provides implicit usage context by calling the tool the 'Primary wiring mechanism for named nets', suggesting it is preferred over add_wire for logical connections. It lacks explicit when-to-use guidance versus add_global_label or power symbols, and omits prerequisites such as an active project/schematic.
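A description rewrite covering the undocumented 'angle' parameter and the global/local routing decision. The units, default, and example call are assumptions for illustration:

```python
# Hypothetical description rewrite for the net-label tool; units,
# the angle default, and the example call are assumptions.
add_net_label_fix = (
    "Add a net label at (x, y) in mm. Primary wiring mechanism for "
    "named nets within a single sheet; use add_global_label for "
    "cross-sheet nets. angle rotates the label in degrees (default 0). "
    "Example: add_net_label(name='SPI_MOSI', x=50, y=30)."
)
```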
Tool 9: BOM export
- Behavior 2/5
No annotations are provided, so the description carries the full burden. It discloses the project requirement but omits critical behavioral details: the output file format, whether it overwrites existing files, and side effects. An output schema exists, so return values need not be described.
- Conciseness 5/5
An extremely concise two-phrase structure that front-loads the critical prerequisite constraint. No redundant or wasted text; every element serves a distinct purpose.
- Completeness 3/5
Minimum viable for a single-parameter export tool. It adequately covers the core function and prerequisite, but a significant gap remains around the undocumented parameter (given 0% schema coverage) and export-format specifics. The output schema's existence mitigates the need to describe return values.
- Parameters 2/5
Schema coverage is 0% (no description field for the output_path parameter), and the description does not compensate; it never mentions the parameter at all. While 'output_path' is somewhat self-explanatory, missing semantics include whether it accepts relative or absolute paths, the required file extension, and whether it is a directory or file path.
- Purpose 4/5
A clear verb ('Export') + resource ('Bill of Materials') + source ('from the schematic'). Effectively distinguishes itself from sibling export tools (export_gerbers, export_netlist, export_pdf) by specifying BOM content. Minor gap: it doesn't specify the output format (CSV, XML, etc.).
- Usage Guidelines 3/5
Provides the explicit prerequisite '[Requires open project]', which is valuable context. However, it lacks guidance on when to use this tool vs. export_netlist or export_gerbers, or whether the BOM should be generated before or after other operations.
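A sketch of the missing output_path semantics and overwrite disclosure. The CSV format, path resolution, and overwrite behavior below are assumptions, since the source confirms none of them:

```python
# Hypothetical description for the BOM export tool; format, path
# handling, and overwrite behavior are all assumptions.
export_bom_fix = {
    "description": (
        "[Requires open project] Export a Bill of Materials from the "
        "schematic as CSV. output_path is a file path (absolute, or "
        "relative to the project directory) ending in '.csv'; an "
        "existing file is overwritten without confirmation."
    ),
}
```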
Tool 10: SVG export
- Behavior 2/5
No annotations are provided, so the description carries the full burden. The only behavioral trait disclosed is the open-project requirement. Missing: file-overwrite behavior, whether paths are absolute or relative, side effects on the KiCad session, and what the output schema contains.
- Conciseness 5/5
Excellent structure: the prerequisite is front-loaded in brackets, followed by a one-sentence purpose statement and then a concrete example. No wasted words; every element earns its place.
- Completeness 3/5
Adequate for a simple two-parameter export tool with an existing output schema (no need to describe return values). However, it is incomplete due to the undocumented target-parameter values and missing behavioral details (overwrite behavior) that would be necessary for safe autonomous operation.
- Parameters 2/5
The schema has 0% description coverage. The description compensates partially with an example showing target='schematic', implying that the parameter selects the export source. However, it fails to explicitly document the valid target values, output_path format requirements (absolute vs. relative), or that target defaults to 'schematic'.
- Purpose 4/5
States the specific action (export), source (schematic or PCB), and format (SVG). Clearly distinguishes itself from sibling export tools (export_pdf, export_gerbers, etc.) by output format. Slight ambiguity over whether it processes both document types at once or requires selection via the target parameter.
- Usage Guidelines 3/5
Explicitly states the prerequisite '[Requires open project]' and provides a concrete usage example. However, it lacks guidance on when to choose SVG over export_pdf or other formats and doesn't specify the valid values for the target parameter (e.g., 'pcb' vs. 'schematic').
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context by specifying the '.kicad_pro' file extension filter, but omits other important traits like whether the search is recursive, error handling for invalid directories, or safety characteristics (though 'List' implies read-only).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally concise with two efficient statements: a clear purpose declaration followed immediately by a concrete usage example. No redundant or filler text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's singular purpose and the presence of an output schema (which handles return value documentation), the description is minimally adequate. However, with zero schema coverage and no annotations, it lacks sufficient detail on parameter constraints and behavioral edge cases to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While the example shows the parameter is a path string, the description fails to add crucial semantics such as whether the path must be absolute, if it requires specific permissions, or validation constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (List) and target resource (KiCad projects/.kicad_pro files) with scope (in a directory). However, it does not explicitly differentiate from sibling tools like 'create_project' or 'open_project' to clarify when listing is preferred over other project operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an example invocation but offers no explicit guidance on when to use this tool versus alternatives (e.g., 'use this to discover projects before opening with open_project') or prerequisites (e.g., directory must exist).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Set' implies mutation, the description fails to disclose whether changes apply immediately to existing traces, if the operation is reversible, what the output schema contains, or whether changes persist to disk without calling `save_project`.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient single-sentence structure with prerequisite front-loaded in brackets. Every token earns its place—no filler words while conveying action, scope, and affected parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a PCB design tool where units are critical (0.2 could be mm or inches). While the existence of an output schema excuses the description from detailing return values, the description misses essential domain distinctions (global vs net-class rules) and persistence behavior. The prerequisite hint alone is insufficient for safe mutation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (only titles provided). The description compensates by mapping the three parameters to their functions (trace width, clearance, drill size) and implying these are minimum values. However, it omits critical domain context: expected units (mm vs inches), physical definitions (e.g., clearance between what objects), and valid value ranges.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
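One way to close the units gap is to push unit information into every parameter description. The sketch below is illustrative only: the millimeter convention and the parameter names are assumptions, not confirmed behavior of this server.

```python
# Illustrative parameter schema for a set_design_rules-style tool.
# The names and the millimeter convention are assumptions for this
# sketch, not the actual server contract.
SET_DESIGN_RULES_PARAMS = {
    "min_track_width": {
        "type": "number",
        "description": "Minimum trace width in millimeters (e.g. 0.2).",
    },
    "min_clearance": {
        "type": "number",
        "description": (
            "Minimum copper-to-copper clearance in millimeters, "
            "measured between any two items on different nets."
        ),
    },
    "min_via_drill": {
        "type": "number",
        "description": "Minimum via drill diameter in millimeters.",
    },
}

def check_units_documented(params: dict) -> bool:
    """Return True only if every parameter description names a unit."""
    return all("millimeters" in p["description"] for p in params.values())
```

A lint check like `check_units_documented` could run in CI to keep unit documentation from regressing as tools are added.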
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Set) and resource (global design rules), listing the three specific rule categories controlled. The term 'global' effectively distinguishes this from sibling `create_net_class`, which handles net-specific rules. Minor gap: doesn't explicitly clarify this is for PCB layout versus schematic.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The bracketed prefix '[Requires open project]' provides a clear prerequisite for invocation. However, it lacks guidance on when to prefer this over `create_net_class` for specific nets, and doesn't mention whether a project must be saved first or if this triggers DRC automatically.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'Set' implies mutation, there's no disclosure of side effects (e.g., whether this triggers ERC, updates the netlist, or validates the reference exists). Output schema exists but behavior regarding failures is undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two sentences. First sentence states purpose immediately; second provides syntactic example. No filler or redundant text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter setter with an output schema, the description covers the minimum viable information. However, gaps remain regarding validation behavior, side effects, and differentiation from similar field-setting tools in the sibling list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (only titles 'Ref' and 'Value'). The description compensates effectively with the example showing 'R1' and '4.7k', which semantically clarifies that 'ref' is a reference designator and 'value' is the electrical value. Could explicitly state 'ref' format expectations.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Set' and target 'value property of a component'. The example clarifies this is for component values like '4.7k'. However, it doesn't distinguish from sibling tool 'set_component_field' (which could also set values) or 'set_reference'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a concrete example showing syntax, but offers no explicit guidance on when to use this versus 'set_component_field' or other field-setting tools. No mention of prerequisites (e.g., project must be open) or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable domain context that this is for ERC validation, but lacks operational details like error handling (what if no pin exists at x,y?), idempotency, or side effects. The output schema exists so return values don't need description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. First sentence establishes the action and object; second provides the domain rationale (ERC). Perfectly front-loaded with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 2-parameter tool with an output schema, the description is minimally adequate but has gaps. The ERC mention provides domain completeness, but the coordinate parameters (x, y) are undocumented despite 0% schema coverage, leaving critical operational context missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (only titles X/Y), so the description must compensate. While it mentions marking a 'pin', it fails to explain that x and y are schematic coordinates specifying the pin location, or what units/format are expected. The link between the parameters and the pin location is implicit at best.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
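A hypothetical rewrite shows how little text it takes to pin down the coordinate semantics. The convention stated here (schematic sheet coordinates in millimeters, origin at top-left) is an assumption for the sketch; the real server would need to confirm it.

```python
def describe_no_connect(x: float, y: float) -> str:
    """Build an illustrative call summary for an add_no_connect-style tool.

    The coordinate convention named in the returned text (schematic
    sheet coordinates, millimeters, origin top-left) is assumed for
    this sketch; it is exactly the detail the review says is missing
    from the real description.
    """
    return (
        f"Place a no-connect flag at the pin located at ({x}, {y}) "
        "in schematic sheet coordinates (mm, origin top-left)."
    )
```

Folding one sentence like this into the tool description would resolve the ambiguity the Parameters critique identifies.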
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Mark') and target ('unused pin'), and specifies the domain artifact ('no-connect flag'). It distinguishes from siblings like add_wire or add_component by focusing on ERC compliance for unconnected pins, though it doesn't explicitly state this is a schematic-level operation versus PCB.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit usage context by stating it's 'Required to pass ERC', indicating when to use it (for unused pins during electrical validation). However, it lacks explicit guidance on when not to use it (e.g., on pins that should actually be connected) or comparison to alternatives like wiring the pin.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable behavioral context regarding coordinate exactness, but fails to disclose mutation effects, error conditions, or reference the existing output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structured: purpose first, constraint second, example third. No redundant information; every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple geometric drawing tool with four numeric parameters and an output schema (which excuses return value documentation). However, gaps remain regarding units, grid snapping behavior, and electrical net creation implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (only titles X1, Y1, etc.). The description compensates partially with a concrete example showing numeric values, though it omits critical context like units (mm vs mils) and coordinate system origin.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
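The add_wire vs. add_line ambiguity is cheap to resolve with descriptions that reference each other. These strings are illustrative drafts, not the server's actual text:

```python
# Draft cross-referencing descriptions for the two line-drawing tools.
# Wording (including the millimeter convention) is illustrative only.
ADD_WIRE_DESC = (
    "[Requires open schematic] Add an electrical wire between two points. "
    "Creates or extends a net. Coordinates are schematic millimeters and "
    "must land exactly on pin endpoints; an off-grid end leaves the net "
    "open. For purely graphical, non-electrical lines, use add_line."
)
ADD_LINE_DESC = (
    "[Requires open schematic] Draw a graphical line with no electrical "
    "meaning. To connect pins into a net, use add_wire instead."
)
```

The explicit "use X instead of Y" sentence is the pattern the rubric's guidance line asks for, and it costs only a few tokens per tool.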
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (add) and resource (wire) with scope (between two points). However, it does not differentiate from sibling tool 'add_line', which in KiCad is for graphical lines rather than electrical connections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides only the constraint that 'Coordinates must be exact' but lacks explicit guidance on when to use this versus 'add_line' (graphical) or 'route_trace' (PCB), and omits prerequisites like having an open schematic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully communicates that only 'unannotated' components are affected (scope limitation) and that the operation is automatic, but fails to disclose critical behavioral traits like whether this modifies the schematic file, requires a subsequent save, or is reversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence of seven words that front-loads the action ('Auto-assign') and immediately follows with the object and scope. No extraneous text is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the output schema handles return value documentation and the parameterless nature simplifies the contract, the description lacks essential contextual information for a CAD mutation tool—specifically, whether it operates on the current sheet or all sheets, and confirmation that it modifies design state given the absence of destructiveHint annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4. The description appropriately does not introduce phantom parameters, maintaining consistency with the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (auto-assign reference designators) and target scope (all unannotated components). However, it does not explicitly differentiate from the sibling tool 'set_reference', which likely handles manual assignment of individual references.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'set_reference', nor does it mention prerequisites such as requiring an open project or schematic context before invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses what data is returned (pins, properties, datasheet), which is valuable behavioral context. However, it omits side effects, error conditions (e.g., symbol not found), or whether the operation is read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with zero waste. Single sentence establishing purpose followed immediately by a concrete example. Information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a read-only lookup tool with an output schema (which reduces the need to describe return values), but gaps remain. The 0% input schema coverage is not fully compensated, and there's no mention of error handling or parameter constraints (case sensitivity, special characters).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. While the example (library='Device', symbol='R') provides implicit hints about parameter values, the description lacks explicit explanations of what constitutes a valid library name or symbol identifier format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
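A small helper makes the implicit format explicit. KiCad itself identifies symbols in 'Library:Symbol' form (e.g. 'Device:R'); whether this server's lookups are case-sensitive is an open question its description should answer.

```python
def symbol_id(library: str, symbol: str) -> str:
    """Join a library and symbol name into KiCad's 'Library:Symbol' form.

    Spelling out this format, and stating whether lookups are
    case-sensitive, is exactly the parameter detail the tool
    description currently omits.
    """
    return f"{library}:{symbol}"
```

For example, the review's sample call corresponds to the identifier produced by `symbol_id("Device", "R")`.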
- Purpose 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific action (Get detailed info), resource (symbol), and scope (pins, properties, datasheet). It effectively distinguishes from sibling 'search_symbols' by emphasizing retrieval of specific symbol details versus searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an example call but lacks explicit guidance on when to use this versus 'search_symbols' or prerequisites like library availability. No 'when-not-to-use' or alternative recommendations are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses that the symbol creates a global net connection, which is key behavioral context. However, it omits mutation safety details (idempotency, error handling if invalid name provided), forcing the agent to infer destructiveness from 'Add'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: declaration, behavioral note, and concrete example. Information is front-loaded with the action verb, making it immediately scannable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 4-parameter tool with output schema (so return values needn't be explained), but gaps remain in documenting optional parameters (rotation) and behavioral edge cases. Meets minimum viable standard but lacks richness given zero annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description compensates partially via the example showing name='+3V3', x=100, y=30, clarifying that name is the power net label and x/y are coordinates. However, it fails to document the rotation parameter, coordinate units, or valid name formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Add') and resource ('power symbol') with concrete examples (VCC, GND, +3V3). Distinguishes from generic components via the power-specific examples, though it could explicitly differentiate from add_global_label which also creates global nets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an example invocation showing syntax, and mentions 'Creates global net connection' implying usage for power distribution. However, lacks explicit guidance on when to use this versus add_global_label or prerequisites like open project/schematic.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the project state requirement ('[Requires open project]'), but omits other critical behavioral details such as whether creating a duplicate net class overwrites or errors, what units apply to the numerical parameters, and whether the operation is idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately compact and front-loaded: prerequisite first, purpose second, example third. Every element earns its place—the example syntax is particularly efficient for clarifying parameter types and usage patterns without verbose explanation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter creation tool with an output schema, the description covers the essentials (prerequisite, purpose, example) but leaves gaps in behavioral contract (units, collision handling) and parameter documentation given the schema's lack of descriptions. The existence of the output schema excuses the lack of return value documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Given 0% schema description coverage, the description partially compensates by naming the parameters ('trace width and clearance') and providing a concrete example showing typical values (0.5, 0.3). However, it fails to specify units (millimeters vs inches), valid ranges, or semantic meaning of 'clearance', leaving significant gaps that the schema should have covered.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Create[s] a net class with specific trace width and clearance'—a specific verb and resource. While it identifies what the tool does, it does not explicitly differentiate from the sibling `set_design_rules` tool, which could be confused with net class creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes the prerequisite '[Requires open project]', providing essential context for when the tool can be invoked. However, it lacks explicit guidance on when to use this versus `set_design_rules` or what constitutes a valid net class name, and does not mention error conditions like duplicate names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully communicates the project state requirement and workflow context, but fails to disclose safety characteristics (e.g., whether it overwrites existing files at output_path) or side effects on the application state.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise with zero waste. The bracketed prerequisite is front-loaded, followed by the core action and workflow context. Every word earns its place across the two sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with an output schema (relieving the need to describe return values). However, given the 0% schema coverage and lack of annotations, the description should have included parameter details and file behavior to be fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (output_path has no description field). The description compensates poorly, never mentioning the parameter explicitly or clarifying path requirements (absolute vs. relative), expected file extensions, or overwrite behavior. The connection between 'Export' and 'output_path' is implicit only.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
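To illustrate the missing path contract, here is one policy a server could document and enforce. The specific choices shown (relative paths resolved against the project directory, a '.net' extension appended when absent) are assumptions for the sketch, not the tool's known behavior.

```python
import os

def resolve_output_path(output_path: str, project_dir: str) -> str:
    """Illustrate one way a server could pin down output_path semantics.

    The policy here is assumed for the sketch: relative paths are
    resolved against the project directory, and a '.net' extension is
    appended if missing. The real tool's behavior is undocumented,
    which is the gap the review flags.
    """
    if not os.path.isabs(output_path):
        output_path = os.path.join(project_dir, output_path)
    if not output_path.endswith(".net"):
        output_path += ".net"
    return output_path
```

Whatever policy the server actually implements, stating it in one sentence of the description removes the guesswork for agents.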
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Export') and resource ('netlist') with source context ('from the schematic'). It distinguishes from sibling export tools (export_bom, export_gerbers, etc.) by specifying the netlist format, though it could further clarify the file format (e.g., .net or .csv).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent prerequisite flag ('[Requires open project]') and explicit workflow guidance ('Use before syncing PCB with schematic'). This clearly signals when to use the tool in the design process, though it doesn't explicitly state error conditions or when NOT to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but omits key behavioral details: it doesn't state that this modifies board state (destructive), what the output schema returns, or error conditions (e.g., invalid coordinates). 'Place' implies mutation but doesn't confirm it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Optimal structure: prerequisite prefix, action statement, concrete example. Three components in minimal space with no redundant text—every element earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness3/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation given the output schema exists (removing need to describe return values), but insufficient for a 5-parameter PCB tool with zero schema descriptions—missing net connectivity rules and coordinate system context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters3/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. The description provides a concrete example showing numeric values, which helps infer units, but fails to explain the 'net' parameter's semantics (empty string default behavior) or distinguish 'drill' vs 'diameter' for users.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose5/5Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Place') + resource ('via') + scope ('at a position'). The prerequisite '[Requires open project]' immediately clarifies execution context, distinguishing it from project-level operations like create_project.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines3/5Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides critical prerequisite '[Requires open project]' which prevents misuse, but lacks explicit guidance on when to use this versus siblings like route_trace or add_wire for connectivity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
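The 0% schema coverage flagged here is the cheapest fix. Below is a sketch of what a fully described via-placement input schema could look like; the parameter names, mm units, and the meaning of the empty-string net default are assumptions about the server, not facts from it:

```python
# Sketch of a described place_via input schema. Units (mm) and the
# empty-string "net" semantics are assumed, not confirmed server behavior.
PLACE_VIA_SCHEMA = {
    "type": "object",
    "properties": {
        "x": {"type": "number", "description": "X position in mm."},
        "y": {"type": "number", "description": "Y position in mm."},
        "diameter": {"type": "number", "description": "Outer copper diameter in mm."},
        "drill": {"type": "number", "description": "Drill diameter in mm; must be less than diameter."},
        "net": {"type": "string", "default": "", "description": "Net to attach the via to; empty leaves it unassigned."},
    },
    "required": ["x", "y", "diameter", "drill"],
}

# Every property now carries a description, i.e. 100% coverage.
described = [p for p in PLACE_VIA_SCHEMA["properties"].values() if p.get("description")]
```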
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses 'Straight segments only' (crucial behavioral constraint) and project requirement. However, omits mutation nature (creates objects), error conditions, and return value details (though output schema exists).
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent front-loading with prerequisite tag followed by core function, constraint, and example. Zero redundant text; every element earns its place in a compact three-segment structure.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic invocation but incomplete given 7 parameters with zero schema documentation. Missing: return value semantics (despite output schema existing), valid layer values, coordinate system details, and 'net' parameter behavior. Minimum viable for a complex routing operation.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage. Description compensates partially with concrete example showing coordinate values, width, and layer format. However, fails to explain 'net' parameter (default empty string), valid layer enumerations, or measurement units, leaving significant semantic gaps.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Route') and resource ('trace') with specific constraint 'between two points'. 'Straight segments only' distinguishes it from autorouting or curved trace tools. Minor deduction for not explicitly distinguishing from schematic 'add_wire' sibling, though 'trace' implies PCB context.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states prerequisite '[Requires open project]' at the front. However, lacks guidance on when to use vs alternatives like 'place_via' or 'add_wire', and doesn't mention coordinate system prerequisites or layer requirements.
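One low-cost improvement for the gaps above is enumerating valid layers and units directly in the trace-routing description. A hedged sketch: the layer names below are standard KiCad copper layers, but whether this server accepts exactly this set is an assumption:

```python
# Illustrative only: the layer list and mm units are assumptions.
VALID_COPPER_LAYERS = ["F.Cu", "B.Cu"]  # standard KiCad front/back copper names

ROUTE_TRACE_DESC = (
    "[Requires open project] Route a trace between two points "
    "(coordinates and width in mm). Straight segments only. "
    "layer must be one of: " + ", ".join(VALID_COPPER_LAYERS) + ". "
    "net may be empty to leave the trace unassigned."
)
```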
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden but only states that it 'Sets' a field without clarifying overwrite behavior, idempotency, whether new fields are created dynamically, or error conditions if the component doesn't exist.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that is appropriately front-loaded with the core action, followed by helpful parenthetical examples. No wasted words.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 3-parameter tool with an output schema, but given zero annotations and zero schema descriptions, it lacks necessary behavioral details (error handling, mutation semantics) to be fully complete.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description partially compensates by illustrating what field_name/field_value represent (MPN, supplier examples), but does not explicitly map the 'ref' parameter to component reference designators or describe parameter formats.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Set') and resource ('custom field on a component'), with specific examples (MPN, supplier, tolerance) that effectively distinguish it from siblings like set_reference and set_value.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The parenthetical examples implicitly guide usage toward component metadata fields, but there is no explicit guidance on when to use this versus set_reference/set_value, or prerequisites like component existence.
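The overwrite and field-creation semantics questioned above could be pinned down in a docstring-style description. A toy sketch; the function name, the overwrite rule, and the in-memory return shape are all assumptions for illustration, not the server's implementation:

```python
def set_custom_field(ref: str, field_name: str, field_value: str) -> dict:
    """Set a custom field (e.g. MPN, supplier, tolerance) on a component.

    ref is the reference designator, e.g. 'R1'. Assumed semantics: if the
    field already exists it is overwritten; if not, it is created.
    """
    # Toy stand-in for the real board mutation, so the contract is testable.
    return {"ref": ref, "fields": {field_name: field_value}}
```

Stating the overwrite rule in one sentence is what lifts the Behavior score: the agent then knows a repeated call is idempotent rather than destructive.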
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the prerequisite (open project needed) and implies mutation ('Add'), but lacks details on error handling, coordinate units, layer validation, or whether duplicate text creation is idempotent.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Every sentence earns its place: prerequisite warning first, then purpose statement, followed by a compact, illustrative example. No redundancy or filler text.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, return values need no explanation. However, for a 4-parameter PCB tool with 0% schema coverage, the description should enumerate valid layer options or coordinate system details; the example alone is insufficient for complete parameter understanding.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, so the description must compensate. The embedded example provides implicit semantics for all 4 parameters, but explicit documentation of coordinate units, valid layer strings, and text constraints is missing.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Add text') and target resource ('PCB silkscreen or other layer'), effectively distinguishing this from schematic-related siblings like add_global_label or add_net_label which handle schematic text.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The '[Requires open project]' prefix establishes a clear prerequisite, but the description lacks guidance on when to use this versus alternatives (e.g., when to use add_text vs labeling features in add_component) or when not to use it.
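The missing layer enumeration called out above is better expressed as a JSON Schema enum than as prose, since agents can validate against it before calling. A sketch with assumed layer names (standard KiCad silkscreen/fab layers; the server's accepted set may differ):

```python
# Assumed layer set; verify against the server before relying on it.
ADD_TEXT_LAYER_PROP = {
    "type": "string",
    "enum": ["F.SilkS", "B.SilkS", "F.Fab", "B.Fab"],
    "default": "F.SilkS",
    "description": "Target layer for the text.",
}
```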
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the session state side effect, but omits other behavioral traits such as error handling when the file doesn't exist, idempotency, or file locking behavior.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally concise with three efficient components: purpose declaration, behavioral context, and concrete example. Every sentence earns its place with no redundancy.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (covering return values) and the tool has only one parameter, the description is reasonably complete. It could be improved by mentioning error conditions (e.g., file not found) or prerequisites, but adequately covers the core functionality.
Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate. The example 'open_project(path='/path/to/project.kicad_pro')' implicitly demonstrates the expected file extension and format, but lacks explicit semantic description of what the path parameter represents.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Open an existing KiCad project') and distinguishes itself from sibling create_project by specifying 'existing'. It also explains the critical side effect of setting 'session context for all subsequent tools'.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this is a prerequisite tool by noting it sets session context for subsequent tools, but lacks explicit guidance on when to use this versus create_project, and doesn't state that it must be called before other project-specific operations.
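The undocumented file-not-found behavior matters because agents branch on it. One convention the server might adopt, and could then describe in a single sentence, is a structured error result rather than an unexplained exception. This is an assumed contract for illustration, not observed behavior:

```python
import os

def open_project(path: str) -> dict:
    """Toy sketch of open_project with an explicit not-found contract."""
    if not os.path.exists(path):
        # A structured error (rather than a raised exception) keeps the
        # failure mode describable in one line of the tool description.
        return {"ok": False, "error": f"project file not found: {path}"}
    return {"ok": True, "project": path}
```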
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It adds prerequisite context ('Requires open project') and workflow constraint (Gerber dependency), but lacks disclosure of failure modes, side effects, or what the DRC report contains despite having an output schema.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded constraint '[Requires open project]' appears first, followed by action and workflow dependency. Every word earns its place; no redundancy or generic filler.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description appropriately omits return value details. However, with zero schema coverage on parameters and no annotations, the failure to document the 'output_path' parameter leaves a significant gap in completeness for a tool of this complexity.
Parameters: 1/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Critical gap: Schema has 0% description coverage for the required 'output_path' parameter. The description fails to compensate by explaining what this path represents (report file? directory?), expected format, or semantics, leaving the parameter effectively undocumented.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Run Design Rules Check on the PCB' with specific verb and resource. It effectively distinguishes from sibling tool 'run_erc' by specifying 'Design Rules' (PCB layout) vs electrical rules, and links to 'export_gerbers' as a downstream dependency.
Usage Guidelines: 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent workflow guidance: '[Requires open project]' states prerequisites explicitly, and 'Must pass before exporting Gerbers' provides clear sequencing logic relative to sibling tool export_gerbers, establishing when this tool must be invoked in the workflow.
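The 1/5 Parameters score above turns entirely on the undocumented output_path. One added sentence closes it; the report format and overwrite behavior in this sketch are assumptions to verify against the server:

```python
# Hypothetical revision; the report format and overwrite rule are assumed.
RUN_DRC_DESC = (
    "[Requires open project] Run Design Rules Check on the PCB. "
    "Writes the violation report to output_path (absolute file path; "
    "overwritten if present). Must pass before exporting Gerbers."
)
```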
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the prerequisite state and specific check behaviors (finding unconnected pins, missing power). However, it omits whether the operation is destructive or read-only, and doesn't clarify that results are written to the output_path parameter.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. Prerequisites are front-loaded in brackets, followed by the action and concrete examples. Every word earns its place with no redundancy.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a diagnostic tool with an output schema (which handles return value documentation). However, given zero annotations and zero schema coverage, the description should have included parameter semantics and safety profile (read-only nature) to be complete.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description fails to compensate. It provides no explanation of the required 'output_path' parameter (e.g., expected file format, content of the report, or whether it overwrites existing files). This is a significant documentation gap for the sole parameter.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description provides specific verb ('Run'), resource ('Electrical Rules Check'), and scope ('on the schematic'). Explicit examples ('unconnected pins, missing power') further clarify functionality, effectively distinguishing from sibling run_drc which targets the board.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Front-loaded prerequisite '[Requires open project]' explicitly states when the tool can be used. The mention of 'schematic' provides clear context for choosing this over board-checking alternatives, though it doesn't explicitly name run_drc as the alternative.
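Several scores above hinge on the absence of tool annotations. The MCP spec's standard hints could state the safety profile directly instead of leaving it to prose; the values below are assumptions about run_erc's actual behavior, shown only to illustrate the mechanism:

```python
# MCP tool annotation hints (per the MCP spec); values are assumed.
RUN_ERC_ANNOTATIONS = {
    "readOnlyHint": False,     # it writes a report file to output_path
    "destructiveHint": False,  # it does not modify the schematic itself
    "idempotentHint": True,    # re-running on an unchanged schematic yields the same report
}
```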
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. The example implies placement at coordinates but does not disclose units (mm vs. mils), side effects (what happens if ref exists), mutability (creates placement that can be moved/deleted), or output schema contents. Minimum viable behavioral context.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient components: action statement, prerequisite guidance, and illustrative example. Every sentence earns its place. Front-loaded with the core purpose and no redundant text.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters (5 required) and existence of output schema, the description covers the essential usage pattern and prerequisite workflow. The example demonstrates all required fields. Could be improved by mentioning the optional rotation parameter or coordinate units, but complete enough for correct invocation.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (only titles like 'Symbol', 'Ref'). The description's example ('Device:R', 'R1', '10k', x=100, y=50) effectively documents the expected format for all required parameters, compensating significantly for the schema deficit. Only the optional rotation parameter is undocumented.
Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
States 'Add a symbol to the schematic' with specific verb and resource. Distinguishes from board-layout siblings (place_via, route_trace) and other schematic primitives (add_wire, add_text), though it does not explicitly differentiate from add_power_symbol which also adds schematic symbols.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow guidance: 'Use search_symbols first to find the symbol ID.' This establishes a prerequisite and references the sibling search tool. Lacks explicit 'when not to use' or alternative selection guidance (e.g., vs. add_power_symbol).
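The one remaining gap noted above, the undocumented optional rotation parameter, is the kind of thing a single schema description closes. A sketch; the degree units, direction, and default are assumptions about the server:

```python
# Assumed: rotation in degrees, counter-clockwise, defaulting to 0.
ADD_COMPONENT_ROTATION_PROP = {
    "type": "number",
    "default": 0,
    "description": "Rotation in degrees, counter-clockwise; 0 places the symbol upright.",
}
```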
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It establishes that this creates a mandatory board outline, but fails to specify whether calling it multiple times overwrites existing outlines, what units are used (mm vs inches), or other side effects. It meets minimum viability but lacks rich behavioral context.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally structured with the critical prerequisite front-loaded, followed by purpose, importance, and a concrete example. Every element earns its place with zero redundancy or fluff.
Completeness: 4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 simple parameters) and the presence of an output schema, the description is largely complete. It successfully communicates prerequisites and mandatory status. Minor deductions for failing to clarify the coordinate system or units for the parameters.
Parameters: 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to adequately compensate for the undocumented parameters. While the example demonstrates width and height usage, it completely omits explanation of the x and y parameters (origin coordinates with defaults of 100), leaving significant semantic gaps that the schema should have covered.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Draw') and resource ('PCB board outline'), effectively distinguishing it from schematic-related siblings like 'add_wire' or 'add_component'. The phrase 'Every PCB must have this' further clarifies its essential role in the PCB design workflow.
Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The prerequisite '[Requires open project]' is explicitly stated at the beginning, providing critical contextual guidance. The description also indicates the tool's mandatory nature ('Every PCB must have this'), though it lacks explicit guidance on when not to use it or how it interacts with existing outlines.
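The undocumented x/y origin defaults of 100 called out above could be surfaced directly in the schema. A sketch; the mm units and the corner-anchor semantics are assumptions, not confirmed behavior:

```python
# Assumed: coordinates in mm; (x, y) anchors the outline's top-left corner.
BOARD_OUTLINE_PROPS = {
    "x": {"type": "number", "default": 100, "description": "Left edge of the outline in mm."},
    "y": {"type": "number", "default": 100, "description": "Top edge of the outline in mm."},
    "width": {"type": "number", "description": "Outline width in mm."},
    "height": {"type": "number", "description": "Outline height in mm."},
}
```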
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Successfully discloses what files are created (.kicad_pro, .kicad_sch, .kicad_pcb), indicating mutation behavior. Missing details on idempotency (overwrite behavior?), directory creation, and error conditions.
Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Highly efficient two-sentence structure: first sentence declares purpose and artifacts, second provides actionable example. No redundant words or repetition of schema structure. Information density is high.
Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a straightforward 2-parameter creation tool with output schema present. Covers the essential 'what it creates' but omits edge case behavior (e.g., existing files, directory auto-creation). Sufficient for agent selection but leaves operational ambiguity.
Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, requiring description to compensate. The example clarifies that 'name' is the project identifier and 'directory' is the filesystem path. Could improve by specifying absolute vs. relative path handling or naming constraints, but baseline semantic meaning is conveyed.
Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the specific action (Create), resource (KiCad project), and scope (generates .kicad_pro, .kicad_sch, and .kicad_pcb files). Effectively distinguishes from sibling 'open_project' by emphasizing file creation vs. opening existing.
Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a concrete code example showing parameter usage (name='my_board', directory='/path/to/dir'), which implicitly guides usage. However, lacks explicit guidance on when to use this vs. 'open_project' or prerequisites like directory existence requirements.
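The open questions above (what happens when files already exist, whether the directory must exist) are exactly what a revised description should answer. A sketch; the refusal-to-overwrite rule and the directory precondition are assumed for illustration, not observed behavior:

```python
# Hypothetical revision; overwrite and directory semantics are assumptions.
CREATE_PROJECT_DESC = (
    "Create a new KiCad project: generates name.kicad_pro, name.kicad_sch, "
    "and name.kicad_pcb inside directory (which must already exist). "
    "Fails rather than overwriting an existing project of the same name. "
    "Use open_project for existing projects."
)
```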
- Behavior3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses the critical prerequisite (open project required) and implies read-only nature via 'summary,' but does not explicitly state safety characteristics (read-only, non-destructive) or idempotency. Given the presence of destructive siblings (delete_component, move_component), explicit confirmation of read-only behavior would strengthen trust.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
Highly efficient two-sentence structure with zero waste: front-loads the prerequisite and core functionality with specific metrics in sentence one, and provides usage context in sentence two. Every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete given zero parameters and existence of output schema. Description enumerates the specific metrics included in the summary (dimensions, counts, clearances), providing sufficient context for an agent to understand what data is returned without duplicating the output schema. Prerequisites are clearly stated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema contains zero parameters. Per rubric, zero parameters establishes a baseline score of 4. Description does not need to compensate for missing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose4/5Does the description clearly state what the tool does and how it differs from similar tools?
Clearly identifies the resource (board/design) and specific outputs (dimensions, component counts, min trace/space), distinguishing it from sibling export tools like export_gerbers or list operations like list_components. Lacks an explicit verb ('Generates' or 'Retrieves'), but the noun phrase 'Board summary' combined with the specific metrics listed provides sufficient clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines (4/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the prerequisite '[Requires open project]' and provides clear usage guidance ('Use before export as a sanity check'), establishing the temporal relationship to export operations. Does not explicitly name alternative tools or exclusion criteria, but the context is sufficiently implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior (3/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full behavioral burden. It successfully communicates the open project constraint, but fails to disclose side effects (filesystem write), overwrite behavior, or valid target enum values ('schematic' vs 'pcb'). The mutation aspect (creating files) is implied but not explicit.
Conciseness (5/5)
Optimal length: one sentence stating purpose, one example demonstrating usage. The prerequisite is front-loaded in brackets. Every element earns its place with zero redundancy or filler text.
Completeness (4/5)
Appropriate for a 2-parameter tool with an output schema (return values needn't be described). Covers primary use case and prerequisite. Minor gap: valid values for the 'target' string parameter remain undefined despite having a default value in schema.
Parameters (3/5)
Schema description coverage is 0%: neither parameter has a description in the schema. The description provides an example showing output_path accepts file paths and target accepts 'schematic', but doesn't fully compensate for the schema gap. Valid values for target (likely a 'pcb' alternative) and path format requirements remain undocumented.
Purpose (5/5)
Specific verb ('Export') + resource ('schematic or PCB') + format ('PDF') clearly stated. Square brackets efficiently flag the open project prerequisite. Distinct from sibling export tools (export_bom, export_gerbers, export_svg, export_netlist) by specifying PDF output format.
Usage Guidelines (4/5)
The '[Requires open project]' prefix establishes a clear prerequisite for invocation. The example demonstrates correct syntax. However, it lacks explicit guidance on when to choose this over export_svg or export_gerbers, and doesn't document valid values for the 'target' parameter beyond the example.
- Behavior (3/5)
No annotations are provided, so the description carries the full burden. It partially compensates by disclosing what data is returned (references, values, positions), but lacks explicit read-only safety confirmation, prerequisite disclosure (e.g., whether an open schematic is required), or error behavior. The verb 'List' implies safety, but explicit confirmation is absent.
Conciseness (5/5)
A single, efficiently structured sentence with zero waste. It front-loads the action ('List'), specifies the domain ('in the schematic'), and ends with return value details. Every clause earns its place.
Completeness (4/5)
Given the tool has no input parameters and an output schema exists (per context signals), the description appropriately focuses on purpose and return value highlights. Minor gap: does not mention prerequisites (e.g., requiring an open project/schematic), which would be helpful given the sibling open_project tool.
Parameters (4/5)
The tool has zero parameters with 100% schema coverage (empty object). With no parameters to document, this meets the baseline expectation. The description does not need to compensate for schema gaps.
Purpose (5/5)
The description uses a specific verb ('List') with clear resource ('components in the schematic') and distinguishes from siblings like add_component/delete_component (mutation vs read) and list_footprints (schematic vs PCB). The addition of 'with references, values, and positions' clarifies the scope of returned data.
Usage Guidelines (3/5)
The description implies usage through the verb 'List' (inventory/enumeration vs manipulation), but provides no explicit when-to-use guidance or alternatives. It does not clarify when to use this vs get_symbol_info for individual component details or search_symbols for finding specific types.
- Behavior (3/5)
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves 'detailed documentation,' which hints at the output, but lacks details on response format, length, or whether it returns structured data vs. plain text. Adequate but minimal given the lack of safety annotations.
Conciseness (5/5)
Three sentences with zero waste: purpose first, usage context second, concrete example third. Each sentence earns its place. Appropriate length for a simple single-parameter utility tool.
Completeness (4/5)
Given the existence of an output schema (which handles return value documentation) and the tool's low complexity (1 string parameter), the description successfully covers purpose, usage triggers, and parameter semantics. Sufficiently complete, though it could explicitly mention that valid topics include names of the sibling tools.
Parameters (4/5)
Effectively compensates for 0% schema description coverage. The phrase 'for a tool or topic' clarifies that the 'topic' parameter accepts either tool names (referencing siblings) or general workflow topics. The concrete example 'help(topic='workflow')' demonstrates valid syntax and a valid value, adding significant semantic value beyond the bare schema.
Purpose (5/5)
The description uses a specific verb ('Get') and resource ('detailed documentation') and explicitly scopes the tool to 'a tool or topic.' It clearly distinguishes itself from the 30+ sibling action tools (add_component, export_gerbers, etc.) by positioning itself as a documentation/utility tool rather than a design manipulation tool.
Usage Guidelines (4/5)
Provides explicit when-to-use guidance ('Use when unsure about parameters or workflow'), giving the agent a clear trigger condition. Does not explicitly name alternatives, though this is less critical for a help utility where the alternative (using other tools directly) is implicit in the context.
- Behavior (4/5)
With no annotations provided, description discloses important behavioral constraints: project dependency and DRC gate. However, it omits explicit mention of file creation side effects or overwrite behavior expected of an export operation.
Conciseness (5/5)
Three sentences are perfectly structured: prerequisite front-loaded, core purpose stated, and operational constraint with remediation included. Zero waste.
Completeness (4/5)
Given the existence of an output schema, the description adequately covers operational requirements (project state, DRC checks) for this single-parameter tool. Only gap is the undocumented parameter semantics.
Parameters (2/5)
Schema coverage is 0% (output_dir lacks description) and the description fails to compensate by explaining the parameter's purpose, expected format, or relationship to the Gerber files. Context makes it inferable but not documented.
Purpose (5/5)
Description states specific action 'Export Gerber manufacturing files' with clear resource type, distinguishing it from sibling export tools (export_bom, export_pdf, export_svg).
Usage Guidelines (5/5)
Explicitly states prerequisite '[Requires open project]' and blocking condition 'Blocked if DRC has unresolved errors', with explicit alternative action 'run kicad.run_drc first'.
- Behavior (3/5)
No annotations provided, so description carries full burden. States it 'Return[s]' data implying read-only behavior, but lacks explicit safety guarantees, error conditions (e.g., no project open), or side effects. Adequate but minimal behavioral disclosure.
Conciseness (5/5)
Two sentences with zero waste. Front-loaded with action verb, followed by specific scope, then usage guidance. Every sentence earns its place.
Completeness (5/5)
For a zero-parameter getter with output schema present, description provides sufficient orientation context. Mentions key return concepts (open project, capabilities) without needing to duplicate structured output schema details.
Parameters (4/5)
Zero parameters present (schema is empty object). Per rubric, baseline score of 4 applies. Description correctly focuses on return value rather than inventing parameter documentation.
Purpose (5/5)
Specific verb 'Return' with clear resource 'session state' and concrete details ('open project, available capabilities'). Clearly distinguishes from sibling 'list_projects' (filesystem listing) by focusing on current active session state.
Usage Guidelines (4/5)
Explicit guidance 'Use this to orient yourself before calling other tools' establishes clear temporal context (call first). Lacks explicit 'when not to use' or named alternatives, but the orientation guidance is strong for a diagnostic tool.
- Behavior (3/5)
No annotations provided, so description carries full burden. Discloses state dependency (requires open project) and write operation nature ('Save'). However, lacks details on atomicity, backup creation, failure modes, or specific return values despite having an output schema.
Conciseness (5/5)
Extremely concise at 11 words. Front-loaded with critical prerequisite '[Requires open project]' followed immediately by action and scope. No redundancy or wasted text.
Completeness (5/5)
Appropriate for a simple save operation with zero parameters. Prerequisite and scope are covered. Output schema exists (per context signals), so return value explanation is unnecessary in the description text.
Parameters (4/5)
Zero parameters present; per scoring rules this establishes a baseline of 4. Description correctly offers no parameter details since none exist, and the empty schema requires no semantic elaboration.
Purpose (5/5)
Description uses specific verb 'Save' with clear resource 'project files' and scope 'schematic + PCB'. Effectively distinguishes from siblings like 'create_project', 'open_project', and 'export_*' tools by specifying native file persistence.
Usage Guidelines (4/5)
Provides explicit prerequisite '[Requires open project]' establishing when the tool can be used. However, lacks explicit mention of alternatives (e.g., distinguishing from export tools for manufacturing outputs) or explicit 'when-not-to-use' guidance beyond the prerequisite.
- Behavior (3/5)
No annotations provided, so description carries full burden. It discloses the open-project prerequisite and workflow dependencies, but lacks details on idempotency, whether changes persist to disk immediately, or what the output schema contains (though output schema exists separately).
Conciseness (5/5)
Three efficient sentences: prerequisite constraint, core action, and workflow context. No redundant words; critical constraint '[Requires open project]' is front-loaded.
Completeness (4/5)
Given zero parameters and existence of output schema, description adequately covers the essential domain context (zones are PCB copper pours needing fill before manufacturing exports). Could benefit from noting idempotency or memory vs. disk persistence, but sufficient for tool selection.
Parameters (4/5)
Input schema has zero parameters, establishing baseline 4 per rubric. No parameters require semantic clarification.
Purpose (5/5)
Description uses specific verb 'Fill' with resource 'zones' and scope 'all'. The mention of DRC and Gerber export dependencies clearly distinguishes this preprocessing step from sibling export/verification tools like kicad.export_gerbers and kicad.run_drc.
Usage Guidelines (5/5)
Explicitly states the prerequisite '[Requires open project]' and provides clear sequencing guidance 'Must be done before DRC or Gerber export', indicating exactly when in the workflow this tool should be invoked.
- Behavior (4/5)
No annotations provided, so description carries full burden. Discloses critical prerequisite (open project required) and output content (includes positions and layers). Does not explicitly state read-only nature or error conditions, but 'List' implies safe operation.
Conciseness (5/5)
Single sentence with zero waste. Prerequisite front-loaded in brackets, followed by clear action and return value description. Every clause earns its place.
Completeness (5/5)
For a zero-parameter tool with output schema, description provides sufficient context: prerequisite stated, scope defined ('all'), and key output fields identified ('positions and layers'). No gaps requiring additional documentation.
Parameters (4/5)
Zero parameters present, establishing baseline of 4 per scoring rules. Schema coverage is trivially 100% for empty object.
Purpose (5/5)
Specific verb 'List' with resource 'footprints on the PCB' and scope 'all'. Distinguishes from sibling 'search_footprints' (implies filtering vs listing all) and 'list_components' (PCB footprints vs schematic components).
Usage Guidelines (4/5)
Explicitly states prerequisite '[Requires open project]' indicating when the tool can be used. Lacks explicit alternatives (e.g., does not mention using 'search_footprints' for filtered queries), but the prerequisite provides clear usage context.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
Then, authenticate using GitHub.
Browse examples.
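Before committing, you can sanity-check the file locally. The sketch below infers the required fields from the example above; the full server.json schema may accept or require additional keys.

```python
import json
from pathlib import Path

GLAMA_SCHEMA = "https://glama.ai/mcp/schemas/server.json"

def check_glama_json(path: str = "glama.json") -> list[str]:
    """Return a list of problems found in a glama.json file (empty list means OK)."""
    problems = []
    data = json.loads(Path(path).read_text())
    # $schema should point at Glama's published server schema
    if data.get("$schema") != GLAMA_SCHEMA:
        problems.append("missing or unexpected $schema")
    # maintainers must be a non-empty list of GitHub usernames
    maintainers = data.get("maintainers")
    if (
        not isinstance(maintainers, list)
        or not maintainers
        or not all(isinstance(m, str) and m for m in maintainers)
    ):
        problems.append("maintainers must be a non-empty list of GitHub usernames")
    return problems
```

Run it against your repository root before pushing; an empty result means the two fields shown in the example are present and well-formed.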
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
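The formula above can be sketched in a few lines. The weights and tier cutoffs are taken from this description, not from Glama's actual implementation, so treat this as an illustration rather than the canonical scorer.

```python
# Per-dimension weights for a single tool's TDQS, as described above.
DIM_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(DIM_WEIGHTS[d] * scores[d] for d in DIM_WEIGHTS)

def overall_score(tdqs_list: list[float], coherence: float) -> float:
    """Combine Tool Definition Quality (70%) and Server Coherence (30%)."""
    mean_tdqs = sum(tdqs_list) / len(tdqs_list)
    # The min() term is why one badly documented tool drags the whole server down.
    definition_quality = 0.6 * mean_tdqs + 0.4 * min(tdqs_list)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, the export-to-PDF tool reviewed above (Purpose 5, Usage 4, Behavior 3, Parameters 3, Conciseness 5, Completeness 4) works out to a TDQS of exactly 4.0.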
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/SaeronLab/eda-mcp'
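The same request can be made from Python with only the standard library. The response shape is not documented on this page, so the sketch below returns the parsed JSON as-is without assuming any particular fields.

```python
import json
import urllib.request

def server_url(owner: str, name: str) -> str:
    """Build the directory endpoint shown in the curl example above."""
    return f"https://glama.ai/api/mcp/v1/servers/{owner}/{name}"

def fetch_server(owner: str, name: str) -> dict:
    """GET one server record; raises urllib.error.HTTPError on 4xx/5xx."""
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)
```

Calling `fetch_server("SaeronLab", "eda-mcp")` mirrors the curl request exactly.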
If you have feedback or need assistance with the MCP directory API, please join our Discord server.