
Server Details

EN Diagram — structural verification for concurrent systems. Pure math, no AI.

Status: Healthy
Transport: Streamable HTTP
Repository: dushyant30suthar/endiagram-mcp
GitHub Stars: 7
Server Listing: endiagram

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (B)

Average 3.4/5 across 7 of 7 tools scored. Lowest: 2.4/5.

Server Coherence (A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose: compose (merging/extracting systems), equivalent (comparing systems), invariant (analyzing system properties), live (deadlock/overflow analysis), reachable (path analysis), render (visualization), and structure (structural overview). There is no overlap in functionality; an agent can easily distinguish them based on their unique analytical roles.

Naming Consistency: 5/5

All tool names follow a consistent pattern of single, descriptive verbs (e.g., compose, equivalent, invariant, live, reachable, render, structure). There are no deviations in style (e.g., no mixing of snake_case or camelCase), making the naming highly predictable and readable.

Tool Count: 5/5

With 7 tools, the server is well-scoped for its domain of EN diagram analysis. Each tool addresses a specific aspect of system analysis (e.g., composition, equivalence, invariants, liveness, reachability, rendering, structure), and none seem redundant or missing for the intended purpose.

Completeness: 5/5

The tool set provides comprehensive coverage for analyzing EN diagrams, including operations for composition, equivalence checking, property analysis (invariants, liveness, reachability), visualization, and structural overview. There are no obvious gaps; agents can perform a full lifecycle of analysis from system understanding to visualization without dead ends.

Available Tools

7 tools
compose (A)

How do parts combine, or how does a part stand alone? Merge mode (source_a + source_b + links): declare which entities in A are the same as entities in B; the combined graph is wired via string-equality of shared names. Extract mode (source + subsystem): pull a named subsystem out as standalone EN with boundary inputs/outputs, actors, and locations. Valid subsystem names come from structure's subsystems field — call structure on the source first to discover them. See the server instructions for EN language syntax.

Parameters (JSON Schema):
- links (optional): Entity identifications, one per line. Format: `a.<entity name>=b.<entity name>` (A's entity is the same as B's entity). `#` starts a comment. Example: `a.user session=b.authenticated session`.
- source (optional): EN source code for extract mode
- source_a (optional): EN source code or path to .en/.txt file for the first system
- source_b (optional): EN source code or path to .en/.txt file for the second system
- subsystem (optional): Subsystem name to extract. Valid names come from structure's `subsystems` field — call structure on the source first to discover them.
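The two calling modes translate into disjoint argument bundles. The following is a hypothetical client-side helper (the `compose_args` function, the placeholder EN sources, and the `checkout` subsystem name are invented for illustration; only the parameter names come from the schema above):

```python
def compose_args(**kwargs):
    """Bundle compose arguments, rejecting a mix of merge- and extract-mode keys."""
    merge_keys = {"source_a", "source_b", "links"}
    extract_keys = {"source", "subsystem"}
    keys = set(kwargs)
    if {"source_a", "source_b"} <= keys and keys <= merge_keys:
        return {"mode": "merge", "arguments": kwargs}
    if keys == extract_keys:
        return {"mode": "extract", "arguments": kwargs}
    raise ValueError("pass source_a+source_b(+links) OR source+subsystem")

# Merge mode: declare which entities in A are the same as entities in B.
merge = compose_args(
    source_a="<EN source for system A>",
    source_b="<EN source for system B>",
    links="a.user session=b.authenticated session",
)

# Extract mode: the subsystem name must come from structure's `subsystems` field.
extract = compose_args(source="<EN source>", subsystem="checkout")
```

Keeping the two modes mutually exclusive client-side avoids the ambiguity the description leaves open about mixed-mode calls.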
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the functional transformations (linking shared entities, creating standalone subsystems with boundary inputs/outputs, actors, and locations) but omits operational details such as output format, side effects, persistence, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately dense with two information-rich sentences. The opening rhetorical question 'How do parts combine?' slightly reduces efficiency but is immediately followed by concrete operational details. Every subsequent clause earns its place by defining mode-specific behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the bifurcated functionality (two distinct modes) and lack of annotations or output schema, the description adequately covers the input parameter relationships and operational modes. However, it fails to specify the return value format or behavior when parameters from both modes are provided simultaneously.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description adds crucial semantic value by grouping parameters into their respective modes (merge vs extract), which the flat schema does not convey. This clarifies that source_a/source_b/links form one coherent operation while source/subsystem form another.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines two distinct operations (Merge mode and Extract mode) with specific verbs and resources (merging systems by linking entities, extracting subsystems as standalone EN). It distinguishes from analysis-focused siblings (equivalent, invariant, etc.) by focusing on composition. However, it assumes knowledge of what 'EN' stands for without definition.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description effectively documents which parameters belong to which mode (source_a/source_b/links for merge; source/subsystem for extract), implying usage through structure. However, it lacks explicit guidance on when to choose merge versus extract, or when to use this tool versus siblings like 'structure'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

equivalent (A)

Are two systems the same, or what changes if I change this one? Compare mode (source_a + source_b): structural differences, edit distance, spectral equivalence. isCospectral=true means identical graph structure up to relabeling — topologically the same despite different names, actors, or locations. Evolve mode (source + patch): dry-run a change, shows diff plus new/lost bridge nodes. Patch has three directive types — plain EN statement adds an action; a line starting with - (and not containing do:) removes the named action; a statement whose action name matches an existing one replaces the original. See the server instructions for EN language syntax.

Parameters (JSON Schema):
- patch (optional): EN patch for evolve mode. Three directive types: plain EN statement (`actor do: X needs: Y yields: Z at: W`) adds action X; a line starting with `-` (and not containing `do:`) removes the named action; a new statement with an existing action name replaces the original. Multiple directives allowed, one per line.
- source (optional): EN source code for evolve mode
- source_a (optional): EN source code or path to .en/.txt file for the first system
- source_b (optional): EN source code or path to .en/.txt file for the second system
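The three patch directive types can be told apart mechanically. A rough sketch — the action-name parsing after `do:` is a simplification of the EN syntax, and the example statements and action names are invented:

```python
import re

def classify_directive(line, existing_actions):
    """Classify a patch line as an add, remove, or replace directive."""
    line = line.strip()
    if line.startswith("-") and "do:" not in line:
        return ("remove", line[1:].strip())
    # The action name follows `do:` and runs up to the next EN keyword.
    rest = line.split("do:", 1)[1]
    name = re.split(r"needs:|yields:|at:", rest)[0].strip()
    kind = "replace" if name in existing_actions else "add"
    return (kind, name)

existing = {"scan", "bag"}
print(classify_directive("- scan", existing))  # ('remove', 'scan')
print(classify_directive(
    "clerk do: scan needs: item yields: receipt at: till", existing))  # ('replace', 'scan')
print(classify_directive(
    "clerk do: weigh needs: item yields: weight at: till", existing))  # ('add', 'weigh')
```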
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and discloses key behaviors: evolve mode performs a 'dry-run,' compare mode calculates 'spectral equivalence' (with isCospectral semantics), and outputs include structural differences, edit distance, and bridge node changes. It effectively explains what the tool returns despite lacking an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three densely packed sentences with zero waste: opens with the core question, delineates both modes with their specific outputs, and closes with critical patch syntax. Information is front-loaded and every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a dual-mode tool with 4 parameters and no output schema, the description is remarkably complete. It explains both operational modes, documents expected outputs (diff, spectral analysis, bridge nodes), and covers the domain context (EN source code, .en/.txt files). Only minor gap is not explicitly stating that all parameters are optional (0 required).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds valuable semantic context: it maps source_a/source_b to the 'first' and 'second' systems in compare mode, associates source/patch with evolve mode, clarifies that inputs can be .en/.txt files, and specifies the '-' prefix syntax for patch actions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool determines if 'two systems are the same' and distinguishes two distinct modes: Compare mode (source_a + source_b) for structural comparison and Evolve mode (source + patch) for dry-running changes. The specific outputs (edit distance, spectral equivalence, bridge nodes) differentiate it from siblings like 'compose' or 'render'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on parameter combinations for each mode (source_a/source_b for compare, source/patch for evolve) and explains patch syntax (- prefix to remove actions). However, it lacks explicit guidance on when to use this versus sibling tools like 'structure' or 'invariant'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

invariant (C)

What's always true — automatic findings and on-demand checks. Automatic outputs: conservationLaws (weighted entity sums constant across executions), sustainableCycles (T-invariants — action sequences returning to start state), depletableSets (entity groups whose simultaneous depletion is irreversible), behavioral.deficiency (0 means structure fully determines dynamics), behavioral.isReversible, behavioral.hasUniqueEquilibrium. On-demand via rules: encode domain-specific claims and verify them against the graph — this is how to check things the topology alone can't see (precedence, coverage, centrality bounds, resilience). See the rules parameter for supported sentence shapes.

Parameters (JSON Schema):
- rules (optional): Custom structural rules, one per line. Four supported sentence shapes (regex-matched): (1) `no bridge that is also hub` — flags nodes that are both a bridge and a hub. (2) `every path from X to Y passes through at least one of [A, B, C]` — encodes precedence/coverage; use to check `A must happen before Y produces Z` by rewriting as `every path from Z's input to Z passes through [A]`. (3) `no node with centrality above 0.5` — flags over-central nodes (replace 0.5 with any threshold). (4) `removing any single node disconnects at most N others` — connectivity robustness check. Unrecognized rules return satisfied:false with an explanation listing these shapes.
- source (required): EN source code, or path to .en/.txt file
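The four rule shapes can be approximated with regular expressions. These patterns are a guess at the documented sentence shapes, not the server's actual matcher, and the rule strings below are invented examples:

```python
import re

RULE_SHAPES = [
    ("bridge-hub",       re.compile(r"^no bridge that is also hub$")),
    ("path-coverage",    re.compile(r"^every path from (.+) to (.+) "
                                    r"passes through at least one of \[(.+)\]$")),
    ("centrality-bound", re.compile(r"^no node with centrality above ([0-9.]+)$")),
    ("connectivity",     re.compile(r"^removing any single node disconnects "
                                    r"at most (\d+) others$")),
]

def match_rule(line):
    """Return (shape name, captured groups), or (None, ()) for unrecognized rules."""
    for name, pattern in RULE_SHAPES:
        m = pattern.match(line.strip())
        if m:
            return name, m.groups()
    return None, ()  # the server reports satisfied:false with an explanation

print(match_rule("no node with centrality above 0.5"))  # ('centrality-bound', ('0.5',))
print(match_rule("every path from login to payout "
                 "passes through at least one of [audit, review]")[0])  # path-coverage
```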
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It provides useful domain-specific context about what invariants are checked (conservation laws, sustainable cycles) and interprets output properties (behavioral.deficiency, isReversible). However, it omits operational concerns like computational complexity or whether the analysis is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is dense with domain jargon but fails to front-load the tool's core function. The opening question 'What's always true?' is vague, and the list of definitions consumes space without clarifying invocation semantics or return value structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter analysis tool with no output schema, the description should explicitly state the analysis performed and expected return format. While it hints at output semantics via 'behavioral.deficiency' etc., it lacks a clear statement of what data structure the tool returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage ('source' and 'rules' are clearly documented). The description adds no parameter-specific guidance, syntax examples, or format details, but baseline 3 is appropriate when schema documentation is already complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description defines domain concepts (conservation laws, T-invariants, depletable sets) but lacks an active verb stating what the tool actually does (e.g., 'analyzes,' 'computes'). It reads as a glossary entry rather than a tool specification, failing to distinguish from siblings like 'structure' or 'reachable'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'structure', 'live', or 'reachable'. There are no prerequisites, exclusion criteria, or workflow positioning hints to help an agent decide when this analysis is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

live (A)

Can it deadlock? Can entities overflow? isStructurallyLive means every siphon contains a trap — no structural deadlock possible. uncoveredSiphons are entity groups that can drain permanently, with the actors and locations affected. isStructurallyBounded means no entity can accumulate without limit. unboundedCycles are action sequences that could cause overflow. See the server instructions for EN language syntax.

Parameters (JSON Schema):
- source (required): EN source code, or path to .en/.txt file
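The siphon and trap terminology comes from classical Petri-net theory: a siphon is an entity set that, once drained, stays drained (every action producing into it also consumes from it), while a trap can never fully drain. A toy illustration on an invented two-action system — this models the definitions, not the server's internals:

```python
# Each action maps to (entities consumed, entities produced).
actions = {
    "produce": ({"order"}, {"item"}),
    "ship":    ({"item", "truck"}, {"truck", "delivery"}),
}

def producers(S):  # actions that put entities into S
    return {a for a, (needs, yields) in actions.items() if yields & S}

def consumers(S):  # actions that take entities out of S
    return {a for a, (needs, yields) in actions.items() if needs & S}

def is_siphon(S):  # every producer into S also consumes from S
    return bool(S) and producers(S) <= consumers(S)

def is_trap(S):    # every consumer of S also produces back into S
    return bool(S) and consumers(S) <= producers(S)

# 'order' is a siphon containing no trap: once orders run out, nothing refills them.
print(is_siphon({"order"}), is_trap({"order"}))  # True False
# 'truck' is both: trucks cycle through shipping and are never permanently lost.
print(is_siphon({"truck"}), is_trap({"truck"}))  # True True
```

In this toy system `{"order"}` would be an uncovered siphon in the tool's sense: an entity group that can drain permanently.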
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and succeeds by explaining exactly what the analysis returns (isStructurallyLive, uncoveredSiphons, isStructurallyBounded, unboundedCycles) and what these concepts mean semantically (e.g., 'no structural deadlock possible').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately compact and front-loaded with motivating questions. Every sentence defines a key output concept or behavioral trait; no waste despite the technical density of terms like 'siphon' and 'trap'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description compensates effectively by detailing the return concepts (structural liveness/boundedness indicators). For a single-parameter analysis tool, the combination of complete schema coverage and behavioral explanation provides sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage for the single 'source' parameter, the baseline is 3. The description adds no additional parameter context (file format details, size limits), but the schema is self-sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly establishes the tool analyzes structural liveness and boundedness through specific domain concepts (siphons, traps, unboundedCycles). It distinguishes from siblings like 'render' or 'compose' by focusing on deadlock and overflow detection, though it lacks an explicit 'this tool analyzes...' statement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the opening questions ('Can it deadlock? Can entities overflow?') imply when to use the tool, there is no explicit guidance on when to choose this over siblings like 'reachable', 'invariant', or 'equivalent', nor any prerequisites or exclusions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reachable (A)

Can X reach Y? Follows directed data flow first; falls back to undirected. from and to accept entity names or action names (resolved against the program). Path shows each step with actor and location. locationCrossings counts boundary transitions. defense_nodes checks if guards cover all paths; coverage.fullCoverage=false means unguarded routes exist. See the server instructions for EN language syntax.

Parameters (JSON Schema):
- to (required): Target node name
- from (required): Starting node name
- source (required): EN source code
- defense_nodes (optional): Comma-separated guard nodes to check coverage
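The `defense_nodes` coverage check amounts to asking whether removing the guard nodes disconnects the target from the source. A sketch on an invented graph — this is not the server's algorithm (which also handles the undirected fallback), just the core idea:

```python
from collections import deque

def reachable(graph, src, dst, blocked=frozenset()):
    """BFS over directed edges, treating blocked (guard) nodes as removed."""
    if src in blocked:
        return False
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(nxt)
    return False

def full_coverage(graph, src, dst, guards):
    """True iff dst is reachable, but only via at least one guard node."""
    return reachable(graph, src, dst) and not reachable(graph, src, dst, frozenset(guards))

graph = {
    "entry": ["scanner", "side door"],
    "scanner": ["vault"],
    "side door": ["vault"],
}
print(full_coverage(graph, "entry", "vault", {"scanner"}))               # False
print(full_coverage(graph, "entry", "vault", {"scanner", "side door"}))  # True
```

The first call corresponds to `coverage.fullCoverage=false`: an unguarded route exists through the side door.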
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: the search strategy (directed then undirected), output structure (path with actor/location), and interpretation of results (locationCrossings counts boundaries, coverage.fullCoverage false indicates unguarded routes). It effectively compensates for the missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six short sentences with zero waste. Each sentence delivers distinct, essential information: purpose, algorithm, path output, crossings metric, defense_nodes function, and coverage interpretation. Information is front-loaded with the core question.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description adequately compensates by detailing output fields (path, locationCrossings, coverage) and their semantics. It could be improved by clarifying what 'EN source code' represents (the graph definition?) and explicitly mapping the abstract X/Y to parameter names.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by explaining that defense_nodes 'checks if guards cover all paths,' providing functional semantics beyond the schema's syntactic description ('Comma-separated guard nodes'). It implies X/Y map to from/to but doesn't explicitly confirm this mapping.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the clear question 'Can X reach Y?' and specifies the algorithm uses 'directed data flow first; falls back to undirected.' While it implies X/Y correspond to the from/to parameters, it doesn't explicitly map them. It distinguishes from siblings (compose, equivalent, etc.) by focusing specifically on reachability analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the fallback behavior (directed to undirected) which provides implicit context for when the tool applies, but it lacks explicit guidance on when to use this versus sibling tools like 'invariant' or 'live'. No prerequisites or exclusions are stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render (B)

SVG or PNG diagram. Only call when user explicitly asks to visualize. The rendered image is delivered to the user, not injected into the model's context. See the server instructions for EN language syntax.

Parameters (JSON Schema):
- type (optional): Output format: png (default) or svg. PNG is rasterized server-side via Batik.
- view (optional): Group by: actors (partition by actor) or locations (partition by location). Default auto-detects topology.
- color (optional): Seed color hex (#RRGGBB) to generate a custom theme. Overrides theme parameter. One color generates the entire palette.
- theme (optional): Color theme. Curated presets (each with light + dark variants; pair with `isDark`): `Editorial` (stone paper + rust focal, adapted from cathrynlavery/diagram-design), `Primer` (GitHub design system — blue accent, data-vis roles), `Carbon` (IBM Carbon — sharp 0px corners, corporate blue). Or seed-derived palettes generated on the fly from `color`. Pass 'dark'/'light' for the default variant. Overridden by `color` if provided.
- isDark (optional): true or false. Selects the dark or light variant of a named preset. If omitted, defaults to dark unless theme=light.
- output (optional): File path to save the rendered image
- source (required): EN source code, or path to .en/.txt file
- quality (optional): Output quality: small, mid, or max
- direction (optional): Layout direction: LR (left-to-right) or TB (top-to-bottom). Default auto-detects from condensation DAG aspect ratio.
- structure_layers (optional): Bitmask for structure overlays. Bits: 1=subsystems, 2=pipelines, 4=cycles, 8=forks, 16=joins, 32=hubs, 64=deadlock, 128=overflow. Default 255 (all on). Pass 0 to hide all.
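The `structure_layers` bitmask can be composed and decoded with ordinary bit arithmetic. A small helper sketch using the bit values listed above (the helper functions themselves are invented):

```python
LAYERS = {
    "subsystems": 1, "pipelines": 2, "cycles": 4, "forks": 8,
    "joins": 16, "hubs": 32, "deadlock": 64, "overflow": 128,
}

def layers_mask(*names):
    """Combine overlay names into a structure_layers value."""
    mask = 0
    for name in names:
        mask |= LAYERS[name]
    return mask

def layers_on(mask):
    """List the overlay names enabled by a structure_layers value."""
    return [name for name, bit in LAYERS.items() if mask & bit]

print(layers_mask("cycles", "deadlock", "overflow"))  # 196
print(layers_on(255))  # all eight overlays (the default)
print(layers_on(0))    # [] -- all overlays hidden
```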
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden but fails to mention critical side effects: it writes to the filesystem (implied only by the output parameter description), may overwrite existing files, or produces graphical output. Missing mutation warnings and scope details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact structure with zero redundancy. However, the brevity may be excessive given the tool's complexity (ten parameters including bitmasks and hex codes), suggesting the description may be under-specified rather than optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, the description adequately covers intent but leaves significant gaps: no explanation of 'EN' domain (despite specialized terms like subsystems/pipelines), no output schema documentation (what does it return—file content, path, or status?), and no behavioral guards.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline score of 3. The description adds no parameter-specific context, syntax guidance, or semantic relationships between parameters (e.g., color overriding theme), relying entirely on the schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a noun phrase ('SVG or PNG diagram') rather than a specific verb+resource pattern, failing to clarify what input is being rendered (EN source code per the schema). It distinguishes from siblings only implicitly via 'visualize', lacking explicit differentiation from analysis tools like 'structure'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage condition ('Only call when user explicitly asks to visualize'), clearly stating when to invoke the tool. Lacks explicit 'when-not-to-use' guidance or named alternatives to siblings, preventing a higher score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

structure (B)

What is this system? Returns shape (Pipeline, Fork-Join, DAG, Star, Cycle, Tree, Complete, etc.), stages with roles, bridge nodes, cycles, parallelism, critical path, dominator tree, min-cuts, subsystems, interface nodes, actors (who does what, workload entropy), locations (where work happens, boundary crossings). Levers: node=X returns per-node centrality (betweenness, closeness, eigenvector) for a specific node. detect_findings=true flags named structural risks — unguarded-sink (sinks reachable via only pipeline actions, no JOIN/HUB gating), single-cut-path (source-sink pairs with only one vertex-disjoint path), multi-cut-path (paths with redundant defense, min-cut > 1). See server instructions for EN language syntax.

Parameters (JSON Schema):
- node (optional): Node name. When provided, returns per-node centrality (betweenness, closeness, eigenvector) for this specific node instead of the overview.
- source (required): EN source code, or path to .en/.txt file
- detect_findings (optional): Set to 'true' to flag named structural findings. Possible values: unguarded-sink, single-cut-path, multi-cut-path.
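The 'workload entropy' in the actors output can be read as a Shannon entropy over how actions are distributed across actors. The log2-based formula and the action-to-actor mapping below are assumptions for illustration; the server's exact definition is not documented here:

```python
from math import log2

def workload_entropy(assignments):
    """Shannon entropy (bits) of the per-actor share of actions.

    0 means one actor does everything; higher values mean work is spread evenly.
    """
    counts = {}
    for actor in assignments.values():
        counts[actor] = counts.get(actor, 0) + 1
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Hypothetical action -> actor mapping: the clerk does half the work.
assignments = {"scan": "clerk", "bag": "clerk", "pay": "customer", "audit": "manager"}
print(workload_entropy(assignments))  # 1.5 bits for shares 1/2, 1/4, 1/4
```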
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of disclosure. While it enumerates computational outputs (topology, cycles), it fails to disclose behavioral traits such as whether the operation is read-only, computationally expensive, requires specific permissions, or has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense but slightly awkwardly structured, opening with a question ('What is this system?') and using dense colon-separated lists. While every clause conveys relevant output features, the structure could be more scannable for an agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple graph analysis concepts) and lack of output schema, the description adequately enumerates return components (stages, bridge nodes, actors, locations). However, it lacks details on response format or examples, leaving some gaps for a tool with rich analytical output.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), the description adds valuable functional context: it clarifies that the 'node' parameter triggers 'per-node centrality' analysis and 'detect_findings' enables 'structural pattern detection', helping the agent understand why to use these optional parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as providing a complete structural overview of a system, listing specific graph-theoretic outputs like 'dominator tree', 'min-cuts', 'critical path', and shape classification. However, it does not explicitly differentiate from siblings like 'reachable' or 'live', which might also analyze system graphs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no explicit guidance on when to use this tool versus siblings (e.g., when to use 'structure' vs 'reachable'). It only implicitly suggests usage through the optional parameter explanations ('pass node for per-node centrality'), lacking clear prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

