
Server Details

endiagram

13 deterministic graph tools for structural analysis. No AI inside the computation.

Status: Healthy
Transport: Streamable HTTP
Repository: dushyant30suthar/endiagram-mcp
GitHub Stars: 4


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
compose (Grade A)

How do parts combine? Merge mode (source_a + source_b + links): merge two systems by linking shared entities. Extract mode (source + subsystem): extract a subsystem as standalone EN with boundary inputs/outputs, actors, and locations.

Parameters (JSON Schema)

  links (optional): Entity links, e.g. 'a.node1=b.node2'
  source (optional): EN source code for extract mode
  source_a (optional): EN source code or path to .en/.txt file for the first system
  source_b (optional): EN source code or path to .en/.txt file for the second system
  subsystem (optional): Subsystem name for extract mode
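As a sketch of how an agent might populate the two parameter groups, the snippet below builds arguments for each mode. Only the parameter names and the 'a.node1=b.node2' link syntax come from the schema above; the file names, entity names, and subsystem name are hypothetical.

```python
import json

# Merge mode: two systems joined by linking shared entities.
merge_args = {
    "source_a": "warehouse_a.en",            # hypothetical EN file
    "source_b": "warehouse_b.en",            # hypothetical EN file
    "links": "a.inventory=b.inventory",      # 'a.node1=b.node2' syntax from the schema
}

# Extract mode uses the other parameter group instead:
extract_args = {
    "source": "warehouse_a.en",
    "subsystem": "shipping",                 # hypothetical subsystem name
}

print(json.dumps({"name": "compose", "arguments": merge_args}, indent=2))
```

Note that the two groups are mutually exclusive in intent; the listing does not say what happens if parameters from both modes are supplied together.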
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the functional transformations (linking shared entities, creating standalone subsystems with boundary inputs/outputs, actors, and locations) but omits operational details such as output format, side effects, persistence, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately dense with two information-rich sentences. The opening rhetorical question 'How do parts combine?' slightly reduces efficiency but is immediately followed by concrete operational details. Every subsequent clause earns its place by defining mode-specific behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the bifurcated functionality (two distinct modes) and lack of annotations or output schema, the description adequately covers the input parameter relationships and operational modes. However, it fails to specify the return value format or behavior when parameters from both modes are provided simultaneously.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100%, the description adds crucial semantic value by grouping parameters into their respective modes (merge vs extract), which the flat schema does not convey. This clarifies that source_a/source_b/links form one coherent operation while source/subsystem form another.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines two distinct operations (Merge mode and Extract mode) with specific verbs and resources (merging systems by linking entities, extracting subsystems as standalone EN). It distinguishes from analysis-focused siblings (equivalent, invariant, etc.) by focusing on composition. However, it assumes knowledge of what 'EN' stands for without definition.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description effectively documents which parameters belong to which mode (source_a/source_b/links for merge; source/subsystem for extract), implying usage through structure. However, it lacks explicit guidance on when to choose merge versus extract, or when to use this tool versus siblings like 'structure'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

equivalent (Grade A)

Are two systems the same? Compare mode (source_a + source_b): shows structural differences, edit distance, and spectral equivalence — isCospectral true means identical structure despite different names. Evolve mode (source + patch): dry-run a change, shows diff plus new/lost bridge nodes. Prefix action name with - in patch to remove it.

Parameters (JSON Schema)

  patch (optional): EN patch for evolve mode
  source (optional): EN source code for evolve mode
  source_a (optional): EN source code or path to .en/.txt file for the first system
  source_b (optional): EN source code or path to .en/.txt file for the second system
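The two modes can be sketched as argument dictionaries. The EN file names and the action name below are hypothetical; only the parameter names and the '-' prefix rule for patches come from the listing.

```python
# Compare mode: structural diff, edit distance, spectral equivalence.
compare_args = {"source_a": "before.en", "source_b": "after.en"}

# Evolve mode: dry-run a patch against one system.
# Prefixing an action name with '-' removes that action.
evolve_args = {
    "source": "pipeline.en",
    "patch": "-archive",   # hypothetical action to remove
}
```

Since evolve mode is described as a dry-run, neither call should modify the source system; only compare-mode output mentions isCospectral.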
Behavior: 4/5

With no annotations provided, the description carries the full burden and discloses key behaviors: evolve mode performs a 'dry-run,' compare mode calculates 'spectral equivalence' (with isCospectral semantics), and outputs include structural differences, edit distance, and bridge node changes. It effectively explains what the tool returns despite lacking an output schema.

Conciseness: 5/5

Three densely packed sentences with zero waste: opens with the core question, delineates both modes with their specific outputs, and closes with critical patch syntax. Information is front-loaded and every clause earns its place.

Completeness: 4/5

For a dual-mode tool with 4 parameters and no output schema, the description is remarkably complete. It explains both operational modes, documents expected outputs (diff, spectral analysis, bridge nodes), and covers the domain context (EN source code, .en/.txt files). The only minor gap is that it does not explicitly state that all parameters are optional (0 required).

Parameters: 4/5

While schema coverage is 100% (baseline 3), the description adds valuable semantic context: it maps source_a/source_b to the 'first' and 'second' systems in compare mode, associates source/patch with evolve mode, clarifies that inputs can be .en/.txt files, and specifies the '-' prefix syntax for patch actions.

Purpose: 5/5

The description clearly states the tool determines if 'two systems are the same' and distinguishes two distinct modes: Compare mode (source_a + source_b) for structural comparison and Evolve mode (source + patch) for dry-running changes. The specific outputs (edit distance, spectral equivalence, bridge nodes) differentiate it from siblings like 'compose' or 'render'.

Usage Guidelines: 4/5

The description provides clear guidance on parameter combinations for each mode (source_a/source_b for compare, source/patch for evolve) and explains patch syntax (- prefix to remove actions). However, it lacks explicit guidance on when to use this versus sibling tools like 'structure' or 'invariant'.

invariant (Grade C)

What's always true? conservationLaws are weighted entity sums constant across all executions. sustainableCycles are action sequences that return the system to its starting state (T-invariants). depletableSets are entity groups where simultaneous depletion is irreversible. behavioral.deficiency 0 means structure fully determines dynamics. behavioral.isReversible and behavioral.hasUniqueEquilibrium describe convergence properties.

Parameters (JSON Schema)

  rules (optional): Structural rules to check, one per line
  source (required): EN source code, or path to .en/.txt file
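A minimal sketch of assembling an 'invariant' call follows. The schema only says 'rules' is one rule per line; the rule strings below are placeholders, since the listing does not document the rule syntax, and the file name is hypothetical.

```python
# Rules are joined with newlines, one rule per line per the schema.
rules = [
    "rule-1-placeholder",   # actual rule syntax is not documented in this listing
    "rule-2-placeholder",
]
args = {
    "source": "system.en",  # hypothetical path; inline EN source is also accepted
    "rules": "\n".join(rules),
}
```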
Behavior: 3/5

Without annotations, the description carries the full burden. It provides useful domain-specific context about what invariants are checked (conservation laws, sustainable cycles) and interprets output properties (behavioral.deficiency, isReversible). However, it omits operational concerns like computational complexity or whether the analysis is read-only.

Conciseness: 2/5

The description is dense with domain jargon but fails to front-load the tool's core function. The opening question 'What's always true?' is vague, and the list of definitions consumes space without clarifying invocation semantics or return value structure.

Completeness: 2/5

For a 2-parameter analysis tool with no output schema, the description should explicitly state the analysis performed and expected return format. While it hints at output semantics via 'behavioral.deficiency' etc., it lacks a clear statement of what data structure the tool returns.

Parameters: 3/5

Input schema has 100% description coverage ('source' and 'rules' are clearly documented). The description adds no parameter-specific guidance, syntax examples, or format details, but baseline 3 is appropriate when schema documentation is already complete.

Purpose: 2/5

The description defines domain concepts (conservation laws, T-invariants, depletable sets) but lacks an active verb stating what the tool actually does (e.g., 'analyzes,' 'computes'). It reads as a glossary entry rather than a tool specification, failing to distinguish from siblings like 'structure' or 'reachable'.

Usage Guidelines: 2/5

No guidance provided on when to use this tool versus alternatives like 'structure', 'live', or 'reachable'. There are no prerequisites, exclusion criteria, or workflow positioning hints to help an agent decide when this analysis is appropriate.

live (Grade A)

Can it deadlock? Can entities overflow? isStructurallyLive means every siphon contains a trap — no structural deadlock possible. uncoveredSiphons are entity groups that can drain permanently, with the actors and locations affected. isStructurallyBounded means no entity can accumulate without limit. unboundedCycles are action sequences that could cause overflow.

Parameters (JSON Schema)

  source (required): EN source code, or path to .en/.txt file
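Since no output schema is published, the result shape below is only inferred from the field names the description mentions, with invented values; a caller consuming those fields might triage like this.

```python
# Hypothetical 'live' result, assembled from the fields named in the
# description (isStructurallyLive, uncoveredSiphons, isStructurallyBounded,
# unboundedCycles). All values here are invented for illustration.
result = {
    "isStructurallyLive": False,
    "uncoveredSiphons": [
        {"entities": ["job_queue"], "actors": ["worker"], "locations": ["dc-1"]}
    ],
    "isStructurallyBounded": True,
    "unboundedCycles": [],
}

# If the system is not structurally live, collect the entity groups that
# can drain permanently (the deadlock risk surface).
drainable = (
    [s["entities"] for s in result["uncoveredSiphons"]]
    if not result["isStructurallyLive"]
    else []
)
```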
Behavior: 4/5

With no annotations provided, the description carries the full burden and succeeds by explaining exactly what the analysis returns (isStructurallyLive, uncoveredSiphons, isStructurallyBounded, unboundedCycles) and what these concepts mean semantically (e.g., 'no structural deadlock possible').

Conciseness: 4/5

The description is appropriately compact and front-loaded with motivating questions. Every sentence defines a key output concept or behavioral trait; no waste despite the technical density of terms like 'siphon' and 'trap'.

Completeness: 4/5

Given the lack of output schema, the description compensates effectively by detailing the return concepts (structural liveness/boundedness indicators). For a single-parameter analysis tool, the combination of complete schema coverage and behavioral explanation provides sufficient context.

Parameters: 3/5

With 100% schema description coverage for the single 'source' parameter, the baseline is 3. The description adds no additional parameter context (file format details, size limits), but the schema is self-sufficient.

Purpose: 4/5

The description clearly establishes the tool analyzes structural liveness and boundedness through specific domain concepts (siphons, traps, unboundedCycles). It distinguishes from siblings like 'render' or 'compose' by focusing on deadlock and overflow detection, though it lacks an explicit 'this tool analyzes...' statement.

Usage Guidelines: 2/5

While the opening questions ('Can it deadlock? Can entities overflow?') imply when to use the tool, there is no explicit guidance on when to choose this over siblings like 'reachable', 'invariant', or 'equivalent', nor any prerequisites or exclusions mentioned.

reachable (Grade A)

Can X reach Y? Follows directed data flow first; falls back to undirected. Path shows each step with actor and location. locationCrossings counts boundary transitions. defense_nodes checks if guards cover all paths. coverage.fullCoverage false means unguarded routes exist.

Parameters (JSON Schema)

  to (required): Target node name
  from (required): Starting node name
  source (required): EN source code
  defense_nodes (optional): Comma-separated guard nodes to check coverage
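A reachability query with guard coverage can be sketched as below. The node and file names are hypothetical; the comma-separated form of defense_nodes comes from the schema.

```python
# Hypothetical guard nodes whose coverage of all internet->database
# paths should be checked.
guards = ["firewall", "audit_gate"]

args = {
    "source": "network.en",              # hypothetical EN source file
    "from": "internet",                  # starting node ('X' in "Can X reach Y?")
    "to": "database",                    # target node ('Y')
    "defense_nodes": ",".join(guards),   # comma-separated per the schema
}
```

If the result's coverage.fullCoverage is false, unguarded routes exist between the two nodes.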
Behavior: 4/5

With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: the search strategy (directed then undirected), output structure (path with actor/location), and interpretation of results (locationCrossings counts boundaries, coverage.fullCoverage false indicates unguarded routes). It effectively compensates for the missing output schema.

Conciseness: 5/5

Six short sentences with zero waste. Each sentence delivers distinct, essential information: purpose, algorithm, path output, crossings metric, defense_nodes function, and coverage interpretation. Information is front-loaded with the core question.

Completeness: 4/5

Given the lack of output schema and annotations, the description adequately compensates by detailing output fields (path, locationCrossings, coverage) and their semantics. It could be improved by clarifying what 'EN source code' represents (the graph definition?) and explicitly mapping the abstract X/Y to parameter names.

Parameters: 4/5

Schema coverage is 100%, establishing a baseline of 3. The description adds value by explaining that defense_nodes 'checks if guards cover all paths,' providing functional semantics beyond the schema's syntactic description ('Comma-separated guard nodes'). It implies X/Y map to from/to but doesn't explicitly confirm this mapping.

Purpose: 4/5

The description opens with the clear question 'Can X reach Y?' and specifies the algorithm uses 'directed data flow first; falls back to undirected.' While it implies X/Y correspond to the from/to parameters, it doesn't explicitly map them. It distinguishes from siblings (compose, equivalent, etc.) by focusing specifically on reachability analysis.

Usage Guidelines: 3/5

The description explains the fallback behavior (directed to undirected) which provides implicit context for when the tool applies, but it lacks explicit guidance on when to use this versus sibling tools like 'invariant' or 'live'. No prerequisites or exclusions are stated.

render (Grade B)

SVG diagram. Only call when user explicitly asks to visualize.

Parameters (JSON Schema)

  view (optional): Group by: actors (partition by actor) or locations (partition by location). Default auto-detects topology.
  color (optional): Seed color hex (#RRGGBB) to generate a custom theme. Overrides the theme parameter. One color generates the entire palette.
  theme (optional): Color theme: dark or light
  output (optional): File path to save the SVG
  source (required): EN source code, or path to .en/.txt file
  quality (optional): Output quality: small, mid, or max
  structure_layers (optional): Bitmask for structure overlays. Bits: 1=subsystems, 2=pipelines, 4=cycles, 8=forks, 16=joins, 32=hubs. Default 63 (all on). Pass 0 to hide all.
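The structure_layers bitmask can be assembled by OR-ing the bit values listed in the schema. Whether the server expects the value as an integer or a string is not stated, so passing an integer here is an assumption, and the file paths are hypothetical.

```python
# Bit values taken directly from the schema above.
SUBSYSTEMS, PIPELINES, CYCLES, FORKS, JOINS, HUBS = 1, 2, 4, 8, 16, 32

# Show only subsystem and cycle overlays; hide forks, joins, hubs, pipelines.
only_subsystems_and_cycles = SUBSYSTEMS | CYCLES

# All overlays on, which the schema says is the default (63).
everything = SUBSYSTEMS | PIPELINES | CYCLES | FORKS | JOINS | HUBS

args = {
    "source": "system.en",       # hypothetical EN source file
    "theme": "dark",
    "output": "diagram.svg",     # hypothetical output path
    "structure_layers": only_subsystems_and_cycles,
}
```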
Behavior: 2/5

With no annotations provided, the description carries the full behavioral disclosure burden but fails to mention critical side effects: the tool writes to the filesystem (implied only by the output parameter description), may overwrite existing files, and produces graphical output. Mutation warnings and scope details are missing.

Conciseness: 4/5

Extremely compact two-sentence structure with zero redundancy. However, the brevity may be excessive given the tool's complexity (7 parameters including bitmasks and hex codes), suggesting the description may be under-specified rather than optimally concise.

Completeness: 3/5

Given 100% schema coverage, the description adequately covers intent but leaves significant gaps: no explanation of the 'EN' domain (despite specialized terms like subsystems/pipelines), no output schema documentation (what does it return: file content, path, or status?), and no behavioral guards.

Parameters: 3/5

Schema coverage is 100%, establishing a baseline score of 3. The description adds no parameter-specific context, syntax guidance, or semantic relationships between parameters (e.g., color overriding theme), relying entirely on the schema documentation.

Purpose: 3/5

The description uses a noun phrase ('SVG diagram') rather than a specific verb+resource pattern, failing to clarify what input is being rendered (EN source code per the schema). It distinguishes from siblings only implicitly via 'visualize', lacking explicit differentiation from analysis tools like 'structure'.

Usage Guidelines: 4/5

The description provides an explicit usage condition ('Only call when user explicitly asks to visualize'), clearly stating when to invoke the tool. It lacks explicit 'when-not-to-use' guidance and named alternatives among its siblings, which prevents a higher score.

structure (Grade B)

What is this system? Complete structural overview: shape (topology), stages with roles, bridge nodes, cycles, parallelism, critical path, dominator tree, min-cuts, subsystems, interface nodes. Includes actors (who does what, workload entropy) and locations (where work happens, boundary crossings). Optional: pass node for per-node centrality, detect_findings for structural pattern detection.

Parameters (JSON Schema)

  node (optional): Node name for centrality query
  source (required): EN source code, or path to .en/.txt file
  detect_findings (optional): Set to 'true' to detect structural findings
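A full 'structure' call can be sketched as below. Note that the schema asks for the string 'true' (not a boolean) for detect_findings; the file and node names are hypothetical.

```python
args = {
    "source": "system.en",     # hypothetical path; inline EN source also accepted
    "node": "dispatcher",      # hypothetical node; triggers the per-node centrality query
    "detect_findings": "true", # string per the schema, not a boolean
}
```

Omitting 'node' and 'detect_findings' should still return the complete structural overview described above.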
Behavior: 2/5

No annotations are provided, so the description carries the full burden of disclosure. While it enumerates computational outputs (topology, cycles), it fails to disclose behavioral traits such as whether the operation is read-only, computationally expensive, requires specific permissions, or has side effects.

Conciseness: 3/5

The description is information-dense but slightly awkwardly structured, opening with a question ('What is this system?') and using dense colon-separated lists. While every clause conveys relevant output features, the structure could be more scannable for an agent.

Completeness: 3/5

Given the tool's complexity (multiple graph analysis concepts) and lack of output schema, the description adequately enumerates return components (stages, bridge nodes, actors, locations). However, it lacks details on response format or examples, leaving some gaps for a tool with rich analytical output.

Parameters: 4/5

Despite 100% schema coverage (baseline 3), the description adds valuable functional context: it clarifies that the 'node' parameter triggers 'per-node centrality' analysis and 'detect_findings' enables 'structural pattern detection', helping the agent understand why to use these optional parameters.

Purpose: 4/5

The description clearly states that the tool provides a 'Complete structural overview' of a system, listing specific graph-theoretic outputs like 'dominator tree', 'min-cuts', 'critical path', and 'topology'. However, it does not explicitly differentiate from siblings like 'reachable' or 'live', which might also analyze system graphs.

Usage Guidelines: 2/5

The description offers no explicit guidance on when to use this tool versus siblings (e.g., when to use 'structure' vs 'reachable'). It only implicitly suggests usage through the optional parameter explanations ('pass node for per-node centrality'), lacking clear prerequisites or exclusions.
