Glama
This connector has been deprecated

It has been replaced by endiagram-mcp

Ownership verified

Server Details

12 deterministic graph-theory tools for structural analysis. Describe systems in EN syntax — get topology, bottlenecks, blast radius, critical paths. No AI inside the computation.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.


Available Tools

7 tools
compose (Grade A)

How do parts combine? Merge mode (source_a + source_b + links): merge two systems by linking shared entities. Extract mode (source + subsystem): extract a subsystem as standalone EN with boundary inputs/outputs, actors, and locations.

Parameters (JSON Schema):
- links (optional): Entity links, e.g. 'a.node1=b.node2'
- source (optional): EN source code for extract mode
- source_a (optional): EN source code or path to .en/.txt file for the first system
- source_b (optional): EN source code or path to .en/.txt file for the second system
- subsystem (optional): Subsystem name for extract mode
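The two parameter groupings above can be sketched as tool-call argument payloads. This is a hedged illustration only: the file names, entity names, and subsystem name are assumptions, not values from the server's documentation.

```python
# Hypothetical argument payloads for the compose tool's two modes.
# File names, entity names, and the subsystem name are illustrative.

# Merge mode: combine two systems by linking shared entities
# (link syntax follows the schema's 'a.node1=b.node2' example).
merge_args = {
    "source_a": "payments.en",
    "source_b": "inventory.en",
    "links": "payments.order=inventory.order",
}

# Extract mode: pull one named subsystem out as standalone EN.
extract_args = {
    "source": "platform.en",
    "subsystem": "billing",
}
```

Note that the two modes use disjoint parameter sets, which is how an agent distinguishes them.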
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adequately explains outputs for extract mode (standalone EN with boundary inputs/outputs, actors, locations) and the merge operation's purpose. However, it omits safety information, side effects, validation rules, or whether these operations are in-memory transformations versus persistent mutations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description efficiently covers two complex modes in two sentences. While the opening interrogative ('How do parts combine?') is slightly unconventional, the subsequent colon-delimited structure effectively distinguishes Merge vs Extract modes. No extraneous information, though parameter examples in the description mirror the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the dual-mode complexity and lack of output schema, the description adequately covers the primary behaviors and outputs (boundary I/O, actors, locations). It appropriately delegates parameter details to the well-documented schema. Minor gaps exist regarding mode precedence/error handling when mixing parameter sets.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 100% schema coverage (baseline 3), the description adds significant value by explaining parameter relationships and mode-specific groupings—clarifying that source_a/source_b/links work together for merge while source/subsystem work together for extract. This semantic grouping aids agent selection beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly defines two distinct operations: merge mode (combining two systems via shared entities) and extract mode (pulling out a subsystem as standalone EN). It specifies resources (systems, subsystems) and actions (merge, extract). However, it lacks explicit differentiation from sibling tools like 'structure' or 'render' that might operate on similar entities.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly guides usage by mapping parameter sets to modes (source_a/source_b/links for merge; source/subsystem for extract). However, it lacks explicit guidance on when NOT to use this tool versus alternatives like 'structure', and does not clarify if modes are mutually exclusive or what happens if parameters from both modes are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

equivalent (Grade A)

Are two systems the same? Compare mode (source_a + source_b): shows structural differences, edit distance, and spectral equivalence — isCospectral true means identical structure despite different names. Evolve mode (source + patch): dry-run a change, shows diff plus new/lost bridge nodes. Prefix action name with - in patch to remove it.

Parameters (JSON Schema):
- patch (optional): EN patch for evolve mode
- source (optional): EN source code for evolve mode
- source_a (optional): EN source code or path to .en/.txt file for the first system
- source_b (optional): EN source code or path to .en/.txt file for the second system
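The Compare and Evolve parameter groupings can be sketched as payloads. The file names and the action name in the patch are assumptions for illustration; the '-' prefix rule comes from the tool description.

```python
# Hypothetical payloads for the equivalent tool's two modes.

# Compare mode: structural diff, edit distance, spectral equivalence.
compare_args = {
    "source_a": "checkout_v1.en",
    "source_b": "checkout_v2.en",
}

# Evolve mode: dry-run a change. Per the description, prefixing an
# action name with '-' in the patch removes that action.
evolve_args = {
    "source": "checkout_v1.en",
    "patch": "-legacy_sync",
}
```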
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Explicitly states 'dry-run' for evolve mode, clarifying this is a simulation tool, not destructive. Discloses specific output concepts (spectral equivalence, edit distance, new/lost bridge nodes) and interprets key flags ('isCospectral true means identical structure despite different names'). Missing auth/permissions details, but dry-run disclosure covers primary safety concern.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero waste. Front-loaded with the core purpose (equivalence checking), followed by mode-specific explanations, and ending with actionable syntax guidance. Each sentence conveys distinct information about functionality, outputs, or syntax without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters, 100% schema coverage, and no output schema, the description adequately explains what the tool returns (structural differences, edit distance, spectral data, diffs, bridge nodes). Addresses domain complexity (spectral equivalence) without requiring external knowledge. Could be improved with error condition or empty-result handling notes, but covers the essential behavioral contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds significant value by explaining how parameters interact: it maps source_a/source_b to Compare mode and source/patch to Evolve mode. Crucially, it adds the '-' prefix syntax rule for the patch parameter that is absent from the schema description, which is essential for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with a clear question ('Are two systems the same?') and immediately defines two distinct modes: Compare (source_a + source_b) for structural/spectral analysis and Evolve (source + patch) for dry-run simulation. Mentions domain-specific concepts like 'isCospectral' and 'bridge nodes' that clearly differentiate it from siblings like 'compose' or 'render'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly maps parameter combinations to modes: Compare mode requires source_a and source_b, Evolve mode requires source and patch. Explains what each mode returns (structural differences vs diff + bridge nodes) and includes critical syntax guidance ('Prefix action name with - in patch to remove it'). Lacks explicit 'when not to use' guidance, but the dual-mode structure provides clear usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

invariant (Grade C)

What's always true? conservationLaws are weighted entity sums constant across all executions. sustainableCycles are action sequences that return the system to its starting state (T-invariants). depletableSets are entity groups where simultaneous depletion is irreversible. behavioral.deficiency 0 means structure fully determines dynamics. behavioral.isReversible and behavioral.hasUniqueEquilibrium describe convergence properties.

Parameters (JSON Schema):
- rules (optional): Structural rules to check, one per line
- source (required): EN source code, or path to .en/.txt file
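A minimal payload sketch for this tool. The rule strings are explicit placeholders: the EN rule syntax is not documented on this page, and the file name is an assumption.

```python
# Hypothetical payload for the invariant tool. The rule text below is
# placeholder only; actual EN rule syntax is undocumented here.
invariant_args = {
    "source": "warehouse.en",  # required: EN source or .en/.txt path
    "rules": "RULE_PLACEHOLDER_1\nRULE_PLACEHOLDER_2",  # one rule per line
}
```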
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to deliver. It does not indicate whether this is a read-only analysis, what the return format is (despite no output schema), whether it performs validation or computation, or any performance characteristics. The description only defines domain concepts without explaining tool behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is relatively compact but poorly structured for an AI agent. It opens with a question rather than a declarative purpose statement, and the dense technical definitions (T-invariants, behavioral.deficiency) are presented without establishing the tool's function first. While not verbose, the sentence ordering prioritizes domain jargon over actionable clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description should explain what the tool returns (e.g., a list of invariants, a boolean validation result, or behavioral properties). It defines the concepts but does not connect them to the tool's output or explain how to interpret results, leaving critical gaps for agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Structural rules to check' and 'EN source code'), establishing a baseline of 3. The description adds no additional context about how the 'rules' parameter relates to the invariant types mentioned (conservation laws, cycles) or how the source code is parsed, but it does not contradict the schema either.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description implies the tool deals with system invariants by defining specific types (conservationLaws, sustainableCycles, depletableSets), but lacks a clear action verb stating what the tool does (e.g., 'analyzes', 'identifies', 'computes'). The rhetorical question 'What's always true?' is insufficient as a purpose statement, and the description does not differentiate from siblings like 'structure' or 'reachable'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Given siblings like 'structure', 'reachable', and 'equivalent', the description should specify that this tool is for finding invariant properties of the system rather than checking reachability or structural composition, but it provides no such context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

live (Grade A)

Can it deadlock? Can entities overflow? isStructurallyLive means every siphon contains a trap — no structural deadlock possible. uncoveredSiphons are entity groups that can drain permanently, with the actors and locations affected. isStructurallyBounded means no entity can accumulate without limit. unboundedCycles are action sequences that could cause overflow.

Parameters (JSON Schema):
- source (required): EN source code, or path to .en/.txt file
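The call itself is a single parameter; what needs interpretation is the result. A sketch of reading the documented fields, assuming the result arrives as a plain dict (the tool publishes no output schema, so this shape is inferred from the description):

```python
# Minimal payload for the live tool (file name is illustrative).
live_args = {"source": "pipeline.en"}

def summarize(result: dict) -> str:
    """Interpret the fields the description documents (shape assumed)."""
    # isStructurallyLive: every siphon contains a trap, so no
    # structural deadlock is possible.
    if not result.get("isStructurallyLive"):
        return "structural deadlock possible"
    # isStructurallyBounded: no entity can accumulate without limit.
    if not result.get("isStructurallyBounded"):
        return "unbounded accumulation possible"
    return "live and bounded"
```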
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains the analysis semantics—defining isStructurallyLive, uncoveredSiphons, isStructurallyBounded, and unboundedCycles—giving the agent clear expectations of what the analysis computes and returns. It implies idempotency through the nature of the analysis but doesn't explicitly state safety or side-effect properties.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses a rhetorical question structure that efficiently frames the analysis purpose. Each sentence defines a specific output field or concept (siphons, traps, cycles) with no wasted words. The density of technical information is appropriate for the domain, though an explicit 'This tool analyzes...' opening would improve scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description compensates effectively by documenting all key return concepts (isStructurallyLive, uncoveredSiphons, isStructurallyBounded, unboundedCycles) and their meanings. For a single-parameter analysis tool, this level of output documentation provides sufficient context for invocation, though error conditions are not addressed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing baseline documentation for the 'source' parameter. The description references 'EN source code' which aligns with the schema but adds no additional semantic detail about the parameter format, validation rules, or file handling behavior beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies this as a structural analysis tool for deadlock and overflow detection using domain terminology (siphons, traps, cycles). It specifies the analysis targets 'EN' (presumably a modeling language) entities and actors. However, it lacks an explicit introductory sentence stating the tool's primary function (e.g., 'Analyzes an EN model for structural liveness properties').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this versus siblings like 'reachable', 'invariant', or 'structure'. While the description explains what liveness/boundedness means, it doesn't indicate prerequisites (e.g., model validation) or contrast this static analysis with other verification approaches available on the server.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reachable (Grade A)

Can X reach Y? Follows directed data flow first; falls back to undirected. Path shows each step with actor and location. locationCrossings counts boundary transitions. defense_nodes checks if guards cover all paths. coverage.fullCoverage false means unguarded routes exist.

Parameters (JSON Schema):
- to (required): Target node name
- from (required): Starting node name
- source (required): EN source code
- defense_nodes (optional): Comma-separated guard nodes to check coverage
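A payload sketch for a reachability query with guard coverage checking. The node and guard names are illustrative assumptions; 'from' and 'to' are ordinary JSON keys here, not language keywords.

```python
# Hypothetical payload for the reachable tool. Node and guard names
# are illustrative assumptions.
reachable_args = {
    "source": "webapp.en",
    "from": "user_input",      # starting node
    "to": "database",          # target node
    # Comma-separated guard nodes; coverage.fullCoverage false in the
    # result would mean some path bypasses every guard listed here.
    "defense_nodes": "validator,auth_gateway",
}
```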
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden and succeeds in explaining the algorithm (directed/undirected), output structure (path steps with actor/location), and return fields (locationCrossings, coverage.fullCoverage). It doesn't explicitly state the tool is read-only/safe, though this is implied by the analytical nature.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with six sentences, front-loaded with the core question. Every sentence earns its place: algorithm explanation, output path format, crossing counts, defense logic, and coverage interpretation. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description compensates effectively by documenting return fields (Path, locationCrossings, coverage.fullCoverage). It adequately covers the 4 parameters' behavioral implications, though it could better explain what 'EN source code' entails for the required 'source' parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds marginal value by clarifying that defense_nodes 'checks if guards cover all paths' (slightly expanding on the schema's 'check coverage'), but doesn't add significant semantic detail beyond the schema for 'source', 'from', or 'to' parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core reachability query ('Can X reach Y?') and specifies the algorithm uses directed data flow with undirected fallback. However, it fails to explicitly state that this analyzes 'EN source code' (revealed in the schema), leaving the domain context implicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the directed-to-undirected fallback behavior and when defense_nodes checking applies, providing implicit context for usage. However, it lacks explicit guidance on when to use this versus siblings like 'live' or 'invariant', and doesn't state prerequisites for the analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render (Grade B)

SVG diagram. Only call when user explicitly asks to visualize.

Parameters (JSON Schema):
- view (optional): Group by: actors (partition by actor) or locations (partition by location). Default auto-detects topology.
- color (optional): Seed color hex (#RRGGBB) to generate a custom theme. Overrides theme parameter. One color generates the entire palette.
- theme (optional): Color theme: dark or light
- output (optional): File path to save the SVG
- source (required): EN source code, or path to .en/.txt file
- quality (optional): Output quality: small, mid, or max
- structure_layers (optional): Bitmask for structure overlays. Bits: 1=subsystems, 2=pipelines, 4=cycles, 8=forks, 16=joins, 32=hubs. Default 63 (all on). Pass 0 to hide all.
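The structure_layers bitmask is composed by OR-ing the bit values from the schema. A sketch, with illustrative file names; the bit constants are taken directly from the schema description above.

```python
# Bit values per the structure_layers schema description.
SUBSYSTEMS, PIPELINES, CYCLES, FORKS, JOINS, HUBS = 1, 2, 4, 8, 16, 32

# Overlay only subsystems and cycles; the schema default (63) is all bits on.
layers = SUBSYSTEMS | CYCLES

# Hypothetical full payload (file names are illustrative).
render_args = {
    "source": "platform.en",
    "view": "actors",
    "theme": "dark",
    "quality": "mid",
    "structure_layers": layers,
    "output": "platform.svg",
}
```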
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden but fails to disclose that it writes a file to disk (implied by 'output' parameter but not stated), processes 'EN source code' (mentioned only in schema), or whether rendering is CPU-intensive. The behavioral traits (file creation, parsing) remain undocumented.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The two-sentence structure is appropriately concise and front-loaded. However, 'SVG diagram' is telegraphic/fragmented rather than a complete sentence, slightly reducing clarity despite the brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (7 parameters including bitmask layers, quality settings, and theming) and lack of annotations/output schema, the description is insufficient. It omits the EN domain context, file output nature, and what the visualization represents (actors/locations/pipelines).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting all 7 parameters including the bitmask logic and theme options. The description adds no parameter-specific guidance, but with complete schema coverage, this meets the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'SVG diagram' is a noun phrase implying the output format but lacks a clear action verb (e.g., 'Generates' or 'Renders'). While 'visualize' hints at the function, it doesn't specify that it transforms EN source code into an SVG file, leaving the core transformation underspecified.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Only call when user explicitly asks to visualize' provides explicit conditional guidance on invocation. However, it omits what to use instead when analysis (not visualization) is needed, or how this differs from siblings like 'structure' or 'compose' that may analyze the same source.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

structure (Grade B)

What is this system? Complete structural overview: shape (topology), stages with roles, bridge nodes, cycles, parallelism, critical path, dominator tree, min-cuts, subsystems, interface nodes. Includes actors (who does what, workload entropy) and locations (where work happens, boundary crossings). Optional: pass node for per-node centrality, detect_findings for structural pattern detection.

Parameters (JSON Schema):
- node (optional): Node name for centrality query
- source (required): EN source code, or path to .en/.txt file
- detect_findings (optional): Set to 'true' to detect structural findings
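A payload sketch for a full overview plus the two optional extras. The file and node names are illustrative assumptions; note the schema asks for the string 'true', not a boolean.

```python
# Hypothetical payload for the structure tool (names are illustrative).
structure_args = {
    "source": "platform.en",
    "node": "gateway",          # optional: adds per-node centrality
    "detect_findings": "true",  # schema expects the string 'true'
}
```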
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It comprehensively lists what gets analyzed (actors, locations, entropy, boundary crossings) but omits operational behavior: computational complexity, side effects, read-only status, or output format expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense and front-loaded with key analytical concepts. The opening question 'What is this system?' is slightly rhetorical but serves as a thematic header. Every subsequent phrase enumerates specific analysis dimensions without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description compensates by listing analytical components returned (topology, cycles, subsystems, etc.). However, it lacks detail on output structure/format and omits behavioral constraints expected for a complex analysis tool with no annotation coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'pass node for per-node centrality' and 'detect_findings for structural pattern detection', which align with but do not significantly expand upon the schema's existing descriptions. It does not clarify the 'source' parameter's expected EN code format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies this as a comprehensive structural analysis tool using specific technical terminology (dominator tree, min-cuts, topology, critical path). However, it does not explicitly differentiate from siblings like 'reachable' or 'compose' which might overlap in graph analysis domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied through the specificity of technical concepts listed—an expert user would infer this is for deep static structural analysis versus dynamic or targeted queries. However, there is no explicit 'when to use' guidance or comparison to alternatives like 'reachable' or 'live'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
