Glama

Server Quality Checklist

Profile completion: 75%

A complete profile improves this server's visibility in search results.
  • Disambiguation 5/5

    Each tool has a clearly distinct purpose with no overlapping functionality. Module lifecycle tools (create/delete), button management (deploy/delete/update/get), output window controls, and plugin status checks are all cleanly separated by scope and resource type.

    Naming Consistency 5/5

    All 12 tools follow a consistent snake_case verb_noun pattern (check_plugin_status, create_pynet_module, get_buttons_data, etc.). Verbs are semantically appropriate (create/deploy/delete/get/update/configure/check/list/send) and consistently applied.

    Tool Count 5/5

    The 12 tools provide appropriate coverage for the domain: plugin connectivity (2), UI window management (2), module lifecycle (3), button CRUD operations (4), and script execution (1). The scope is well-defined without bloat or gaps.

    Completeness 4/5

    Covers full CRUD for ScriptButtons and create/read/delete (no update) for ButtonsModules, plus execution and diagnostics. Minor gaps include the lack of a module update capability and no single-button retrieval (only listing by module), but the core lifecycle and deployment workflows are fully supported.

  • Average 2.7/5 across 12 of 12 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.0.2

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 12 tools.

  • No known security issues or vulnerabilities reported.

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • create_pynet_module

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden but fails to disclose idempotency behavior (fail vs overwrite on duplicate name), required permissions, or what the creation process entails beyond the single verb.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
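
The disclosure gap above is exactly what the MCP spec's optional tool annotations exist to close. A minimal sketch for create_pynet_module, where every hint value is an assumption about the server's behavior rather than an observed fact:

```python
# Hypothetical MCP ToolAnnotations for create_pynet_module.
# All hint values are assumptions, not observed server behavior.
create_module_annotations = {
    "title": "Create PyNet Module",
    "readOnlyHint": False,     # creating a ButtonsModule is a write
    "destructiveHint": False,  # assumed: never overwrites existing data
    "idempotentHint": False,   # assumed: a duplicate name is an error
    "openWorldHint": False,    # touches only the local plugin instance
}
```

Hints are advisory, so even with annotations the prose description should still state the duplicate-name behavior explicitly.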

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The four-word sentence is not verbose, but given the undocumented parameters and lack of annotations, it is inappropriately brief and fails to front-load critical context necessary for correct invocation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Considering the 0% schema coverage, existence of an output schema, and numerous siblings with potentially overlapping functionality (delete_pynet_module, script button tools), the description lacks the necessary depth for an agent to select and invoke the tool correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0% for both required parameters (pid, name), and the description offers no compensatory explanation. The semantics of 'pid' (potentially process ID, project ID, or plugin ID) and 'name' remain undefined.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
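
For contrast, here is what full description coverage could look like for the two required parameters; the reading of pid as a plugin instance ID is an assumption:

```python
# Hypothetical inputSchema for create_pynet_module with every
# parameter described; the 'pid' semantics are assumed.
input_schema = {
    "type": "object",
    "properties": {
        "pid": {
            "type": "integer",
            "description": "Plugin instance ID of the target PyNet process.",
        },
        "name": {
            "type": "string",
            "description": "Unique display name for the new ButtonsModule.",
        },
    },
    "required": ["pid", "name"],
}

# Description coverage: fraction of properties carrying a description.
props = list(input_schema["properties"].values())
coverage = sum("description" in p for p in props) / len(props)
assert coverage == 1.0
```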

    Purpose 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states a clear verb (Creates) and resource (ButtonsModule), but introduces ambiguity by using 'ButtonsModule' when the tool name refers to 'pynet_module'. It does not clarify the relationship between these terms or differentiate from sibling tools like delete_pynet_module.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus alternatives (e.g., update_script_button), nor any mention of prerequisites or preconditions for creating a module.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
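
A single routing sentence is often enough to fix this. A hedged rewrite of the description, where the duplicate-name behavior and the suggested workflow ordering are assumptions for illustration:

```python
# Hypothetical rewrite of the create_pynet_module description; the
# failure mode and workflow ordering are assumptions, not server facts.
DESCRIPTION = (
    "Creates a new, empty ButtonsModule (PyNet's container for script "
    "buttons) in the plugin instance identified by pid. Fails if a "
    "module with that name already exists. Use deploy_script_button to "
    "add buttons afterwards; use delete_pynet_module to remove it."
)

# The routing sentence names both sibling tools explicitly.
assert "deploy_script_button" in DESCRIPTION
assert "delete_pynet_module" in DESCRIPTION
```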

  • get_output_window_status

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full responsibility for behavioral disclosure. It implies a read-only operation ('Get') but fails to define what 'available' means (initialized? visible? ready for input?), error conditions for invalid pids, or side effects. While an output schema exists, the description should still clarify behavioral semantics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is brief (8 words) and front-loaded, but 'Get if' is inefficient phrasing (prefer 'Check whether' or 'Returns whether'). More importantly, the extreme brevity results in under-specification; the single sentence fails to earn its place because it omits essential parameter documentation that the schema fails to provide.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having only one parameter and an output schema to handle return value documentation, the description is incomplete. The 0% schema parameter coverage combined with zero descriptive context about the 'pid' parameter or availability semantics leaves the agent without sufficient information to correctly invoke the tool or interpret results in context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Critical failure: the schema has 0% description coverage for the 'pid' parameter, and the description completely omits any mention of this required parameter. Given the sibling tools suggest a process management context, 'pid' likely means Process ID, but the agent should not be forced to guess the parameter's meaning or valid values.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clarifies that 'status' specifically refers to availability (boolean state), which narrows the scope beyond what the function name implies. However, the phrasing 'Get if' is grammatically awkward and borders on tautology, merely restating the tool's intent in slightly different words without rich operational context.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus siblings like 'check_plugin_status' or 'configure_output_window'. No mention of prerequisites (e.g., should this be called after creating a window?) or workflow integration. The agent must infer usage context from the function name alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • configure_output_window

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full burden. It mentions 'PyNet' providing domain context, but lacks critical behavioral details: whether changes are persistent, affect all users, require specific permissions, or what the output schema contains. The 'toggles' terminology also creates ambiguity about idempotency.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at one sentence. However, brevity comes at the cost of omitting necessary detail for a tool with zero schema coverage and no annotations. The front-loading is adequate but the word 'toggles' may be imprecise.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate given the 0% schema coverage and lack of annotations. While an output schema exists (reducing the need to describe return values), the description fails to explain the parameters or behavioral constraints necessary for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description must compensate but fails to explain either parameter. It mentions 'visibility' which loosely maps to `is_available`, but provides no context for `pid` (likely a process/instance ID) or the exact semantics of availability versus visibility.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description identifies the resource (PyNet log/output window) and action (toggles visibility), but the verb 'toggles' is potentially misleading given the boolean `is_available` parameter (which implies idempotent setting, not toggling between states). It does not explicitly distinguish from sibling `get_output_window_status`.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this tool versus alternatives. While the sibling `get_output_window_status` suggests a read-only counterpart, the description does not clarify the relationship or when to prefer one over the other.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
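
The toggle-versus-set distinction matters for retry safety, which is why the reviewer flags the verb. A minimal sketch (OutputWindow is a stand-in class, not the server's implementation):

```python
# Stand-in illustrating why a boolean parameter implies an idempotent
# setter rather than a toggle.
class OutputWindow:
    def __init__(self) -> None:
        self.visible = False

    def set_visible(self, is_available: bool) -> None:
        # Idempotent: repeating the call cannot change the outcome.
        self.visible = is_available

    def toggle(self) -> None:
        # Not idempotent: a retried toggle undoes the first call.
        self.visible = not self.visible

w = OutputWindow()
w.set_visible(True)
w.set_visible(True)   # safe to retry
assert w.visible
w.toggle()
w.toggle()            # a retry silently reverts the change
assert w.visible
```

An agent that retries after a timeout behaves correctly against a setter but corrupts state against a toggle, so the description should say which one this is.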

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but fails to indicate whether execution is synchronous, what side effects occur, or safety implications. It does not explain the "PyNet engine" context or what distinguishes "direct" execution. The minimal disclosure leaves critical behavioral traits like state mutation and error handling undocumented.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The six-word description contains no redundant or filler language, presenting a single focused clause. However, given the tool's complexity with three undocumented parameters and no annotations, the extreme brevity results in under-specification rather than efficient communication. The structure is appropriately front-loaded but insufficiently substantial for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with three required parameters and zero schema documentation, the description fails to explain parameter purposes, execution prerequisites, or behavioral constraints. While an output schema exists (reducing the need for return value documentation), the description omits critical context needed for safe and correct invocation. The coverage is inadequate for the tool's evident complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0% description coverage for all three parameters (pid, script_name, content), yet the description compensates by mentioning none of them. It provides no hints about whether pid refers to a process identifier, what format content expects (code vs. commands), or how script_name is used. This complete absence of parameter context forces the agent to guess semantics for required fields.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    "Direct script execution in the PyNet engine" clearly identifies the core action (execution) and resource (scripts in PyNet). The adjective "Direct" provides implicit differentiation from sibling tools like deploy_script_button, though it lacks explicit comparison. It successfully conveys the tool's function without tautology.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives such as deploy_script_button or create_pynet_module. It lacks prerequisites, conditions, or explicit comparisons to related operations. The agent must infer usage context solely from the word "Direct" without knowing what constitutes indirect execution.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
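
What adequate parameter documentation could look like, written as a Python stub with a docstring. The function name, the reading of pid, and the result shape are all assumptions for illustration:

```python
def run_pynet_script(pid: int, script_name: str, content: str) -> dict:
    """Execute Python source directly in a PyNet engine (illustrative stub).

    Args:
        pid: Plugin instance ID of a running PyNet engine (assumed meaning).
        script_name: Label used to attribute log output and errors.
        content: Python source code to execute, passed as a string.

    Returns:
        A dict mirroring an assumed output schema: {"ok": bool, "output": str}.
    """
    # Stub: a real implementation would hand 'content' to the engine.
    return {"ok": bool(content.strip()), "output": f"[{script_name}] ran"}

result = run_pynet_script(7, "hello", "print('hi')")
assert result["ok"]
```

Three sentences of this kind, folded into the tool description, would resolve the Parameters score without touching the schema.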

  • delete_pynet_module

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses the destructive nature via 'Permanently,' but omits critical behavioral details like cascade effects, impact on active instances (relevant given list_active_instances sibling), or recovery possibilities.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is efficiently structured and front-loaded, but the extreme brevity constitutes under-specification for a destructive two-parameter operation. Every word earns its place, yet more sentences are needed.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a destructive operation with undocumented parameters. While an output schema exists (reducing the need for return value description), the combination of zero schema coverage, missing annotations, and unspecified parameter semantics leaves critical gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% and the description fails to compensate. It mentions neither 'pid' (process ID?) nor 'module_id', nor explains their relationship to the module being deleted or format expectations.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the action (deletes) and resource (module) with permanence noted, but 'module' is generic and lacks domain context (pynet). It fails to distinguish from sibling tool delete_script_button or clarify the relationship to create_pynet_module.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives like delete_script_button, nor any prerequisites (e.g., whether the module must be inactive) or warnings about dependencies.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
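
The missing prerequisites can be stated as guard clauses. A sketch of the checks a delete description could disclose; whether the server actually enforces any of them is an assumption:

```python
# Illustrative guards for a destructive delete; enforcement is assumed,
# not confirmed. 'modules' maps (pid, module_id) -> module record.
def delete_module(modules: dict, pid: int, module_id: str) -> None:
    key = (pid, module_id)
    if key not in modules:
        raise KeyError(f"no module {module_id!r} in plugin instance {pid}")
    if modules[key].get("buttons"):
        raise ValueError("module still holds ScriptButtons; delete them first")
    del modules[key]  # permanent: no undo

registry = {(1, "m1"): {"buttons": []}}
delete_module(registry, 1, "m1")
assert (1, "m1") not in registry
```

Even if the server enforces nothing, saying so ("deletes immediately, including any contained buttons") is itself the disclosure this dimension asks for.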

  • update_script_button

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full burden but only minimally discloses that the target must be 'existing.' It lacks critical behavioral details: failure modes when the button doesn't exist, whether updates are atomic, side effects, or authorization requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence wastes no words, but given the high complexity (8 parameters, mutation operation, 0% schema coverage), it is arguably undersized rather than efficiently concise. No structural issues.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Despite having an output schema (reducing the need to describe return values), the description is incomplete given the 0% schema coverage and lack of annotations. Critical context missing includes: the meaning of 'pid,' the relationship between module_id and dest_module_id, and required field semantics.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, yet the description fails to compensate. It vaguely references 'metadata' without mapping which of the 8 parameters constitute metadata versus identifiers (e.g., pid, module_id, button_id vs name, tooltip), or explaining dest_module_id's purpose.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
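
One sentence partitioning the parameters would close this gap. A sketch of the split implied by the critique, using only the field names this review mentions; the grouping itself is an assumption:

```python
# Assumed partition of the update_script_button fields named in this
# review: identifiers select the button, metadata says what to change.
IDENTIFIERS = {"pid", "module_id", "button_id"}
METADATA = {"name", "tooltip", "dest_module_id"}

# dest_module_id is the odd one out: it moves the button elsewhere.
assert IDENTIFIERS.isdisjoint(METADATA)
```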

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the verb (Updates) and resource (ScriptButton metadata), and implicitly distinguishes from creation by specifying 'existing.' However, it fails to differentiate from sibling tools like deploy_script_button or delete_script_button.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus deploy_script_button, or prerequisites such as the button needing to exist beforehand. No mention of error conditions or required permissions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • deploy_script_button

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries full behavioral disclosure burden. While 'Installs' implies a write operation, the description lacks critical details: idempotency behavior (error if exists vs. overwrite), side effects, required permissions, or what the output schema contains.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is efficiently structured with zero redundancy. However, given the complete lack of schema descriptions and annotations, the description is underspecified rather than optimally concise—critical information is missing entirely, not just compressed.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With 6 parameters, 4 required, 0% schema coverage, and no annotations, the tool is moderately complex but undocumented. While an output schema exists (reducing description burden for return values), the description fails to address the parameter semantics, prerequisites, or behavioral edge cases necessary for reliable invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage, requiring the description to compensate. It only implicitly maps 'ButtonsModule' to 'module_id' and 'ScriptButton' to the operation. It fails to explain 'pid' (likely process ID), distinguish between 'name' (display label?) and 'script_Path' (executable?), or clarify valid values for 'icon_name'.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Installs' and identifies the resource 'ScriptButton' and target 'ButtonsModule', clarifying the core operation. However, it doesn't distinguish from sibling 'update_script_button'—both could involve 'installing' or configuring buttons, leaving ambiguity about whether this creates new vs. modifies existing.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus 'update_script_button' or 'delete_script_button'. No prerequisites mentioned (e.g., whether the ButtonsModule must exist beforehand, or if the script_Path must be valid/accessible).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_buttons_data

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but only implies read-only access through the verb 'Lists'. It fails to describe what data is returned, whether the operation is idempotent, or any rate limiting, despite the existence of an output schema that could have been referenced.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single efficient sentence with no wasted words. However, the extreme brevity becomes a liability given the complete absence of schema descriptions and annotations, resulting in insufficient documentation rather than effective conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given zero parameter schema coverage and no annotations, the description is incomplete. While the output schema presumably covers return values, the undocumented pid parameter, lack of usage guidelines against similar retrieval tools, and absence of behavioral details leave critical gaps for correct invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to fully compensate for both parameters. It only implicitly documents module_id by referencing 'module ID', while the pid parameter is completely unexplained, leaving half of the required inputs semantically undocumented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Lists') and resource ('script buttons') with clear scope ('for a specific module ID'). It implicitly distinguishes from sibling mutation tools like delete_script_button and update_script_button by operation type, but does not explicitly contrast with similar retrieval tools such as get_pynet_ui_layout.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives, or what prerequisites are needed for the module_id. The description lacks information on how to obtain valid module IDs or when to prefer this over get_pynet_ui_layout for UI layout data.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
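
The "how to obtain valid module IDs" gap is usually closed by pointing at the layout tool. A toy sketch of that chaining; both data shapes and the stand-in function are fabricated for illustration:

```python
# Toy data: an assumed get_pynet_ui_layout result and a stand-in for
# the get_buttons_data tool call, keyed on module_id.
layout = {"modules": [{"id": "m1", "name": "Tools"},
                      {"id": "m2", "name": "Extras"}]}
buttons_by_module = {"m1": ["run", "stop"], "m2": []}

def get_buttons_data(pid: int, module_id: str) -> list:
    return buttons_by_module[module_id]  # stand-in for the real tool

# Chain: fetch the layout first to discover IDs, then query per module.
module_ids = [m["id"] for m in layout["modules"]]
assert get_buttons_data(1, module_ids[0]) == ["run", "stop"]
```

A description sentence like "obtain module_id values from get_pynet_ui_layout" would encode this chain for the agent.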

  • get_pynet_ui_layout

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description must carry the full burden of behavioral disclosure. While 'Fetches' implies a read-only operation, the description does not explicitly confirm safety, idempotency, potential error states, or whether this operation is expensive/computationally heavy.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficient sentence with no filler words. It is appropriately front-loaded with the action verb. However, given the lack of schema documentation and annotations, this brevity results in under-specification rather than optimal clarity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    While the presence of an output schema excuses the description from detailing return values, the complete absence of documentation for the required 'pid' parameter (in both schema and description) creates a critical gap for a tool that appears to target a specific instance via that identifier.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage for the required 'pid' parameter, the description must compensate but fails to mention the parameter entirely. It does not explain what 'pid' represents (process ID, plugin ID, etc.) or provide an example, leaving the parameter semantics undocumented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Fetches') and identifies the exact resource ('full UI structure') and components ('ButtonsModules and ScriptButtons'). However, it does not explicitly differentiate from the sibling tool 'get_buttons_data', which could cause confusion about which retrieval tool to use.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    There is no guidance on when to use this tool versus alternatives (e.g., 'get_buttons_data'), no mention of prerequisites for the 'pid' parameter, and no indication of when this call is necessary versus cached or static layouts.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. 'Handshake ping' implies a lightweight, read-only connectivity check, but the description omits failure modes, timeout behavior, or what constitutes 'responsive' versus unresponsive.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, efficiently front-loaded with no redundant words. Every term ('Handshake', 'ping', 'plugin listener', 'responsive') contributes meaningful information about the tool's function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool has an output schema (covering return values) and only one parameter, the description adequately covers the conceptual purpose but is incomplete due to the total absence of parameter documentation and lack of behavioral details that annotations would normally provide.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The schema has 0% description coverage (only 'title': 'Pid'), and the description fails to compensate by explaining what 'pid' represents (process ID, plugin ID, etc.) or how to obtain it. The parameter is completely undocumented in natural language.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly identifies the action ('Handshake ping') and target ('plugin listener'), specifying it checks responsiveness. It distinguishes from siblings like get_output_window_status by specifying 'plugin listener' rather than other components.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit guidance on when to use this versus sibling tools like get_output_window_status or list_active_instances. No mention of prerequisites or conditions where this check is preferred.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Since no annotations are provided, the description carries full behavioral disclosure burden. It effectively communicates the destructive, irreversible nature via 'Permanently removes,' but fails to disclose idempotency behavior, potential side effects on dependent systems, or what the output schema contains.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is appropriately front-loaded with the critical 'Permanently' qualifier warning of irreversible behavior. However, 'by Id' is slightly ambiguous given that three ID-type parameters exist, which reduces precision.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given this is a destructive operation with zero annotation coverage and 0% schema documentation, the description is insufficient for safe invocation. It fails to explain the three required parameters or reference the existing output schema, leaving agents to guess the semantics of 'pid' and the expected return structure.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description must compensate for three undocumented required parameters. While 'from a module' hints at module_id and 'ScriptButton... by Id' implies button_id, it completely omits explanation of 'pid' (process/project/plugin ID) and fails to clarify the relationship between these three identifiers.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Permanently removes'), target resource ('ScriptButton'), and scope ('from a module'). It effectively distinguishes from siblings like delete_pynet_module (which deletes entire modules) and update_script_button (which modifies rather than removes).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives (e.g., when to use delete_script_button vs. update_script_button), nor does it mention prerequisites like verifying button existence via get_buttons_data beforehand.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It adequately explains what the tool returns (PIDs) and the filtering condition (active PyNet listener), but lacks details on error handling, performance characteristics, or behavior when no instances are found.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single, efficient sentence that is front-loaded with the action verb. Every word earns its place, clearly conveying the tool's function without redundancy or unnecessary elaboration for a simple read operation.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the low complexity (zero parameters, read-only operation) and presence of an output schema, the description is sufficiently complete. It appropriately identifies the domain-specific context (Navisworks/PyNet) and the specific return value type (PIDs) without needing to replicate detailed return structure documentation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The tool has zero parameters, establishing a baseline of 4. The description adds valuable context by explaining what data is being retrieved and why no input parameters are required (it lists all active instances), which helps the agent understand the empty schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Retrieves') and clearly identifies the resource (PIDs) and scope (Navisworks instances with active PyNet listener). It implicitly distinguishes from siblings like check_plugin_status by focusing on listener-specific instance enumeration, though it does not explicitly contrast with alternatives.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no explicit guidance on when to use this tool versus alternatives (e.g., whether to use this before send_command), nor does it mention prerequisites or exclusions. Usage must be inferred from the tool name and domain context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

PyNetBridge MCP server

Copy to your README.md:

Score Badge

PyNetBridge MCP server

Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
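The scoring formula above can be sketched in a few lines. This is an illustrative reconstruction from the stated weights and tier cut-offs, not Glama's actual implementation; the function and dimension names are our own.

```python
# Sketch of the quality-score formula described above. Weights and tier
# thresholds come from the text; everything else is illustrative.
from statistics import mean

# Per-tool dimension weights for Tool Definition Quality Score (TDQS).
TDQS_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(scores[dim] * w for dim, w in TDQS_WEIGHTS.items())

def overall_score(tool_scores: list, coherence_scores: list) -> float:
    """70% definition quality (60% mean + 40% min TDQS) + 30% coherence."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * mean(tdqs) + 0.4 * min(tdqs)
    coherence = mean(coherence_scores)  # four dimensions, weighted equally
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, letter in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= cutoff:
            return letter
    return "F"
```

Note how the 40% minimum-TDQS term makes the formula sensitive to the single worst-described tool: one tool scoring 1s drags the whole server down regardless of how many tools score 5s.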


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rafa2403nunez-droid/PyNetBridge'
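The same request can be made from Python. A minimal sketch, assuming only the endpoint shown above and that it returns a JSON object; the function names are illustrative and the response fields are not documented here.

```python
# Hypothetical helper for querying the Glama MCP directory API.
# Only the URL pattern is taken from the page; the JSON shape of the
# response is an assumption.
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the directory API URL for a given server."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record and decode the JSON body (network call)."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)

# Example (requires network access):
# info = fetch_server("rafa2403nunez-droid", "PyNetBridge")
```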

If you have feedback or need assistance with the MCP directory API, please join our Discord server.