Glama

Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 3/5

    The three recalculation tools (calibrate_cores, recalc_core_mix, recalc_signs) operate on similar embedding-based concepts but distinct domain layers (etalons, core_mix, signs), which could confuse agents without deep knowledge of the L2/L3/L4 hierarchy. While get/suggest pairs are well-distinguished, the technical jargon around 'cores' and 'signs' creates ambiguity for general-purpose agents.

    Naming Consistency: 3/5

    Most tools follow verb_noun patterns (read_file, suggest_parents), but inconsistencies exist: 'embed' lacks a noun (should be get_embedding), 'recalc' is abbreviated while 'calibrate' is not, and 'index_all' uses a determiner instead of a specific target. The mix of full words and abbreviations reduces predictability.
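    A uniform verb_noun convention would resolve these inconsistencies. A minimal sketch, where the right-hand names are suggestions rather than the server's actual API:

    ```python
    import re

    # Hypothetical renames illustrating one consistent verb_noun convention;
    # the proposed names are suggestions, not the server's real tool names.
    proposed_renames = {
        "embed": "get_embedding",                 # add the missing noun
        "recalc_signs": "recalculate_signs",      # spell out the abbreviated verb
        "recalc_core_mix": "recalculate_core_mix",
        "index_all": "index_files",               # concrete target instead of a determiner
    }

    # Every proposed name is lowercase words joined by underscores (verb_noun).
    VERB_NOUN = re.compile(r"^[a-z]+(_[a-z]+)+$")
    for new_name in proposed_renames.values():
        assert VERB_NOUN.match(new_name), new_name
    ```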

    Tool Count: 4/5

    Thirteen tools is appropriate for this specialized domain covering Obsidian file operations, embedding-based graph traversal, and multi-stage recalculation pipelines. The count supports the full workflow from indexing to sign calculation without excessive bloat, though it approaches the upper limit for a focused server.

    Completeness: 3/5

    The surface covers the embedding lifecycle well (index_all → recalc_signs → recalc_core_mix) and file I/O, but lacks file deletion, explicit parent linking (only suggestions), and semantic search capabilities despite having embedding functionality. These gaps limit agents' ability to fully manage the knowledge graph lifecycle.
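    The noted gaps could be closed with a few additional tools. The names and descriptions below are purely illustrative; none of them exist in the current server:

    ```python
    # Hypothetical gap-filling tools suggested by the review's completeness notes.
    proposed_tools = [
        {"name": "delete_file",
         "description": "Delete a file from the vault. Destructive and irreversible."},
        {"name": "set_parent",
         "description": "Explicitly link a file to a parent, instead of only suggesting candidates."},
        {"name": "search_semantic",
         "description": "Find files by embedding similarity to a free-text query."},
    ]

    # The 13 tools the review describes, for comparison (one or two names are
    # inferred from context rather than stated verbatim).
    existing_tools = {
        "read_file", "write_file", "list_files", "get_children", "get_parents",
        "suggest_parents", "suggest_metadata", "index_all", "embed",
        "format_entity_compact", "calibrate_cores", "recalc_core_mix", "recalc_signs",
    }
    assert len(existing_tools) == 13
    assert not existing_tools & {t["name"] for t in proposed_tools}
    ```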

  • Average 2.6/5 across 13 of 13 tools scored. Lowest: 1.5/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v2.1.0

  • Tools from this server were used 1 time in the last 30 days.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 13 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

Tool Scores

  • format_entity_compact

    Behavior: 1/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden of behavioral disclosure. The term 'Format' implies a transformation, but the description fails to state whether this modifies the entity in-place, returns a formatted string, requires specific permissions, or handles errors. No mention of output format, side effects, or rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 2/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    While brief (5 words), this represents under-specification rather than efficient communication. The single sentence fails to earn its place because it conveys no actionable information beyond the tool name itself, violating the principle that every sentence should add value beyond structured fields.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 1/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with no output schema and no annotations, the description is radically incomplete. It omits critical context such as return value structure, error conditions, whether the operation is destructive, or what 'compact' formatting entails compared to standard formatting.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage for the 'path' parameter, the description must compensate by explaining what the path refers to (file system? entity ID? hierarchical path?). It provides no such clarification, leaving the agent uncertain about the expected input format or valid values.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 2/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Format compact entity structure formula' essentially restates the tool name (format_entity_compact) with minor word substitution, making it tautological. While 'Format' is a specific verb, 'compact entity structure formula' is jargon-heavy and fails to clarify what resource is being formatted or how this differs from siblings like read_file or write_file.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives. Given siblings include read_file, write_file, and various entity operations (get_children, suggest_parents), the absence of differentiation criteria forces the agent to guess when formatting is appropriate versus reading or modifying entities.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
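    A fuller format_entity_compact definition might look like the sketch below. The behavioral claims (read-only, returns a string) and the example path are assumptions about intended behavior, not verified facts:

    ```python
    # Hedged sketch of an improved tool definition; behavior is assumed, not confirmed.
    format_entity_compact = {
        "name": "format_entity_compact",
        "description": (
            "Render an entity's structure as a compact single-line formula string. "
            "Read-only: returns the formatted string and modifies nothing. "  # assumed
            "Use read_file to fetch raw note content; use write_file to persist changes."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    # The path format and example are assumptions.
                    "description": "Vault-relative path to the entity's note, e.g. 'Cores/Energy.md'.",
                },
            },
            "required": ["path"],
        },
    }
    assert format_entity_compact["inputSchema"]["properties"]["path"]["description"]
    ```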

  • get_children

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but offers almost no details regarding whether the operation is read-only, whether it returns direct children only or includes nested descendants, or what the return format looks like. The word 'Get' implies a read operation, but lacks specifics about error handling when paths don't exist or whether the tool follows symbolic links.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of only three words, making it appropriately brief for a simple tool structure, though it suffers from under-specification rather than effective conciseness. The single phrase is front-loaded at the start, but fails to earn its place by conveying sufficient actionable information for an agent to confidently invoke the tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations, output schema, and parameter descriptions, combined with the presence of functionally similar siblings like `list_files`, `get_parents`, and `read_file`, the description is insufficiently complete. It fails to explain the hierarchical relationship concept, the distinction between listing and getting children, or what differentiates this operation from simply listing files in a directory.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 0% description coverage for the required 'path' parameter, and the description fails to compensate by explaining expected path formats (absolute vs relative), whether directories or file paths are expected, or validation constraints. While the description implies a location is needed to retrieve children, it provides no semantic meaning, examples, or constraints for the parameter itself.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 2/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description 'Get child files' is essentially a tautology that restates the tool name 'get_children' with minimal variation, merely replacing 'get' with 'Get' and 'children' with 'child files'. While it identifies the resource type as files, it fails to distinguish this tool from siblings like `list_files` or clarify what constitutes a 'child' (direct vs recursive) in this filesystem context.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives such as `list_files` (which also lists files) or `get_parents` (the inverse operation), nor does it specify prerequisites like path format requirements or necessary permissions. There is no mention of when not to use the tool or what conditions would make it appropriate versus reading a specific file.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
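    A get_children description that drew these distinctions might read as below. Whether "children" means one level only is an assumption, flagged in a comment:

    ```python
    # Hedged sketch; the one-level depth and path semantics are assumptions.
    get_children = {
        "name": "get_children",
        "description": (
            "Get the files linked as direct children of the given file in the note "
            "hierarchy (one level deep, not recursive). Read-only. "  # depth assumed
            "Use list_files to enumerate files by folder or filter; "
            "use get_parents for the inverse relationship."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "path": {"type": "string",
                         "description": "Vault-relative path of the parent file."},
            },
            "required": ["path"],
        },
    }
    assert "get_parents" in get_children["description"]
    ```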

  • list_files

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'filters' but does not disclose what filtering logic is used (glob patterns? regex? exact match?), return format (paths, objects, IDs?), pagination behavior, or what metadata is excluded when 'no_metadata' is used. The term 'base' remains undefined.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 2/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extreme under-specification rather than appropriate conciseness. Five words for a four-parameter filtering tool is insufficient. The phrase 'with filters' is particularly wasteful given it references four specific filter parameters that are left undocumented, creating a misleading impression that the description covers the filtering capability.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for the schema complexity. With four filter parameters, zero schema descriptions, no output schema, and no annotations, the tool requires substantial descriptive coverage. The description omits the filtering mechanism, return structure, and domain-specific terminology ('base', 'sign'), leaving critical gaps.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% (no parameter descriptions), and the description completely fails to compensate. It does not explain what 'level' (nesting depth?), 'sign' (signature? indicator?), 'subfolder' (path prefix?), or 'no_metadata' actually control. With four undocumented parameters, the agent has no semantic guidance for invocation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the basic action ('List files') but uses an ambiguous scope ('in base': unclear whether this means a database, a root directory, or a specific workspace). It fails to distinguish itself from sibling tools like 'read_file' (content retrieval) or 'get_children' (hierarchical listing), leaving the agent uncertain about the specific resource domain.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this versus 'read_file', 'get_children', or other navigation tools. No mention of prerequisites (e.g., whether the 'base' needs to be initialized first) or when filtering is preferable to retrieving all files.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
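    Per-parameter schema descriptions would lift the 0% coverage. In the sketch below every semantic ('level' as hierarchy layer, 'sign' as a marker to filter on, the no_metadata default) is a guess, marked as such:

    ```python
    # Sketch of a documented input schema; all four descriptions are assumed
    # semantics, not confirmed behavior of the real list_files tool.
    list_files_schema = {
        "type": "object",
        "properties": {
            "level": {"type": "integer",
                      "description": "Hierarchy layer to filter by (e.g. the L2/L3/L4 levels)."},
            "sign": {"type": "string",
                     "description": "Only return files carrying this sign."},
            "subfolder": {"type": "string",
                          "description": "Vault-relative folder prefix to restrict the listing to."},
            "no_metadata": {"type": "boolean", "default": False,
                            "description": "If true, omit YAML frontmatter metadata from results."},
        },
    }

    documented = [p for p in list_files_schema["properties"].values() if p.get("description")]
    assert len(documented) == 4  # 100% parameter description coverage
    ```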

  • index_all

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries full responsibility for behavioral disclosure but provides none. It does not state whether indexing is destructive (overwrites existing indices), idempotent, how long it takes for large databases, or what the return value indicates. The safety profile and side effects are completely opaque.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is extremely concise at five words, and while it avoids filler, it is undersized for the complexity of the tool. The structure is front-loaded (verb-first), but the brevity results in insufficient information transfer for an agent to use the tool correctly.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of annotations, 0% schema coverage, and absence of an output schema, the description needs to explain parameters, return values, and behavioral characteristics. It explains only the basic action, leaving critical gaps in how to interpret the boolean parameter and what the operation returns or affects.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% for the 'with_embeddings' boolean parameter, and the description completely fails to compensate. There is no mention of this parameter despite it being the only configuration option for the tool. The description must explain what embeddings are generated and their impact on the indexing behavior, but it does not.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the basic action (index) and scope (all files in the database), but 'index' is ambiguous: it could mean creating a search index, updating database indexes, or generating embeddings. Given the 'with_embeddings' parameter and the 'embed' sibling tool, the description fails to clarify whether this creates a search index or drives the embedding process.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus 'list_files' (which likely just retrieves file metadata) or 'embed' (which may handle vector generation). No mention of when to set 'with_embeddings' to true versus false, or whether this operation should be run routinely or as a one-time setup.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
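    MCP's standard tool annotations (readOnlyHint, destructiveHint, idempotentHint) exist precisely to disclose this safety profile. A sketch, where the chosen values and the behavioral claims in the description are assumptions about intent:

    ```python
    # Hedged sketch: description text and annotation values are assumed, not verified.
    index_all = {
        "name": "index_all",
        "description": (
            "Rebuild the search index over every file in the database. "
            "Set with_embeddings=true to also regenerate embedding vectors (slower, "
            "but required before suggest_parents or the recalculation tools). "
            "Existing index entries are replaced; source files are untouched."  # assumed
        ),
        # Standard MCP tool annotations.
        "annotations": {
            "readOnlyHint": False,     # writes the index
            "destructiveHint": False,  # does not delete user content
            "idempotentHint": True,    # safe to re-run
        },
    }
    assert index_all["annotations"]["readOnlyHint"] is False
    ```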

  • suggest_parents

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full disclosure burden. While it mentions 'embeddings' indicating the algorithm type, it fails to state whether this is read-only, what the return format is, or potential side effects. 'Suggest' implies non-destructive behavior but this should be explicit.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is front-loaded with the action, but extreme brevity results in underspecification. Given the 0% schema coverage and lack of annotations, this length is insufficient to convey necessary context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With 2 undocumented parameters, no annotations, and no output schema, the description provides inadequate context. It does not explain the relationship to the embedding system or the nature of the suggestions returned.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 1/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0% and the description fails to compensate. It does not explain what 'path' refers to (file path? entity identifier?) or what 'top_n' controls (number of suggestions? ranking threshold?). Both parameters are completely undocumented.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description states the basic action ('Suggest parents') and method ('based on embeddings'), but fails to distinguish from sibling tool 'get_parents' or clarify what 'parents' refers to in this domain (file system, knowledge graph, etc.). It meets minimum viability but lacks specificity for sibling differentiation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus 'get_parents' or other sibling tools. No prerequisites mentioned (e.g., whether embeddings must be pre-calculated via 'embed' or 'index_all').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
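    A suggest_parents definition covering the sibling distinction, the prerequisite, and both parameters might read as follows; the prerequisite and the top_n default are guesses, flagged in comments:

    ```python
    # Hedged sketch; prerequisite and default value are assumptions.
    suggest_parents = {
        "name": "suggest_parents",
        "description": (
            "Suggest candidate parent notes for a file, ranked by embedding "
            "similarity. Returns suggestions only and modifies nothing; use "
            "get_parents to read the parents already linked. "
            "Requires embeddings: run index_all with with_embeddings=true first."  # assumed
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "path": {"type": "string",
                         "description": "Vault-relative path of the file to analyze."},
                "top_n": {"type": "integer", "default": 5,  # default is a guess
                          "description": "Maximum number of ranked suggestions to return."},
            },
            "required": ["path"],
        },
    }
    assert "top_n" in suggest_parents["inputSchema"]["properties"]
    ```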

  • embed

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, yet the description fails to disclose critical behavioral traits: the embedding model used, output format/structure (vector array dimensions), rate limits, or whether the operation is deterministic. The description carries the full disclosure burden and provides minimal information.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The four-word description contains no redundancy, but conciseness here manifests as under-specification rather than efficient precision. Given the complete absence of schema descriptions and annotations, this level of brevity is inappropriate.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With no output schema, no annotations, and zero parameter description coverage, the tool description bears full responsibility for explaining return values and behavior. It fails to do so, providing insufficient context for safe and effective invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 0% schema description coverage, the description must compensate for the undocumented 'text' parameter. While it implies the parameter contains input text, it fails to specify constraints, expected format, encoding, or maximum length, leaving critical semantic gaps.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('Get') and resource ('embedding') but remains vague regarding the embedding type (vector, semantic, etc.) and does not explicitly differentiate from siblings, though the operation is implicitly distinct from the file and calculation tools listed.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided regarding when to use this tool versus alternatives, prerequisites (e.g., text length limits), or expected use cases. The description states what it does but not when to invoke it.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
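    A fuller definition for this tool, also applying the rename suggested under Naming Consistency. The determinism claim and return format are assumptions, and the model name and vector dimension are deliberately left unspecified rather than invented:

    ```python
    # Hedged sketch; 'get_embedding' is a suggested rename of 'embed'.
    get_embedding = {
        "name": "get_embedding",
        "description": (
            "Compute the embedding vector for the given text and return it as an "
            "array of floats. Read-only, no side effects; deterministic for "
            "identical input."  # determinism and return type are assumed
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "text": {"type": "string",
                         "description": "Plain text to embed."},
            },
            "required": ["text"],
        },
    }
    assert get_embedding["inputSchema"]["required"] == ["text"]
    ```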

  • get_parents

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description must carry the full burden of behavioral disclosure. It fails to specify the return format (array of paths? IDs?), read-only nature, or error handling (what happens if the path doesn't exist?).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The single sentence is front-loaded with the core action, but given the lack of annotations and schema descriptions, this level of brevity constitutes under-specification rather than efficient conciseness.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    With zero schema coverage, no annotations, no output schema, and a confusing sibling relationship ('suggest_parents'), the description should provide significantly more detail about the tool's behavior, return values, and relationship to the broader toolset.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% (the 'path' parameter has no description). The description mentions 'from file' which weakly implies the parameter is a file path, but does not specify format requirements (absolute vs relative), validation rules, or semantics. Inadequate compensation for the schema gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States the basic action (Get parent links) and target (file), but is vague about what 'parent links' means in this domain (hierarchical? filesystem? semantic?). Critically, it fails to distinguish from sibling tool 'suggest_parents', which likely proposes potential parents rather than retrieving existing ones.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this versus 'suggest_parents' or 'get_children'. No mention of prerequisites (e.g., whether the file must be indexed first) or error conditions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
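    The get_parents versus suggest_parents distinction could be made explicit in one sentence. In the sketch below, the return shape and error behavior are assumptions:

    ```python
    # Hedged sketch; return shape and error behavior are assumed.
    get_parents = {
        "name": "get_parents",
        "description": (
            "Get the parent notes currently linked from a file. Read-only; "
            "returns an empty list when no parents are linked. "  # assumed
            "Use suggest_parents to propose new parents from embeddings instead."
        ),
        "inputSchema": {
            "type": "object",
            "properties": {
                "path": {"type": "string",
                         "description": "Vault-relative path of the file to inspect."},
            },
            "required": ["path"],
        },
    }
    assert "suggest_parents" in get_parents["description"]
    ```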

  • suggest_metadata

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It only adds 'based on content' to indicate file reading/analysis behavior, but fails to disclose whether this is read-only, what format suggestions take, or performance characteristics like rate limits.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with no redundant words. While extremely brief, it avoids waste—though brevity borders on under-specification given the undocumented 'context' parameter.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a tool with a nested object parameter ('context') and no output schema. The description fails to explain the return value format, the nature of suggested metadata, or the structure expected for the 'context' object.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 0%, requiring the description to compensate. While 'path' is somewhat self-evident, the required 'context' parameter (a nested object) is completely undocumented with no indication of what properties it expects or its purpose.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb ('suggest') and resource ('metadata') and indicates the analysis basis ('based on content'), avoiding tautology. However, it lacks specificity about what metadata entails (tags, categories, properties?) and only minimally distinguishes from sibling 'suggest_parents' by naming a different target resource.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this versus siblings like 'index_all', 'read_file', or 'suggest_parents'. No mention of prerequisites such as whether the file must exist or be indexed first.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
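    Documenting the nested 'context' object could look like the sketch below. The property names inside 'context' are invented for illustration; only 'path' and 'context' themselves come from the review:

    ```python
    # Sketch of a documented schema; the inner shape of 'context' is hypothetical.
    suggest_metadata_schema = {
        "type": "object",
        "properties": {
            "path": {"type": "string",
                     "description": "Vault-relative path of the file to analyze."},
            "context": {
                "type": "object",
                "description": "Hints for the suggester; the shape below is assumed.",
                "properties": {
                    "existing_tags": {
                        "type": "array", "items": {"type": "string"},
                        "description": "Tags already applied, so they are not re-suggested.",
                    },
                },
            },
        },
        "required": ["path", "context"],
    }
    assert "context" in suggest_metadata_schema["required"]
    ```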

  • write_file

    Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. 'Write' is ambiguous: does it overwrite existing files? Does it create parent directories? How is metadata serialized into YAML? No behavioral traits disclosed beyond the upfront mention of frontmatter.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely brief (4 words) and front-loaded, but brevity undermines clarity for a 3-parameter mutation tool. Single sentence structure wastes no space, yet lacks necessary elaboration given the complexity (nested object, no output schema).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Inadequate for a file-writing tool with nested object parameters and no output schema. Missing critical safety information (destructive potential, path resolution rules) and operational details (encoding, atomicity). Description leaves agent guessing about failure modes and exact frontmatter construction.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 0% description coverage. Description mentions 'YAML frontmatter' implying metadata parameter purpose, but fails to explain path format expectations (absolute/relative?), content structure (should it exclude frontmatter?), or how the metadata object maps to YAML. Insufficient compensation for schema gaps.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb (Write) and resource (file), and mentions distinct feature (YAML frontmatter). Differentiates from sibling read_file by action direction. Loses point for not clarifying if this creates, overwrites, or appends.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use versus alternatives (like read_file), prerequisites (directory must exist?), or safety warnings about overwriting existing files. Agent must infer usage from parameter names alone.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses the return structure (metadata + content) which compensates partially for the missing output schema, but fails to mention error handling (e.g., file not found), side effects, or permission requirements.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence, front-loaded with the verb, zero redundant words. Every word earns its place by conveying domain (Obsidian), action (Read), and return structure (metadata + content).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple single-parameter tool, as it partially describes the return value (critical given no output schema). However, the complete lack of parameter documentation and absence of error behavior keeps it from being fully complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 0% (path parameter undocumented in schema). The description mentions reading a 'file' but does not explain the `path` parameter's expected format, whether it should be relative/absolute, or provide an example. Insufficient compensation for the schema gap.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states the action (Read) and resource (Obsidian file), and specifies the scope includes metadata (YAML frontmatter + content). However, it does not explicitly differentiate from sibling tools like `list_files` or `write_file`.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives (e.g., `list_files` for directory listings or `get_children` for relationship queries). No prerequisites or exclusion criteria are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. 'Recalculate' implies computation but does not disclose whether the call modifies persistent state (destructive), requires specific permissions, or is idempotent. Critical behavioral traits remain unspecified.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four-word description with zero redundancy. Every term contributes specific domain meaning ('etalon' indicates calibration standard, 'embeddings' indicates vector representations).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a zero-parameter tool, but given the presence of similar sibling tools ('recalc_core_mix') and lack of output schema, the description should explain the domain-specific 'etalon' concept and side effects to be complete.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters, establishing a baseline of 4. The description appropriately requires no parameter clarification.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Provides specific verb ('Recalculate') and resource ('core etalon embeddings'), identifying the operation clearly. However, it does not explicitly differentiate from siblings like 'recalc_core_mix' or 'embed', leaving ambiguity about which recalculation tool to use.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance provided on when to use this tool versus 'recalc_core_mix' or 'recalc_signs', nor any mention of prerequisites or conditions. The description states what the tool does but not when an agent should invoke it.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the non-destructive nature regarding YAML ('Does not modify YAML'), but omits other critical behavioral traits such as performance implications of processing 'all files', side effects (does it write to a database or return values?), and what sign_auto actually represents.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of exactly two high-value sentences with zero waste: the first establishes the operation and scope, while the second provides critical safety information. It is appropriately front-loaded and succinct.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a single-parameter tool with full schema coverage, the description adequately covers the basic operation. However, given the lack of annotations, absence of an output schema, and the potentially expensive nature of recalculating embeddings for all files, the description could benefit from additional context regarding return values or execution warnings.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage for the dry_run parameter, establishing a baseline score of 3. The description text does not explicitly discuss the parameter or add usage context beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (recalculate), target resource (sign_auto), scope (all files), and method (content embeddings). It implicitly distinguishes from sibling 'recalc_core_mix' by specifying 'sign_auto' rather than 'core_mix'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description notes a safety constraint ('Does not modify YAML'), it provides no explicit guidance on when to use this tool versus alternatives like 'suggest_metadata' or 'recalc_core_mix', nor does it mention prerequisites such as requiring embeddings to exist first.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full behavioral-disclosure burden. It successfully explains the directional flow (bottom-up) and hierarchical processing levels, but fails to disclose the safety profile (destructive vs. safe), idempotency, or what the operation returns given that no output schema exists.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely efficient two-sentence structure: first sentence defines the operation and algorithm, second defines prerequisites. Zero redundancy; every clause conveys essential workflow or behavioral information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a zero-parameter tool, explaining the internal calculation hierarchy (quants/modules/patterns) and prerequisites. However, significant gaps remain regarding return values, side effects, and success/failure indicators given the absence of both output schema and annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema contains zero parameters, establishing a baseline of 4. The description appropriately does not fabricate parameter semantics where none exist.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action (recalculate), target resource (core_mix), and method (bottom-up aggregation through L4→L3→L2). It effectively distinguishes from sibling 'recalc_signs' by implying different calculation targets and hierarchies.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states workflow prerequisites: 'Run after index_all with embeddings or after recalc_signs.' This provides clear sequencing context, though it lacks explicit 'when not to use' guidance or alternative tool comparisons.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you first need to add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean Tool Definition Quality Score (TDQS) + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
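The weighting above can be sketched in a few lines of Python. The dimension weights, the 60/40 mean-vs-minimum blend, the 70/30 overall split, and the tier cutoffs are taken from the text; the per-tool dimension scores in the usage line reuse the read_file scores reported above, while the coherence value is illustrative:

```python
def tool_score(dims):
    """Weighted 1-5 Tool Definition Quality Score (TDQS) for one tool.

    dims keys: purpose, usage, behavior, params, concise, complete.
    Weights follow the percentages stated in the rubric.
    """
    weights = {"purpose": 0.25, "usage": 0.20, "behavior": 0.20,
               "params": 0.15, "concise": 0.10, "complete": 0.10}
    return sum(weights[k] * dims[k] for k in weights)

def overall_score(tool_scores, coherence):
    """Blend per-tool TDQS values (60% mean + 40% minimum) with
    the Server Coherence score at a 70/30 split."""
    mean_tdqs = sum(tool_scores) / len(tool_scores)
    min_tdqs = min(tool_scores)
    definition_quality = 0.6 * mean_tdqs + 0.4 * min_tdqs
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    """Map the overall score to a letter tier; B and above passes."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"

# read_file's dimension scores from the report: purpose 4, usage 2,
# behavior 3, params 2, conciseness 5, completeness 3 -> TDQS 3.1
print(tool_score({"purpose": 4, "usage": 2, "behavior": 3,
                  "params": 2, "concise": 5, "complete": 3}))
```

Note how the 40% weight on the minimum TDQS makes a single 1.5-scored tool drag the whole server down, which is exactly the behavior the text describes.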


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/KVANTRA-dev/NOUZ-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.