Glama

Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 4/5

    Most tools have distinct purposes, but there is some overlap between recall_mementos and search_mementos, as both are for retrieval with nuanced differences (conceptual vs. precise). Tools like adjust_memento_confidence, boost_memento_confidence, and apply_memento_confidence_decay are clearly differentiated, focusing on manual adjustment, reinforcement, and automated decay respectively, minimizing confusion.

    Naming Consistency: 5/5

    All tool names follow a consistent snake_case pattern with a clear verb_noun structure (e.g., adjust_memento_confidence, create_memento_relationship, get_related_mementos). There are no deviations in naming conventions, making the set predictable and easy to parse.
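The convention noted above is mechanical enough to check automatically. As a quick illustration (not part of the server itself), the pattern can be validated with a regular expression; the tool names below are taken from this review:

```python
import re

# snake_case verb_noun: lowercase words joined by underscores,
# at least two segments (verb plus noun phrase).
VERB_NOUN = re.compile(r"^[a-z]+(?:_[a-z0-9]+)+$")

tool_names = [
    "adjust_memento_confidence",
    "create_memento_relationship",
    "get_related_mementos",
]

for name in tool_names:
    assert VERB_NOUN.fullmatch(name), f"inconsistent name: {name}"
```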

    Tool Count: 4/5

    With 17 tools, the count is slightly high but reasonable for a memory management system covering storage, retrieval, relationships, confidence management, and statistics. It aligns well with the server's purpose, though it might feel a bit heavy compared to simpler domains.

    Completeness: 5/5

    The tool set provides comprehensive coverage for a memory management domain, including CRUD operations (store, get, update, delete), search and recall variants, relationship management, confidence handling, statistics, and onboarding. There are no obvious gaps; it supports full lifecycle management from creation to decay and retrieval.

  • Average 4/5 across 17 of 17 tools scored. Lowest: 2.3/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.2.35

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 17 tools.
  • No known security issues or vulnerabilities reported.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden but fails to deliver. It does not specify partial vs. full replacement semantics, what happens if the memory_id doesn't exist, whether the operation is atomic, or what constitutes a successful update. The term 'update' implies mutation but lacks critical behavioral context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
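A hedged sketch of what the missing disclosure might look like in practice: one plausible reading is that update_memento performs a partial update (omitted fields are preserved) and fails on an unknown memory_id. The store shape, field names, and error behavior here are assumptions for illustration, not the server's actual implementation:

```python
# Assumed partial-update semantics: only explicitly supplied fields change,
# and an unknown memory_id raises rather than silently succeeding.
def update_memento(store, memory_id, **fields):
    if memory_id not in store:
        raise KeyError(f"no memento with id {memory_id!r}")  # undocumented error case
    memento = store[memory_id]
    for key, value in fields.items():
        if value is not None:
            memento[key] = value
    return memento

store = {"m1": {"title": "Old title", "content": "body", "importance": 0.5}}
update_memento(store, "m1", title="New title")
assert store["m1"]["content"] == "body"  # untouched field preserved
```

Whether the real tool behaves this way is exactly the ambiguity the description leaves open.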

    Conciseness: 2/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    While brief at four words, this represents under-specification rather than effective concision. Given the low schema coverage, lack of annotations, and mutation complexity, the description is inappropriately sized—every sentence fails to earn its place because critical information is absent.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 6-parameter mutation tool with no output schema and minimal input documentation, the description is severely incomplete. It lacks coverage of updatable fields, return values, error scenarios, and relationships to the 15+ sibling tools in the memento ecosystem.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 2/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is only 17% (just memory_id described). The description fails to compensate for this gap, mentioning none of the five undocumented parameters (title, content, summary, tags, importance). It does not explain that unspecified fields likely remain unchanged, nor the 0-1 scale for importance, nor tag formatting expectations.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
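For illustration, the gap could be closed with one-line descriptions for each parameter. The parameter names below come from the review above; the description text is invented to show the expected level of coverage:

```python
# Hypothetical documentation for update_memento's parameters;
# wording is illustrative, not the server's actual schema.
UPDATE_MEMENTO_PARAM_DOCS = {
    "memory_id": "ID of the memento to update (required)",
    "title": "New title; omit to keep the current one",
    "content": "New body text; omit to keep the current one",
    "summary": "New short summary; omit to keep the current one",
    "tags": "Replacement list of tag strings",
    "importance": "New importance score in the 0-1 range",
}

assert len(UPDATE_MEMENTO_PARAM_DOCS) == 6  # all six parameters covered
```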

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States the specific action (update) and resource (memento) and distinguishes from create/delete siblings via 'existing'. However, fails to differentiate from similar update-like siblings such as adjust_memento_confidence or boost_memento_confidence, leaving ambiguity about which modification tool to use.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The word 'existing' minimally implies this requires a pre-existing memento ID, suggesting when NOT to use it for new entries. However, it provides no explicit guidance on when to choose this over store_memento, adjust_memento_confidence, or other mutation tools, and mentions no prerequisites or error conditions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full behavioral disclosure burden, yet it reveals nothing about performance characteristics (is this expensive?), return format, caching behavior, or whether it aggregates deleted as well as active mementos.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with no redundancy, though arguably too brief given the lack of structured metadata (annotations/output schema) and sibling complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Minimal but technically sufficient for a zero-parameter tool; however, with no output schema and vague 'statistics' claim, the description fails to set expectations for what data structure or metrics will be returned.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has zero parameters, which per scoring rules establishes a baseline of 4. The description correctly implies no filtering is needed, matching the empty schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 3/5

    Does the description clearly state what the tool does and how it differs from similar tools?

States it retrieves 'statistics about the memento database' but fails to specify which statistics (count, confidence distribution, storage size, temporal metrics?), leaving it ambiguous next to sibling tools like get_recent_memento_activity or get_low_confidence_mementos, which imply specific result sets.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus other retrieval tools like get_recent_memento_activity or search_mementos, nor does it say whether the tool is intended for diagnostics, monitoring, or analysis.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden of behavioral disclosure, but offers none. Does not confirm read-only safety despite 'search' implying it, describe return format/pagination, explain result sorting, or mention any limits beyond the parameter constraint.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence efficiently front-loaded with action and resource. Lists key filter fields concisely. No wasted words, though brevity may be excessive given the lack of annotations and output schema—slightly too terse for the complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Incomplete for a 7-parameter search tool with no annotations and no output schema. Description omits what distinguishes a 'relationship' from a 'memento', what fields are returned, result ordering, and how results relate to the 'temporal' filter. Lacks critical context for confident invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all parameter semantics (e.g., 'Filter by scope', 'Matches any'). Description lists the fields parenthetically but adds no additional meaning, examples, or syntax guidance beyond the schema. Baseline 3 is appropriate.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb (search) and resource (memento relationships) with specific enumeration of filterable fields (scope, conditions, evidence, components). Distinguishes from sibling 'search_mementos' by targeting relationships rather than mementos themselves, though could explicitly clarify the distinction from 'contextual_memento_search'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus siblings like 'search_mementos', 'contextual_memento_search', or 'get_related_mementos'. Does not mention that all parameters are optional or suggest combinations for common queries.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Partially succeeds by disclosing that relationships are destroyed alongside the memento (cascade behavior). However, fails to mention irreversibility, authorization requirements, or impact on related mementos when their relationships are severed.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise at 7 words with front-loaded verb. No redundant text. However, brevity borders on under-specification for a destructive tool that warrants safety warnings.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    As a destructive mutation tool with cascade effects, no output schema, and zero annotations, the description is insufficient. It mentions relationship deletion but omits critical safety context (permanent data loss, orphaned relationship handling) required for responsible invocation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (memory_id fully documented in schema). Description adds no parameter-specific guidance, but per rubric guidelines, baseline is 3 when schema coverage is high.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific verb 'Delete' and resource 'memento' plus scope 'all its relationships', effectively distinguishing from sibling read tools (get_memento, search_mementos) and update tools. However, lacks explicit guidance distinguishing when to use this versus update_memento.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives, nor warnings about irreversibility. For a destructive cascade operation, the absence of prerequisites or safety considerations is a significant gap.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses return structure composition (counts by type, up to 20 recent memories, unresolved problems) which is valuable. However, fails to declare read-only nature or safety characteristics despite many mutating siblings (delete, update, store).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structure with clear sections: purpose statement, Returns description, and EXAMPLES. Front-loaded with zero waste. Each sentence earns its place by conveying distinct information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates well for missing output schema by explicitly documenting return components and the 20-item limit. With only 2 simple parameters fully documented in schema, complete coverage achieved. Minor gap: could declare read-only status given lack of annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% providing complete param documentation. Examples section adds valuable usage context clarifying that days looks back in time and project filters by path, exceeding baseline expectations for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action (Get summary), resource (memento activity), and scope (recent, session context). Mentions return of 'unresolved problems' which distinguishes from sibling get_memento_statistics. However, 'session context' is vague and doesn't explicitly contrast with search_mementos or contextual_memento_search.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides concrete examples showing usage patterns (days=7, project filter), which implicitly guide usage. However, lacks explicit 'when to use this vs alternatives' guidance. Doesn't clarify when to prefer this over search_mementos or get_memento_statistics.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses mechanics thoroughly: base boost (+0.10), cap (1.0), validation bonus (+0.10-0.20), and maximum bound. Missing explicit statement on idempotency or error behavior when memory_id is invalid, but covers primary behavioral traits well.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
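The disclosed mechanics are concrete enough to express directly. A minimal sketch, assuming the numbers stated in the description (base boost +0.10, validation bonus +0.10 to +0.20, hard cap at 1.0); the function name and signature are invented for illustration:

```python
# Sketch of the disclosed boost arithmetic; not the server's actual code.
def boosted_confidence(confidence, boost_amount=0.10, validation_bonus=0.0):
    # A nonzero validation bonus must fall in the disclosed 0.10-0.20 range.
    if validation_bonus and not 0.10 <= validation_bonus <= 0.20:
        raise ValueError("validation bonus outside the disclosed range")
    # Confidence never exceeds the 1.0 cap.
    return min(1.0, confidence + boost_amount + validation_bonus)

# The cap matters near the top of the scale: 0.95 + 0.10 + 0.20 clamps to 1.0.
assert boosted_confidence(0.95, validation_bonus=0.20) == 1.0
```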

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-organized with clear headers ('Use for', 'Usage patterns', 'Boost mechanics'). Front-loaded with the core purpose. Length is appropriate for the complexity, though the mechanics section slightly duplicates schema details (default values) while adding context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive coverage of usage scenarios and mathematical mechanics given no output schema exists. Explains the mutual exclusivity logic between memory_id and relationship_id implicitly through usage patterns. Does not clarify return values or error states, but sufficient for invocation decisions.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (all 4 params fully documented). Description adds value by specifying default boost amounts (+0.10) and validation ranges in the 'Boost mechanics' section, providing semantic context for the 'boost_amount' parameter beyond the schema's generic number description.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Boost confidence') and resource ('memory'), and implies narrow scope through 'when successfully used'. Does not explicitly differentiate from sibling 'adjust_memento_confidence' (which implies bidirectional modification), leaving slight ambiguity about why choose this specific tool.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit 'Use for' and 'Usage patterns' sections listing specific contexts (reinforcing valid knowledge, after applying solutions, team confirmation). Lacks explicit 'when-not-to-use' guidance or naming of alternatives like 'adjust_memento_confidence', preventing a 5.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It discloses important domain-specific behavior (relationship directionality, common semantic types), but omits operational details like idempotency, whether mementos must exist prior to linking, or what constitutes success/failure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with clear information hierarchy: purpose statement first, followed by relationship taxonomy, concrete examples, and optional parameters. The examples are verbose but earn their place by demonstrating directional semantics that would otherwise be ambiguous.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a creation tool with no output schema and no annotations, the description adequately covers the semantic domain model but lacks operational completeness. It should ideally disclose whether the operation validates memento existence, if it's idempotent, or what return value indicates success.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage (baseline 3), the description adds substantial value by enumerating specific relationship type taxonomies (SOLVES, CAUSES, etc.) and providing concrete usage examples that clarify the directional semantics of from_memory_id vs to_memory_id.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description immediately states the specific action ('Link two mementos') and the resource type ('typed relationship'), clearly distinguishing this creation tool from siblings like store_memento (which creates the mementos themselves) or get_related_mementos (retrieval).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it lacks explicit 'do not use when' exclusions, the EXAMPLES section and enumerated relationship types (SOLVES, CAUSES, etc.) provide clear contextual guidance on when and how to use the tool, demonstrating semantic patterns like solution→problem directionality.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
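The directionality discussed for this tool is easy to get backwards, so a concrete call shape helps. A hedged example, assuming the parameter names cited in the review (from_memory_id, to_memory_id) and invented IDs; the relationship reads source-to-target, so the solution is the source of a SOLVES edge:

```python
# Hypothetical argument payload for create_memento_relationship.
# IDs are invented; SOLVES/CAUSES are types enumerated in the description.
relationship = {
    "from_memory_id": "mem-solution-42",  # the fix
    "to_memory_id": "mem-problem-17",     # the problem it solves
    "relationship_type": "SOLVES",
}

assert relationship["relationship_type"] in {"SOLVES", "CAUSES"}
```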

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses key behaviors: default threshold < 0.3, sorting order (lowest first), returns relationships (not just mementos), includes last access time. Missing only minor details like rate limits or pagination.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
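The disclosed filtering and sorting behavior can be sketched directly. A minimal illustration, assuming the defaults stated in the description (threshold 0.3, lowest-confidence first); the data shape and limit default are assumptions:

```python
# Sketch of the described retrieval behavior; not the server's actual code.
def low_confidence(mementos, threshold=0.3, limit=10):
    # Keep only mementos strictly below the threshold,
    # then sort lowest-confidence first and truncate to the limit.
    hits = [m for m in mementos if m["confidence"] < threshold]
    return sorted(hits, key=lambda m: m["confidence"])[:limit]

mementos = [
    {"id": "a", "confidence": 0.9},
    {"id": "b", "confidence": 0.1},
    {"id": "c", "confidence": 0.25},
]
assert [m["id"] for m in low_confidence(mementos)] == ["b", "c"]
```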

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with front-loaded purpose statement followed by organized bullet lists (Use for/Features/Returns). Every section adds value. Slightly verbose format compared to ultra-concise single-sentence descriptions, but appropriate given lack of output schema requiring behavioral description.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates effectively for missing output schema by detailing return structure (relationships with associated memories, confidence scores, access times). Two simple optional parameters with complete schema coverage. No annotations but description adequately covers read-only safety.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage (threshold and limit documented), establishing baseline 3. Description reinforces threshold semantics (filtering behavior, default < 0.3) but does not add syntax details, validation nuances, or usage guidance beyond schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb+resource: 'Find memories with low confidence scores.' Clearly distinguishes from sibling confidence tools (adjust, boost, apply decay) by focusing on retrieval/querying rather than mutation. Scope is precisely defined by the confidence threshold concept.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit 'Use for:' section with 4 concrete scenarios (obsolete knowledge identification, cleanup, QA, review needs). Offers clear context for invocation. Lacks explicit 'when not to use' or named alternatives (e.g., vs search_mementos), preventing a 5.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses that include_relationships shows 'connected memories' and notes default behavior (true). Missing error handling (what if ID invalid?) and return format details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three distinct, front-loaded segments (purpose, usage guidance, parameter tip) plus example. No filler. Each sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a 2-parameter retrieval tool without output schema. Covers the core retrieval action, relationship expansion behavior, and integration with sibling workflows. Minor gap on error scenarios.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds critical semantic detail that include_relationships defaults to true, plus a concrete usage example showing syntax.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Retrieve' + resource 'memento' + scope 'by ID' clearly distinguishes this from siblings like 'search_mementos' (multi-result) and 'store_memento' (create).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states workflow context: 'Use when you have a memory_id from search results or store_memento.' Links the tool into the usage chain. Lacks explicit 'when not to use' or named alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses default value for max_depth (1) which schema lacks. However, omits safety characteristics (read-only vs destructive), error behavior when memory_id not found, and performance implications of increasing depth.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structure: one-sentence purpose summary, parameter filter note, then EXAMPLES section. Every sentence earns its place. Front-loaded with clear intent before diving into parameter specifics.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 100% schema coverage and clear examples, description is nearly complete. Minor gap: no mention of return value structure (list of mementos) since no output schema exists, though this is somewhat inferable from the tool name and examples.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage (baseline 3). Description adds value by providing concrete enum-like examples for relationship_types (SOLVES, CAUSES) and their semantic meanings ('find solutions', 'find root causes'), plus noting the default depth value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear specific verb ('Find') + resource ('mementos') + mechanism ('via relationships'). Distinguishes from sibling tools like get_memento (single fetch), search_mementos (text search), and create_memento_relationship (mutating operation) by emphasizing graph traversal from a specific starting memory.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides concrete EXAMPLES section showing two distinct usage patterns (finding solutions via SOLVES, finding root causes via CAUSES with depth). However, lacks explicit when-not-to-use guidance or comparison to similar traversal tools like search_memento_relationships_by_context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so description must carry full burden. While it comprehensively details what content is returned (protocol sections, retrieval flows), it fails to explicitly disclose behavioral traits like read-only status, idempotency, or side effects that would normally be covered by annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 3/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is quite verbose (effectively ~30 lines), but for an onboarding tool this length is functionally justified. It uses clear structural headers (MEMENTO ONBOARDING PROTOCOL, OPTIMIZED RETRIEVAL) that aid readability despite the bulk.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the single-parameter simplicity and lack of output schema, the description adequately compensates by exhaustively detailing what the tool returns. However, it could explicitly mention the return format (string vs structured object).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 5/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage describing the 'topic' parameter, the description adds substantial semantic value by detailing exactly what content each enum value returns (e.g., 'protocol': Full onboarding protocol, 'retrieval_flow': Optimized retrieval guide).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description opens with specific verb ('Get') and resource ('comprehensive onboarding protocol for Memento'), clearly positioning this as a guidance/documentation tool that distinguishes itself from operational siblings like store_memento or search_mementos.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use directive ('Run memento_onboarding() at session start'), includes decision trees for sibling selection ('Known tags → search_mementos, Conceptual → recall_mementos'), and explicitly names alternative tools for specific tasks.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Excellent coverage: discloses mathematical decay formula, minimum confidence floor (0.1), tag-based protections (no decay for security/auth tags), relationship scope (updates relationships, not memories directly), and detailed return value structure (count, summary, breakdown).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
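The decay behavior disclosed above (time-based decay, a 0.1 confidence floor, and tag-based protections) can be illustrated with a small sketch. The exponential half-life model, the 90-day default, and the protected-tag set are assumptions for illustration only; the tool's actual formula is not reproduced in the description.

```python
MIN_CONFIDENCE = 0.1                   # floor disclosed in the description
PROTECTED_TAGS = {"security", "auth"}  # tags the description says never decay

def decayed_confidence(confidence: float, days_since_access: float,
                       tags: set[str], half_life_days: float = 90.0) -> float:
    """Return the confidence after applying a hypothetical time-based decay."""
    if tags & PROTECTED_TAGS:
        return confidence  # protected memories are exempt from decay
    # Assumed half-life model: confidence halves every `half_life_days`.
    factor = 0.5 ** (days_since_access / half_life_days)
    return max(MIN_CONFIDENCE, confidence * factor)
```

Under these assumptions, a memory at 0.8 confidence untouched for one half-life drops to 0.4, and no unprotected memory ever falls below 0.1.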

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Well-structured with clear section headers (Use for, Intelligent decay rules, Decay formula, Returns). Front-loaded with primary purpose. Slightly verbose but every section earns its place—the formula, protection rules, and return specifications are essential for safe invocation. No filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive coverage for a complex operation. Compensates for missing output schema by explicitly documenting return values (count, summary, breakdown). Covers complex behavioral tiers (critical/high/general/temporary), mathematical formula, and edge case handling (minimum confidence) despite simple single-parameter input.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage for the single memory_id parameter, including behavior when omitted (system-wide) versus provided (specific memory). Main description focuses on algorithmic behavior rather than repeating parameter details, which is appropriate given comprehensive schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: 'Apply automatic confidence decay based on last access time' clearly identifies the verb (apply), resource (confidence/memento relationships), and mechanism (time-based decay). Distinguishes from sibling tools like adjust_memento_confidence and boost_memento_confidence by emphasizing the automatic, time-driven nature versus manual adjustments.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit 'Use for:' section with three concrete scenarios (system maintenance, intelligent decay rules, monthly routines). However, lacks explicit 'when not to use' guidance or direct comparison to manual adjustment alternatives (adjust_memento_confidence, boost_memento_confidence) in the sibling list.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Discloses two-phase process ('Find related memories, Search only within that set') and guarantees 'No leakage outside context' without annotations. Explains 'semantic scoping without embeddings' as implementation detail. Minor gap: does not specify error behavior (e.g., invalid memory_id) or result ordering.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Perfectly structured with clear headers (WHEN TO USE, HOW TO USE, RETURNS). Every sentence adds value—technical mechanism, usage scenarios, parameter guidance, and return guarantees. No redundancy or fluff despite comprehensive coverage.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Excellent coverage given 3 parameters, 100% schema coverage, and no output schema. Describes return values conceptually ('Matches found only within related memories'). Minor deduction: lacks error handling description (e.g., orphaned memory_id) and pagination/rate limit details given relationship traversal complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage (baseline 3), description adds valuable semantic context: memory_id is the 'context root,' max_depth controls 'relationship traversal depth,' and parameters operate in a coordinated two-phase process. Explains interaction between parameters beyond isolated schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
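The two-phase process described above (find related memories, then search only within that set) can be sketched as a breadth-first traversal bounded by max_depth, followed by a filter restricted to the collected scope. The data shapes (an adjacency map and a content map) and the substring match are illustrative assumptions, not the server's actual storage model or matching logic.

```python
from collections import deque

def scoped_search(query: str, root_id: str, edges: dict[str, list[str]],
                  contents: dict[str, str], max_depth: int = 1) -> list[str]:
    # Phase 1: BFS over relationships up to max_depth from the context root.
    scope = {root_id}
    frontier = deque([(root_id, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for neighbour in edges.get(node, []):
            if neighbour not in scope:
                scope.add(neighbour)
                frontier.append((neighbour, depth + 1))
    # Phase 2: match the query only within the scoped set, so nothing
    # outside the context root's neighbourhood can leak into results.
    return [mid for mid in sorted(scope)
            if query.lower() in contents.get(mid, "").lower()]
```

Increasing max_depth widens phase 1's scope, which is why the description's note on depth matters: the query itself never changes, only the candidate set does.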

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specifies 'Search only within the context of a given memento' with clear verb (search) and resource (mementos). Distinguishes from sibling tool 'search_mementos' by emphasizing 'scoped search' and 'within context,' clarifying this narrows results to related memories rather than global search.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicit 'WHEN TO USE' section lists three specific scenarios: 'Searching within a specific problem context,' 'Finding solutions in related knowledge,' and 'Scoped discovery.' Also provides 'HOW TO USE' with clear parameter mapping, effectively guiding selection over alternatives like general search.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses critical fuzzy-matching behavior (handles plurals, tenses, case variations) and persistence model ('long-term knowledge that survives across sessions'). Minor gap: does not describe return format or empty-result behavior despite no output schema being present.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent structured format with clear semantic headers (BEST FOR, DO NOT USE FOR, EXAMPLES, FALLBACK). Every sentence provides actionable guidance; no filler content. Front-loaded with primary purpose statement followed by specific behavioral constraints and examples.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 5 parameters with 100% schema coverage and no output schema, description comprehensively covers usage patterns, sibling differentiations, and query optimization strategies. Minor deduction for not describing return structure given absence of output schema, though fallback guidance mitigates this.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds value through concrete usage examples (e.g., 'recall_mementos(query="timeout fix")') that illustrate effective query patterns and demonstrate project_path usage, enhancing understanding of parameter intent beyond schema definitions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description opens with specific verb-resource combination ('finding mementos using natural language queries') and explicitly distinguishes from sibling tool search_mementos by delineating exact terms/acronyms as 'LESS EFFECTIVE FOR' that alternative, while positioning itself for 'conceptual queries' and 'fuzzy matching'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides comprehensive when-to-use guidance via structured sections: 'BEST FOR' (conceptual queries), 'DO NOT USE FOR' (temporary session context), 'LESS EFFECTIVE FOR' (acronyms/proper nouns with explicit alternative 'use search_mementos'), and 'FALLBACK' recommendation. Explicitly scopes applicability vs alternatives.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries the full disclosure burden. Provides excellent behavioral context via the confidence scale (0.0-1.0 ranges with semantic labels) and notes support for 'Overriding automatic decay', signaling interference with background processes. Minor gap: doesn't explicitly state whether adjustments are permanent or reversible.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Perfectly structured with clear sections: purpose statement, 'Use for' bullets, 'Examples' bullets, and 'Confidence ranges' reference. Front-loaded with the core action. Every line provides actionable guidance or domain context.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive coverage for a confidence adjustment tool. Includes domain-specific knowledge (confidence scale interpretation), interaction patterns with automatic decay, and operational examples. No output schema exists, but none required given the action-oriented nature.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds significant value: confidence ranges explain numeric semantics, and concrete examples demonstrate parameter usage patterns (reason='Verified in production').

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb 'adjust' targeting 'confidence of a relationship'. Implicitly distinguishes from sibling 'apply_memento_confidence_decay' (manual vs automatic) and 'boost_memento_confidence' (general adjustment vs specific boosting). The domain context (memento relationships) is clear.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicit 'Use for:' section lists three specific scenarios including 'Overriding automatic decay' which clearly signals when to use this manual tool versus the automatic sibling. Concrete examples show valid/invalid correction cases.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, description carries full burden and succeeds by disclosing search semantics: tag normalization to lowercase, tolerance modes (strict/normal/fuzzy), and match logic (any/all). Minor gap: no mention of result ordering or pagination behavior beyond schema defaults.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Highly structured with clear sections (purpose, usage guidelines, parameters, note, examples, sibling recommendation). Front-loaded with priority instruction ('USE THIS TOOL FIRST'). No wasted words; examples are concise and illustrative.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For an 11-parameter search tool with no output schema, coverage is complete: parameter guidance, concrete examples demonstrating syntax, behavioral notes (case normalization), and clear sibling boundaries. No requirement to describe return values.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Despite 100% schema coverage (baseline 3), description adds strategic value: tags are 'most reliable for acronyms', search_tolerance and match_mode are contextualized with use-case logic that pure schema descriptions lack.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb+resource ('Advanced search... for precise retrieval of mementos') and immediately distinguishes from sibling recall_mementos via 'USE THIS TOOL FIRST (not recall)' and the 4-bullet use case list (acronyms, proper nouns, etc.).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Exceptional explicit guidance: 'USE THIS TOOL FIRST (not recall)' followed by enumerated when-to-use scenarios, and explicit alternative at the end: 'For conceptual/natural language queries, use recall_mementos instead.' This is a model for sibling differentiation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 5/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, yet description fully discloses behavioral traits: return value ('Returns memory_id'), hard limits (500 char title, 50KB content, 50 tags), uniqueness constraint on ID, and mutation scope (cross-session persistence). Exceeds baseline for zero-annotation tools.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Exceptionally structured with clear headers (USE FOR, LIMITS, TAGGING BEST PRACTICE, EXAMPLES). Information is front-loaded with required/optional distinction immediately after the one-line purpose. Every section serves a distinct purpose; length is justified by complexity and lack of annotations.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive for an 8-parameter tool with nested objects and no output schema. Covers input constraints, return values, type enumerations with usage notes, relationship to sibling tools, and operational best practices. Leaves no critical gaps despite high complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% (baseline 3), but description adds substantial value: validation limits not in schema (max 500 chars, 50KB), tagging best practices (acronym handling), and critical type clarifications ('decision' and 'pattern' are not valid standalone types). Elevates understanding beyond raw schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
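The hard limits disclosed here (500-character title, 50KB content, at most 50 tags) lend themselves to a client-side pre-flight check before calling store_memento. The parameter names mirror the tool description; the guard itself is a hypothetical convenience, not part of the server.

```python
# Limits taken from the store_memento description above.
MAX_TITLE_CHARS = 500
MAX_CONTENT_BYTES = 50 * 1024
MAX_TAGS = 50

def validate_memento(title: str, content: str, tags: list[str]) -> list[str]:
    """Return a list of limit violations; an empty list means the payload fits."""
    problems = []
    if len(title) > MAX_TITLE_CHARS:
        problems.append(f"title exceeds {MAX_TITLE_CHARS} characters")
    if len(content.encode("utf-8")) > MAX_CONTENT_BYTES:
        problems.append("content exceeds 50KB")
    if len(tags) > MAX_TAGS:
        problems.append(f"more than {MAX_TAGS} tags")
    return problems
```

Checking byte length (not character count) for content is an assumption; the description does not say how the 50KB limit is measured.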

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Opens with specific verb+resource ('Store a new memento') and explicitly distinguishes from siblings via 'USE FOR: Long-term knowledge...' vs 'DO NOT USE FOR: Temporary session state', clearly differentiating it from session-based tools and update/delete siblings.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit dual guidance: 'USE FOR' and 'DO NOT USE FOR' blocks clearly define scope boundaries. Also references sibling `create_memento_relationship` for linking memories, guiding the agent toward the correct workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.


How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
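The scoring formula above can be expressed directly: each tool's TDQS is a weighted mean of its six dimension scores, the server-level definition score blends mean and minimum TDQS (60/40), and the overall score mixes definition quality with coherence (70/30). The weights and tier cut-offs come from the text; the function names are illustrative.

```python
# Dimension weights from the Tool Definition Quality description above.
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted mean of one tool's six dimension scores (each 1-5)."""
    return sum(scores[d] * w for d, w in DIMENSION_WEIGHTS.items())

def overall_score(tool_scores: list[dict[str, float]], coherence: float) -> float:
    """70% definition quality (60% mean TDQS + 40% min TDQS) + 30% coherence."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to its letter tier."""
    for cutoff, grade in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return grade
    return "F"
```

Because 40% of the definition score comes from the minimum TDQS, a single weak tool description drags the whole server down, which is why the text warns about outliers.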


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/x-hannibal/mcp-memento'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.