
Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    The two tools have completely distinct purposes: one exclusively stores/augments memories after responses, while the other retrieves them at the start of turns. No functional overlap exists between the write and read operations.

    Naming Consistency: 3/5

    While both use snake_case, they follow different grammatical patterns: 'recall' is a simple action verb, while 'advanced_augmentation' is an adjective-noun phrase describing a feature. A consistent pair would use matching patterns like 'store_memory' and 'recall_memory' or 'augment' and 'recall'.

    Tool Count: 3/5

    Two tools provide the absolute minimum viable surface for a memory system (read/write), but this feels thin for the domain. Memory management typically requires additional operations like delete, update, or list, making this borderline for a complete memory server.

    Completeness: 3/5

    The server covers basic create (store) and read (recall) operations but lacks update, delete, or enumeration capabilities. Users cannot correct stored memories, remove outdated facts, or browse all stored context, creating notable gaps in the memory lifecycle.

  • Average 3.9/5 across 2 of 2 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.1.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 2 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • recall

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It discloses what types of memories are retrieved (context, preferences, facts) and implies relevance ranking, but omits the safety profile (read-only status), failure modes (no memories found), and return-format details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: first defines the action, second provides temporal usage guidance. Information density is optimal.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a single-parameter retrieval tool without an output schema. The description partially compensates by specifying what content is fetched (preferences, facts), though it could clarify the return structure or empty-result behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description mentions 'query' but adds minimal semantic detail beyond the schema's definition ('The user message or search query'). No clarification needed given comprehensive schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clear verb ('Retrieve') and resource ('memories') with scope ('relevant...for a given query'). However, it does not explicitly differentiate from sibling 'advanced_augmentation', though the functions appear distinct.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to invoke ('Call at the start of user turns') and explains the value proposition ('fetch prior context, preferences, and facts'). Lacks explicit 'when not to use' guidance or alternative comparisons.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • advanced_augmentation

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully conveys durability ('durable facts', 'across sessions') but omits critical behavioral details: whether calls are idempotent, whether storage is additive or overwriting, and any limits or error conditions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Extremely concise with two sentences containing zero waste. Front-loaded with the action ('Store durable facts') and immediately followed by timing guidance ('after drafting'). Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a two-parameter tool without an output schema. The description explains the cross-session persistence mechanism but, lacking annotations, should ideally disclose side effects, storage scope (per-user vs. global), and its relationship to the recall mechanism.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description implies the parameters are used to extract facts for storage but does not explicitly map 'user_message' or 'assistant_response' to the extraction process or explain why both are required.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool stores 'durable facts and preferences' using specific verbs (store, persist) and identifies the resource (user context). It effectively distinguishes from sibling 'recall' by emphasizing the write operation ('Store') versus the implied read operation of the sibling.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: 'Call after responding' and 'after drafting a response.' However, it lacks explicit reference to sibling 'recall' as the retrieval alternative, though this is implicitly clear from the contrasting action verbs.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
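
Taken together, the notes above suggest how revised definitions for the two tools might read. The sketch below is illustrative only: the server's actual schemas are not reproduced on this page, and details such as read-only status, additive per-user storage, and empty-result handling are assumptions added to show where the missing disclosures would go. Field names follow common MCP tool-definition conventions (description, inputSchema, annotations), and the variable names are hypothetical.

# Illustrative Python sketch of revised tool definitions addressing the gaps noted
# above. Behavioral details (read-only status, additive per-user storage, empty-result
# handling) are assumptions for illustration, not confirmed behavior of mcp-memori.

recall_tool = {
    "name": "recall",
    "description": (
        "Retrieve relevant memories (prior context, preferences, durable facts) "
        "for a given query. Call at the start of user turns; use "
        "advanced_augmentation afterwards to store new facts. Read-only: never "
        "modifies stored memories. Returns a ranked list of memory strings, or an "
        "empty list when nothing matches."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "The user message or search query.",
            }
        },
        "required": ["query"],
    },
    "annotations": {"readOnlyHint": True, "idempotentHint": True},
}

advanced_augmentation_tool = {
    "name": "advanced_augmentation",
    "description": (
        "Store durable facts and preferences from the latest exchange so they "
        "persist across sessions. Call after drafting a response. Storage is "
        "additive (existing memories are not overwritten) and scoped to the "
        "current user. Use recall to read memories back at the start of a turn."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "user_message": {
                "type": "string",
                "description": "The user's message, the primary source of new facts.",
            },
            "assistant_response": {
                "type": "string",
                "description": "The drafted reply, giving context for what was stated.",
            },
        },
        "required": ["user_message", "assistant_response"],
    },
    "annotations": {"readOnlyHint": False, "idempotentHint": False},
}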

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

mcp-memori MCP server (badge preview). Copy the snippet to your README.md.

Score Badge

mcp-memori MCP server (badge preview). Copy the snippet to your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions, yielding a per-tool Tool Definition Quality Score (TDQS): Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
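
As a rough illustration of the arithmetic described above, the Python sketch below applies the stated weights to this server's published per-dimension scores. The weights and tier cutoffs come from this page; rounding and any other aggregation details are assumptions, so the result may differ slightly from the figures displayed on the page.

# Sketch of the scoring arithmetic described above. Weights and tier cutoffs are
# taken from this page; rounding and other aggregation details are assumptions.
DIMENSION_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted 1-5 tool definition quality score from per-dimension scores."""
    return sum(DIMENSION_WEIGHTS[dim] * value for dim, value in scores.items())

def overall_score(per_tool_scores, coherence_scores):
    tdqs = [tool_tdqs(scores) for scores in per_tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    coherence = sum(coherence_scores) / len(coherence_scores)  # four dimensions, equal weight
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    if score >= 3.5: return "A"
    if score >= 3.0: return "B"
    if score >= 2.0: return "C"
    if score >= 1.0: return "D"
    return "F"

# This server's per-dimension scores, taken from the breakdowns above.
recall = {"purpose": 4, "usage_guidelines": 4, "behavior": 3,
          "parameters": 3, "conciseness": 5, "completeness": 4}
advanced_augmentation = {"purpose": 5, "usage_guidelines": 4, "behavior": 3,
                         "parameters": 3, "conciseness": 5, "completeness": 3}
coherence = [5, 3, 3, 3]  # disambiguation, naming, tool count, completeness

score = overall_score([recall, advanced_augmentation], coherence)
print(round(score, 2), tier(score))  # roughly 3.71, tier A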

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MemoriLabs/mcp-memori'
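
The same endpoint can be queried from code. A minimal Python sketch, assuming only that the endpoint returns JSON (the response schema is not documented on this page):

import json
import urllib.request

# Fetch this server's directory entry (same endpoint as the curl example above)
# and pretty-print the JSON response; no specific response fields are assumed.
url = "https://glama.ai/api/mcp/v1/servers/MemoriLabs/mcp-memori"
with urllib.request.urlopen(url) as response:
    data = json.load(response)

print(json.dumps(data, indent=2))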

If you have feedback or need assistance with the MCP directory API, please join our Discord server.