NeverOnce
Server Quality Checklist
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.2
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 7 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior (2/5): Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosure. It reveals nothing about the nature of the statistics, potential performance characteristics of gathering them, or whether this operation is expensive. The only behavioral hint is 'Get' implying a read operation, but this is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness (4/5): Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at four words in a single sentence. While not wasteful, it is arguably underspecified rather than appropriately minimal. However, the front-loading is correct and the sentence earns its place as a functional summary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness (3/5): Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (which handles return value documentation) and zero parameters, the description provides the minimum viable context for invocation. However, significant gaps remain regarding what specific statistics are monitored and the tool's relationship to the memory store lifecycle.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters (4/5): Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters exist in the input schema, which qualifies for the baseline score of 4 per the scoring rules. No additional semantic clarification is required or possible for non-existent parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Purpose (3/5): Does the description clearly state what the tool does and how it differs from similar tools?
The description states the basic action ('Get') and target ('memory store statistics'), but 'memory store' remains ambiguous (is this a cache, database, or specific service?) and does not specify what statistics are returned (count, size, usage?). It distinguishes poorly from siblings like 'recall' or 'check' which might also access the memory store.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Usage Guidelines (2/5): Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus siblings like 'recall' (likely for retrieving stored values) or 'check' (likely for validation). The description lacks any explicit when/when-not conditions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior (3/5)
No annotations are provided, so the description carries the full burden. It successfully discloses the 'Corrections surface first' ranking behavior, but omits other key behaviors such as pagination details, relevance scoring methodology, namespace scoping rules, or whether this is read-only (implied but not stated).
Conciseness (4/5)
Well-structured with purpose front-loaded in two sentences, followed by necessary Args documentation (required due to schema deficiencies). No wasted words, though the Args block format is informal.
Completeness (3/5)
Adequate for a 3-parameter search tool with an existing output schema. Covers basic search semantics, but gaps remain: namespace values/rationale and query syntax (e.g., supports wildcards?) are unspecified. Acceptable but not comprehensive.
Parameters (4/5)
With 0% schema description coverage, the description compensates adequately by documenting all three parameters in the Args block. 'What to search for' and 'Max results' are clear, though 'Filter by namespace' is somewhat circular and could explain what namespaces represent.
Purpose (4/5)
States the specific verb 'Search' and resource 'memories' clearly. 'Corrections surface first' adds distinctive behavioral detail that implicitly relates to the 'correct' sibling tool, though it doesn't explicitly contrast with all siblings like 'check' or 'store'.
Usage Guidelines (2/5)
Lacks explicit when-to-use guidance or alternatives. While 'Corrections surface first' hints at ranking behavior, there is no explicit guidance on when to choose this over siblings like 'check' (which likely verifies) or 'stats' (which likely summarizes).
- Behavior (2/5)
With no annotations provided, the description carries the full burden of disclosure. It states 'Delete' but fails to specify whether this is permanent, if confirmation is required, what the return value indicates (despite the output schema existing), or any side effects on related memories.
Conciseness (5/5)
Extremely efficient: a front-loaded purpose statement followed by an Args docstring block. No redundant text; every element serves a functional purpose for a single-parameter tool.
Completeness (3/5)
Adequate for a simple 1-parameter tool with an output schema (so return values need not be explained), but incomplete given zero annotations and the destructive nature of the operation; it lacks warnings or behavioral context that would make it fully self-contained.
Parameters (4/5)
Schema description coverage is 0% (the property lacks a description field), so the description compensates by documenting the parameter in the Args block: 'memory_id: The memory to delete.' While minimal, this successfully conveys the parameter's purpose beyond the schema's title 'Memory Id' and type 'integer'.
Purpose (5/5)
Clear specific verb (Delete) + resource (memory) + scope (by ID). Effectively distinguishes from sibling tools like 'recall' (retrieve), 'store' (create), and 'check' (verify) through its explicit destructive intent.
Usage Guidelines (2/5)
No explicit guidance provided on when to use versus alternatives (e.g., when to delete vs. update/correct), no warnings about irreversibility, and no prerequisites mentioned despite this being a destructive operation.
- Behavior (3/5)
With no annotations provided, the description carries the full disclosure burden. It successfully conveys parameter constraints (1-10 importance scale, comma-separated tags) but omits behavioral details like idempotency rules, duplicate handling, or persistence guarantees.
Conciseness (4/5)
The description is efficiently structured with a concise purpose statement followed by an Args block. Every line adds value beyond the schema. Minor deduction for the list format, which could be redundant if the schema were well documented, but is necessary given the 0% coverage.
Completeness (4/5)
Given 5 parameters with zero schema coverage, the description adequately documents all inputs through the Args section. The existence of an output schema reduces the need to describe return values, though the description could still benefit from mentioning side effects or error conditions.
Parameters (5/5)
Schema description coverage is 0% (only titles provided). The Args section comprehensively compensates by defining all 5 parameters with semantic meaning, format specifications (e.g., comma-separated tags with an example), and valid ranges (1-10 importance) that the schema lacks.
Purpose (4/5)
The description clearly states the core action with 'Store a memory' (specific verb + resource). While it implicitly contrasts with siblings like 'recall' and 'forget', it does not explicitly differentiate when to use this versus similar memory-related operations.
Usage Guidelines (2/5)
The description provides no guidance on when to invoke this tool versus alternatives such as 'correct' or 'check', nor does it mention prerequisites, initialization requirements, or post-conditions.
- Behavior (3/5)
No annotations are provided, so the description carries the full burden. It discloses that corrections are 'stored' (implying a persistent store and a read operation) and that results may 'change your approach' (affecting downstream logic). However, it omits details about the output format, what happens when no corrections exist, or whether this operation has side effects like logging the check.
Conciseness (4/5)
Efficiently organized with the core purpose front-loaded ('Pre-flight check'). The 'Args:' section, while slightly informal, effectively segments parameter documentation. No filler sentences; every line contributes either purpose definition or parameter explanation.
Completeness (3/5)
Given that the output schema exists, the description appropriately skips return value details. However, for a tool interacting with a correction/memory system (evidenced by siblings: store, forget, recall, correct), the description inadequately explains what constitutes a 'correction' or how namespace isolation works. Adequate for basic invocation but missing domain context that would help agents formulate effective planned_action strings.
Parameters (4/5)
With 0% schema description coverage, the description compensates by documenting both parameters inline: 'planned_action: Describe what you're about to do' and 'namespace: Filter by namespace'. While the 'namespace' documentation is somewhat circular, the description provides essential semantic meaning for both parameters that the JSON schema completely lacks.
Purpose (4/5)
The description clearly states this is a 'Pre-flight check' to 'see if any corrections apply before taking an action', providing a specific verb (check) and resource (corrections). It distinguishes from the sibling 'correct' (which likely applies corrections) by positioning itself as a read-only validation step. However, it assumes familiarity with what 'corrections' means in this system without defining the domain concept.
Usage Guidelines (4/5)
Explicitly states when to use: 'Call this before doing something to see if there's a stored correction that should change your approach.' This provides clear temporal context (pre-action) and purpose (to validate approach). Lacks explicit 'when not to use' guidance or direct reference to siblings like 'correct' or 'store', but the guidance is actionable and specific.
- Behavior (4/5)
With no annotations provided, the description carries the full burden of behavioral disclosure and succeeds well. It explicitly describes the side effects: 'Helpful memories get stronger. Unhelpful ones decay.' This explains the learning mechanism's impact, which is critical context for a feedback tool that the raw schema cannot convey.
Conciseness (5/5)
The description is optimally structured: purpose statement first, behavioral mechanism second (the learning loop), then parameter documentation. Every sentence earns its place: no redundant repetition of the tool name, no filler text. The progression from 'what' to 'why' to 'how' is logical and efficient.
Completeness (4/5)
Given the tool's simplicity (2 primitive parameters) and the existence of an output schema (noted in the context signals), the description is sufficiently complete. It explains the domain-specific learning behavior (strengthening/decay) that agents need to understand the tool's role in the broader system, without needing to document return values.
Parameters (4/5)
Schema description coverage is 0% (the properties lack descriptions), but the Args section compensates effectively. It adds crucial semantic context that memory_id originates 'from recall results' (establishing the workflow dependency) and clarifies that did_help indicates whether the memory 'was useful' (translating the boolean into domain meaning).
Purpose (5/5)
The description opens with a specific action ('Mark whether a surfaced memory actually helped') that clearly identifies the verb and resource. It distinguishes from siblings like 'recall' (retrieval) and 'store' (creation) by specifying this is a feedback operation on already-surfaced memories, and further differentiates from 'correct' or 'forget' by focusing on utility marking rather than content modification or deletion.
Usage Guidelines (3/5)
The parameter documentation provides clear contextual guidance by stating memory_id comes 'from recall results,' implying the temporal sequence (use after recall). However, it lacks explicit 'when not to use' guidance or differentiation from potentially similar siblings like 'correct' (which might also modify memory metadata).
- Behavior (4/5)
With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: maximum importance prioritization ('Always surfaces first') and a hierarchical relationship to other memories ('override normal memories'). Missing minor details like persistence guarantees or reversibility, but covers the critical behavioral profile.
Conciseness (5/5)
Every sentence earns its place: purpose statement, behavioral priority, usage condition, and differentiation from siblings. The structured Args section efficiently documents parameters without redundancy. Front-loaded and appropriately sized for the complexity.
Completeness (5/5)
Complete for a 4-parameter memory tool with an output schema. The description covers purpose, usage context, all parameters (compensating for the empty schema), and behavioral priority. No gaps requiring consultation of external documentation.
Parameters (5/5)
The Args section fully compensates for 0% schema description coverage by documenting all 4 parameters with clear semantics: 'content' defines correct behavior, 'context' specifies applicability, 'tags' notes the comma-separated format, and 'namespace' explains organizational purpose. Adds substantial value beyond the bare schema.
Purpose (5/5)
The description opens with the specific action 'Store a correction' and immediately distinguishes this from the sibling 'store' tool by noting that corrections 'override normal memories.' The additional behavioral details ('Always maximum importance. Always surfaces first') clarify the special status of this operation versus regular memory storage.
Usage Guidelines (4/5)
Provides explicit when-to-use guidance: 'Use this when the AI made a mistake and you want to prevent it from happening again.' It implies the alternative (normal memories) by stating corrections override them, though it stops short of explicitly naming the 'store' sibling or stating when NOT to use this tool.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) on a 1–5 scale, weighted across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
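As a sketch, a per-tool score under the dimension weights above can be computed like this. The weights come from the text; the dimension scores are those of the first tool reviewed in the Tool Scores section, and the code is illustrative rather than Glama's actual implementation:

```python
# Per-tool Tool Definition Quality Score (TDQS) as a weighted mean
# of the six 1-5 dimension scores, using the weights stated above.
WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tdqs(scores: dict[str, int]) -> float:
    """Weighted average of the six dimension scores (each 1-5)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Dimension scores of the first tool reviewed above.
example = {
    "purpose": 3, "usage_guidelines": 2, "behavior": 2,
    "parameters": 4, "conciseness": 4, "completeness": 3,
}
print(round(tdqs(example), 2))  # 2.85
```

A uniformly mediocre tool and a tool with one very weak dimension can land on the same TDQS; the 40% minimum term in the server-level blend is what penalizes the weak outlier.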
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
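Putting the stated formulas together (the 60/40 mean/minimum blend, the 70/30 tool-quality/coherence split, and the tier cutoffs), a minimal sketch with illustrative inputs:

```python
# Overall score and tier, per the formulas stated above:
# definition quality = 60% mean TDQS + 40% minimum TDQS,
# overall = 70% definition quality + 30% coherence.
def server_score(tool_scores: list[float], coherence: float) -> float:
    mean = sum(tool_scores) / len(tool_scores)
    definition_quality = 0.6 * mean + 0.4 * min(tool_scores)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to the stated letter tiers."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"

# Illustrative inputs: seven per-tool TDQS values and a coherence score
# (not the actual values computed for this server).
tools = [2.85, 3.30, 3.20, 3.50, 3.50, 4.25, 4.55]
overall = server_score(tools, coherence=4.0)
print(round(overall, 2), tier(overall))
```

Note how the weakest tool (2.85) drags the blended definition quality well below the mean of the seven scores, which is the intended effect of the minimum term.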
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/WeberG619/neveronce'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.