Glama

Server Details

MemoryOracle - 10 agent memory tools: vector store, recall, summarization, redaction.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: ToolOracle/memoryoracle
GitHub Stars: 0

Tool Descriptions: B

Average 3.3/5 across 10 of 10 tools scored. Lowest: 2.7/5.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct aspect of memory management (store, query, update, delete, export, link, stats, health, session capture), with no overlapping purposes.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern (e.g., store_memory, query_memory, list_memories), with 'forget' being the only minor deviation but still clear.

Tool Count: 5/5

10 tools is well-scoped for a memory management server, covering all necessary operations without unnecessary clutter.

Completeness: 5/5

Covers full lifecycle: create, read, update, delete, plus advanced features like cross-referencing, session summaries, and GDPR export, with no obvious gaps.

Available Tools

10 tools
cross_reference: B

Verify a memory against FeedOracle Trust Layer evidence, or link two memories together. Adds trust verification metadata.

Parameters (JSON Schema)
  symbol (optional): Token symbol for Trust Layer
  link_to (optional): Another memory_id to link
  memory_id (required): Memory to cross-reference
  namespace (optional)
  trust_claim (optional): Override claim text for Trust Layer check
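For illustration, a call against this schema might be shaped as below. The envelope follows the standard MCP JSON-RPC `tools/call` format; the argument values (`mem_123`, `mem_456`) are hypothetical, not taken from the server's documentation.

```python
import json

# Hypothetical MCP tools/call request for cross_reference.
# Only memory_id is required; supplying link_to exercises the
# linking use case, while symbol/trust_claim drive Trust Layer checks.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "cross_reference",
        "arguments": {
            "memory_id": "mem_123",  # required: memory to cross-reference
            "link_to": "mem_456",    # optional: link to another memory
        },
    },
}
print(json.dumps(payload))
```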
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for disclosing behavioral traits. It mentions adding trust verification metadata but does not specify side effects, permissions, or whether the operation is destructive. This is insufficient for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences with the primary purpose and a secondary effect. It is front-loaded and efficient, though it could be slightly more structured (e.g., separating the two use cases).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 5 parameters, no output schema, and no annotations, the description is incomplete. It does not clarify when to 'verify' vs 'link', what the trust claim override is, or what the returned metadata contains. The dual purpose adds ambiguity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80% (4 of 5 parameters have descriptions). The description does not add further detail about parameters beyond the general purpose, so it meets the baseline but does not enhance understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states two distinct purposes: verifying a memory against Trust Layer evidence or linking two memories. It uses specific verbs and resources, and it distinguishes itself from sibling tools that handle storage, query, or export; no sibling performs cross-referencing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for verification or linking but does not explicitly state when to use which operation or provide exclusions or alternatives. Given that sibling tools do not overlap, this is minimally adequate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

export_memories: B

Export all memories as JSON. GDPR Art. 20 data portability. Optional category filter.

Parameters (JSON Schema)
  format (optional, default: json)
  category (optional)
  namespace (optional)
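A hypothetical call sketch for this schema follows; all three arguments are optional, so the simplest valid call sends none of them. The category value below is illustrative.

```python
import json

# Hypothetical MCP tools/call request for export_memories.
# Every argument is optional; "format" defaults to json per the schema.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "export_memories",
        "arguments": {"category": "decision"},  # illustrative filter
    },
}
print(json.dumps(payload))
```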
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden but only states that data is exported as JSON. It does not disclose whether the operation is read-only, requires authentication, is reversible, or any limitations (e.g., rate limits, data volume).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (one sentence plus a GDPR reference) and front-loads the main purpose. It lacks structured formatting (e.g., bullet points), but that is acceptable for a simple export tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple tool (3 optional params, no output schema), the description adequately covers the basic behavior and GDPR context, but omits details on the format and namespace parameters, and does not describe the JSON output structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, and only the 'category' parameter is briefly mentioned as 'optional category filter'. The 'format' (with default json) and 'namespace' parameters are not explained, leaving the agent unclear about their roles.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool exports all memories as JSON, with a specific verb and resource, and distinguishes it from siblings like list_memories or store_memory by emphasizing GDPR data portability and an optional category filter.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for GDPR compliance (Art. 20) and notes the optional category filter, but does not explicitly state when not to use or compare to alternative tools like list_memories for general retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: A

Delete memories permanently. GDPR-compliant. Can forget by ID, category, age, or everything.

Parameters (JSON Schema)
  category (optional): Forget all memories in this category
  memory_id (optional): Specific memory to forget
  namespace (optional)
  forget_all (optional): Nuclear option: forget everything in namespace
  older_than_days (optional): Forget memories older than N days
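Because every parameter is optional and `forget_all` is destructive, a cautious client might validate arguments before sending. The guard below is an assumption about sensible client-side behavior, not documented server logic; the constraints it enforces are illustrative.

```python
def validate_forget_args(args: dict) -> dict:
    """Reject ambiguous or accidentally destructive forget calls.

    Hypothetical client-side guard: the server does not document
    these constraints, so the rules here are assumptions.
    """
    selectors = [k for k in ("memory_id", "category",
                             "older_than_days", "forget_all")
                 if args.get(k)]
    if not selectors:
        raise ValueError("forget requires at least one selector")
    if args.get("forget_all") and len(selectors) > 1:
        raise ValueError("forget_all cannot be combined with other selectors")
    return args

# Passes: a single, unambiguous selector.
safe = validate_forget_args({"memory_id": "mem_123"})
```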
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden. It discloses permanence and GDPR compliance, but it fails to mention important behavioral traits such as the absence of required parameters, constraints on parameter combinations, irreversibility details beyond the word 'permanently', and the return value. This is insufficient for a potentially destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no wasted words. Efficiently conveys core functionality and filter methods.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the tool's complexity (5 optional parameters, destructive action, no output schema), the description is too brief. It omits guidance on parameter interactions, return value, and precautionary warnings. A more comprehensive description is needed for safe agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80% (4 of 5 parameters described). The description paraphrases the filter options but adds no significant meaning beyond what the schema already provides. The namespace parameter is undescribed in both the schema and the tool description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Delete memories permanently' with a specific verb and resource. It distinguishes from siblings by being the only destructive tool, and lists the available filtering methods (ID, category, age, everything).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use cases via 'GDPR-compliant' and lists deletion options, but does not explicitly state when to use versus alternatives like 'update_fact' or provide exclusions. The context is clear but lacks direct guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

health_check: B

MemoryOracle health, capabilities, and storage stats.

Parameters (JSON Schema)
  No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must disclose behavior. It mentions 'health, capabilities, storage stats' but does not indicate whether it is read-only, requires auth, or has side effects. Critical behavioral traits are missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with 6 words, no wasted content. Front-loaded with the tool's purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a health check tool with no parameters and no output schema, the description is adequate but minimal. It could provide more context about what 'health' entails or if it includes connectivity info.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the schema is fully covered. The description adds value by naming the categories of output (health, capabilities, storage stats), which is helpful beyond the empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns health, capabilities, and storage stats. It is a specific verb+resource (check health) and distinct from sibling tools like memory_stats. However, it could be more precise about what 'capabilities' means.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not state prerequisites, when not to use, or provide context about typical usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_memories: B

Browse all memories with filters and sorting. Categories, importance, recency, or access frequency.

Parameters (JSON Schema)
  sort (optional, default: recent): recent, importance, accessed, created
  limit (optional)
  offset (optional)
  category (optional)
  namespace (optional)
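The limit/offset pair suggests conventional pagination, though the server does not document it. A client might build page arguments like this; the page size and the helper itself are hypothetical.

```python
# Hypothetical pagination helper for list_memories arguments.
# limit/offset semantics are assumed, not documented by the server.
def page_args(page: int, page_size: int = 20, sort: str = "recent") -> dict:
    """Build list_memories arguments for a zero-based page number."""
    return {"sort": sort, "limit": page_size, "offset": page * page_size}

first, second = page_args(0), page_args(1)
```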
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must fully disclose behavioral traits. It only states 'browse all memories' without indicating side effects, authorization needs, or whether it is read-only. The lack of context on what the tool does beyond listing is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, consisting of a single sentence that conveys the core purpose. It front-loads the action 'browse all memories' and lists filtering options efficiently, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description does not explain the return format or pagination behavior (limit/offset), which is important for a listing tool. Given the 5 parameters and no output schema, additional context on what the response contains would improve completeness. It is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description mentions categories, importance, recency, and access frequency, which map to the 'sort' and 'category' parameters but not 'limit', 'offset', or 'namespace'. With only 20% schema coverage, the description adds some value but does not fully compensate for undocumented parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses all memories with filters and sorting. It mentions specific attributes like categories, importance, recency, and access frequency, which aligns with the input schema. This distinguishes it from siblings like cross_reference or query_memory that have different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for browsing and filtering memories but does not explicitly state when to use this tool versus alternatives like query_memory or cross_reference. There are no usage exclusions or context provided, leaving the agent without guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

memory_stats: C

Usage dashboard: memory count, storage, categories, most accessed, recent memories.

Parameters (JSON Schema)
  namespace (optional)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It implies a read-only dashboard but does not disclose any behavioral traits like permissions, rate limits, or what happens if namespace is missing. Minimal transparency beyond the function name.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with 'Usage dashboard'. Efficiently lists included elements. However, it could be slightly more concise by removing the colon and dash.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one optional param, no output schema), the description is minimally adequate. It mentions included data types but does not describe return format, pagination, or default behavior. For a stats tool, output shape is important contextual information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has one optional parameter 'namespace' with 0% schema description coverage. Description does not mention this parameter at all, failing to add meaning or usage context. Agent gets no help understanding how namespace affects the stats.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it provides a usage dashboard with memory count, storage, categories, most accessed, and recent memories. It differentiates from sibling tools like list_memories (which likely lists all memories) and export_memories (export). Could be improved by specifying a verb like 'retrieve' or 'get'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. For example, it does not suggest using this for an overview vs list_memories for detailed listing. Lacks explicit when-not-to-use or prerequisite context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_memory: B

Search persistent memory using full-text search with BM25 ranking. Finds relevant stored facts, preferences, decisions, and observations. Recency-boosted by default.

Parameters (JSON Schema)
  tags (optional): Filter by tags
  limit (optional): Max results (default: 10)
  query (required): What to search for in memory
  category (optional): Filter by category
  namespace (optional)
  recency_boost (optional)
  min_importance (optional): Minimum importance (0-10)
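A small argument-building helper, sketched from the table above: only `query` is required, `limit` defaults to 10 per the schema, and the example query string is illustrative.

```python
# Hypothetical helper that builds query_memory arguments.
# Only "query" is required; limit defaults to 10 per the schema.
def build_query(query: str, **opts) -> dict:
    args = {"query": query, "limit": 10}
    # Merge in optional filters (tags, category, min_importance, ...),
    # dropping any that were left as None.
    args.update({k: v for k, v in opts.items() if v is not None})
    return args

args = build_query("user timezone preference",
                   category="preference", min_importance=3)
```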
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Description discloses BM25 ranking and recency boost by default, which are key behaviors. However, with no annotations, it fails to mention if modifications occur, auth requirements, or side effects. The description adds some value but leaves gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-load the action and key features. No unnecessary words, efficient coverage of purpose and default behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 7 parameters and no output schema, the description lacks guidance on parameter interactions (e.g., tags + category), pagination, or output format. It does not explain the role of min_importance or namespace, leaving the agent to infer from parameter names only.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 71% of parameters with descriptions. The description adds context for recency_boost (default behavior) but not for namespace. Overall, it provides minimal extra meaning beyond the schema, especially for the query parameter which repeats schema description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it searches persistent memory using full-text search with BM25 ranking, and specifies the content types (facts, preferences, decisions, observations). It also notes default recency boosting, distinguishing it from siblings like list_memories or export_memories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives. It implies usage for searching memory but does not mention when not to use it or suggest siblings like cross_reference or list_memories for different needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

store_memory: A

Store a fact, preference, decision, observation, or rule in persistent memory. Survives session restarts. Deduplicates automatically. Set importance (1-10) and optional TTL.

Parameters (JSON Schema)
  tags (optional): Tags for filtering
  source (optional): Where this memory came from (default: agent)
  content (required): The memory content to store
  category (optional, default: fact): fact, preference, decision, observation, rule, session, portfolio, action_item
  metadata (optional): Additional structured data
  linked_to (optional): Link to other memory IDs
  namespace (optional): User/agent namespace (default: 'default')
  confidence (optional): 0.0-1.0 confidence in this memory (default: 1.0)
  importance (optional): 1-10, higher = more important (default: 5)
  expires_hours (optional): Auto-delete after N hours (optional)
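A client-side sketch that mirrors the documented ranges (importance 1-10, confidence 0.0-1.0) before sending the call; the helper and its example content are hypothetical, and the range checks are assumptions based on the schema descriptions above.

```python
# Hypothetical validation of store_memory arguments against the
# documented ranges: importance 1-10 (default 5), confidence
# 0.0-1.0 (default 1.0), category defaults to "fact".
def store_args(content: str, importance: int = 5,
               confidence: float = 1.0, category: str = "fact") -> dict:
    if not 1 <= importance <= 10:
        raise ValueError("importance must be 1-10")
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be 0.0-1.0")
    return {"content": content, "importance": importance,
            "confidence": confidence, "category": category}

args = store_args("User prefers dark mode", importance=7)
```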
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses persistence ('Survives session restarts') and deduplication, but does not address side effects, permissions, or error behavior. With no annotations, more detail on the write operation's impact would be beneficial.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences, front-loading the core purpose. Every sentence adds value with no redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (10 parameters, no output schema, no annotations), the description covers essential aspects but omits return value, error handling, and usage context. It's adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 10 parameters have descriptions in the input schema, so the description's mention of importance and TTL adds little beyond what the schema provides. Baseline 3 is appropriate as schema covers the parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores various types of memory (fact, preference, etc.) and mentions key features like persistence and deduplication. It effectively distinguishes itself from sibling tools such as forget, query_memory, and list_memories.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies using this tool for storing new memories but lacks explicit guidance on when to use it versus alternatives. No mention of prerequisites or when not to use it, leaving the agent to infer context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

summarize_session: A

End-of-session capture. Store a summary with extracted facts, decisions, and action items. Each becomes a separate searchable memory.

Parameters (JSON Schema)
  summary (optional): Session summary text
  decisions (optional): Decisions made
  key_facts (optional): Key facts learned
  namespace (optional)
  action_items (optional): Open action items
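An illustrative argument set for this schema: the description says each extracted item becomes a separate searchable memory, so the count below is what a client might expect server-side. The list fields are assumed to accept arrays of strings; the schema does not state their types, and all values here are invented.

```python
# Hypothetical summarize_session arguments. Per the tool description,
# the summary plus each key fact, decision, and action item should
# each become a separate searchable memory on the server.
args = {
    "summary": "Reviewed Q3 portfolio; rebalanced two positions.",
    "key_facts": ["User holds BTC and ETH"],
    "decisions": ["Rebalance monthly"],
    "action_items": ["Check fee schedule next session"],
}
# 1 summary + 3 extracted items = 4 expected memories (assumption).
expected_memories = 1 + sum(
    len(args[k]) for k in ("key_facts", "decisions", "action_items"))
```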
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description should disclose behavioral traits like persistence, idempotency, or side effects. It only states that each item becomes a separate memory, but doesn't mention if it appends or replaces, error handling, or permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise, consisting of two short sentences. The key phrase 'End-of-session capture' is front-loaded, and every word adds value with no filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 optional parameters and no output schema, the description adequately explains the purpose and outcome (memories are stored). However, more context on how the memories are searched (by namespace?) and the absence of required parameters could be clarified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 80%, so the schema already explains most parameters. The description adds the context that each component becomes a separate memory, but does not elaborate on the 'namespace' parameter (missing in schema) or provide additional meaning beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'End-of-session capture' that stores a summary with facts, decisions, and action items, each becoming a separate searchable memory. This distinguishes it from siblings like store_memory which likely stores single entries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'End-of-session capture' implies it should be used at the end of a session, but there is no explicit guidance on when to use this tool versus alternatives like store_memory or cross_reference. No exclusions or when-not-to-use are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_fact: B

Update an existing memory with new information. Preserves version history (last 10 changes).

Parameters (JSON Schema)
  tags (optional)
  reason (optional): Why this was updated
  content (optional): New content (optional)
  metadata (optional)
  memory_id (required): ID of memory to update
  namespace (optional)
  confidence (optional)
  importance (optional)
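A hypothetical call sketch for this schema: `memory_id` is the only required field, and `reason` presumably feeds the documented version history (last 10 changes). The envelope follows the standard MCP `tools/call` format; all values are illustrative.

```python
import json

# Hypothetical MCP tools/call request for update_fact.
# memory_id is required; reason is assumed to be recorded in the
# version history the tool description mentions.
payload = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "update_fact",
        "arguments": {
            "memory_id": "mem_123",         # required
            "content": "User now prefers light mode",
            "reason": "preference changed",  # audit trail note
        },
    },
}
print(json.dumps(payload))
```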
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full burden. Only discloses version history preservation. Missing critical behaviors: idempotency, error handling when memory_id doesn't exist, side effects of parameters like confidence/importance, auth requirements, and rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no filler. Efficient but could be structured better with bullet points in a longer description.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and no annotations, the description is severely incomplete. Does not explain return value, parameter interactions, or constraints. Only one behavioral aspect (version history) is covered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 38%. Description adds no details about any parameter beyond what schema provides. Fails to compensate for low coverage, leaving most parameters (tags, metadata, namespace, confidence, importance) unexplained.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states action ('Update an existing memory') and resource ('memory'). It adds unique behavioral detail about version history, distinguishing it from sibling tools like store_memory (create) and forget (delete).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Does not explicitly specify when to use this tool versus alternatives. Implies usage for updating existing memories but lacks guidance on preconditions, when not to use, or comparisons to other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
