Glama
Ownership verified

Server Details

Self-hosted MCP memory server for personal use and teams

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

10 tools
delete_memory (A)
Destructive

Delete a memory by ID. Removes from vec index and FTS (via trigger).

Parameters (JSON Schema)
- id (required)

Output Schema
- id (required)
- success (required)
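An invocation of this tool can be sketched as a standard MCP JSON-RPC 2.0 `tools/call` request. The memory ID below is a hypothetical placeholder; real IDs come from store_memory or list_memories.

```python
import json

# Minimal sketch of the request an MCP client sends to invoke
# delete_memory over Streamable HTTP. "mem_123" is a hypothetical ID.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "delete_memory",
        "arguments": {"id": "mem_123"},
    },
}

payload = json.dumps(request)
```

The server is expected to answer with the output schema above (`id`, `success`); per the description, the FTS entry is cleaned up by a database trigger rather than a second call.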
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable implementation context beyond annotations by specifying that deletion removes entries from both the vector index and FTS (Full Text Search) via trigger. This explains the comprehensive scope of the deletion. No contradiction with destructiveHint=true.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. The first sentence front-loads the core action, while the second efficiently adds technical implementation details without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema, presence of output schema (covering return values), and annotations (covering destructive/idempotent hints), the description provides adequate completeness by explaining the deletion mechanism across indices.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description minimally compensates by referencing 'ID' which maps to the 'id' parameter. However, it does not elaborate on what the ID represents (memory identifier), constraints, or how to obtain it (e.g., via list_memories).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (Delete), resource (memory), and scope (by ID), clearly distinguishing it from sibling tools like update_memory, move_memory, and retrieve_memory.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description clearly identifies the tool's function, it lacks explicit guidance on when to use this versus alternatives (e.g., 'use update_memory to modify instead of delete' or 'use move_memory to change workspace without deleting').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_memory (A)
Read-only

Fetch a single memory by ID with full details (tags, metadata).

Parameters (JSON Schema)
- id (required)

Output Schema
- id (required)
- tags (required)
- content (required)
- metadata (optional)
- created_at (required)
- memory_type (required)
- workspace_id (optional)
Behavior: 3/5

Annotations already establish the read-only, non-destructive nature of the operation. The description adds value by disclosing that the return includes 'full details (tags, metadata)', indicating data richness beyond what annotations provide, though it omits other behavioral traits like caching or rate limits.

Conciseness: 5/5

The description is a single, efficient sentence that front-loads the action and resource. Every phrase earns its place: 'single' distinguishes from list operations, 'by ID' clarifies the lookup method, and '(tags, metadata)' specifies output richness without redundancy.

Completeness: 5/5

Given the tool's simplicity (single parameter), the presence of an output schema, and clear annotations, the description is complete. It adequately covers the retrieval semantics and return data characteristics without needing to document response structure.

Parameters: 4/5

With 0% schema description coverage, the description compensates by explicitly referencing 'by ID', which clarifies the purpose of the required 'id' parameter. While it doesn't elaborate on ID format or constraints, it provides essential semantic context missing from the schema.

Purpose: 4/5

The description provides a specific verb (Fetch) and resource (memory) with clear scope (single by ID). However, it fails to differentiate from the similar sibling tool 'retrieve_memory', which could confuse the agent when selecting between direct retrieval methods.

Usage Guidelines: 3/5

The phrase 'by ID' provides clear context that this tool requires a specific identifier, implying when to use it (direct lookup) versus list/search siblings. However, it lacks explicit when-not guidance or named alternatives like 'use search_by_tag if you don't have the ID'.

list_memories (B)
Read-only

Paginated list of memories with optional type/tag filters.

Parameters (JSON Schema)
- tag (optional)
- page (optional)
- page_size (optional)
- memory_type (optional)

Output Schema
- page (required)
- items (required)
- total (required)
- page_size (required)
- total_pages (required)
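Client-side paging over this output shape can be sketched as follows. `fake_list_memories` is a hypothetical in-memory stand-in for the real tool call; the page-size default and limits are not documented, so they are assumptions here.

```python
import math

# 23 fake memory IDs to page through.
MEMORIES = [f"mem_{i}" for i in range(23)]

def fake_list_memories(page=1, page_size=10):
    # Stand-in returning the documented output shape:
    # items, total, page, page_size, total_pages.
    start = (page - 1) * page_size
    return {
        "items": MEMORIES[start:start + page_size],
        "total": len(MEMORIES),
        "page": page,
        "page_size": page_size,
        "total_pages": math.ceil(len(MEMORIES) / page_size),
    }

def all_memories(page_size=10):
    # Walk pages until total_pages is reached.
    items, page = [], 1
    while True:
        result = fake_list_memories(page=page, page_size=page_size)
        items.extend(result["items"])
        if page >= result["total_pages"]:
            return items
        page += 1

collected = all_memories()
```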
Behavior: 3/5

The description adds valuable behavioral context beyond the annotations by specifying that the listing is paginated and supports optional filtering. However, it fails to disclose additional behavioral traits like maximum page size limits, default sort order, or whether the results include full memory content versus metadata only.

Conciseness: 5/5

The description is extremely concise at nine words, with no redundant or filler content. It efficiently front-loads the key behavioral characteristic (paginated) and functional purpose (list of memories) in a single phrase.

Completeness: 3/5

Given the presence of an output schema, the description appropriately avoids duplicating return value documentation. However, with four parameters and zero schema coverage, the description leaves significant gaps in parameter semantics, providing only partial context for the filtering capabilities without documenting pagination mechanics.

Parameters: 2/5

With 0% schema description coverage, the description fails to adequately compensate for the undocumented parameters. While it implicitly references the type and tag filters, it completely omits explanation of the pagination controls (page, page_size) despite mentioning pagination in the description text.

Purpose: 4/5

The description clearly identifies the resource (memories) and action (paginated list) with specific filtering capabilities (type/tag). However, it does not explicitly distinguish this from siblings like get_memory (single retrieval) or search_by_tag (dedicated search), relying on the agent to infer the difference from the tool name alone.

Usage Guidelines: 2/5

There is no guidance on when to use this tool versus alternatives such as get_memory for specific retrieval or search_by_tag for dedicated tag searches. The description also omits any prerequisites or conditions for optimal use.

list_workspaces (A)
Read-only

List all workspaces you are a member of (personal + shared).

Parameters (JSON Schema)
- No parameters

Output Schema
- result (required)
Behavior: 3/5

Annotations already establish this is a safe read-only operation. The description adds valuable scope context that 'all' includes both personal and shared workspaces, which clarifies authorization boundaries. However, it does not disclose additional behavioral traits like pagination, caching behavior, or default sorting.

Conciseness: 5/5

The single sentence is perfectly efficient: front-loaded with the action ('List'), followed by resource and scope, with parenthetical clarification adding precision without verbosity. No wasted words.

Completeness: 4/5

Given the tool's simplicity (zero parameters, read-only), the presence of an output schema, and clear annotations, the description provides sufficient context for invocation. It appropriately omits return value details (covered by schema) but could mention if the result is paginated or cached.

Parameters: 4/5

With zero parameters, the baseline score applies. The description correctly implies no filtering capabilities are available, which aligns with the empty input schema requiring no arguments.

Purpose: 5/5

The description provides a specific verb ('List') and resource ('workspaces'), and explicitly scopes the operation to workspaces 'you are a member of (personal + shared)'. This clearly distinguishes it from the sibling memory tools (delete_memory, store_memory, etc.) which operate on memories rather than workspaces.

Usage Guidelines: 3/5

The description clarifies the scope of returned data (personal + shared workspaces), but does not explicitly state when to use this tool versus the memory-related siblings, nor does it provide workflow guidance (e.g., 'use this to identify target workspaces before storing memories').

move_memory (A)

Move a memory to a different workspace.

id: memory ID to move. workspace: name of the target workspace (must be a member with write access).

Parameters (JSON Schema)
- id (required)
- workspace (required)

Output Schema
- id (required)
- created (required)
Behavior: 3/5

Annotations already declare idempotentHint=false and destructiveHint=false. The description adds valuable auth context (write access requirement) but omits behavioral details like whether the memory ID persists, if the operation is reversible, or error conditions when targeting the current workspace.

Conciseness: 5/5

The description is extremely compact with three efficient lines: one for purpose, two for parameters. Every sentence earns its place with zero redundancy or filler.

Completeness: 4/5

Given the simple 2-parameter schema, presence of an output schema (covering return values), and annotations (covering safety hints), the description is sufficiently complete. It successfully addresses the zero schema coverage by inline parameter documentation.

Parameters: 4/5

With 0% schema description coverage, the description fully compensates by documenting both parameters: 'id' is defined as the memory ID and 'workspace' as the target name with an access constraint. It could achieve 5 with format examples or validation details, but adequately covers the semantic gap.

Purpose: 5/5

The description opens with a clear, specific action ('Move a memory') and target location ('to a different workspace'), unambiguously distinguishing it from siblings like update_memory, delete_memory, or store_memory. It uses precise verbs and resources without tautology.

Usage Guidelines: 3/5

The description provides a prerequisite constraint ('must be a member with write access'), implying when the tool can be used. However, it lacks explicit guidance on when to choose this over alternatives (e.g., when to move vs. copy/delete) or what happens if the memory is already in the target workspace.

recall_memory (A)
Read-only

Search memories by time expression + semantics.

Examples: "last week", "yesterday", "about Python last month".

Returns compact snippets by default (snippet_length=200). To get the full content of a specific memory, call get_memory(id). Set snippet_length=None to return full content immediately. Pass workspace= to search only within a specific workspace.

Parameters (JSON Schema)
- query (required)
- n_results (optional)
- workspace (optional)
- memory_type (optional)
- snippet_length (optional)

Output Schema
- result (required)
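The snippet-versus-full-content trade-off from the description can be sketched as two argument dicts. The workspace name "team-notes" is a hypothetical example; omitting snippet_length leaves the documented 200-character default in effect.

```python
# Default behavior: compact ~200-char snippets per result.
snippet_search = {
    "query": "about Python last month",
    "n_results": 5,
    # snippet_length omitted -> defaults to 200
}

# Full content immediately, scoped to one workspace.
full_content_search = {
    "query": "last week",
    "workspace": "team-notes",  # hypothetical workspace name
    "snippet_length": None,     # None = return full content
}
```

Either dict would be sent as the `arguments` field of a `tools/call` request naming `recall_memory`.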
Behavior: 4/5

Annotations confirm read-only/non-destructive status; the description adds valuable behavioral context including the default snippet length (200), the compact vs full content trade-off, and workspace filtering behavior. Does not contradict annotations. Could add minor details about result ranking or time expression parsing limits.

Conciseness: 5/5

Exceptionally well-structured with zero waste: opens with purpose, provides concrete query examples, explains default return format, references sibling tool for full content, and closes with parameter-specific usage tips. Every sentence delivers actionable information.

Completeness: 4/5

Appropriately complete given the presence of an output schema (not shown but indicated). The description correctly focuses on input semantics and search behavior rather than return values. Minor gap in not explaining the memory_type filter categories which would help users construct valid queries.

Parameters: 4/5

With 0% schema description coverage, the description compensates well by explaining query syntax through concrete examples ('last week', 'about Python last month'), snippet_length behavior (default 200, None for full), and workspace filtering. Minor deduction for not documenting the memory_type enum values (fact, preference, instruction, feedback) or n_results default.

Purpose: 5/5

The description explicitly states the tool 'Search[es] memories by time expression + semantics', providing a specific verb, resource, and search methodology. It effectively distinguishes itself from siblings like get_memory (for full content retrieval) and search_by_tag through its emphasis on temporal + semantic querying.

Usage Guidelines: 5/5

Provides explicit guidance on when to use alternatives: 'To get the full content of a specific memory, call get_memory(id)'. Also clarifies when to use snippet_length=None vs default, and when to pass workspace for scoping. Clear differentiation from simple listing or ID-based retrieval tools.

retrieve_memory (A)
Read-only

Hybrid semantic + full-text search over stored memories.

Returns compact snippets by default (snippet_length=200). To get the full content of a specific memory, call get_memory(id). Set snippet_length=None to return full content immediately. Pass workspace= to search only within a specific workspace.

Parameters (JSON Schema)
- limit (optional)
- query (required)
- workspace (optional)
- memory_type (optional)
- snippet_length (optional)
- similarity_threshold (optional)

Output Schema
- result (required)
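The two-step workflow the description recommends (search for snippets, then fetch full content with get_memory) can be sketched with a stand-in client. `call_tool` and the per-hit result shape (`id` plus `snippet`) are assumptions for illustration; the server only documents an opaque `result` field.

```python
def call_tool(name, arguments):
    # Hypothetical stand-in for an MCP tools/call round trip,
    # returning canned responses shaped like the schemas above.
    if name == "retrieve_memory":
        return {"result": [{"id": "mem_7", "snippet": "Deploy steps: 1) build..."}]}
    if name == "get_memory":
        return {
            "id": arguments["id"],
            "content": "Deploy steps: 1) build 2) test 3) ship",
            "tags": ["ops"],
            "memory_type": "instruction",
            "created_at": "2024-01-01",
        }
    raise ValueError(f"unknown tool: {name}")

# Step 1: hybrid search returns compact snippets.
hits = call_tool("retrieve_memory", {"query": "deploy", "limit": 1})["result"]
# Step 2: follow up with get_memory(id) for the full record.
full = call_tool("get_memory", {"id": hits[0]["id"]})
```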
Behavior: 4/5

Annotations confirm read-only safety (readOnlyHint: true). Description adds crucial behavioral context: default snippet truncation (200 chars), workspace scoping capability, and full-content retrieval method. Does not explain similarity_threshold behavior or ranking logic.

Conciseness: 5/5

Front-loaded with core purpose. Each sentence delivers distinct value: search methodology, default behavior, sibling cross-reference, override instructions, and workspace filtering. No redundancy.

Completeness: 3/5

Adequate for basic search operations given presence of output schema, but gaps remain due to undocumented parameters (memory_type filter options, similarity_threshold tuning). Does not mention pagination or result ranking.

Parameters: 3/5

With 0% schema coverage, description compensates partially: explains snippet_length semantics (default 200/None for full) and workspace filtering. However, fails to document memory_type enum values (fact, preference, instruction, feedback), similarity_threshold meaning, or limit parameter purpose.

Purpose: 5/5

States specific action (hybrid semantic + full-text search) and resource (stored memories). Clearly distinguishes from sibling 'get_memory' by emphasizing search functionality vs. direct retrieval.

Usage Guidelines: 5/5

Explicitly states when NOT to use ('To get the full content of a specific memory, call get_memory(id)') and provides workflow guidance (use snippet_length=None for full content). Names specific alternative tool.

search_by_tag (A)
Read-only

Search memories by tags. AND: all tags present. OR: any tag present.

Parameters (JSON Schema)
- tags (required)
- operation (optional, default: AND)

Output Schema
- result (required)
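The AND/OR semantics described above can be sketched with an in-memory filter. This is only illustrative of the documented logic; actual tag matching details (such as case sensitivity) are not specified by the server.

```python
memories = [
    {"id": "m1", "tags": {"python", "testing"}},
    {"id": "m2", "tags": {"python"}},
    {"id": "m3", "tags": {"testing"}},
]

def search_by_tag(tags, operation="AND"):
    # AND: every requested tag must be present.
    # OR: any one requested tag suffices.
    wanted = set(tags)
    if operation == "AND":
        return [m["id"] for m in memories if wanted <= m["tags"]]
    return [m["id"] for m in memories if wanted & m["tags"]]

both = search_by_tag(["python", "testing"])          # AND is the default
either = search_by_tag(["python", "testing"], "OR")
```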
Behavior: 3/5

The annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context by explaining how the AND/OR logic filters results, but does not disclose additional traits like case sensitivity, pagination behavior, or empty result handling.

Conciseness: 5/5

The description is optimally concise with two sentences and zero waste. It is front-loaded with the core purpose ('Search memories by tags') followed immediately by the critical behavioral nuance (AND/OR logic).

Completeness: 3/5

Given the presence of an output schema and simple parameter structure (2 params, one enum), the description is adequate but minimal. It covers the primary search logic but omits details about tag format expectations, workspace scoping (implied by siblings), or error conditions that would be helpful given the 0% schema coverage.

Parameters: 4/5

With 0% schema description coverage, the description compensates by clearly explaining the semantics of the 'operation' enum (AND requires all tags, OR requires any tag). It implies the purpose of the 'tags' array through 'Search memories by tags,' though it does not detail format constraints like case sensitivity or max length.

Purpose: 4/5

The description clearly states the verb (Search), resource (memories), and mechanism (by tags). It is specific enough to distinguish from siblings like list_memories (general listing) and recall_memory (likely semantic search), though it does not explicitly contrast with them.

Usage Guidelines: 3/5

The description explains the boolean logic for the operation parameter (AND vs OR), which guides usage of that specific parameter. However, it lacks explicit guidance on when to use this tool versus alternatives like recall_memory or list_memories, or prerequisites for the tags parameter.

store_memory (A)
Idempotent

Save a new memory. Idempotent: returns existing if content already stored.

workspace: name of the workspace to store into (must be a member). Omit or pass None to store as a personal memory. force: skip near-duplicate check and store unconditionally.

Parameters (JSON Schema)
- tags (optional)
- force (optional)
- content (required)
- metadata (optional)
- workspace (optional)
- memory_type (required)

Output Schema
- id (required)
- created (required)
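The idempotency contract ("returns existing if content already stored") can be sketched with a stand-in store. `fake_store` is hypothetical; the assumption, consistent with the output schema, is that `created=False` signals a duplicate hit rather than a new record.

```python
_store = {}  # content -> id, standing in for the server's duplicate check

def fake_store(content, memory_type, workspace=None, force=False):
    # force=True skips the near-duplicate check and stores unconditionally,
    # mirroring the documented parameter.
    if content in _store and not force:
        return {"id": _store[content], "created": False}  # idempotent hit
    new_id = f"mem_{len(_store)}"
    _store[content] = new_id
    return {"id": new_id, "created": True}

first = fake_store("Prefers tabs over spaces", "preference")
second = fake_store("Prefers tabs over spaces", "preference")
```

A client can therefore retry a store safely and branch on `created` to detect whether anything new was written.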
Behavior: 4/5

Annotations confirm idempotency and non-destructive nature, but the description adds valuable behavioral context: the near-duplicate check logic (and how force bypasses it) and workspace membership constraints. No contradictions with annotations.

Conciseness: 4/5

Efficiently structured with the core purpose front-loaded ('Save a new memory'), followed by behavioral note and parameter-specific documentation. Every sentence provides actionable information without redundancy.

Completeness: 3/5

Adequate for basic invocation given the output schema handles return value documentation, but incomplete due to missing semantics for 4 of 6 parameters (including required ones). The idempotency and workspace behavior coverage prevents a lower score.

Parameters: 3/5

With 0% schema description coverage, the description partially compensates by documenting workspace (scope/permissions) and force (duplicate logic), which are the non-obvious parameters. However, it fails to document the two required parameters (content, memory_type) or optional tags/metadata, leaving significant gaps.

Purpose: 4/5

The description clearly states 'Save a new memory' with specific verb and resource. The idempotency note ('returns existing if content already stored') clarifies behavior, though it lacks explicit differentiation from the sibling update_memory tool (when to create vs modify).

Usage Guidelines: 3/5

Provides implicit guidance through idempotency explanation (safe to retry) and workspace membership requirements ('must be a member'). However, it lacks explicit when-to-use guidance comparing it to update_memory or other siblings, and omits prerequisites for the required parameters.

update_memory (A)

Update an existing memory by ID. Only provided fields are changed.

Parameters (JSON Schema)
- id (required)
- tags (optional)
- content (optional)
- metadata (optional)
- memory_type (required)

Output Schema
- id (required)
- created (required)
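The partial-update rule ("Only provided fields are changed") can be sketched as a PATCH-style merge over an in-memory record. The record and merge function are illustrative; note that memory_type must always be supplied because the schema marks it required, even when it is unchanged.

```python
record = {
    "id": "mem_9",
    "content": "old text",
    "tags": ["draft"],
    "memory_type": "fact",
    "metadata": {},
}

def apply_update(record, arguments):
    # Copy the record and overwrite only the fields present in arguments,
    # mirroring the documented partial-update behavior.
    updated = dict(record)
    for field in ("content", "tags", "metadata", "memory_type"):
        if field in arguments:
            updated[field] = arguments[field]
    return updated

result = apply_update(
    record,
    {"id": "mem_9", "memory_type": "fact", "content": "new text"},
)
```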
Behavior: 3/5

Adds valuable partial-update context ('Only provided fields are changed') not present in annotations. However, fails to explain why idempotentHint is false or clarify behavior regarding the required memory_type field (validation vs. mutable).

Conciseness: 5/5

Two sentences, zero waste. Critical behavioral detail ('Only provided fields are changed') is front-loaded effectively.

Completeness: 2/5

Inadequate for a 5-parameter mutation tool with 0% schema coverage. The unexplained required memory_type field (unusual for updates) and lack of sibling differentiation leave significant gaps despite the existing output schema.

Parameters: 2/5

With 0% schema coverage, description must compensate fully but only implicitly covers id and vaguely references 'fields' for tags/content/metadata. Completely omits explanation of memory_type despite it being a required parameter for updates.

Purpose: 5/5

States specific verb (Update) + resource (memory) + identifier (by ID). Clearly distinguishes from siblings like store_memory (create), delete_memory (delete), and get_memory (read).

Usage Guidelines: 3/5

Implies partial-update semantics via 'Only provided fields are changed,' indicating PATCH-like behavior. However, lacks explicit when-to-use guidance versus store_memory or prerequisites for the required memory_type parameter.
