
memory_store_tool

Store and index memories with semantic deduplication and automatic relationship inference for persistent AI assistant context.

Instructions

Store a new memory with semantic indexing, deduplication, and automatic relationship inference.

FAST PATH: If daemon is running, queues for async embedding (<10ms response). The daemon handles embedding and storage in the background.
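The fast path can be pictured as a simple enqueue-and-return: the tool hands the content to a background worker and responds immediately with a queue ID. A minimal sketch of that pattern, assuming illustrative names (`work_queue`, `store_fast_path`) that are not the tool's real internals:

```python
# Hypothetical sketch of the daemon fast path: enqueue the raw content and
# return at once; a background worker would do embedding + storage later.
import queue
import uuid

work_queue: "queue.Queue[dict]" = queue.Queue()

def store_fast_path(content: str) -> dict:
    # Respond with a queue ID immediately instead of waiting for embedding.
    queue_id = str(uuid.uuid4())
    work_queue.put({"queue_id": queue_id, "content": content})
    return {"success": True, "queued": True, "queue_id": queue_id}
```

A background thread (not shown) would drain `work_queue`, embed each item, and persist it.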

Auto-linking behavior:

  • ALWAYS works using embedding similarity (no LLM required)

  • Creates 'relates_to' edges for memories above similarity_threshold

  • If LLM available, upgrades edge types to supersedes/contradicts/caused_by

  • Set auto_link=False to disable automatic edge creation
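The embedding-only auto-link pass described above can be sketched as a cosine-similarity ranking over existing memories. This is an illustrative model, not the tool's actual implementation; the `Memory` class and helper names are assumptions:

```python
# Hypothetical sketch of embedding-similarity auto-linking: rank stored
# memories by cosine similarity to the new one, then create 'relates_to'
# edges for the top matches above the threshold.
import math
from dataclasses import dataclass, field

@dataclass
class Memory:
    id: str
    embedding: list
    edges: list = field(default_factory=list)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def auto_link(new, existing, similarity_threshold=0.6, max_auto_links=5):
    # Sort candidates by similarity, most similar first.
    scored = sorted(
        ((cosine(new.embedding, m.embedding), m) for m in existing),
        key=lambda t: t[0],
        reverse=True,
    )
    # Keep at most max_auto_links edges, all above the threshold.
    for score, m in scored[:max_auto_links]:
        if score >= similarity_threshold:
            new.edges.append(("relates_to", m.id))
    return new.edges
```

An LLM pass, when available, would then inspect each `relates_to` edge and relabel it as `supersedes`, `contradicts`, or `caused_by`.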

Args:

  • content: The memory content text

  • memory_type: Type of memory (preference, decision, pattern, session)

  • namespace: Scope of the memory (default from RECALL_DEFAULT_NAMESPACE config)

  • importance: Importance score from 0.0 to 1.0 (default from RECALL_DEFAULT_IMPORTANCE config)

  • metadata: Optional additional metadata as dict

  • auto_link: If True, automatically create edges to similar memories (default: True)

  • similarity_threshold: Minimum similarity for auto-linking, 0.0-1.0 (default: 0.6)

  • max_auto_links: Maximum auto-created edges per memory (default: 5)

  • use_llm_classification: If True, use LLM to refine edge types (default: True)
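An arguments payload built from the parameters above might look like the following; only `content` is required, and the specific values here are purely illustrative:

```python
# Illustrative call arguments for memory_store_tool; omitted keys fall
# back to the documented defaults (e.g. memory_type="session").
args = {
    "content": "User prefers dark mode in all editors",
    "memory_type": "preference",
    "importance": 0.8,                  # must be within 0.0-1.0
    "metadata": {"source": "settings-discussion"},
    "auto_link": True,
    "similarity_threshold": 0.6,
    "max_auto_links": 5,
}
```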

Returns: Result dictionary with:

  • success: Boolean indicating operation success

  • queued: True if queued via daemon (fast path)

  • queue_id: Queue ID if queued via daemon

  • id: Memory ID (if sync path used)

  • content_hash: Content hash for deduplication (sync path only)

  • auto_relationships: List of automatically inferred relationships (sync path only)

  • error: Error message (if failed)
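The two result shapes described above (daemon fast path vs. sync path) can be compared side by side. The field values below are made up for demonstration:

```python
# Illustrative result from the daemon fast path: the memory is queued,
# so only a queue ID is available, not a memory ID.
queued_result = {
    "success": True,
    "queued": True,
    "queue_id": "q-123",          # hypothetical ID
}

# Illustrative result from the sync path: the memory was embedded and
# stored inline, so the ID, hash, and inferred edges are returned.
sync_result = {
    "success": True,
    "queued": False,
    "id": "mem-456",              # hypothetical ID
    "content_hash": "c0ffee",     # hypothetical hash, used for dedup
    "auto_relationships": [{"type": "relates_to", "target": "mem-001"}],
}
```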

Input Schema

  • content (required): The memory content text

  • memory_type: Type of memory (preference, decision, pattern, session). Default: session

  • namespace: Scope of the memory. Default: from RECALL_DEFAULT_NAMESPACE config

  • importance: Importance score from 0.0 to 1.0. Default: from RECALL_DEFAULT_IMPORTANCE config

  • metadata: Optional additional metadata as dict

  • auto_link: If True, automatically create edges to similar memories. Default: True

  • similarity_threshold: Minimum similarity for auto-linking, 0.0-1.0. Default: 0.6

  • max_auto_links: Maximum auto-created edges per memory. Default: 5

  • use_llm_classification: If True, use LLM to refine edge types. Default: True

  • queue_id
