# memory_store_tool
Store and index memories with semantic deduplication and automatic relationship inference for persistent AI assistant context.
## Instructions
Store a new memory with semantic indexing, deduplication, and automatic relationship inference.
FAST PATH: If the daemon is running, the memory is queued for async embedding (<10 ms response); the daemon handles embedding and storage in the background.
Auto-linking behavior:
- Always works using embedding similarity (no LLM required).
- Creates `relates_to` edges to memories above `similarity_threshold`.
- If an LLM is available, upgrades edge types to `supersedes`, `contradicts`, or `caused_by`.
- Set `auto_link=False` to disable automatic edge creation.
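The embedding-only linking pass described above can be sketched as follows. This is a hypothetical illustration, not the tool's actual implementation; the `auto_link` helper and its vector inputs are assumptions, but the thresholding and cap mirror the `similarity_threshold` and `max_auto_links` parameters.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def auto_link(new_vec, existing, similarity_threshold=0.6, max_auto_links=5):
    """Return 'relates_to' edges to the most similar existing memories.

    `existing` maps memory IDs to their embedding vectors (assumed shape).
    """
    scored = [(mid, cosine(new_vec, vec)) for mid, vec in existing.items()]
    above = [(mid, s) for mid, s in scored if s >= similarity_threshold]
    above.sort(key=lambda t: t[1], reverse=True)
    return [{"target": mid, "type": "relates_to", "score": round(s, 3)}
            for mid, s in above[:max_auto_links]]
```

With an LLM available, a second pass could relabel each `relates_to` edge as `supersedes`, `contradicts`, or `caused_by`; without one, the edges stay as produced here.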
Args:
- `content`: The memory content text.
- `memory_type`: Type of memory (`preference`, `decision`, `pattern`, `session`).
- `namespace`: Scope of the memory (default from `RECALL_DEFAULT_NAMESPACE` config).
- `importance`: Importance score from 0.0 to 1.0 (default from `RECALL_DEFAULT_IMPORTANCE` config).
- `metadata`: Optional additional metadata as a dict.
- `auto_link`: If True, automatically create edges to similar memories (default: True).
- `similarity_threshold`: Minimum similarity for auto-linking, 0.0-1.0 (default: 0.6).
- `max_auto_links`: Maximum auto-created edges per memory (default: 5).
- `use_llm_classification`: If True, use the LLM to refine edge types (default: True).
Returns: Result dictionary with:
- `success`: Boolean indicating operation success.
- `queued`: True if queued via the daemon (fast path).
- `queue_id`: Queue ID if queued via the daemon.
- `id`: Memory ID (sync path only).
- `content_hash`: Content hash for deduplication (sync path only).
- `auto_relationships`: List of automatically inferred relationships (sync path only).
- `error`: Error message (if failed).
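Because the fast path and the sync path return different keys, a caller has to branch on `queued`. A minimal sketch of that handling, assuming the result dict shapes above (the `memory_ref` helper is a hypothetical name, not part of the tool):

```python
def memory_ref(result):
    """Return a tracking tuple: queue_id on the fast path, memory id on sync.

    Raises on failure, surfacing the tool's `error` field.
    """
    if not result.get("success"):
        raise RuntimeError(result.get("error", "unknown error"))
    if result.get("queued"):
        # Fast path: the daemon will embed and store in the background.
        return ("queued", result["queue_id"])
    # Sync path: the memory is already stored and indexed.
    return ("stored", result["id"])
```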
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | The memory content text | |
| memory_type | No | Type of memory (preference, decision, pattern, session) | session |
| namespace | No | Scope of the memory | `RECALL_DEFAULT_NAMESPACE` config |
| importance | No | Importance score from 0.0 to 1.0 | `RECALL_DEFAULT_IMPORTANCE` config |
| metadata | No | Optional additional metadata as a dict | |
| auto_link | No | Automatically create edges to similar memories | True |
| similarity_threshold | No | Minimum similarity for auto-linking, 0.0-1.0 | 0.6 |
| max_auto_links | No | Maximum auto-created edges per memory | 5 |
| use_llm_classification | No | Use the LLM to refine edge types | True |
| queue_id | No | | |
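A caller may want to sanity-check arguments against this schema before invoking the tool. The following is a hedged sketch of such a client-side check; `validate_args` is a hypothetical helper, not part of the tool, and it checks only the constraints stated above (required `content`, 0.0-1.0 ranges).

```python
def validate_args(args):
    """Minimal client-side check of memory_store_tool arguments (hypothetical)."""
    if "content" not in args or not isinstance(args["content"], str):
        raise ValueError("content is required and must be a string")
    importance = args.get("importance")
    if importance is not None and not 0.0 <= importance <= 1.0:
        raise ValueError("importance must be between 0.0 and 1.0")
    threshold = args.get("similarity_threshold", 0.6)
    if not 0.0 <= threshold <= 1.0:
        raise ValueError("similarity_threshold must be between 0.0 and 1.0")
    return True
```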