
save_memory

Store new information in short-term memory with temporal decay, where frequently accessed content gets promoted to long-term storage automatically.

Instructions

Save a new memory to short-term storage. The memory will have temporal decay applied and will be forgotten if not used regularly. Frequently accessed memories may be promoted to long-term storage automatically.

Args:
    content: The content to remember.
    tags: Tags for categorization.
    entities: Named entities in this memory.
    source: Source of the memory.
    context: Context when the memory was created.
    meta: Additional custom metadata.
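For orientation, here is a minimal client-side sketch of invoking this tool using the official MCP Python SDK. The launch command ("mnemex") and the argument values are illustrative assumptions; adapt them to your installation.

# Hypothetical invocation via the MCP Python SDK; command name is an assumption
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="mnemex")  # assumed launch command
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "save_memory",
                arguments={
                    "content": "Prefers dark mode in all editors",
                    "tags": ["preferences"],
                    "source": "conversation",
                },
            )
            print(result.content)

asyncio.run(main())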

Input Schema

Name      Required  Description                           Default
content   Yes       The content to remember               —
context   No        Context when the memory was created   None
entities  No        Named entities in this memory         None
meta      No        Additional custom metadata            None
source    No        Source of the memory                  None
tags      No        Tags for categorization               None
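Only content is required; the remaining fields refine categorization and retrieval. A sketch of a full argument payload (values are illustrative):

arguments = {
    "content": "Team decided to use PostgreSQL for the new service",  # required
    "tags": ["decisions", "infrastructure"],
    "entities": ["PostgreSQL"],
    "source": "standup notes",
    "context": "architecture discussion",
    "meta": {"project": "new-service"},
}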

Implementation Reference

  • The core handler for the 'save_memory' MCP tool, decorated with @mcp.tool() for automatic registration. It performs comprehensive input validation, optional NLP-based auto-enrichment (entity extraction, strength scoring), secret detection, and optional embedding generation using SentenceTransformer; it then constructs a Memory object, persists it via db.save_memory(), and returns a structured result.
    @mcp.tool()
    @time_operation("save_memory")
    def save_memory(
        content: str,
        tags: list[str] | None = None,
        entities: list[str] | None = None,
        source: str | None = None,
        context: str | None = None,
        meta: dict[str, Any] | None = None,
        strength: float | None = None,
    ) -> dict[str, Any]:
        """
        Save a new memory to short-term storage with automatic preprocessing.

        The memory will have temporal decay applied and will be forgotten if not
        used regularly. Frequently accessed memories may be promoted to long-term
        storage automatically.

        **Auto-enrichment (v0.6.0)**: If entities or strength are not provided,
        they will be automatically extracted/calculated from the content using
        natural language preprocessing. This makes save_memory "just work" for
        conversational use.

        Args:
            content: The content to remember (max 50,000 chars).
            tags: Tags for categorization (max 50 tags, each max 100 chars).
            entities: Named entities in this memory (max 100 entities).
                If None, automatically extracted from content.
            source: Source of the memory (max 500 chars).
            context: Context when memory was created (max 1,000 chars).
            meta: Additional custom metadata.
            strength: Base strength multiplier (1.0-2.0). If None, automatically
                calculated based on content importance.

        Raises:
            ValueError: If any input fails validation.
        """
        # Input validation
        content = cast(
            str, validate_string_length(content, MAX_CONTENT_LENGTH, "content", allow_empty=False)
        )
        if tags is not None:
            tags = validate_list_length(tags, MAX_TAGS_COUNT, "tags")
            tags = [validate_tag(tag, f"tags[{i}]") for i, tag in enumerate(tags)]
        if entities is not None:
            entities = validate_list_length(entities, MAX_ENTITIES_COUNT, "entities")
            entities = [validate_entity(entity, f"entities[{i}]") for i, entity in enumerate(entities)]
        if source is not None:
            source = cast(str, validate_string_length(source, 500, "source", allow_none=True))
        if context is not None:
            context = cast(str, validate_string_length(context, 1000, "context", allow_none=True))

        # Auto-enrichment preprocessing (v0.6.0)
        config = get_config()
        enrichment_applied = False
        if config.enable_preprocessing:
            from ..preprocessing import EntityExtractor, ImportanceScorer, PhraseDetector

            # Initialize preprocessing components (cached at module level)
            phrase_detector = PhraseDetector()
            entity_extractor = EntityExtractor()
            importance_scorer = ImportanceScorer()

            # Detect importance signals
            phrase_signals = phrase_detector.detect(content)

            # Auto-extract entities if not provided
            if entities is None:
                entities = entity_extractor.extract(content)
                enrichment_applied = True

            # Auto-calculate strength if not provided
            if strength is None:
                strength = importance_scorer.score(
                    content, entities=entities, importance_marker=phrase_signals["importance_marker"]
                )
                enrichment_applied = True
        else:
            # Default strength if preprocessing disabled
            if strength is None:
                strength = 1.0

        # Validate strength
        if strength is not None and (strength < 1.0 or strength > 2.0):
            raise ValueError("strength must be between 1.0 and 2.0")

        # Secrets detection (if enabled)
        config = get_config()
        if config.detect_secrets:
            matches = detect_secrets(content)
            if should_warn_about_secrets(matches):
                warning = format_secret_warning(matches)
                logger.warning(f"Secrets detected in memory content:\n{warning}")
                # Note: We still save the memory but warn the user

        # Create metadata
        metadata = MemoryMetadata(
            tags=tags or [],
            source=source,
            context=context,
            extra=meta or {},
        )

        # Generate ID and embedding
        memory_id = str(uuid.uuid4())
        embed = _generate_embedding(content)

        # Create memory
        now = int(time.time())
        memory = Memory(
            id=memory_id,
            content=content,
            meta=metadata,
            created_at=now,
            last_used=now,
            use_count=0,
            embed=embed,
            entities=entities or [],
            strength=strength if strength is not None else 1.0,
        )

        # Save to database
        db.save_memory(memory)

        return {
            "success": True,
            "memory_id": memory_id,
            "message": f"Memory saved with ID: {memory_id}",
            "has_embedding": embed is not None,
            "enrichment_applied": enrichment_applied,
            "auto_entities": len(entities or []) if enrichment_applied else 0,
            "calculated_strength": strength,
        }
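The _generate_embedding helper is referenced but not shown above. As a rough sketch only, an optional SentenceTransformer-backed helper could look like the following; the model name, caching strategy, and graceful-degradation behavior are assumptions, not the project's actual code.

from functools import lru_cache

@lru_cache(maxsize=1)
def _get_model():
    # Embeddings are optional: degrade gracefully if the dependency is absent
    try:
        from sentence_transformers import SentenceTransformer
    except ImportError:
        return None
    return SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def _generate_embedding(content: str) -> list[float] | None:
    model = _get_model()
    if model is None:
        return None
    # encode() returns a numpy array; convert for JSON-friendly storage
    return model.encode(content).tolist()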
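The temporal decay and promotion behavior described in the docstring is implemented elsewhere in the server. Purely as an illustration of the idea, a recency-weighted score of the following shape is one common way to decide forgetting and promotion; the formula, half-life, and thresholds here are assumptions, not Mnemex's actual parameters.

import math
import time

# Illustrative decay model (assumed): the score decays exponentially with
# time since last use, boosted by use_count and the memory's base strength.
DECAY_LAMBDA = math.log(2) / (3 * 24 * 3600)  # assumed 3-day half-life
FORGET_THRESHOLD = 0.05   # assumed: drop memories that decay below this
PROMOTE_THRESHOLD = 0.65  # assumed: promote frequently used memories above this

def decay_score(use_count: int, strength: float, last_used: int) -> float:
    age_seconds = time.time() - last_used
    return max(use_count, 1) * strength * math.exp(-DECAY_LAMBDA * age_seconds)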
