recall_mementos
Retrieve stored knowledge using natural language queries with fuzzy matching for conceptual exploration and long-term memory access.
Instructions
Primary tool for finding mementos using natural language queries.
Optimized for fuzzy matching - handles plurals, tenses, and case variations automatically.
BEST FOR:
- Conceptual queries ("how does X work")
- General exploration ("what do we know about authentication")
- Fuzzy/approximate matching
USE FOR: Long-term knowledge that survives across sessions. DO NOT USE FOR: Temporary session context or project-specific state.
LESS EFFECTIVE FOR:
- Acronyms (DCAD, JWT, API) - use `search_mementos` with tags instead
- Proper nouns (company names, services)
- Exact technical terms
EXAMPLES:
- `recall_mementos(query="timeout fix")` - find timeout-related solutions
- `recall_mementos(query="how does auth work")` - conceptual query
- `recall_mementos(project_path="/app")` - memories from a specific project
FALLBACK: If recall returns no relevant results, try `search_mementos` with a tags filter.
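The recall-then-search fallback above can be sketched as a small helper. This is an illustrative sketch only: `recall_mementos` and `search_mementos` here are local stand-ins for the actual tool calls, and the result shape (a list of dicts with `title` and `tags`) is an assumption, not the tool's real wire format.

```python
# Stand-in for fuzzy natural-language recall; returns [] on no match.
def recall_mementos(query: str) -> list[dict]:
    store = [{"title": "Timeout fix", "tags": ["timeout", "http"]}]
    return [m for m in store if query.lower() in m["title"].lower()]

# Stand-in for exact tag-filtered search, better for acronyms/proper nouns.
def search_mementos(tags: list[str]) -> list[dict]:
    store = [{"title": "DCAD onboarding notes", "tags": ["dcad"]}]
    return [m for m in store if set(tags) & set(m["tags"])]

def find_memories(query: str, fallback_tags: list[str]) -> list[dict]:
    results = recall_mementos(query)
    if not results:  # FALLBACK: recall found nothing relevant
        results = search_mementos(tags=fallback_tags)
    return results
```

For an acronym like "DCAD", fuzzy recall typically misses, so the tag-filtered search picks it up.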
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Natural language query for what you're looking for | |
| memory_types | No | Filter by memory types for more precision | |
| project_path | No | Filter by project path to scope results | |
| limit | No | Maximum number of results per page | 20 |
| offset | No | Number of results to skip for pagination | 0 |
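The `limit`/`offset` pair supports standard pagination: each page also reports a total count, so a caller can keep advancing the offset until it is exhausted. A minimal sketch, where `fetch_page` is a hypothetical stand-in for a paginated `recall_mementos` call:

```python
ALL_RESULTS = [f"memory-{i}" for i in range(45)]  # fake stored memories

# Stand-in for recall_mementos(query=..., limit=..., offset=...):
# returns one page of results plus the total match count.
def fetch_page(limit: int = 20, offset: int = 0) -> dict:
    return {
        "results": ALL_RESULTS[offset : offset + limit],
        "total_count": len(ALL_RESULTS),
    }

def fetch_all(limit: int = 20) -> list[str]:
    collected, offset = [], 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        collected.extend(page["results"])
        offset += limit
        if offset >= page["total_count"]:
            return collected
```

With 45 stored matches and the default limit of 20, this makes three calls (pages of 20, 20, and 5).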
Implementation Reference
- src/memento/tools/search_tools.py:90-206 (handler): The handler function `handle_recall_mementos` that implements the `recall_mementos` tool.
```python
async def handle_recall_mementos(
    memory_db: SQLiteMemoryDatabase, arguments: Dict[str, Any]
) -> CallToolResult:
    """Handle recall_memories tool call - convenience wrapper around search_memories.

    This provides a simplified interface optimized for natural language queries
    with best-practice defaults (fuzzy matching, relationship inclusion).

    Args:
        memory_db: Database instance for memory operations
        arguments: Tool arguments from MCP call containing:
            - query: Natural language search query (optional)
            - memory_types: Filter by memory types (optional)
            - project_path: Filter by project path (optional)
            - limit: Maximum results per page (default: 20)
            - offset: Number of results to skip for pagination (default: 0)

    Returns:
        CallToolResult with enhanced formatted results or error message
    """
    # Validate input arguments
    validate_search_input(arguments)

    # Build search query with optimal defaults
    search_query: SearchQuery = SearchQuery(
        query=arguments.get("query"),
        memory_types=[MemoryType(t) for t in arguments.get("memory_types", [])],
        project_path=arguments.get("project_path"),
        limit=arguments.get("limit", 20),
        offset=arguments.get("offset", 0),
        search_tolerance="normal",  # Always use fuzzy matching
        include_relationships=True,  # Always include relationships
    )

    # Use the existing search_memories implementation
    paginated_result = await memory_db.search_memories(search_query)

    if not paginated_result.results:
        return CallToolResult(
            content=[
                TextContent(
                    type="text",
                    text=(
                        "No memories found matching your query. Try:\n"
                        "- Using different search terms\n"
                        "- Removing filters to broaden the search\n"
                        "- Checking if memories have been stored for this topic"
                    ),
                )
            ]
        )

    # Format results with enhanced context
    results_text: str = (
        f"**Found {len(paginated_result.results)} relevant memories "
        f"(total: {paginated_result.total_count}):**\n\n"
    )
    for i, memory in enumerate(paginated_result.results, 1):
        results_text += f"**{i}. {memory.title}** (ID: {memory.id})\n"
        results_text += (
            f"Type: {memory.type.value} | Importance: {memory.importance} | "
            f"Confidence: {memory.confidence:.2f}\n"
        )

        # Add confidence warning if low
        if memory.confidence < 0.3:
            results_text += "⚠️ **Low confidence** - This memory hasn't been used recently and may be obsolete\n"
        elif memory.confidence < 0.5:
            results_text += (
                "⚠️ **Medium confidence** - Consider verifying this information\n"
            )

        # Add match quality if available
        if hasattr(memory, "match_info") and memory.match_info:
            match_info = memory.match_info
            if isinstance(match_info, dict):
                quality = match_info.get("match_quality", "unknown")
                matched_fields = match_info.get("matched_fields", [])
                results_text += f"Match: {quality} quality"
                if matched_fields:
                    results_text += f" (in {', '.join(matched_fields)})"
                results_text += "\n"

        # Add context summary if available
        if hasattr(memory, "context_summary") and memory.context_summary:
            results_text += f"Context: {memory.context_summary}\n"

        # Add summary or content snippet
        if memory.summary:
            results_text += f"Summary: {memory.summary}\n"
        elif memory.content:
            # Show first 150 chars of content
            snippet = memory.content[:150]
            if len(memory.content) > 150:
                snippet += "..."
            results_text += f"Content: {snippet}\n"

        # Add tags
        if memory.tags:
            results_text += f"Tags: {', '.join(memory.tags)}\n"

        # Add relationships if available
        if hasattr(memory, "relationships") and memory.relationships:
            rel_summary = []
            for rel_type, related_titles in memory.relationships.items():
                if related_titles:
                    rel_summary.append(f"{rel_type}: {len(related_titles)} memories")
            if rel_summary:
                results_text += f"Relationships: {', '.join(rel_summary)}\n"

        results_text += "\n"

    # Add helpful tip at the end
    results_text += "\n💡 **Next steps:**\n"
    results_text += '- Use `get_memento(memory_id="...")` to see full details\n'
    results_text += (
        '- Use `get_related_mementos(memory_id="...")` to explore connections\n'
    )

    # Add confidence system tips
    results_text += "\n🔍 **Confidence System:**\n"
    results_text += "- Memories are sorted by (confidence × importance)\n"
    results_text += "- Low confidence (<0.3) may indicate obsolete knowledge\n"
    results_text += "- Use `boost_memento_confidence` when you verify a memory is still valid\n"
    results_text += "- Critical info (security, API keys) has no automatic decay\n"

    return CallToolResult(content=[TextContent(type="text", text=results_text)])
```
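The handler's confidence-system tips state that results are sorted by (confidence × importance) and that the 0.3/0.5 thresholds drive the low/medium warnings. The ranking itself happens inside `memory_db.search_memories`, which is not shown here; the following is a minimal sketch of that rule under the stated assumptions, with an illustrative `Memory` dataclass rather than the project's real model:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    title: str
    importance: int    # assumed integer scale, e.g. 1-10
    confidence: float  # 0.0-1.0, decays when a memory goes unused

def rank(memories: list[Memory]) -> list[Memory]:
    # Sort by confidence * importance, highest first.
    return sorted(memories, key=lambda m: m.confidence * m.importance, reverse=True)

def confidence_label(m: Memory) -> str:
    # Same thresholds the handler uses for its warning lines.
    if m.confidence < 0.3:
        return "low"
    if m.confidence < 0.5:
        return "medium"
    return "ok"
```

Note the interaction: a high-importance memory with decayed confidence (10 × 0.2 = 2.0) can rank below a modest but recently verified one (3 × 0.9 = 2.7), which is why boosting confidence on verified memories matters.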