# ask_memory
Retrieve relevant passages from your personal knowledge base and generate answers using a local LLM, keeping your data private.
## Instructions
Search the knowledge base and generate an answer using a local LLM.
Retrieves the top-k most relevant passages via hybrid search, builds a RAG prompt, and calls the local Ollama LLM to generate an answer.

When Ollama is unavailable, the `answer` field is `null` and a `hint` field explains how to install Ollama and pull a model.
**Args:**

- `question`: The question to answer.
- `top_k`: How many passages to retrieve (clamped to 1-20).
- `model`: Optional model override.
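The 1-20 clamping of `top_k` can be sketched as follows (a minimal illustration of the documented behavior, not the tool's actual code; the function name is hypothetical):

```python
def clamp_top_k(top_k: int) -> int:
    # Keep the requested passage count within the documented 1-20 range.
    return max(1, min(20, top_k))
```

So a request for 0 passages retrieves 1, and a request for 50 retrieves at most 20.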
**Returns:**

- Dict with keys `answer`, `sources`, `model`, and `ollama_available`.
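A client consuming the result can branch on `ollama_available` and fall back to the `hint` field. The helper below is a sketch under the return shape described above; the sample response values and the `summarize_result` name are illustrative, not part of the tool:

```python
def summarize_result(result: dict) -> str:
    """Render an ask_memory result, falling back to the hint when Ollama is down."""
    if result.get("ollama_available") and result.get("answer") is not None:
        sources = ", ".join(result.get("sources", []))
        return f"{result['answer']} (sources: {sources})"
    # No local LLM: surface the installation hint instead of a null answer.
    return result.get("hint", "Ollama is unavailable; install it and pull a model.")

# Hypothetical responses, for illustration only:
ok = {"answer": "42", "sources": ["notes.md"], "model": "llama3",
      "ollama_available": True}
down = {"answer": None, "sources": [], "model": None, "ollama_available": False,
        "hint": "Install Ollama and pull a model."}
```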
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | The question to answer. | |
| top_k | No | How many passages to retrieve (clamped to 1-20). | |
| model | No | Optional model override. | |