cache_warmup
Pre-warm semantic cache with prompt/answer pairs to seed FAQ responses, product descriptions, or known-good LLM answers before user traffic arrives.
Instructions
Pre-warms the semantic cache with a list of prompt/value pairs. For each entry, the tool computes an embedding, checks whether a sufficiently similar entry already exists (similarity ≥ 0.98), and writes new entries to Valkey and the pgvector index; near-duplicates are skipped. Use this to seed FAQ responses, product descriptions, or known-good LLM answers before the first real user traffic arrives. Requires OPENAI_API_KEY.
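The per-entry logic above can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: `embed` stands in for the OpenAI embedding call, and the in-memory `store` list stands in for the Valkey + pgvector index; all function names here are hypothetical.

```python
import math

SIM_THRESHOLD = 0.98  # entries at or above this similarity count as duplicates

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def warmup(entries, embed, store):
    """Seed the cache from (prompt, value) pairs, skipping near-duplicates.

    `embed` maps a prompt string to a vector; `store` is a list of
    (embedding, value) pairs standing in for the real backend.
    Returns (written, skipped) counts.
    """
    written = skipped = 0
    for prompt, value in entries:
        vec = embed(prompt)
        # Skip if a near-identical prompt is already cached.
        if any(cosine(vec, existing) >= SIM_THRESHOLD for existing, _ in store):
            skipped += 1
            continue
        store.append((vec, value))  # real tool: write to Valkey + pgvector
        written += 1
    return written, skipped
```

With a toy embedding that maps refund-related prompts to one vector and everything else to another, warming three entries where two are near-duplicates writes two and skips one.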
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| instance_id | Yes | UUID of the cache instance | |
| entries | Yes | List of prompt/value pairs to pre-warm into the cache | |
| namespace | No | Default namespace for all entries (default: cachly:sem) | |
| ttl | No | Time-to-live in seconds for warmed entries (omit for no expiry) | |
| auto_namespace | No | Auto-detect the namespace per prompt using text heuristics. Overrides `namespace` when no per-entry namespace is set. | |
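A hypothetical input payload might look like the following. The `prompt`/`value` key names inside each entry are an assumed shape based on the description above, and the UUID is a placeholder:

```json
{
  "instance_id": "00000000-0000-0000-0000-000000000000",
  "entries": [
    {"prompt": "What is your refund policy?", "value": "Refunds are available within 30 days of purchase."},
    {"prompt": "How long does shipping take?", "value": "Standard shipping takes 3-5 business days."}
  ],
  "namespace": "cachly:faq",
  "ttl": 86400
}
```

Omitting `ttl` keeps the warmed entries with no expiry; omitting `namespace` falls back to the default `cachly:sem`.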