# journal_introspect
Ask your journal a self-question. Retrieves relevant entries and generates a grounded answer in your first-person voice, along with a hallucination-risk score. Returns 'no patterns found' if the entries are insufficient.
## Instructions
Ask YOUR journal a self-question. Pulls semantically-relevant entries, then asks the configured LLM to answer in YOUR first-person voice grounded ONLY in those entries (no invention). Returns the answer plus entries_referenced and a hallucination_risk score (high if <3 entries grounded the answer, medium if <6, otherwise low). If entries don't support an answer, the answer literally is "no patterns found in journal".
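The risk thresholds above can be sketched as a small helper (the `riskFor` name is hypothetical; the cutoffs mirror the description):

```ts
// Hypothetical helper mirroring the documented hallucination_risk thresholds.
type Risk = 'high' | 'medium' | 'low';

function riskFor(entriesReferenced: number): Risk {
  if (entriesReferenced < 3) return 'high';   // fewer than 3 entries grounded the answer
  if (entriesReferenced < 6) return 'medium'; // 3 to 5 entries
  return 'low';                               // 6 or more
}

console.log(riskFor(2), riskFor(4), riskFor(9)); // high medium low
```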
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | A self-question (e.g. "Have I been more cautious lately?"). | |
| scope | No | `recent` (last 14 days) or `all`. | `all` |
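A call and its result might look like the following sketch (field values are invented for illustration, not taken from a real run; the result shape matches the handler's return fields):

```ts
// Illustrative request and result shapes (values invented, not from a real run).
const request = {
  name: 'journal_introspect',
  arguments: { question: 'Have I been more cautious lately?', scope: 'recent' },
};

const exampleResult = {
  answer: 'Yes: several recent entries mention double-checking before acting.',
  entries_referenced: ['<uuid-1>', '<uuid-2>', '<uuid-3>'],
  hallucination_risk: 'medium', // 3 entries referenced: >= 3 and < 6
  agent_id: 'example-agent',    // hypothetical agent id
  candidates_considered: 15,
};

console.log(exampleResult.hallucination_risk);
```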
## Implementation Reference
- `handleIntrospect`: the handler that executes the journal_introspect tool logic. It takes a self-question, embeds it for semantic search, retrieves up to 15 relevant journal entries, and prompts the configured LLM to answer in the first person, grounded ONLY in those entries. Returns 'no patterns found in journal' if the entries don't support an answer, along with a hallucination_risk score based on how many entries were referenced.
```ts
const handleIntrospect: McpToolHandler = async (args, ctx) => {
  const pool = (await ensureSchema(ctx)) as any;
  const agentId = getAgentId(ctx);
  const question = String(args.question ?? '').trim();
  if (!question) return errR('question required');
  const scope = String(args.scope ?? 'all');
  const vec = await embedText(question);

  const params: any[] = [agentId];
  let where = `j.agent_id = $1 AND NOT EXISTS (
    SELECT 1 FROM journal_supersession s
    WHERE s.original_entry_id = j.id AND s.relation IN ('superseded','recanted')
  )`;
  if (scope === 'recent') {
    where += ` AND j.written_at >= NOW() - INTERVAL '14 days'`;
  }

  let sql: string;
  if (vec) {
    params.push(toPgVector(vec));
    sql = `
      SELECT j.id, j.written_at, j.entry_type, j.content, j.valence, j.valence_reason,
             j.tags, j.conversation_id,
             (1 - (j.embedding <=> $${params.length}::vector)) AS similarity
      FROM agent_journal j
      WHERE ${where} AND j.embedding IS NOT NULL
      ORDER BY j.embedding <=> $${params.length}::vector ASC
      LIMIT 15`;
  } else {
    sql = `
      SELECT j.id, j.written_at, j.entry_type, j.content, j.valence, j.valence_reason,
             j.tags, j.conversation_id, NULL::real AS similarity
      FROM agent_journal j
      WHERE ${where}
      ORDER BY j.written_at DESC
      LIMIT 15`;
  }

  const r = await pool.query(sql, params);
  const entries = r.rows;
  if (entries.length === 0) {
    return ok(asText({
      answer: 'no patterns found in journal',
      entries_referenced: [],
      hallucination_risk: 'high',
      agent_id: agentId,
    }));
  }

  const sys = `You are ${agentId}, reflecting on your own journal. Based ONLY on the journal entries provided (do not invent), answer in first person: ${question}. If the entries don't support an answer, say literally "no patterns found in journal". Return strict JSON: { "answer": string, "entries_referenced": [uuid] }`;
  const user = JSON.stringify({
    question,
    entries: entries.map((e: any) => ({
      id: e.id,
      written_at: e.written_at,
      entry_type: e.entry_type,
      content: e.content,
      valence: e.valence,
      valence_reason: e.valence_reason ?? null,
      conversation_id: e.conversation_id ?? null,
      tags: e.tags,
    })),
  });

  let parsed: any;
  try {
    const raw = await llm(
      [{ role: 'system', content: sys }, { role: 'user', content: user }],
      ARC_MODEL,
      2500,
    );
    parsed = parseJsonLoose(raw) ?? { answer: raw, entries_referenced: [] };
  } catch (e) {
    return errR(`introspect failed: ${(e as Error).message}`);
  }

  const referenced: string[] = Array.isArray(parsed.entries_referenced)
    ? parsed.entries_referenced.map((u: any) => String(u))
    : [];
  const risk = referenced.length < 3 ? 'high' : (referenced.length < 6 ? 'medium' : 'low');

  return ok(asText({
    answer: typeof parsed.answer === 'string' ? parsed.answer : String(parsed.answer ?? ''),
    entries_referenced: referenced,
    hallucination_risk: risk,
    agent_id: agentId,
    candidates_considered: entries.length,
  }));
};
```
- `inputSchema`: the input schema definition for the journal_introspect tool. Defines `question` (required, string) and `scope` (optional, string: `'recent' | 'all'`).
```ts
inputSchema: {
  type: 'object',
  properties: {
    question: { type: 'string', description: 'A self-question (e.g. "Have I been more cautious lately?").' },
    scope: { type: 'string', description: 'recent (last 14 days) | all (default).' },
  },
  required: ['question'],
},
```
- packages/core/src/mcp/journal-tools.ts:700-715 (registration): the registration entry for journal_introspect in the JOURNAL_TOOLS array. It is gated under the 'ai' group (requires CELIUMS_LLM_API_KEY) and maps the 'journal_introspect' tool name to the handleIntrospect handler.
```ts
{
  group: 'ai',
  definition: {
    name: 'journal_introspect',
    description:
      'Ask YOUR journal a self-question. Pulls semantically-relevant entries, then asks the configured LLM to answer in YOUR first-person voice grounded ONLY in those entries (no invention). Returns the answer plus entries_referenced and a hallucination_risk score (high if <3 entries grounded the answer, medium if <6, otherwise low). If entries don\'t support an answer, the answer literally is "no patterns found in journal".',
    inputSchema: {
      type: 'object',
      properties: {
        question: { type: 'string', description: 'A self-question (e.g. "Have I been more cautious lately?").' },
        scope: { type: 'string', description: 'recent (last 14 days) | all (default).' },
      },
      required: ['question'],
    },
  },
  handler: handleIntrospect,
},
```
- `llm`: the helper used by handleIntrospect to call the configured LLM. It wraps llmChat from the llm-client, applying the configured journal model.
```ts
async function llm(
  messages: Array<{ role: 'system' | 'user' | 'assistant'; content: string }>,
  model: string | undefined = ARC_MODEL,
  maxTokens = 3000,
): Promise<string> {
  return llmChat(messages, { model, maxTokens });
}
```
- `parseJsonLoose`: the helper used by handleIntrospect to extract JSON from the LLM's response (the LLM is asked to return a JSON object with 'answer' and 'entries_referenced' fields).
```ts
function parseJsonLoose<T = any>(raw: string): T | null {
  const m = raw.match(/\{[\s\S]*\}/);
  if (!m) return null;
  try {
    return JSON.parse(m[0]) as T;
  } catch {
    return null;
  }
}
```
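For illustration, here is `parseJsonLoose` reproduced self-contained and applied to a chatty LLM reply (the sample reply text is invented):

```ts
// parseJsonLoose pulls the first {...} span out of a reply that may contain
// surrounding chatter, then attempts a strict JSON.parse on it.
function parseJsonLoose<T = any>(raw: string): T | null {
  const m = raw.match(/\{[\s\S]*\}/);
  if (!m) return null;
  try { return JSON.parse(m[0]) as T; } catch { return null; }
}

const reply =
  'Sure! Here is the JSON:\n{ "answer": "no patterns found in journal", "entries_referenced": [] }';
const parsed = parseJsonLoose<{ answer: string; entries_referenced: string[] }>(reply);
console.log(parsed?.answer); // "no patterns found in journal"
```

Note the regex is greedy (first `{` to last `}`), so a reply containing multiple separate JSON objects, or trailing prose with a stray `}`, falls back to `null` via the `catch`.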