Glama

get_trigger_memory

Retrieve psychological trigger profiles distinguishing active stress triggers from resolved topics to guide sensitive conversation navigation based on historical stress patterns.

Instructions

Retrieve psychological trigger profile for a subject.

Returns which conversation topics consistently cause stress (active triggers) and which have been resolved over time.

- active triggers: topics where stress was elevated across multiple sessions. Tread carefully.
- resolved triggers: topics where stress has decreased. Safe to explore deeper.

Each trigger includes observation_count, avg_score, peak_score, and last_seen.

Requires prior ingest calls with the same subject_id. Not a medical device.
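The shape of a successful response can be sketched as follows. Only the per-trigger fields (observation_count, avg_score, peak_score, last_seen) are documented above; the top-level key names and all sample values are assumptions for illustration:

```python
# Illustrative response shape; top-level key names and every value are
# assumed for this example -- only the per-trigger fields are documented.
example_response = {
    "active_triggers": [
        {
            "topic": "work deadlines",
            "observation_count": 7,
            "avg_score": 0.72,
            "peak_score": 0.91,
            "last_seen": "2025-01-04T18:22:00Z",
        }
    ],
    "resolved_triggers": [
        {
            "topic": "relocation",
            "observation_count": 4,
            "avg_score": 0.31,
            "peak_score": 0.66,
            "last_seen": "2024-11-20T09:10:00Z",
        }
    ],
}
```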

Input Schema

Name        Required  Description  Default
subject_id  Yes       —            —

Implementation Reference

  • The main handler for the 'get_trigger_memory' tool. It retrieves a subject's psychological trigger profile from the Nefesh API and is registered with the server via the @mcp.tool() decorator. The handler makes an HTTP GET request to the /v1/triggers endpoint and returns the JSON response containing active and resolved triggers.
    # ── Tool: get_trigger_memory ────────────────────────────────────
    @mcp.tool()
    async def get_trigger_memory(subject_id: str) -> dict:
        """Retrieve psychological trigger profile for a subject.
    
        Returns which conversation topics consistently cause stress (active triggers) and which have been resolved over time.
    
        - active triggers: topics where stress was elevated across multiple sessions. Tread carefully.
        - resolved triggers: topics where stress has decreased. Safe to explore deeper.
    
        Each trigger includes observation_count, avg_score, peak_score, and last_seen.
    
        Requires prior ingest calls with the same subject_id. Not a medical device.
        """
        async with httpx.AsyncClient(timeout=10) as client:
            resp = await client.get(
                f"{API_URL}/v1/triggers",
                params={"subject_id": subject_id},
                headers=_headers(),
            )
            if resp.status_code == 200:
                return resp.json()
            return {"error": f"No trigger data found for subject {subject_id}."}
  • proxy.py:148 (registration)
    The @mcp.tool() decorator registers the get_trigger_memory function with the FastMCP server, making it available as an MCP tool.
    @mcp.tool()
  • The schema is defined inline via the function signature (subject_id: str) -> dict and the docstring, which serves as the tool description for the MCP protocol. The docstring documents the input parameter, the return structure (active and resolved triggers, each with observation_count, avg_score, peak_score, and last_seen), and usage notes.
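From the signature, FastMCP derives an input schema roughly equivalent to the dictionary below. This is an illustrative reconstruction; the exact schema the server emits may carry additional metadata:

```python
# Illustrative JSON Schema equivalent of (subject_id: str); the exact
# generated schema may include extra fields such as a title.
input_schema = {
    "type": "object",
    "properties": {"subject_id": {"type": "string"}},
    "required": ["subject_id"],
}
```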
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and excels: it explains the psychological model (active vs. resolved triggers), discloses the data structure returned (observation_count, avg_score, peak_score, last_seen), notes the dependency chain, and includes a medical disclaimer ('Not a medical device').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure: opening purpose statement, bulleted definitions of key concepts, data field enumeration, prerequisites, and disclaimer. Every sentence provides unique value. No redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive despite no output schema and no annotations. Explains the return structure in detail (trigger types and their metrics), covers domain-specific safety considerations, and documents operational dependencies. Complete for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (subject_id lacks description), so the description must compensate. It mentions subject_id in the prerequisite context ('Requires prior ingest calls with the same subject_id'), implying it must match previous calls, but does not explicitly define what subject_id represents or its format. Adequate but minimal compensation for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Retrieve', 'Returns') and clearly identifies the resource as a 'psychological trigger profile'. It distinguishes from sibling 'ingest' by being retrieval-focused rather than input-focused, and differentiates from 'get_human_state' by focusing on historical trigger patterns rather than current state.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite 'Requires prior ingest calls with the same subject_id', establishing when the tool is usable. Provides guidance on interpreting results ('Tread carefully' vs 'Safe to explore deeper'). Could be improved by explicitly contrasting with 'get_session_history' for when to use each.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nefesh-ai/nefesh-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.