Glama

memory_log_conversation

Record complete AI conversation turns with code changes to persistent Markdown journals for maintaining context across sessions.

Instructions

Record one full conversation turn to today's journal.

You MUST pass the complete user message and your entire reply — no truncation, no summary, no "..." or "see above". If your reply is very long, pass the first part here then use memory_log_conversation_append() for the rest.

Args:

- user_message: The user's full message in this turn.
- agent_response: Your full reply (complete text, every paragraph).
- model: The model used for this response (e.g. "claude-4-opus").
- code_changes: Optional. Files created/modified, e.g. "- src/foo.py (created)".
- title: Optional. One-line summary for this turn; if empty, derived from the first line of user_message.
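The Args list above maps directly onto a flat argument object. A purely illustrative payload (all values invented for demonstration; only the field names come from the documentation) might look like:

```python
# Illustrative arguments for one memory_log_conversation call.
# Field names come from the tool's Args documentation; the values
# are made up. Note agent_response is passed in full -- the tool
# forbids truncation or "..." placeholders.
arguments = {
    "user_message": "Please add input validation to the upload handler.",
    "agent_response": "I added a size check and a MIME-type allowlist to the upload handler.",
    "model": "claude-4-opus",
    "code_changes": "- src/upload.py (modified)",
    "title": "Add input validation to upload handler",
}
```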

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| user_message | Yes | | |
| agent_response | No | | |
| model | No | | |
| code_changes | No | | |
| title | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The tool handler for memory_log_conversation, which records a conversation turn to the journal directory.
```python
async def memory_log_conversation(
    user_message: str,
    agent_response: str = "",
    model: str = "",
    code_changes: str = "",
    title: str = "",
) -> str:
    """Record one full conversation turn to today's journal.

    You MUST pass the **complete** user message and your **entire** reply — no
    truncation, no summary, no "..." or "see above". If your reply is very long,
    pass the first part here then use memory_log_conversation_append() for the rest.

    Args:
        user_message: The user's full message in this turn.
        agent_response: Your full reply (complete text, every paragraph).
        model: The model used for this response (e.g. "claude-4-opus").
        code_changes: Optional. Files created/modified, e.g. "- `src/foo.py` (created)".
        title: Optional. One-line summary for this turn; if empty, derived from first line of user_message.
    """
    journal_dir = _get_journal_dir()
    path = write_turn(
        journal_dir, user_message, agent_response, model, code_changes, title=title
    )
    return f"Recorded in {path.name}"
```
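The handler delegates the actual file write to `write_turn`, whose source is not shown on this page. A minimal sketch of what such a helper might do, assuming a date-named Markdown journal file and a section layout implied by the parameters (every name and formatting choice below is an assumption, not the actual implementation):

```python
from datetime import date
from pathlib import Path


def write_turn(journal_dir: Path, user_message: str, agent_response: str,
               model: str = "", code_changes: str = "", title: str = "") -> Path:
    """Hypothetical sketch: append one conversation turn to today's journal file."""
    # Derive a title from the first line of the user message when none is
    # given, as the tool description specifies.
    if not title:
        title = user_message.strip().splitlines()[0][:80]
    # Assumed naming scheme: one Markdown file per day, e.g. 2025-01-15.md.
    path = journal_dir / f"{date.today().isoformat()}.md"
    lines = [f"## {title}", ""]
    if model:
        lines += [f"*Model: {model}*", ""]
    lines += ["### User", user_message, "", "### Agent", agent_response, ""]
    if code_changes:
        lines += ["### Code changes", code_changes, ""]
    journal_dir.mkdir(parents=True, exist_ok=True)
    # Append rather than overwrite, so multiple turns accumulate in one file.
    with path.open("a", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")
    return path
```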

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Explains temporal scoping ('today's journal'), integrity requirements ('complete' messages), and default title derivation logic. However, lacks disclosure on error handling, idempotency, or size limits before requiring the append sibling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear progression: purpose → constraints → workflow guidance → parameter docs. The Args section is necessary given 0% schema coverage. Slightly verbose format but every sentence earns its place by conveying required usage constraints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given output schema exists (per context signals), description appropriately omits return value details. Covers primary workflow and sibling coordination. Minor gap: does not specify maximum length thresholds before requiring append tool, which would be useful for 'complete' message handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (only titles, no descriptions), but the Args section compensates fully by providing semantic meaning for all 5 parameters, including format examples like 'claude-4-opus' for model and '- `src/foo.py` (created)' for code_changes.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb+resource ('Record one full conversation turn to today's journal') and distinguishes from siblings by explicitly naming memory_log_conversation_append() for handling long replies, clearly differentiating initial logging vs continuation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit conditional guidance: 'If your reply is very long, pass the first part here then use memory_log_conversation_append() for the rest.' Also states mandatory constraints ('You MUST pass the **complete** user message... no truncation').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/liuhao6741/openclaw-memory'
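The same request can be made from Python with the standard library. A sketch using the endpoint shown in the curl command above (the shape of the JSON response is not documented here, so it is returned as a plain dict):

```python
import json
import urllib.request

# Endpoint taken verbatim from the curl example above.
API_URL = "https://glama.ai/api/mcp/v1/servers/liuhao6741/openclaw-memory"


def fetch_server_info(url: str = API_URL) -> dict:
    """Fetch server metadata from the Glama MCP directory API as a dict."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```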

If you have feedback or need assistance with the MCP directory API, please join our Discord server.