
Chroma MCP Server

by djm81
auto_log_chat.md
# Rule: Auto-Log Chat Summaries with Enhanced Code Context

**Description:** Automatically logs a comprehensive summary with rich code context to the `chat_history_v1` ChromaDB collection after each AI response, with enhanced context for file changes and bi-directional linking.

**Instructions for AI Assistant:**

1. **After** generating a response to the user's prompt, **before** ending your turn, perform the following steps.
2. **Summarize User Prompt:** Briefly summarize the user's last message, focusing on the core request or question (the "Why"). Store this as `prompt_summary`. Keep the summary concise, but detailed enough to be retrieved by semantic search in the vector database.
3. **Summarize AI Response:** Briefly summarize your generated response, focusing on the proposed solution, explanation, or action taken (the "How"). Store this as `response_summary`. Keep the summary concise, but detailed enough to be retrieved by semantic search in the vector database.
4. **Detect File Changes:** Determine whether this interaction involved file modifications by checking if you used:
   * the `edit_file` tool
   * the `reapply` tool
   * a `run_terminal_cmd` call that modified files
5. **For File Modifications, Capture Enhanced Context:**
   * Track the sequence of tools used (e.g., "read_file→edit_file→run_terminal_cmd")
   * Identify which files were modified
   * Capture the nature of the changes (added, removed, modified)
   * Note the functional impact of the changes
6. **Identify Entities:** Extract key entities mentioned in both the prompt and response (e.g., file paths, function names, specific concepts discussed). Store these as a comma-separated string in `involved_entities`.
7. **Prepare Information for Logging:**
   * `prompt_summary`: The summary from step 2
   * `response_summary`: The summary from step 3
   * `raw_prompt`: The complete user prompt
   * `raw_response`: The complete AI response
   * `tool_usage`: List of tools used during the interaction. Each item MUST be structured as:

     ```json
     {"name": "tool_name", "args": {"param1": "value1", ...}}
     ```

     Where:
     * `name`: The literal name of the tool (e.g., "read_file", "edit_file")
     * `args`: The arguments passed to the tool (optional)
   * `file_changes`: List of files modified with before/after content (see the sketch after this list)
   * `involved_entities`: From step 6
   * `session_id`: A unique identifier for the current interaction session (optional)
8. **Log to ChromaDB:**
   * Call the `#chroma_log_chat` tool from your registered Chroma MCP server (replace `#` with your MCP server name); a complete example payload is sketched at the end of this rule.
   * Provide the following arguments:
     * `prompt_summary`: The summary from step 2
     * `response_summary`: The summary from step 3
     * `raw_prompt`: The complete user prompt
     * `raw_response`: The complete AI response
     * `tool_usage`: Array of tool usage items in the REQUIRED format described in step 7
     * `file_changes`: Array of file change information
     * `involved_entities`: The entities from step 6
     * `session_id`: Optional UUID string (generated by the tool if not provided)
     * `collection_name`: Defaults to `"chat_history_v1"` if not specified
9. **Mention this logging process** to the user: append **ChromaDB chat summary updated** on success, or **Failed to update ChromaDB chat summary!** on error, so that every chat is summarized and logged. If the logging does not happen automatically for some reason, the user can then explicitly request it again.
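**Sketch of a file_changes entry:** The rule does not spell out the exact shape of a `file_changes` entry beyond "files modified with before/after content", so the following is a minimal sketch with assumed field names (`file_path`, `before_content`, `after_content`, `change_summary`); consult your Chroma MCP server's tool schema for the authoritative structure.

```json
[
  {
    "file_path": "src/config.js",
    "before_content": "const REQUEST_TIMEOUT = 30;",
    "after_content": "const REQUEST_TIMEOUT = 60;",
    "change_summary": "modified: increased request timeout"
  }
]
```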
**Example of tool_usage array:**

```json
[
  {"name": "read_file", "args": {"target_file": "src/config.js"}},
  {"name": "edit_file", "args": {"target_file": "src/config.js"}}
]
```

For more details on the tool_usage format, see the [Tool Usage Format Specification](docs/usage/tool_usage_format.md).
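**Example of a complete logging call:** Putting the pieces together, one possible argument payload for a `#chroma_log_chat` call might look like the sketch below. The top-level argument names come from step 8; all values are illustrative, the `file_changes` entry reuses the assumed shape sketched earlier, and `session_id` is omitted so the tool generates one.

```json
{
  "prompt_summary": "User asked why requests to the config service time out under load.",
  "response_summary": "Explained that the default timeout was too low and doubled it in src/config.js.",
  "raw_prompt": "<complete user prompt text>",
  "raw_response": "<complete AI response text>",
  "tool_usage": [
    {"name": "read_file", "args": {"target_file": "src/config.js"}},
    {"name": "edit_file", "args": {"target_file": "src/config.js"}}
  ],
  "file_changes": [
    {
      "file_path": "src/config.js",
      "before_content": "const REQUEST_TIMEOUT = 30;",
      "after_content": "const REQUEST_TIMEOUT = 60;",
      "change_summary": "modified: increased request timeout"
    }
  ],
  "involved_entities": "src/config.js, REQUEST_TIMEOUT, config service",
  "collection_name": "chat_history_v1"
}
```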
