Glama
109,308 tools. Last updated 2026-04-17 17:52
  • Save your cognitive state for handoff to another agent.

    Include your investigation context:
    - What session/investigation is this part of?
    - What role/perspective were you taking?
    - Who might pick this up next? (another Claude, a human, Claude Code?)

    Reference specific memories that matter:
    - Key discoveries (with memory IDs or quotes)
    - Critical evidence memories
    - Important questions that were raised
    - Hypotheses that were tested

    Before saving, organize your thoughts:
    1. PROBLEM: What were you investigating?
    2. DISCOVERED: What did you learn for certain? (reference the memories)
    3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
    4. EVIDENCE: What memories support or contradict this?
    5. BLOCKED ON: What prevented further progress?
    6. NEXT STEPS: What should be investigated next?
    7. KEY MEMORIES: Which specific memories are essential for understanding?

    Example descriptions:

    "[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code analyst. Found correlation with batch_size=100 due to a hardcoded limit in batch_handler.py (see memory: 'MAX_BATCH_SIZE discovery'). Confirmed it is not a Redis connection issue - monitoring showed only 43/200 connections used (memory: 'Redis connection analysis'). Earlier hypothesis about connection pool exhaustion (memory_id: abc-123) was disproven. Key insight came from comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need production access to verify fix. Next: deploy with MAX_BATCH_SIZE=200 to staging first. Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results', 'Production vs staging comparison'. Ready for handoff to SRE team for deployment."

    "[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below the MCP threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement (memory: 'fusion testing results'). This builds on an earlier investigation into recall failures (memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring analysis', 'ADR-023 decision', 'fusion testing results'. Next agent should verify scoring with real queries."

    "[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with the user noticing list_contexts returned empty despite saved contexts existing. Investigation revealed two critical bugs: (1) list_contexts was using hybrid search for the word 'checkpoint' instead of filtering by memory_type (memory: 'hybrid search misuse discovery'); (2) restore_context hardcoded a limit of 10 memories despite contexts having 20+ (memory: 'hardcoded limit bug'). Root cause analysis showed save_context grabs the 20 most recent memories regardless of relevance - a fundamental design flaw (memory: 'save_context design flaw analysis'). EVIDENCE CHAIN: user reported empty list -> checked DB, contexts exist -> examined list_contexts code -> found hybrid search looking for the word 'checkpoint' -> tested /memories endpoint with memory_type filter -> confirmed working -> implemented fix using the direct endpoint. INSIGHTS: the narrative description is doing 90% of the cognitive handoff work; memories are supporting evidence, not the primary carriers of understanding (memory: 'narrative vs memories insight'). This suggests doubling down on narrative richness rather than perfecting memory selection. CORRECTED UNDERSTANDING: initially thought memories weren't being returned; actually they were, just the wrong ones - recent memories instead of relevant ones (memory: 'memory selection correction'). CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis', 'narrative vs memories insight', '/memories endpoint test results'. NEXT AGENT: should implement Phase 2 - semantic search for relevant memories within the investigation timeframe. Ready for handoff to any Claude agent for implementation."

    When referencing memories:
    - **RELIABLE** — Use memory IDs: "memory_id: abc-123" (direct lookup, always works)
    - **BEST-EFFORT** — Use descriptive phrases: "see memory: 'Redis connection analysis'" (uses search + substring matching; may not resolve if the memory isn't in the top results)
    - Group related memories: "Essential memories: 'X', 'Y', 'Z'"

    **Prefer memory_id references** whenever you have the UUID. Semantic phrase references are a convenience that works most of the time but may silently fail to resolve. The response reports how many references resolved, so you can retry with UUIDs if needed.

    Args:
        name: Name for this context checkpoint
        description: Detailed cognitive handoff description with memory references
        ctx: MCP context (automatically provided)

    Returns:
        Dict with success status, context_id, and memories included
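    The template and reference conventions above can be sketched in a few lines of Python. Note that `build_handoff_description` and `audit_references` below are hypothetical helpers for illustration only — they are not part of the tool's API. The audit function distinguishes reliable UUID references from best-effort descriptive phrases, so a caller can check what fraction of its references are guaranteed to resolve before saving the checkpoint.

    ```python
    import re

    # Hypothetical helper: assemble a handoff description following the
    # 7-point template from the tool docstring above.
    def build_handoff_description(title, problem, discovered, hypothesis,
                                  evidence, blocked_on, next_steps, key_memories):
        return (
            f"[{title}] "
            f"PROBLEM: {problem} "
            f"DISCOVERED: {discovered} "
            f"HYPOTHESIS: {hypothesis} "
            f"EVIDENCE: {evidence} "
            f"BLOCKED ON: {blocked_on} "
            f"NEXT STEPS: {next_steps} "
            f"KEY MEMORIES: {', '.join(repr(m) for m in key_memories)}"
        )

    # "memory_id: abc-123" -> reliable direct lookup.
    UUID_REF = re.compile(r"memory_id:\s*([0-9a-fA-F-]+)")
    # "memory: 'Redis connection analysis'" -> best-effort search match.
    PHRASE_REF = re.compile(r"memory:\s*'([^']+)'")

    def audit_references(description):
        """Hypothetical pre-save check: split references by reliability."""
        return {
            "reliable": UUID_REF.findall(description),
            "best_effort": PHRASE_REF.findall(description),
        }

    desc = build_handoff_description(
        title="Demo Investigation - 1 hour session",
        problem="Production API timeouts.",
        discovered="Hardcoded limit in batch_handler.py (memory: 'MAX_BATCH_SIZE discovery').",
        hypothesis="Batch threshold, not connection pool.",
        evidence="Pool exhaustion disproven (memory_id: abc-123).",
        blocked_on="Need production access.",
        next_steps="Deploy to staging.",
        key_memories=["MAX_BATCH_SIZE discovery"],
    )
    print(audit_references(desc))
    ```

    Running the sketch prints `{'reliable': ['abc-123'], 'best_effort': ['MAX_BATCH_SIZE discovery']}` — if `best_effort` dominates, swapping phrases for UUIDs before calling the tool avoids silent resolution failures.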
    Connector

Matching MCP Servers

  • security: - · license: A · quality: -
    Provides read-only server monitoring and diagnostic tools for AI assistants to manage Linux and Unraid systems via SSH. It enables natural language interactions for container management, storage health checks, and system log analysis while keeping credentials secure.
    Last updated · 13 · ISC · Linux
  • security: - · license: A · quality: -
    Enables chatting with PostgreSQL databases through secure GitHub OAuth authentication, supporting read operations for all users and write operations for privileged users. Deployable as a production-ready remote MCP server on Cloudflare Workers with automatic schema discovery and SQL injection protection.
    Last updated · 1 · MIT

Matching MCP Connectors

  • UK property MCP: Land Registry prices, company charges, House Price Index. x402 USDC on Base.

  • Demo server entry for local testing