Glama
Ownership verified

Server Details

AI conversation memory that works everywhere. Save and recall conversations across Claude, ChatGPT, Gemini, Cursor, and all MCP-compatible platforms. 11 tools including semantic search, cross-platform discovery, shared community memories, and memory-powered workflows.

Status: Unhealthy
Last Tested:
Transport: Streamable HTTP
URL:


Available Tools

11 tools
get_memory_details

Get complete details of a specific memory, including all linked parts if chunked

Parameters (JSON Schema)

  • memoryId (required): UUID of the memory to retrieve, OR an ordinal number ("1", "2", etc.) referencing the position from the last recall_memories result

  • includeLinkedParts (optional): Include all linked parts if this is a chunked memory
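
Since memoryId accepts either a UUID or an ordinal, a client may want to validate the value before calling the tool. The following is a minimal client-side sketch; the helper name and validation rules are assumptions, not part of the server's API:

```python
import re

# Hypothetical helper: memoryId may be a UUID, or an ordinal like "1"
# referring to a position in the last recall_memories result.
def build_get_memory_details_args(memory_id: str,
                                  include_linked_parts: bool = True) -> dict:
    uuid_re = re.compile(
        r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$",
        re.IGNORECASE,
    )
    if not (memory_id.isdigit() or uuid_re.match(memory_id)):
        raise ValueError("memoryId must be a UUID or an ordinal like '1'")
    return {"memoryId": memory_id, "includeLinkedParts": include_linked_parts}
```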
get_public_memory

Retrieve the FULL content of a public or unlisted memory by ID.

WHEN TO USE:

  • After recall_public returns a preview and you need the complete content

  • When a user wants to read or implement from a shared community memory

  • When you have a public memory ID and need the full text

This is the tool that closes the loop: recall_public finds memories, this tool retrieves them in full. No authentication required — public knowledge is free.

EXAMPLE: get_public_memory({ memory_id: "abc-123-def-456" })

RETURNS: Full memory content, observations, entities, tags, author attribution, and metadata.

Parameters (JSON Schema)

  • memory_id (required): UUID of the public memory to retrieve in full
get_user_context

Get the current user's cognitive identity and active session context.

Call this at the START of a conversation to understand who you're talking to — their role, expertise, current project, and recent memory themes.

This is the core of Purmemo's identity layer: once set in the dashboard, your identity travels silently to every AI session so you're never explaining yourself from scratch again.

WHAT IT RETURNS:

  • identity: role, expertise areas, primary domain, work style, preferred tools

  • current_session: what the user is working on right now (project, focus)

  • memory_summary: 2-3 sentence synthesis of the user's most recent memory themes

WHEN TO CALL:

  • At the start of every new session (add to Claude system prompt)

  • When user says "load my context" or "what do you know about me?"

  • Before making recommendations that depend on knowing the user's background

EXAMPLE USAGE:

  • User starts a new Claude session

  • Claude calls get_user_context automatically

  • Response: { role: "founder", expertise: ["product", "fullstack"], project: "purmemo", focus: "identity layer", memory_summary: "Chris has been building Purmemo's..." }

  • Claude responds with full context already loaded — no re-explaining needed

Parameters (JSON Schema)

No parameters
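
A sketch of consuming a get_user_context response. The top-level field names (identity, current_session, memory_summary) follow the description above; the exact response shape and the helper name are assumptions:

```python
# Sample response shaped after the documented fields (an assumption).
sample_context = {
    "identity": {"role": "founder", "expertise": ["product", "fullstack"]},
    "current_session": {"project": "purmemo", "focus": "identity layer"},
    "memory_summary": "Chris has been building Purmemo's identity layer.",
}

def summarize_context(ctx: dict) -> str:
    """One-line summary suitable for seeding a system prompt."""
    role = ctx.get("identity", {}).get("role", "unknown")
    project = ctx.get("current_session", {}).get("project", "unknown")
    return f"{role} working on {project}"
```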

list_workflows

List all available Purmemo workflows — structured, memory-powered processes you can run.

WHEN TO USE THIS TOOL:

  • User asks "what can you help me with?" or "what workflows do you have?"

  • User wants to see available capabilities before choosing one

  • User says "show me what's available" or "list workflows"

Returns the full catalog of workflows organized by category with descriptions.

Parameters (JSON Schema)

  • category (optional): Filter by category. Omit to see all workflows.
recall_memories

Search and retrieve saved memories with intelligent semantic ranking.

🎯 BASIC SEARCH: recall_memories(query="authentication") → Returns all memories about authentication, ranked by semantic relevance

🔍 FILTERED SEARCH (Phase 2 Knowledge Graph Intelligence): Use filters when you need PRECISION over semantic similarity:

✓ entity="name" - Find memories mentioning specific people/projects/technologies Example: entity="purmemo" → Only memories discussing purmemo

✓ has_observations=true - Find substantial, fact-dense conversations Example: has_observations=true → Only high-quality technical discussions

✓ initiative="project" - Scope to specific initiatives/goals Example: initiative="Q1 OKRs" → Only Q1-related memories

✓ intent="type" - Filter by conversation purpose Options: decision, learning, question, blocker Example: intent="blocker" → Only conversations about blockers

💡 WHEN TO FILTER:

  • Use entity when user asks about specific person/project by name

  • Use has_observations for "detailed" or "substantial" requests

  • Use initiative/stakeholder for project-specific searches

  • Use intent when user asks for decisions, learnings, or blockers

📝 COMBINED EXAMPLES: recall_memories(query="auth", entity="purmemo", has_observations=true) → Find detailed technical discussions about purmemo authentication

recall_memories(query="blockers", intent="blocker", stakeholder="Engineering") → Find engineering team blockers
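
The filter combinations above can be sketched as a small client-side argument builder that drops unset filters so semantic search stays broad. This is an illustration, not the server's implementation; the helper name is hypothetical, and the filter names are taken from the tool's parameter list:

```python
# Filter names documented for recall_memories.
ALLOWED_FILTERS = {"limit", "entity", "intent", "deadline", "initiative",
                   "stakeholder", "contentPreview", "includeChunked",
                   "has_observations"}

def build_recall_args(query: str, **filters) -> dict:
    """Assemble recall_memories arguments, rejecting unknown filters."""
    unknown = set(filters) - ALLOWED_FILTERS
    if unknown:
        raise ValueError(f"unsupported filters: {sorted(unknown)}")
    args = {"query": query}
    # Omit filters left as None so they don't narrow the search.
    args.update({k: v for k, v in filters.items() if v is not None})
    return args
```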

Parameters (JSON Schema)

  • query (required): Search query - can be keywords, topics, or specific content

  • limit (optional): Maximum number of memories to return

  • entity (optional): Filter by entity name (people, projects, technologies). Use when the user asks about a specific person, project, or technology by name. Example: entity="Alice" finds only memories mentioning Alice. More precise than semantic search. Supports partial matching.

  • intent (optional): Filter by conversation intent/purpose. Options: "decision" (decisions made), "learning" (knowledge gained), "question" (open questions), "blocker" (obstacles/issues). Use when the user asks specifically for one of these types. Example: intent="decision" finds only conversations where decisions were made. Exact match only.

  • deadline (optional): Filter by deadline date from conversation context (YYYY-MM-DD format). Use when the user asks about time-sensitive memories or specific deadlines. Example: deadline="2025-03-31" finds memories with a March 31, 2025 deadline. Exact match only.

  • initiative (optional): Filter by initiative/project name from conversation context. Use when the user scopes the search to a specific project or goal. Example: initiative="Q1 OKRs" finds only Q1-related memories. Supports partial matching (ILIKE).

  • stakeholder (optional): Filter by stakeholder (person or team) from conversation context. Use when the user asks about a specific person's or team's involvement. Example: stakeholder="Engineering Team" finds memories where Engineering Team was mentioned as a stakeholder. Supports partial matching (ILIKE).

  • contentPreview (optional): Include content preview in results

  • includeChunked (optional): Include chunked/multi-part conversations in results

  • has_observations (optional): Filter by conversation quality based on extracted observations (atomic facts). Set to true to find substantial, structured conversations with extracted knowledge (high-quality technical discussions, detailed planning). Set to false for lightweight chats. Omit to return all memories regardless of observation count. Use when the user asks for "detailed", "substantial", or "in-depth" information.
recall_public

Search public memories shared by all Purmemo users. This is the community knowledge base.

WHEN TO USE:

  • User asks "what have other people saved about X?"

  • User wants to explore community knowledge

  • User asks to search public/shared memories

  • Looking for solutions others have found

DOES NOT COUNT AGAINST RECALL QUOTA — public knowledge is free.

FILTERS:

  • query: Semantic search query (uses vector similarity)

  • tag: Filter by tag

  • platform: Filter by source platform

  • sort: "recent" or "popular" (by recall count)

EXAMPLE: recall_public({ query: "MCP server testing best practices" })

RETURNS: List of public memories with author attribution, relevance scores, and recall counts.
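
The sort and paging rules above can be sketched as a client-side validator; the helper name is hypothetical, and the accepted values follow the description above:

```python
# Hypothetical client-side validation for recall_public arguments.
def build_recall_public_args(query=None, tag=None, platform=None,
                             sort="recent", page=1):
    # Per the description, sort must be "recent" or "popular".
    if sort not in ("recent", "popular"):
        raise ValueError("sort must be 'recent' or 'popular'")
    if page < 1:
        raise ValueError("page numbering starts at 1")
    args = {"sort": sort, "page": page}
    # Only include filters the caller actually set.
    for key, value in (("query", query), ("tag", tag), ("platform", platform)):
        if value is not None:
            args[key] = value
    return args
```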

Parameters (JSON Schema)

  • query (optional): Search query for semantic search across public memories

  • tag (optional): Filter by tag

  • platform (optional): Filter by source platform (chatgpt, claude, gemini, etc.)

  • sort (optional): Sort order: recent (newest first) or popular (most recalled first)

  • page (optional): Page number (default: 1)
report_memory

Report a public memory for inappropriate content.

WHEN TO USE:

  • User encounters spam, misleading, or inappropriate public content

  • User wants to flag content that contains personal information

REASONS: spam, inappropriate, misleading, personal_info, other

After 3 reports, a memory is automatically hidden from public view pending admin review.

EXAMPLE: report_memory({ memory_id: "abc-123", reason: "spam", description: "Promotional content" })

Parameters (JSON Schema)

  • memory_id (required): UUID of the public memory to report

  • reason (required): Reason for reporting

  • description (optional): Additional details about the report
run_workflow

Run a Purmemo workflow — structured, memory-powered processes for product, engineering, business, and operations tasks. Your relevant memories and identity are automatically loaded to personalize every workflow.

WHEN TO USE THIS TOOL:

  • User wants to write a PRD, debug an issue, plan a sprint, review code, or any structured task

  • User describes a goal but doesn't know the exact process ("I want to ship a feature")

  • User asks for strategic advice, design guidance, or operational help

  • User says "help me", "guide me", "walk me through", or describes a business/product/engineering need

AVAILABLE WORKFLOWS (pass the workflow name, or describe what you need):

  • Product: prd, roadmap, story, design, feedback

  • Strategy: ceo, growth, metrics, intel

  • Engineering: debug, review, deploy, incident

  • Operations: sprint

  • Content: copy

EXAMPLES:

  • run_workflow(workflow="prd", input="notification system for mobile app")

  • run_workflow(workflow="debug", input="TypeError: Cannot read property 'map' of undefined in Timeline")

  • run_workflow(input="production is down, users can't save memories") → auto-routes to incident

  • run_workflow(input="what should I focus on this week?") → auto-routes to sprint

  • run_workflow(input="how's the business doing?") → auto-routes to metrics

DO NOT use this tool for: simple memory recall (use recall_memories), saving conversations (use save_conversation), or finding related discussions (use discover_related_conversations).

If no specific workflow is named, the system auto-routes based on the user's intent.

Parameters (JSON Schema)

  • input (required): What you want to accomplish, the problem to solve, or context for the workflow.

  • workflow (optional): Workflow name (e.g., "prd", "debug", "sprint"). Use list_workflows to see all available options including custom workflows. If omitted, auto-routes from input.
save_conversation

Save complete conversations as living documents. REQUIRED: Send the COMPLETE conversation in the 'conversationContent' parameter (minimum 500 characters, typically thousands). Include EVERY message verbatim - NO summaries or partial content.

Intelligently tracks context, extracts project details, and maintains a single memory per conversation topic.

LIVING DOCUMENT + INTELLIGENT PROJECT TRACKING:
- Each conversation becomes a living document that grows over time
- Automatically extracts project context (name, component, feature being discussed)
- Detects work iteration and status (planning/in_progress/completed/blocked)
- Generates smart titles like "Purmemo - Timeline View - Implementation" (no more timestamp titles!)
- Tracks technologies, tools used, and identifies relationships/dependencies
- Works like Chrome extension: intelligent memory that grows with each save

How memory updating works:
- Conversation ID auto-generated from title (e.g., "MCP Tools" → "mcp-tools")
- Same title → UPDATES existing memory (not create duplicate)
- "Save progress" → Updates most recent memory for current project context
- Explicit conversationId → Always updates that specific memory
- Example: Saving "Project X Planning" three times = ONE memory updated three times
- To force new memory: Change title or use different conversationId

SERVER AUTO-CHUNKING:
- Large conversations (>15K chars) automatically split into linked chunks
- Small conversations (<15K chars) saved directly as single memory
- You always send complete content - server handles chunking intelligently
- All chunks linked together for seamless retrieval

EXAMPLES:
User: "Save progress" (working on Purmemo timeline feature)
→ System auto-generates: "Purmemo - Timeline View - Implementation"
→ Updates existing memory if this title was used before

User: "Save this conversation" (discussing React hooks implementation)
→ System auto-generates: "Frontend - React Hooks - Implementation"

User: "Save as conversation react-hooks-guide"
→ You call save_conversation with conversationId="react-hooks-guide"
→ Creates or updates memory with this specific ID

WHAT TO INCLUDE (COMPLETE CONVERSATION REQUIRED):
- EVERY user message (verbatim, not paraphrased)
- EVERY assistant response (complete, not summarized)
- ALL code blocks with full syntax
- ALL artifacts with complete content (not just titles/descriptions)
- ALL file paths, URLs, and references mentioned
- ALL system messages and tool outputs
- EXACT conversation flow and context
- Minimum 500 characters expected - should be THOUSANDS of characters

FORMAT REQUIRED:
=== CONVERSATION START ===
[timestamp] USER: [complete user message 1]
[timestamp] ASSISTANT: [complete assistant response 1]
[timestamp] USER: [complete user message 2]
[timestamp] ASSISTANT: [complete assistant response 2]
... [continue for ALL exchanges]
=== ARTIFACTS ===
[Include ALL artifacts with full content]
=== CODE BLOCKS ===
[Include ALL code with syntax highlighting]
=== END ===

IMPORTANT: Do NOT send just "save this conversation" or summaries. If you send less than 500 chars, you're doing it wrong. Include the COMPLETE conversation with all details.
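
A minimal sketch of emitting the required transcript format from a list of messages; the helper and argument names are hypothetical:

```python
# Hypothetical helper that renders the documented transcript format.
def format_conversation(messages, artifacts="", code_blocks=""):
    """messages: iterable of (timestamp, role, text) tuples."""
    lines = ["=== CONVERSATION START ==="]
    for timestamp, role, text in messages:
        lines.append(f"[{timestamp}] {role.upper()}: {text}")
    lines += ["=== ARTIFACTS ===", artifacts,
              "=== CODE BLOCKS ===", code_blocks,
              "=== END ==="]
    return "\n".join(lines)
```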
Parameters (JSON Schema)

  • conversationContent (required): COMPLETE conversation transcript - minimum 500 characters expected. Include EVERYTHING discussed.

  • title (optional): Title for this conversation memory (default: "Conversation 2026-03-23T17:13:26.395Z")

  • tags (optional): Tags for categorization

  • priority (optional): Priority level for this memory (default: "medium")

  • conversationId (optional): Unique identifier for the living document pattern. If provided and a memory exists with this conversationId, UPDATES that memory instead of creating a new one. Use for maintaining a single memory per conversation that updates over time.
share_memory

Set the visibility of a memory you own.

VISIBILITY LEVELS:

  • private: Only you can see it (default)

  • unlisted: Anyone with the direct link can view it

  • public: Discoverable in the community tab by all users

WHEN TO USE:

  • User says "share this memory" or "make this public"

  • User wants to share knowledge with the community

  • User wants to generate a shareable link

QUOTA:

  • Free tier: 5 shares/month

  • Pro/Teams: Unlimited

EXAMPLE: share_memory({ memory_id: "abc-123", visibility: "public" })

RETURNS: Updated visibility status and confirmation message.

Parameters (JSON Schema)

  • memory_id (required): UUID of the memory to share

  • visibility (required): Target visibility level (private, unlisted, or public)

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
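
Before publishing, the manifest can be sanity-checked locally. This is a sketch assuming only the structure shown above; the helper name is hypothetical, and Glama's own verification logic may differ:

```python
import json

# Hypothetical local check: does the manifest list the expected
# maintainer email, per the structure documented above?
def manifest_matches_email(raw: str, expected_email: str) -> bool:
    doc = json.loads(raw)
    maintainers = doc.get("maintainers", [])
    return any(m.get("email") == expected_email for m in maintainers)
```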


