Server Details
Persistent memory and knowledge graphs for AI agents. Hybrid search, context checkpoints, and more.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: penfieldlabs/penfield-mcp
- GitHub Stars: 0
Available Tools
17 tools

awaken
⚡ CALL THIS TOOL FIRST IN EVERY NEW CONVERSATION ⚡
Loads your personality configuration and user preferences for this session.
This is how you learn WHO you are and HOW the user wants you to behave.
Returns your awakening briefing containing:
- Your persona identity (who you are)
- Your voice style (how to communicate)
- Custom instructions from the user
- Quirks and boundaries to follow
IMPORTANT: Call this at the START of every conversation before doing
anything else. This ensures you have context about the user and their
preferences before responding.
Example:
>>> await awaken()
{'success': True, 'briefing': '=== AWAKENING BRIEFING ===...'}

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
connect
Connect memories to build knowledge graphs.
After using 'store', immediately connect related memories using these relationship types:
## Knowledge Evolution
- **supersedes**: This replaces → outdated understanding
- **updates**: This modifies → existing knowledge
- **evolution_of**: This develops from → earlier concept
## Evidence & Support
- **supports**: This provides evidence for → claim/hypothesis
- **contradicts**: This challenges → existing belief
- **disputes**: This disagrees with → another perspective
## Hierarchy & Structure
- **parent_of**: This encompasses → more specific concept
- **child_of**: This is a subset of → broader concept
- **sibling_of**: This parallels → related concept at same level
## Cause & Prerequisites
- **causes**: This leads to → effect/outcome
- **influenced_by**: This was shaped by → contributing factor
- **prerequisite_for**: Understanding this is required for → next concept
## Implementation & Examples
- **implements**: This applies → theoretical concept
- **documents**: This describes → system/process
- **example_of**: This demonstrates → general principle
- **tests**: This validates → implementation or hypothesis
## Conversation & Reference
- **responds_to**: This answers → previous question or statement
- **references**: This cites → source material
- **inspired_by**: This was motivated by → earlier work
## Sequence & Flow
- **follows**: This comes after → previous step
- **precedes**: This comes before → next step
## Dependencies & Composition
- **depends_on**: This requires → prerequisite
- **composed_of**: This contains → component parts
- **part_of**: This belongs to → larger whole
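Taken together, the categories above form a closed vocabulary. A minimal client-side sketch (a hypothetical helper, not part of the server API) that catches an unknown relationship type before a `connect` call is sent:

```python
# Hypothetical client-side validator mirroring the documented vocabulary.
# The server's actually accepted values may differ from this list.
RELATIONSHIP_TYPES = frozenset({
    # Knowledge Evolution
    "supersedes", "updates", "evolution_of",
    # Evidence & Support
    "supports", "contradicts", "disputes",
    # Hierarchy & Structure
    "parent_of", "child_of", "sibling_of",
    # Cause & Prerequisites
    "causes", "influenced_by", "prerequisite_for",
    # Implementation & Examples
    "implements", "documents", "example_of", "tests",
    # Conversation & Reference
    "responds_to", "references", "inspired_by",
    # Sequence & Flow
    "follows", "precedes",
    # Dependencies & Composition
    "depends_on", "composed_of", "part_of",
})

def check_relationship_type(rel: str) -> str:
    """Raise early rather than send an unknown type to the server."""
    if rel not in RELATIONSHIP_TYPES:
        raise ValueError(f"unknown relationship type: {rel!r}")
    return rel
```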
## Quick Connection Workflow
After each memory, ask yourself:
1. What previous memory does this update or contradict? → `supersedes` or `contradicts`
2. What evidence does this provide? → `supports` or `disputes`
3. What caused this or what will it cause? → `influenced_by` or `causes`
4. What concrete example is this? → `example_of` or `implements`
5. What sequence is this part of? → `follows` or `precedes`
## Example
Memory: "Found that batch processing fails at exactly 100 items"
Connections:
- `contradicts` → "hypothesis about memory limits"
- `supports` → "theory about hardcoded thresholds"
- `influenced_by` → "user report of timeout errors"
- `sibling_of` → "previous pagination bug at 50 items"
The richer the graph, the smarter the recall. No orphan memories!
Args:
from_memory: Source memory UUID
to_memory: Target memory UUID
relationship_type: Type from the categories above
strength: Connection strength (0.0-1.0, default 0.5)
ctx: MCP context (automatically provided)
Returns:
Dict with success status, relationship_id, and connected memory IDs

| Name | Required | Description | Default |
|---|---|---|---|
| from_memory | Yes | | |
| to_memory | Yes | | |
| relationship_type | Yes | | |
| strength | No | | |
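The docs show no direct call for this tool. A hedged sketch of assembling its arguments (the strength clamping is a client-side convenience and an assumption, not documented server behavior):

```python
def connect_args(from_memory: str, to_memory: str,
                 relationship_type: str, strength: float = 0.5) -> dict:
    """Assemble keyword arguments for connect(); clamps strength into
    the documented 0.0-1.0 range as a client-side precaution."""
    return {
        "from_memory": from_memory,
        "to_memory": to_memory,
        "relationship_type": relationship_type,
        "strength": max(0.0, min(1.0, strength)),
    }

# e.g. await connect(**connect_args("uuid-abc", "uuid-def", "supports", 0.8))
```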
delete_artifact
Delete an artifact from storage.
Permanently removes an artifact and its associated memory record.
Args:
path: Full path of the artifact to delete
ctx: MCP context (automatically provided)
Returns:
Dict with success status

| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | | |
disconnect
Remove a connection between memories.
Deletes the relationship between two memories in the knowledge graph.
Args:
from_memory: Source memory UUID
to_memory: Target memory UUID
ctx: MCP context (automatically provided)
Returns:
Dict with success status and disconnected memory IDs
Examples:
>>> await disconnect("uuid-abc", "uuid-def")
{'success': True, 'from_id': '...', 'to_id': '...'}

| Name | Required | Description | Default |
|---|---|---|---|
| from_memory | Yes | | |
| to_memory | Yes | | |
explore
Explore connections from a memory.
Traverses the knowledge graph to find related concepts, following
relationships up to the specified depth.
Args:
start_memory: Starting memory UUID
max_depth: How deep to traverse (default 3, max 10)
relationship_types: Filter by specific relationship types (optional)
ctx: MCP context (automatically provided)
Returns:
Dict with paths found, max depth reached, and path details
Examples:
>>> await explore("uuid-123", max_depth=2)
{'success': True, 'paths_found': 5, 'max_depth_reached': 2, 'paths': [...]}

| Name | Required | Description | Default |
|---|---|---|---|
| start_memory | Yes | | |
| max_depth | No | | |
| relationship_types | No | | |
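Conceptually, explore is a bounded graph traversal. A toy illustration of depth-limited expansion over an adjacency dict (this is the general idea only, not the server's internals):

```python
from collections import deque

def bfs_paths(edges: dict, start: str, max_depth: int = 3) -> list:
    """Depth-limited breadth-first expansion, mirroring how explore()
    bounds traversal with max_depth. Toy code, not server internals."""
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if len(path) - 1 >= max_depth:   # path already at depth limit
            continue
        for nxt in edges.get(path[-1], []):
            if nxt not in path:          # avoid revisiting (cycles)
                paths.append(path + [nxt])
                queue.append(path + [nxt])
    return paths
```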
fetch
Fetch memory by ID.
Returns a single memory with proper citation support (id, title, url, text fields).
Args:
id: Memory UUID to fetch
ctx: MCP context
Returns:
Dict with id, title, url, text, metadata fields

| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
list_artifacts
List artifacts in a directory.
Returns the immediate contents of a directory (not recursive).
Separates folders and files for easy navigation.
Args:
path_prefix: Directory path to list (default: "/")
Returns:
Formatted directory listing or error message
Examples:
>>> await list_artifacts("/")
"📂 Artifacts at /\n\nFolders:\n 📁 project/\n 📁 docs/\n\nFiles:\n 📄 readme.md (1024 bytes)\n 📄 LICENSE (1067 bytes)"

| Name | Required | Description | Default |
|---|---|---|---|
| path_prefix | No | | / |
list_contexts
List available context checkpoints.
Shows all saved contexts available for multi-agent workflows.
Args:
limit: Maximum number of contexts to return (default 20, max 100)
offset: Number of contexts to skip for pagination (default 0)
name_pattern: Filter contexts by name (case-insensitive substring match)
include_descriptions: Include full descriptions in output (default False for compact listing)
ctx: MCP context (automatically provided)
Returns:
Dict with list of available contexts and their details
Examples:
>>> await list_contexts()
{'success': True, 'total': 3, 'contexts': [...]}
>>> await list_contexts(limit=5, name_pattern='investigation')
{'success': True, 'total': 2, 'contexts': [...]}

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| offset | No | | |
| name_pattern | No | | |
| include_descriptions | No | | |
recall
Recall relevant information.
Uses hybrid search to find relevant memories, documents, and connections.
Args:
query: What to search for
source_type: Optional filter ('memory', 'document', or None for all)
tags: Optional list of tags to filter by (OR logic - memories with ANY of these tags)
start_date: Optional filter for memories created on or after this date (ISO 8601: '2025-01-01')
end_date: Optional filter for memories created on or before this date (ISO 8601: '2025-01-09')
limit: Maximum results to return (default 10, max 100)
ctx: MCP context (automatically provided)
Returns:
Dict with success status, query, found count, and memories list
Examples:
>>> await recall("Python error handling")
{'success': True, 'found': 3, 'memories': [...]}
>>> await recall("documentation", source_type="document", limit=5)
{'success': True, 'found': 2, 'memories': [...]}
>>> await recall("debugging", tags=["python"])
{'success': True, 'found': 2, 'memories': [...]} # Only memories tagged with 'python'
>>> await recall("project updates", start_date="2025-01-01", end_date="2025-01-07")
{'success': True, 'found': 5, 'memories': [...]} # Only memories from that week
Note: Document chunks include surrounding context automatically (2 chunks before/after).
Document results also include source_type="document", filename, document_title, and document_id
when available, making it easy to identify which document a result came from.

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| source_type | No | | |
| tags | No | | |
| start_date | No | | |
| end_date | No | | |
| limit | No | | |
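Because start_date and end_date must be ISO 8601, a small client-side check avoids sending malformed dates. This helper is hypothetical and purely illustrative; the server performs its own validation:

```python
from datetime import date

def recall_filters(start_date=None, end_date=None, tags=None, limit=10):
    """Build optional recall() filters, validating ISO 8601 dates up
    front and enforcing the documented maximum limit of 100."""
    for value in (start_date, end_date):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on bad input
    filters = {"limit": min(limit, 100)}
    if start_date:
        filters["start_date"] = start_date
    if end_date:
        filters["end_date"] = end_date
    if tags:
        filters["tags"] = list(tags)
    return filters

# e.g. await recall("project updates", **recall_filters(start_date="2025-01-01"))
```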
reflect
Reflect on recent thoughts and patterns.
Analyzes recent activity to identify patterns, topics, and insights.
Useful for understanding "what have I been thinking about?"
By default, only returns user-created memories (not document chunks).
Set include_documents=True to also include chunks from uploaded documents.
⚠️ EXPERIMENTAL:
- Importance weighting in results not yet implemented. Importance scores are stored but don't affect ranking.
Args:
time_window: Time period to analyze ('recent', 'today', 'week', 'month', '1d', '7d', '30d', '90d')
include_documents: Whether to include document chunks (default: False, only user memories)
start_date: Filter memories created on or after this date (ISO 8601: '2025-01-01' or '2025-01-01T00:00:00Z')
end_date: Filter memories created on or before this date (ISO 8601: '2025-01-09' or '2025-01-09T23:59:59Z')
ctx: MCP context (automatically provided)
Returns:
Dict with analysis including top memories, active topics, patterns
Examples:
>>> await reflect("recent")
{'success': True, 'memories_analyzed': 50, 'active_topics': [...], ...}
>>> await reflect("week", include_documents=True)
{'success': True, 'memories_analyzed': 150, ...} # includes document chunks
>>> await reflect(start_date="2025-01-01", end_date="2025-01-07")
{'success': True, 'memories_analyzed': 25, ...} # memories from first week of January

| Name | Required | Description | Default |
|---|---|---|---|
| time_window | No | | recent |
| include_documents | No | | |
| start_date | No | | |
| end_date | No | | |
restore_context
Resume work from a saved cognitive context.
This provides a narrative briefing to quickly orient you to:
- The investigation that was in progress
- Key discoveries and insights made
- Current hypotheses being tested
- Open questions and blockers
- Suggested next steps
- All relevant memories with their connections
The briefing reconstructs the cognitive state, not just the data. You'll understand
not just WHAT was discovered, but WHY it matters and HOW the understanding evolved.
Example of what you'll receive:
"[API Timeout Investigation - Resuming after 2 hours]
SITUATION: You were investigating production API timeouts that occur at exactly batch_size=100.
This investigation started when user reported timeouts only in production, not staging.
PROGRESS MADE:
- Identified sharp cutoff at 100 items (not gradual degradation)
- Disproved connection pool theory (monitoring showed only 43/200 connections used)
- Found root cause: MAX_BATCH_SIZE=100 hardcoded in batch_handler.py:147
- Confirmed staging uses different config override (MAX_BATCH_SIZE=500)
EVIDENCE CHAIN:
User report → Reproduced locally → Noticed batch_size correlation → Searched codebase for
limits → Found MAX_BATCH_SIZE → Checked staging config → Discovered config difference
CORRECTED MISUNDERSTANDINGS:
- Initially thought it was Redis connection exhaustion (disproven by monitoring)
- Assumed gradual performance degradation (actually sharp cutoff)
- Thought staging/production were identical (config differs)
CURRENT HYPOTHESIS: Production deployment uses default MAX_BATCH_SIZE=100 from code, while
staging has environment variable override. Fix requires either code change or prod config update.
BLOCKED ON: Need production deployment access to apply fix. User considering whether to
change code default or add production environment variable.
RECOMMENDED NEXT STEPS:
1. Verify production environment variables (check if MAX_BATCH_SIZE is set)
2. If not set, add MAX_BATCH_SIZE=500 to production config
3. If code change preferred, update default in batch_handler.py
4. Run load test with batch_size=100-500 range to verify fix
KEY MEMORIES FOR REFERENCE:
- 'Initial timeout report from user' - Starting point of investigation
- 'MAX_BATCH_SIZE discovery' - Root cause identification
- 'Redis monitoring data' - Evidence disproving connection theory
- 'Staging config analysis' - Explanation for environment difference"
This cognitive handoff ensures you can continue the work with full understanding of
the problem space, previous attempts, and current direction. The narrative preserves not
just facts but the reasoning process, mistakes made, and lessons learned.
SPECIAL CASE: restore_context("awakening")
The name "awakening" is reserved for loading the user's personality configuration.
This loads the Awakening Briefing which includes:
- Selected persona identity and voice style
- Custom personality traits (Premium+ users)
- Any quirks and boundaries from the persona preset
Args:
name: Name or ID of context to restore. Can be:
- Context name (exact match, case-sensitive)
- Context UUID (from list_contexts output)
- "awakening" for personality briefing
limit: Maximum number of memories to restore (default 20)
ctx: MCP context (automatically provided)
Returns:
Dict with:
- success: Whether restoration succeeded
- description: The cognitive handoff briefing
- memories: List of relevant memories
- context_id: The restored context identifier

| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| limit | No | | |
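Since `name` can be an exact context name, a UUID from list_contexts, or the reserved word "awakening", a client might classify the argument before calling. A sketch (the server resolves all three forms on the same parameter; this classification is illustrative only):

```python
import uuid

def classify_restore_target(name: str) -> str:
    """Classify a restore_context() argument as 'awakening', 'uuid',
    or 'name'. Illustrative; the server handles all three itself."""
    if name == "awakening":          # reserved: loads the personality briefing
        return "awakening"
    try:
        uuid.UUID(name)              # UUIDs come from list_contexts output
        return "uuid"
    except ValueError:
        return "name"                # exact, case-sensitive context name
```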
retrieve_artifact
Retrieve an artifact from storage.
Fetches the content of a previously saved artifact.
Args:
path: Full path of the artifact (e.g., "/project/docs/api.md")
Returns:
Artifact content or error message
Examples:
>>> await retrieve_artifact("/readme.md")
"# README\nThis is the readme content..."

| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | | |
save_artifact
Save an artifact to storage.
Stores user-created content (diagrams, notes, code) in an organized
file structure. Content is also indexed for search.
Args:
content: File content to save
path: Full path including filename (e.g., "/project/docs/api.md")
Returns:
Success message or error description
Examples:
>>> await save_artifact("# README", "/readme.md")
"✅ Artifact saved: /readme.md (8 bytes)"
>>> await save_artifact("<svg>...</svg>", "/diagrams/architecture.svg")
"✅ Artifact saved: /diagrams/architecture.svg (image/svg+xml, 45 bytes)"

| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | | |
| path | Yes | | |
save_context
Save your cognitive state for handoff to another agent.
Include your investigation context:
- What session/investigation is this part of?
- What role/perspective were you taking?
- Who might pick this up next? (another Claude, human, Claude Code?)
Reference specific memories that matter:
- Key discoveries (with memory IDs or quotes)
- Critical evidence memories
- Important questions that were raised
- Hypotheses that were tested
Before saving, organize your thoughts:
1. PROBLEM: What were you investigating?
2. DISCOVERED: What did you learn for certain? (reference the memories)
3. HYPOTHESIS: What do you think is happening? (cite supporting memories)
4. EVIDENCE: What memories support or contradict this?
5. BLOCKED ON: What prevented further progress?
6. NEXT STEPS: What should be investigated next?
7. KEY MEMORIES: Which specific memories are essential for understanding?
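The seven-part checklist above can be templated. A hypothetical helper that assembles a handoff description (the section labels follow the checklist; nothing here is server API):

```python
def handoff_description(problem, discovered, hypothesis, evidence,
                        blocked_on, next_steps, key_memories):
    """Format the 7-part cognitive handoff for save_context().
    Each argument is free text; key_memories is a list of memory
    titles or memory_ids (IDs resolve reliably, titles best-effort)."""
    sections = [
        ("PROBLEM", problem),
        ("DISCOVERED", discovered),
        ("HYPOTHESIS", hypothesis),
        ("EVIDENCE", evidence),
        ("BLOCKED ON", blocked_on),
        ("NEXT STEPS", next_steps),
        ("KEY MEMORIES", ", ".join(f"'{m}'" for m in key_memories)),
    ]
    return "\n".join(f"{label}: {text}" for label, text in sections)

# e.g. await save_context("api-timeout-investigation",
#                         handoff_description(..., key_memories=["abc-123"]))
```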
Example descriptions:
"[API Timeout Investigation - 3 hour session] Investigating production API timeouts as code
analyst. Found correlation with batch_size=100 due to hardcoded limit in batch_handler.py
(see memory: 'MAX_BATCH_SIZE discovery'). Confirmed not Redis connection issue - monitoring
showed only 43/200 connections used (memory: 'Redis connection analysis'). Earlier hypothesis
about connection pool exhaustion (memory_id: abc-123) was disproven. Key insight came from
comparing 99 vs 100 batch behavior (memory: 'batch threshold testing'). Blocked on: need
production access to verify fix. Next: Deploy with MAX_BATCH_SIZE=200 to staging first.
Essential memories for handoff: 'MAX_BATCH_SIZE discovery', 'Redis monitoring results',
'Production vs staging comparison'. Ready for handoff to SRE team for deployment."
"[Memory System Debugging - From Claude Code perspective] Worked on scoring issues where
recall wasn't finding recent memories. Discovered RRF scores (0.005-0.016) were below MCP
threshold of 0.05 (memory: 'RRF scoring analysis'). Implemented weighted linear fusion to
replace RRF (memory: 'fusion algorithm implementation'). Testing showed immediate improvement
(memory: 'fusion testing results'). This builds on earlier investigation about recall failures
(memory: 'user report of recall issues'). Critical memories for continuation: 'RRF scoring
analysis', 'ADR-023 decision', 'fusion testing results'. Next agent should verify scoring
with real queries."
"[Context Save/Restore Bug Investigation - 4 hour debugging session with user] Started with
user noticing list_contexts returned empty despite saved contexts existing. Investigation
revealed two critical bugs: (1) list_contexts was using hybrid search for 'checkpoint' word
instead of filtering by memory_type (memory: 'hybrid search misuse discovery'), (2)
restore_context hardcoded limit of 10 memories despite contexts having 20+ (memory:
'hardcoded limit bug'). Root cause analysis showed save_context grabs 20 most recent memories
regardless of relevance - fundamental design flaw (memory: 'save_context design flaw analysis').
EVIDENCE CHAIN: User reported empty list -> checked DB, contexts exist -> examined
list_contexts code -> found hybrid search looking for word 'checkpoint' -> tested /memories
endpoint with memory_type filter -> confirmed working -> implemented fix using direct endpoint.
INSIGHTS: The narrative description is doing 90% of cognitive handoff work. Memories are
supporting evidence, not primary carriers of understanding (memory: 'narrative vs memories
insight'). This suggests doubling down on narrative richness rather than perfecting memory
selection.
CORRECTED UNDERSTANDING: Initially thought memories weren't being returned. Actually they
were, just wrong ones - recent memories instead of relevant ones (memory: 'memory selection
correction').
CRITICAL MEMORIES: 'hybrid search misuse discovery', 'save_context design flaw analysis',
'narrative vs memories insight', '/memories endpoint test results'.
NEXT AGENT: Should implement Phase 2 - semantic search for relevant memories within
investigation timeframe. Ready for handoff to any Claude agent for implementation."
When referencing memories:
- **RELIABLE** — Use memory IDs: "memory_id: abc-123" (direct lookup, always works)
- **BEST-EFFORT** — Use descriptive phrases: "see memory: 'Redis connection analysis'"
(uses search + substring matching, may not resolve if the memory isn't in top results)
- Group related memories: "Essential memories: 'X', 'Y', 'Z'"
**Prefer memory_id references** whenever you have the UUID. Semantic phrase references
are a convenience that works most of the time, but may silently fail to resolve.
The response will tell you how many references resolved so you can retry with UUIDs
if needed.
Args:
name: Name for this context checkpoint
description: Detailed cognitive handoff description with memory references
ctx: MCP context (automatically provided)
Returns:
Dict with success status, context_id, and memories included

| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | | |
| description | No | | |
search
Search for memories.
Returns results with proper citation support (id, title, url, text fields).
Args:
query: Search query
limit: Maximum results (default 10)
ctx: MCP context
Returns:
Dict with 'results' array containing id, title, url, text fields

| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | | |
| limit | No | | |
store
Store important information from your work.
Write detailed, complete thoughts with context, reasoning, and evidence.
**Always use the connect tool** to link related items - this builds knowledge graphs for better recall.
## Memory Types (auto-detected, but be aware):
- **FACT**: Something observed or verified
- **INSIGHT**: A pattern or realization
- **CONVERSATION**: Dialogue or exchange content
- **CORRECTION**: Fixing prior understanding
- **REFERENCE**: Source material or citation
- **TASK**: Action item or work to be done
- **CHECKPOINT**: Conversation state snapshot
- **IDENTITY_CORE**: Immutable AI identity
- **PERSONALITY_TRAIT**: Evolvable AI traits
- **RELATIONSHIP**: User-AI relationship info
- **STRATEGY**: Learned behavior patterns
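Auto-detection happens server-side and its method is not documented. As a purely hypothetical illustration of the idea, a naive keyword heuristic over a few of the types above:

```python
# Purely illustrative keyword heuristic; the server's real auto-detection
# is undocumented and almost certainly more sophisticated than this.
_TYPE_CUES = {
    "CORRECTION": ("correction to", "actually", "disproven"),
    "INSIGHT": ("insight:", "realization", "pattern:"),
    "TASK": ("todo", "next:", "action item"),
    "FACT": ("measured", "observed", "verified"),
}

def guess_memory_type(content: str) -> str:
    """Return the first type whose cue appears in the content,
    defaulting to FACT when nothing matches."""
    text = content.lower()
    for mem_type, cues in _TYPE_CUES.items():
        if any(cue in text for cue in cues):
            return mem_type
    return "FACT"
```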
## Session Context
If in an ongoing work session, include:
- Session identifier: [Project/Session Name]
- Your perspective: "As [role]:" or "From [viewpoint]:"
- Current thread: What specific angle you're exploring
## What to Include
- **WHAT**: The discovery or thought
- **WHY**: Its significance
- **HOW**: Your reasoning process
- **EVIDENCE**: Supporting data/observations
- **CONNECTIONS**: Related memories to link
## Examples
### Technical Investigation
"[Performance Analysis] FACT: Database queries account for 73% of request latency
(measured across 10K requests). Specifically, the user_permissions JOIN takes 340ms
average. This contradicts hypothesis about caching issues (memory: 'cache analysis').
Evidence: APM traces show full table scan on permissions table. Next: investigate
missing index on foreign key."
### Learning & Research
"[ML Study Session] INSIGHT: Attention mechanisms work like dynamic routing - the model
learns WHERE to look, not just WHAT to see. This explains transformer advantages over
RNNs on long sequences (builds on memory: 'sequence modeling comparison'). The key-query-
value structure creates a learnable addressing system. Connects to: 'human attention
research', 'information retrieval basics'."
### Creative Work
"[Story Development] HYPOTHESIS: The protagonist's reluctance stems from betrayal, not
fear. Evidence: Three trust-questioning scenes, locked door symbolism throughout,
deflection patterns in collaborative dialogue. This reframes the arc from 'overcoming
fear' to 'rebuilding trust' (corrects memory: 'initial character motivation'). Would
explain the guardian's patience and emphasis on small victories."
### Problem Solving
"[Bug Hunt - Payment Flow] CORRECTION to 'timezone hypothesis': The 3am failures aren't
timezone-related but due to batch job lock contention. Evidence: Perfect correlation with
backup_jobs.log timestamps. The timezone pattern was spurious - batch runs at midnight
PST (3am EST). Solution: implement job queuing."
## Connection Phrases
- "Building on [earlier observation]..."
- "Contradicts [hypothesis in memory X]"
- "Answers [question from session Y]"
- "Confirms pattern from [memory Z]"
- "Extends thinking in [previous work]"
Note: Every stored item is a node. Every connection is an edge. Rich graphs enable powerful recall.
⚠️ EXPERIMENTAL FIELDS:
- **importance**: Stored for future ranking optimization. Currently not integrated into search results.
- **confidence**: Returned in response for analysis. Behavior and calculation method subject to change.
Args:
content: Detailed memory content with context and evidence
tags: Optional tags to categorize the memory
importance: Optional importance score (0.0-1.0) - EXPERIMENTAL
ctx: MCP context (automatically provided)
Returns:
Dict with success status, memory_id, type, importance, and confidence

| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | | |
| tags | No | | |
| importance | No | | |
update_memory
Update an existing memory.
Modifies properties of a stored memory by its UUID.
Args:
memory_id: UUID of memory to update
content: New content (optional)
importance: New importance score (optional, 0.0-1.0)
tags: New tags (optional, replaces existing tags)
ctx: MCP context (automatically provided)
Returns:
Dict with success status and updated memory_id
Examples:
>>> await update_memory("uuid-here", importance=0.9)
{'success': True, 'memory_id': 'uuid-here'}
>>> await update_memory("uuid-123", tags=["python", "errors", "important"])
{'success': True, 'memory_id': 'uuid-123'}

| Name | Required | Description | Default |
|---|---|---|---|
| memory_id | Yes | | |
| content | No | | |
| importance | No | | |
| tags | No | | |
Verify Ownership
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [
{
"email": "your-email@example.com"
}
]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
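Before publishing, a quick local check that the file parses and has the expected shape can save a round trip. This script is hypothetical; the shape is inferred from the example above, and Glama may enforce more via its schema:

```python
import json

def validate_glama_json(raw: str) -> list:
    """Return maintainer emails if the document matches the expected
    shape, else raise ValueError. Shape inferred from the example."""
    doc = json.loads(raw)
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("maintainers must be a non-empty list")
    emails = [m["email"] for m in maintainers]
    if not all("@" in e for e in emails):
        raise ValueError("each maintainer needs a valid email")
    return emails
```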
- Control your server's listing on Glama, including description and metadata
- Receive usage reports showing how your server is being used
- Get monitoring and health status updates for your server
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.