Synapse Layer — Continuous Consciousness Infrastructure
Server Details
Persistent zero-knowledge memory for AI agents. AES-256-GCM encryption, PII redaction.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: SynapseLayer/synapse-layer
- GitHub Stars: 2
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 3.5/5 across all 5 tools.
Each tool has a clearly distinct purpose with no overlap: health_check verifies system status, process_text extracts and processes events from text, recall retrieves past context for a single agent, save_to_synapse persists data with encryption, and search queries across all agents. The descriptions explicitly differentiate recall (single-agent) from search (cross-agent), eliminating ambiguity.
All tools follow a consistent verb_noun or verb_preposition_noun pattern (health_check, process_text, recall, save_to_synapse, search). The naming is uniform with clear, descriptive terms that align with their functions, and there are no deviations in style or convention.
With 5 tools, this server is well-scoped for its purpose of continuous consciousness infrastructure. Each tool serves a specific role in the memory and processing pipeline, and the count is neither too sparse nor excessive, fitting typical expectations for a focused domain.
The tool set covers core CRUD-like operations for memory management: process_text (create/ingest), recall and search (read), and save_to_synapse (update/persist). A minor gap is the lack of explicit deletion or modification tools for stored data, but agents can likely work around this given the focus on persistence and retrieval.
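The recall-vs-search split described above can be captured as a small selection rule. The helper below is a sketch inferred from the tool descriptions, not an official client API; the function name is ours.

```python
def pick_memory_tool(cross_agent: bool) -> str:
    """Choose between the two read tools described above.

    recall: deterministic retrieval scoped to a single agent's history.
    search: full-text query across the entire memory pool.
    """
    return "search" if cross_agent else "recall"

print(pick_memory_tool(False))  # recall
print(pick_memory_tool(True))   # search
```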
Available Tools
5 tools

health_check (Grade: B)
Verify system availability and memory pipeline integrity. Returns database status, memory count, and engine version.
| Name | Required | Description | Default |
|---|---|---|---|
| _No parameters_ | | | |
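Since health_check takes no arguments, an invocation is just an empty `arguments` object inside an MCP `tools/call` request. The JSON-RPC envelope below follows the MCP specification; the helper function is illustrative and not taken from this server's documentation.

```python
import json

def make_tool_call(call_id: int, name: str, arguments: dict) -> dict:
    # MCP tool invocations travel as JSON-RPC 2.0 "tools/call" requests.
    return {
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

req = make_tool_call(1, "health_check", {})
print(json.dumps(req, indent=2))
```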
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'returns database status, memory count, and engine version', which gives some output context, but lacks details on permissions, rate limits, or potential side effects. This is adequate but has gaps for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and concise, consisting of two efficient sentences that directly state the tool's purpose and return values without any wasted words. Every sentence earns its place by providing essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (0 parameters, no output schema, no annotations), the description is minimally complete. It explains what the tool does and what it returns, but for a health check tool, it could benefit from more context on error handling or typical use cases. It's adequate but leaves room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema already documents this fully. The description doesn't need to add parameter information, and it doesn't contradict the schema. A baseline score of 4 is appropriate: with nothing left to document, the description only needs to avoid introducing confusion, and it does.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('verify system availability and memory pipeline integrity') and resources ('database status, memory count, and engine version'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'process_text' or 'search', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'search' or 'recall'. It implies usage for system health checks but doesn't specify contexts, prerequisites, or exclusions, leaving the agent without clear decision-making criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
process_text (Grade: B)
Self-extracting memory engine. Scans free-form text for milestones, decisions, alerts, and strategic events. Detected events pass through the full pipeline: policy evaluation, PII/secret redaction, deduplication, and persistence.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Free-form text to scan for auto-save triggers. | |
| source | No | Source identifier (default: mcp). | |
| project | No | Force a specific project (e.g., SYNAPSE_LAYER, OFFLY). Auto-detected if omitted. | |
| agent_id | No | Agent identifier. Defaults to "default". |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the processing pipeline (policy evaluation, PII/secret redaction, deduplication, persistence), which adds context beyond basic functionality. However, it lacks details on permissions, rate limits, error handling, or what 'persistence' entails (e.g., storage location or format), leaving gaps for a tool with significant processing implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: a short label ('Self-extracting memory engine'), a sentence defining the scanning function, and a sentence detailing the processing steps. Every clause adds value, and the whole avoids redundancy while staying appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (processing text through multiple stages) and lack of annotations and output schema, the description is moderately complete. It covers the core functionality and pipeline but omits details on output format, error cases, or integration with sibling tools. This leaves the agent with incomplete context for effective use, especially without an output schema to clarify results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no specific parameter information beyond implying that 'text' is the primary input for scanning. It doesn't explain how parameters like 'source' or 'project' affect the processing, so it meets the baseline but doesn't enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: scanning free-form text for specific event types (milestones, decisions, alerts, strategic events) and processing them through a pipeline. It uses specific verbs like 'scans,' 'detects,' and 'passes through,' making the purpose explicit. However, it doesn't differentiate from sibling tools like 'save_to_synapse' or 'search,' which might have overlapping text-processing functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'save_to_synapse' (which might save processed data) or 'search' (which might query processed events), nor does it specify prerequisites or exclusions. Usage is implied from the description but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Deterministically retrieves past context and decisions. Essential for multi-session agent logic. Call before responding when prior context, preferences, or decisions may exist.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum memories to return (1–50, default: 10). | |
| query | Yes | What to recall — natural language query for memory retrieval. | |
| agent_id | No | Agent identifier to scope memory recall. Defaults to "default". |
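The `limit` bounds from the table (1–50, default 10) can be enforced client-side before the call goes out. A hypothetical validator, not part of the server:

```python
def build_recall_args(query: str, limit: int = 10,
                      agent_id: str = "default") -> dict:
    # The schema bounds limit to 1..50 (default 10); fail fast on bad values.
    if not 1 <= limit <= 50:
        raise ValueError(f"limit must be in 1..50, got {limit}")
    return {"query": query, "limit": limit, "agent_id": agent_id}

print(build_recall_args("user's preferred output format"))
```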
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the tool being 'deterministic' and 'essential for multi-session agent logic,' which helps understand its reliability and use case. However, it lacks details on permissions, rate limits, error handling, or what the return format looks like (e.g., structured memories vs. raw text), leaving gaps for a tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with three sentences that each earn their place: the first defines the core action, the second explains its importance, and the third gives usage timing. There's zero waste or redundancy, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (memory retrieval with 3 parameters) and lack of annotations and output schema, the description is adequate but has clear gaps. It covers purpose and usage context well, but without annotations, it should ideally disclose more about behavioral traits like response format or limitations. The description is complete enough for basic understanding but falls short of fully compensating for missing structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't explain query formatting or agent_id implications). Baseline score of 3 is appropriate as the schema does the heavy lifting, but no extra value is added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieves past context and decisions') and identifies the resource ('past context and decisions'). It distinguishes from siblings like 'search' by focusing on deterministic retrieval of historical data rather than general searching. However, it doesn't explicitly differentiate from 'process_text' or 'save_to_synapse' in terms of memory vs. processing/storage operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool ('Call before responding when prior context, preferences, or decisions may exist') and highlights its role in 'multi-session agent logic.' It implies usage for memory retrieval scenarios but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools like 'search' for non-deterministic queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_to_synapse (Grade: B)
Persists user preferences, task progress, and facts with Zero-Knowledge encryption. Content passes through PII/secret redaction, intent validation, and deduplication before storage.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | Tags for categorization. | |
| type | No | Event type: [MILESTONE], [DECISION], [ALERT], [AUTO-STRAT], [AUTO-OP], [AUTO-INSIGHT], [AUTO-DECISION], [AUTO-CONTEXT], [MANUAL]. | |
| content | Yes | The memory content to store securely. | |
| project | No | Project identifier (e.g., SYNAPSE_LAYER). | |
| agent_id | No | Agent identifier for memory isolation. Defaults to "default". | |
| importance | No | Importance level 1–5 (default: 3). |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about encryption, redaction, validation, and deduplication, which are behavioral traits beyond basic storage. However, it lacks details on permissions, rate limits, error handling, or what 'persists' entails (e.g., overwrite vs. append), leaving gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two dense sentences that efficiently cover the key aspects (purpose and processing steps) without fluff. It's front-loaded with the core action ('Persists...') and avoids redundancy. However, the second sentence packs four pipeline stages into one clause, which slightly hurts scanability and prevents a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by detailing processing behavior. However, for a 6-parameter mutation tool, it lacks information on return values, error cases, or operational constraints (e.g., storage limits). It's adequate but has clear gaps in context for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no specific parameter semantics beyond implying 'content' is the memory to store. It doesn't explain relationships between parameters (e.g., how 'type' interacts with processing) or provide examples, meeting the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Persists user preferences, task progress, and facts' with specific processing steps (encryption, redaction, validation, deduplication). It distinguishes from siblings like 'recall' (retrieval) and 'search' (querying) by focusing on storage. However, it doesn't explicitly name the sibling alternatives for differentiation, keeping it at a 4 instead of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions processing steps but doesn't specify scenarios, prerequisites, or exclusions (e.g., compared to 'process_text' or 'recall'). This lack of explicit usage context results in minimal guidance for an AI agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search (Grade: A)
Cross-agent memory search with full-text matching. Unlike recall (which scopes to a single agent), search queries the entire memory pool. Ideal for finding related context across agents, projects, or time periods.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum results to return (1–50, default: 20). | |
| query | Yes | Search query — natural language or keywords. | |
| agent_id | No | Optional: restrict search to a specific agent. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the behavioral trait of 'full-text matching' and scope ('entire memory pool'), but lacks details on permissions, rate limits, error conditions, or what the search returns (since no output schema exists).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three concise sentences with zero waste: the first explains the core functionality, the second the differentiation from recall, and the third provides usage guidance. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers purpose and usage but lacks details on return values, error handling, or operational constraints. It's complete enough for basic understanding but leaves gaps for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search', 'queries') and resources ('cross-agent memory', 'entire memory pool'), and explicitly distinguishes it from the sibling tool 'recall' by contrasting scopes ('single agent' vs 'entire memory pool').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('ideal for finding related context across agents, projects, or time periods') and when not to use it (vs 'recall' for single-agent scoping), naming the alternative tool directly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!