
Synapse Layer — Continuous Consciousness Infrastructure

Server Details

Persistent zero-knowledge memory for AI agents. AES-256-GCM encryption, PII redaction.

Status: Healthy
Transport: Streamable HTTP
Repository: SynapseLayer/synapse-layer
GitHub Stars: 2

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.5/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: health_check verifies system status, process_text extracts and processes events from text, recall retrieves past context for a single agent, save_to_synapse persists data with encryption, and search queries across all agents. The descriptions explicitly differentiate recall (single-agent) from search (cross-agent), eliminating ambiguity.

Naming Consistency: 5/5

All tools follow a consistent verb_noun or verb_preposition_noun pattern (health_check, process_text, recall, save_to_synapse, search). The naming is uniform with clear, descriptive terms that align with their functions, and there are no deviations in style or convention.

Tool Count: 5/5

With 5 tools, this server is well-scoped for its purpose of continuous consciousness infrastructure. Each tool serves a specific role in the memory and processing pipeline, and the count is neither too sparse nor excessive, fitting typical expectations for a focused domain.

Completeness: 4/5

The tool set covers core CRUD-like operations for memory management: process_text (create/ingest), recall and search (read), and save_to_synapse (update/persist). A minor gap is the lack of explicit deletion or modification tools for stored data, but agents can likely work around this given the focus on persistence and retrieval.

Available Tools

5 tools
health_check: B

Verify system availability and memory pipeline integrity. Returns database status, memory count, and engine version.

Parameters

No parameters
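For illustration, a `tools/call` request for `health_check` simply carries an empty arguments object. The JSON-RPC framing below follows the general MCP convention and is a sketch, not taken from this server's documentation:

```python
import json

# Sketch of an MCP "tools/call" request for health_check.
# The tool takes no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "health_check", "arguments": {}},
}

print(json.dumps(request, indent=2))
```

Per the description above, the response would report database status, memory count, and engine version.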

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool 'returns database status, memory count, and engine version', which gives some output context, but lacks details on permissions, rate limits, or potential side effects. This is adequate but has gaps for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and concise, consisting of two efficient sentences that directly state the tool's purpose and return values without any wasted words. Every sentence earns its place by providing essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (0 parameters, no output schema, no annotations), the description is minimally complete. It explains what the tool does and what it returns, but for a health check tool, it could benefit from more context on error handling or typical use cases. It's adequate but leaves room for improvement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already documents this fully. The description doesn't need to add parameter information, and it doesn't contradict the schema. A baseline of 4 is appropriate as the description compensates by not introducing confusion.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('verify system availability and memory pipeline integrity') and resources ('database status, memory count, and engine version'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'process_text' or 'search', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'search' or 'recall'. It implies usage for system health checks but doesn't specify contexts, prerequisites, or exclusions, leaving the agent without clear decision-making criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

process_text: B

Self-extracting memory engine. Scans free-form text for milestones, decisions, alerts, and strategic events. Detected events pass through the full pipeline: policy evaluation, PII/secret redaction, deduplication, and persistence.

Parameters
- text (required): Free-form text to scan for auto-save triggers.
- source (optional): Source identifier (default: mcp).
- project (optional): Force a specific project (e.g., SYNAPSE_LAYER, OFFLY). Auto-detected if omitted.
- agent_id (optional): Agent identifier. Defaults to "default".
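The arguments for a `process_text` call can be assembled from the schema above. `build_process_text_args` below is a hypothetical helper, not part of the server's API; its defaults mirror the documented ones (`source` falls back to "mcp", `agent_id` to "default", and `project` is auto-detected server-side when omitted):

```python
# Hypothetical helper: builds the arguments object for a process_text call.
def build_process_text_args(text, source="mcp", project=None, agent_id="default"):
    if not text:
        raise ValueError("text is required")
    args = {"text": text, "source": source, "agent_id": agent_id}
    if project is not None:
        # Force a specific project, e.g. "SYNAPSE_LAYER" or "OFFLY";
        # omit to let the server auto-detect it.
        args["project"] = project
    return args

args = build_process_text_args(
    "Decided to migrate the event store to Postgres; shipped milestone v2.1."
)
```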
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the processing pipeline (policy evaluation, PII/secret redaction, deduplication, persistence), which adds context beyond basic functionality. However, it lacks details on permissions, rate limits, error handling, or what 'persistence' entails (e.g., storage location or format), leaving gaps for a tool with significant processing implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded: a terse tagline followed by two sentences that efficiently outline the tool's purpose and pipeline. Every sentence adds value: one defines the scanning function, the next details the processing steps. It avoids redundancy and is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (processing text through multiple stages) and lack of annotations and output schema, the description is moderately complete. It covers the core functionality and pipeline but omits details on output format, error cases, or integration with sibling tools. This leaves the agent with incomplete context for effective use, especially without an output schema to clarify results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no specific parameter information beyond implying that 'text' is the primary input for scanning. It doesn't explain how parameters like 'source' or 'project' affect the processing, so it meets the baseline but doesn't enhance understanding beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: scanning free-form text for specific event types (milestones, decisions, alerts, strategic events) and processing them through a pipeline. It uses specific verbs like 'scans,' 'detects,' and 'passes through,' making the purpose explicit. However, it doesn't differentiate from sibling tools like 'save_to_synapse' or 'search,' which might have overlapping text-processing functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'save_to_synapse' (which might save processed data) or 'search' (which might query processed events), nor does it specify prerequisites or exclusions. Usage is implied from the description but not explicitly stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Deterministically retrieves past context and decisions. Essential for multi-session agent logic. Call before responding when prior context, preferences, or decisions may exist.

Parameters
- query (required): What to recall; natural language query for memory retrieval.
- limit (optional): Maximum memories to return (1–50, default: 10).
- agent_id (optional): Agent identifier to scope memory recall. Defaults to "default".
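As a sketch, the `limit` bound (1–50) from the schema can be enforced client-side before the call goes out. `build_recall_args` is a hypothetical helper, not part of the server's API:

```python
# Hypothetical helper: validates and assembles arguments for a recall call.
def build_recall_args(query, limit=10, agent_id="default"):
    if not query:
        raise ValueError("query is required")
    if not 1 <= limit <= 50:
        # Mirrors the schema constraint on limit (1-50, default 10).
        raise ValueError("limit must be between 1 and 50")
    return {"query": query, "limit": limit, "agent_id": agent_id}

args = build_recall_args("What stack did we pick for the event store?", limit=5)
```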
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the tool being 'deterministic' and 'essential for multi-session agent logic,' which helps understand its reliability and use case. However, it lacks details on permissions, rate limits, error handling, or what the return format looks like (e.g., structured memories vs. raw text), leaving gaps for a tool with no output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, with three sentences that each earn their place: the first defines the core action, the second explains its importance, and the third gives usage timing. There's zero waste or redundancy, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (memory retrieval with 3 parameters) and lack of annotations and output schema, the description is adequate but has clear gaps. It covers purpose and usage context well, but without annotations, it should ideally disclose more about behavioral traits like response format or limitations. The description is complete enough for basic understanding but falls short of fully compensating for missing structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what the schema provides (e.g., it doesn't explain query formatting or agent_id implications). Baseline score of 3 is appropriate as the schema does the heavy lifting, but no extra value is added.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('retrieves') and identifies the resource ('past context and decisions'). It distinguishes itself from siblings like 'search' by focusing on deterministic retrieval of historical data rather than general searching. However, it doesn't explicitly differentiate from 'process_text' or 'save_to_synapse' in terms of memory vs. processing/storage operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('Call before responding when prior context, preferences, or decisions may exist') and highlights its role in 'multi-session agent logic.' It implies usage for memory retrieval scenarios but doesn't explicitly state when not to use it or name specific alternatives among the sibling tools like 'search' for non-deterministic queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_to_synapse: B

Persists user preferences, task progress, and facts with Zero-Knowledge encryption. Content passes through PII/secret redaction, intent validation, and deduplication before storage.

Parameters
- content (required): The memory content to store securely.
- type (optional): Event type: [MILESTONE], [DECISION], [ALERT], [AUTO-STRAT], [AUTO-OP], [AUTO-INSIGHT], [AUTO-DECISION], [AUTO-CONTEXT], [MANUAL].
- tags (optional): Tags for categorization.
- project (optional): Project identifier (e.g., SYNAPSE_LAYER).
- agent_id (optional): Agent identifier for memory isolation. Defaults to "default".
- importance (optional): Importance level 1–5 (default: 3).
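The constraints above (the closed set of event types, the 1–5 importance range) can likewise be checked client-side. `build_save_args` below is a hypothetical helper sketched from the schema, not part of the server's API:

```python
# Event types accepted by the "type" parameter, per the schema above.
VALID_TYPES = {
    "[MILESTONE]", "[DECISION]", "[ALERT]", "[AUTO-STRAT]", "[AUTO-OP]",
    "[AUTO-INSIGHT]", "[AUTO-DECISION]", "[AUTO-CONTEXT]", "[MANUAL]",
}

# Hypothetical helper: validates and assembles arguments for save_to_synapse.
def build_save_args(content, type=None, tags=None, project=None,
                    agent_id="default", importance=3):
    if not content:
        raise ValueError("content is required")
    if not 1 <= importance <= 5:
        raise ValueError("importance must be between 1 and 5")
    if type is not None and type not in VALID_TYPES:
        raise ValueError(f"unknown event type: {type}")
    args = {"content": content, "agent_id": agent_id, "importance": importance}
    if type is not None:
        args["type"] = type
    if tags is not None:
        args["tags"] = tags
    if project is not None:
        args["project"] = project
    return args

args = build_save_args("User prefers concise answers.", type="[DECISION]",
                       tags=["preferences"], importance=4)
```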
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about encryption, redaction, validation, and deduplication, which are behavioral traits beyond basic storage. However, it lacks details on permissions, rate limits, error handling, or what 'persists' entails (e.g., overwrite vs. append), leaving gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two dense sentences that efficiently cover the key aspects (purpose and processing steps) without fluff. It's front-loaded with the core action ('Persists...') and avoids redundancy. However, it could be slightly more structured for clarity, preventing a perfect score.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description partially compensates by detailing processing behavior. However, for a 6-parameter mutation tool, it lacks information on return values, error cases, or operational constraints (e.g., storage limits). It's adequate but has clear gaps in context for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no specific parameter semantics beyond implying 'content' is the memory to store. It doesn't explain relationships between parameters (e.g., how 'type' interacts with processing) or provide examples, meeting the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Persists user preferences, task progress, and facts' with specific processing steps (encryption, redaction, validation, deduplication). It distinguishes from siblings like 'recall' (retrieval) and 'search' (querying) by focusing on storage. However, it doesn't explicitly name the sibling alternatives for differentiation, keeping it at a 4 instead of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions processing steps but doesn't specify scenarios, prerequisites, or exclusions (e.g., compared to 'process_text' or 'recall'). This lack of explicit usage context results in minimal guidance for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
