Glama
Ownership verified

Server Details

Persistent memory for AI coding agents. Store coding standards, architecture decisions, and project context across sessions with AES-256 encryption.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average score: 3.4/5 across all 8 tools.

Server Coherence: A
Disambiguation: 4/5

Most tools have distinct purposes, but 'query_knowledge' and 'search_knowledge' could cause confusion as both involve searching knowledge, though their descriptions clarify that one uses AI and the other uses keyword matching. Other tools like 'add_knowledge', 'get_topic', and 'list_topics' are clearly differentiated.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case, such as 'add_knowledge', 'capture_session', 'get_topic', 'list_sessions', 'list_topics', 'query_knowledge', 'recall_memory', and 'search_knowledge'. This uniformity makes the tool set predictable and easy to understand.

Tool Count: 5/5

With 8 tools, the count is well-scoped for a knowledge and memory management server. It covers core operations like adding, retrieving, listing, and searching knowledge and sessions, without being overly complex or too sparse for the domain.

Completeness: 4/5

The tool set provides good coverage for knowledge base and session memory management, including CRUD-like operations (add, get, list, search). A minor gap is the lack of update or delete tools for knowledge or sessions, which agents might need to work around, but core workflows are supported.

Available Tools

8 tools
add_knowledge (B)

Write new knowledge to your base. Store facts, decisions, patterns, or any information your AI should remember.

Parameters (JSON Schema)
  topic (optional): Topic to file under (auto-detected if omitted)
  content (required): The knowledge to store
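Since this is a standard MCP tool, an agent invokes it with a `tools/call` JSON-RPC request. A minimal Python sketch of that payload follows; all argument values are illustrative, and the topic could equally be omitted to trigger auto-detection:

```python
import json

# Sketch of the tools/call payload for add_knowledge (MCP JSON-RPC 2.0).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_knowledge",
        "arguments": {
            # "topic" is optional; omit it and the server auto-detects one.
            "topic": "architecture",
            "content": "All services talk to PostgreSQL through pgbouncer.",
        },
    },
}
print(json.dumps(request, indent=2))
```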
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a write operation ('Write new knowledge'), but fails to mention side effects, persistence guarantees, duplicate handling, or that the topic parameter is optional (auto-detected) as noted in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with no filler. It is front-loaded with the core action ('Write new knowledge') followed by supportive examples, with every sentence earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple two-parameter schema and lack of output schema, the description is minimally adequate. However, for a tool with seven siblings in a knowledge management ecosystem, it lacks context on how this operation relates to the broader workflow (e.g., how stored knowledge is retrieved).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline score. The description adds illustrative examples of content types ('facts, decisions, patterns') but does not add semantic meaning beyond what the schema already provides for the parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Write') and resource ('knowledge to your base') and clarifies the content scope ('facts, decisions, patterns'). However, it does not explicitly distinguish this from the sibling 'capture_session' tool, which may also involve storing information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what to store but provides no guidance on when to use this tool versus siblings like 'capture_session', 'query_knowledge', or 'recall_memory'. There are no exclusions, prerequisites, or scenarios mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

capture_session (A)

Capture an AI session summary into persistent memory. Store what you learned, decided, or built during this session. Requires Memory plan.

Parameters (JSON Schema)
  tags (optional): Tags for categorization
  tool (optional): AI tool used (e.g. claude-code, cursor, chatgpt)
  project (optional): Project name or path
  summary (required): Compressed summary of the session: key decisions, patterns found, problems solved
  observations (optional): Detailed observations as JSON string
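A sketch of plausible arguments for this tool, assuming standard MCP semantics. Every value is an example, and the list type for `tags` is an assumption the schema excerpt above does not state:

```python
# Illustrative arguments for capture_session; every value is an example.
# NOTE: the list type for "tags" is assumed, and "observations" must be
# a JSON *string*, not an object, per the schema.
arguments = {
    "summary": "Refactored auth middleware; decided to keep JWT over sessions.",
    "tool": "claude-code",        # optional: AI tool that ran the session
    "project": "api-gateway",     # optional: project name or path
    "tags": ["auth", "refactor"],
    "observations": '{"files_touched": 7}',
}
```

Remember the server rejects this call without the Memory plan.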
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully communicates the authentication requirement ('Requires Memory plan') and implies persistence ('persistent memory'), but fails to disclose mutation characteristics like idempotency, overwrite behavior, or storage limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The three-sentence structure is tightly optimized: sentence 1 defines the operation, sentence 2 clarifies content semantics, and sentence 3 states critical prerequisites. No redundancy exists; every sentence earns its place without wasting tokens.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and lack of output schema, the description adequately covers the tool's intent and prerequisites. However, for a write-operation tool with zero annotations, it should disclose more behavioral traits (conflict resolution, rate limits) and explicitly position itself against the similar add_knowledge sibling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage across all 5 parameters, the schema itself documents inputs thoroughly. The description adds semantic context for the 'summary' parameter by suggesting content types ('what you learned, decided, or built'), but does not need to compensate for schema gaps, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Capture') and resource ('AI session summary') being operated on, targeting 'persistent memory'. It effectively distinguishes from siblings like add_knowledge by emphasizing 'session' context and temporal scope ('during this session'), though it could explicitly contrast with the general knowledge storage sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context through 'Store what you learned, decided, or built during this session', suggesting when to invoke the tool. It also states the prerequisite 'Requires Memory plan'. However, it lacks explicit guidance on when NOT to use this versus add_knowledge or exclusions for incomplete sessions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_topic (B)

Retrieve all knowledge entries for a specific topic.

Parameters (JSON Schema)
  topic (required): Topic name to retrieve
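With a single required parameter, the call shape is trivial; a hedged sketch of the full `tools/call` payload (topic value illustrative):

```python
# Sketch of the tools/call payload for get_topic.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_topic", "arguments": {"topic": "architecture"}},
}
```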
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Retrieve' suggests a read-only operation, the description fails to disclose what happens if the topic doesn't exist (error vs empty result), whether the operation is idempotent, or the structure of the returned data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of eight words with no redundancy. It is front-loaded with the action verb and immediately conveys the core purpose without extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter retrieval tool with 100% schema coverage and no output schema, the description is minimally sufficient. However, given the presence of multiple knowledge-related siblings, the lack of usage context and behavioral details leaves gaps that would force the agent to guess or experiment.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the parameter 'topic' is already well-documented as 'Topic name to retrieve'. The description adds minimal semantic value beyond the schema, merely referencing 'a specific topic' without elaborating on format constraints, valid values, or lookup behavior (exact match vs case-insensitive).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Retrieve') and clearly identifies the resource ('all knowledge entries') and scope ('for a specific topic'). It implicitly distinguishes from siblings like 'search_knowledge' by emphasizing 'all' entries for a single topic rather than filtered results, though it could be more explicit about this distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus siblings like 'search_knowledge', 'query_knowledge', or 'list_topics'. The agent cannot determine from this description whether to use this for finding topics, searching within topics, or retrieving complete topic contents.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_sessions (A)

List recent captured session memories. Requires Memory plan.

Parameters (JSON Schema)
  tool (optional): Filter by AI tool
  limit (optional): Max results (default 10)
  project (optional): Filter by project
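A sketch of a filtered listing call; the project name is illustrative, and `limit` is omitted here so the server's schema default of 10 applies:

```python
# Sketch of a tools/call request for list_sessions; "limit" and "tool"
# are omitted, so schema defaults apply (limit defaults to 10).
request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "tools/call",
    "params": {
        "name": "list_sessions",
        "arguments": {
            "project": "api-gateway",  # optional filter
        },
    },
}
```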
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It adds the 'Memory plan' auth constraint and 'recent' temporal filtering, but fails to disclose safety (read-only implied but not stated), return format, or pagination behavior beyond the 'limit' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero redundancy. The constraint 'Requires Memory plan' is placed second but is essential information. No filler words or repetitive phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 3 simple optional parameters and no output schema, the description provides minimal viable context. It identifies the resource type but omits return structure details, field descriptions for session objects, and explicit safety declarations that would be necessary given the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. The description adds no parameter-specific context (e.g., expected format for 'project', valid values for 'tool'), but the schema adequately documents the three optional filters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('List') and resource ('captured session memories'), with temporal scope ('recent'). However, it does not explicitly differentiate from sibling tools like 'recall_memory' or 'list_topics', leaving the agent to infer the distinction based on naming alone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a critical prerequisite ('Requires Memory plan') but lacks explicit guidance on when to choose this over siblings like 'recall_memory' or 'query_knowledge'. The 'recent' qualifier implies a browsing use case, but this is implicit rather than stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_topics (B)

List all knowledge topics in your base.

Parameters (JSON Schema): none
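With no parameters, the call reduces to an empty `arguments` object, as in this sketch:

```python
# list_topics takes no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {"name": "list_topics", "arguments": {}},
}
```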

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Fails to disclose read-only safety, pagination behavior, or what 'base' refers to. 'All' implies unbounded retrieval but doesn't warn about performance implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, tight sentence with zero redundancy. Every word earns its place: verb, scope, resource, and location are all specified efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter tool but lacks return value description (no output schema exists to compensate). Relationship to 'search_knowledge' and 'query_knowledge' remains ambiguous given the sibling set.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present per schema. With no parameters to document, the baseline score applies. Description correctly implies no filtering capabilities are available.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('List'), resource ('knowledge topics'), and scope ('all'/'in your base'). The 'all' distinguishes it from sibling 'get_topic', and 'topics' distinguishes from 'list_sessions', though explicit differentiation from 'search_knowledge'/'query_knowledge' is missing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus siblings like 'search_knowledge', 'query_knowledge', or 'get_topic'. Does not mention that this retrieves an unfiltered enumeration or warn about potential volume.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_knowledge (C)

Search your knowledge base using AI. Ask a natural language question and get an answer based on your stored knowledge.

Parameters (JSON Schema)
  query (required): Natural language question or search term
  topic (optional): Filter by specific topic
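A sketch of a natural-language query, optionally scoped to one topic; both values are illustrative:

```python
# Sketch: natural-language question, scoped to a single topic.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "query_knowledge",
        "arguments": {
            "query": "What database do we use, and why?",
            "topic": "architecture",  # optional; omit to query the whole base
        },
    },
}
```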
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries full behavioral disclosure burden. While it mentions 'using AI', it omits critical details: whether results are synthesized vs. raw retrieval, caching behavior, rate limits, authentication requirements, or what happens when the knowledge base lacks relevant information.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences totaling 20 words. Information is front-loaded with the core action ('Search your knowledge base'). No redundant phrases or tautology. Could benefit from one additional sentence covering sibling differentiation without harming conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, yet the description only vaguely mentions 'get an answer' without clarifying response format, confidence scores, or citation structure. With zero annotations and high sibling overlap (particularly 'search_knowledge'), the description inadequately prepares the agent for tool selection and response handling.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both query and topic are well-described in the schema), establishing a baseline of 3. The description adds 'Ask a natural language question' which aligns with but does not extend beyond the schema's 'Natural language question or search term'. No additional context on topic filtering syntax or query optimization is provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs 'Search' and 'get an answer' with clear resource 'knowledge base'. It specifies AI/natural language processing, which distinguishes the mechanism. However, it fails to differentiate from the sibling tool 'search_knowledge', leaving ambiguity about which retrieval tool to choose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Zero guidance provided on when to use this tool versus siblings like 'search_knowledge', 'recall_memory', or 'get_topic'. No mention of prerequisites, optimal use cases, or when natural language querying is preferred over structured search.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall_memory (A)

Search across all captured session memories using AI. Ask what you worked on, what decisions were made, or what patterns were found. Requires Memory plan.

Parameters (JSON Schema)
  tool (optional): Filter by AI tool
  limit (optional): Max results (default 20)
  query (required): Natural language question about past sessions
  project (optional): Filter by project
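A sketch of plausible arguments for this tool; all values are illustrative, and `limit` here overrides the schema default of 20:

```python
# Illustrative recall_memory arguments; all filters are optional,
# and "limit" overrides the schema default of 20.
arguments = {
    "query": "What did we decide about the auth refactor?",
    "limit": 5,
    "tool": "claude-code",
    "project": "api-gateway",
}
```

As with the other session tools, the Memory plan is a prerequisite.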
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the 'Memory plan' requirement (critical behavioral constraint) and indicates semantic search behavior ('using AI'), but omits other behavioral details like return format, pagination limits, or read-only safety guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste: purpose declaration, usage examples, and requirement constraint. Every sentence earns its place by adding distinct value not present in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter search tool without output schema or annotations, the description is nearly complete. It covers the key behavioral constraint (Memory plan) and search semantics. Minor gap: lacks description of return value structure, though this is partially mitigated by the absence of an output schema to reference.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline of 3. The description reinforces that the query is question-based ('Ask what you worked on') but does not add semantic details beyond what the schema already provides for the tool, limit, or project parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search across all captured session memories') and distinguishes from siblings by emphasizing 'AI' semantic search (vs. lexical search in search_knowledge) and 'memories' (vs. 'knowledge' in knowledge-related siblings). It also differentiates from capture_session (write) and list_sessions (enumeration without AI).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides helpful usage examples ('Ask what you worked on...'), implying natural language queries, but fails to explicitly state when to choose this over sibling tools like search_knowledge, query_knowledge, or list_sessions. No exclusions or prerequisites beyond the Memory plan are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_knowledge (A)

Search knowledge entries by keyword (text match). Use query_knowledge for AI-powered answers.

Parameters (JSON Schema)
  query (required): Search term
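The routing rule in the description ('Use query_knowledge for AI-powered answers') can be made concrete with a toy heuristic. This is our own illustration of how an agent harness might choose between the two tools, not server behavior:

```python
def pick_search_tool(query: str) -> str:
    """Toy routing heuristic (ours, not the server's): send question-like
    queries to query_knowledge, bare keywords to search_knowledge."""
    question_like = query.strip().endswith("?") or len(query.split()) > 4
    return "query_knowledge" if question_like else "search_knowledge"

print(pick_search_tool("encryption"))                # search_knowledge
print(pick_search_tool("Why did we pick AES-256?"))  # query_knowledge
```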
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It adds the critical behavioral trait that this is 'text match' (not semantic), but omits other key behaviors: return format, result pagination, case sensitivity, and exact vs. partial matching rules.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence defines function; second provides alternative routing. Front-loaded with the action and no filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter search tool. Covers tool selection rationale and basic invocation needs. Deducted one point because, lacking both annotations and output schema, it could briefly indicate what the search returns (e.g., entry list vs. single result).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage ('Search term'). Description adds 'keyword (text match)' context which reinforces the parameter's purpose, but does not add syntax details, examples, or constraints beyond what the schema already provides. Baseline 3 appropriate for high-coverage schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool 'Search[es] knowledge entries by keyword (text match)'—clear verb, resource, and method. The parenthetical '(text match)' precisely distinguishes it from the semantic search sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly directs users to the alternative: 'Use query_knowledge for AI-powered answers.' This creates a clear decision boundary between keyword search (this tool) and AI-powered search (sibling).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
