Glama
Ownership verified

Server Details

Universal memory runtime for AI agents: episodic, semantic, and procedural memory with hybrid retrieval and spaced-repetition decay.
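The summary above mentions spaced-repetition decay; how Pensyve actually implements it is not documented on this page, but the general idea can be sketched as exponential decay of a memory's retrieval weight over time since last access. The function name and the 7-day half-life below are illustrative assumptions, not the server's real parameters:

```python
def decayed_weight(base_weight: float, hours_since_access: float,
                   half_life_hours: float = 168.0) -> float:
    """Exponentially decay a memory's retrieval weight.

    A 7-day (168-hour) half-life is an arbitrary illustrative choice.
    """
    return base_weight * 0.5 ** (hours_since_access / half_life_hours)

# A memory untouched for exactly one half-life scores half its base weight.
w = decayed_weight(1.0, 168.0)
```

Refreshing a memory on access would reset `hours_since_access` to zero, which is what keeps frequently recalled memories ranked high under such a scheme.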

Status: Unhealthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 3.6/5 across 9 of 9 tools scored.

Server Coherence: A
Disambiguation: 4/5

Tools have clearly distinct purposes with good separation between episodic memory (observe/episode_start/end) and semantic memory (remember/recall). The distinction between recall (semantic search) and inspect (entity-specific browsing) is clear from descriptions, though both retrieve memories. Account (billing) vs status (technical) are well differentiated.

Naming Consistency: 3/5

Uses consistent 'pensyve_' prefix but mixes suffix patterns: simple nouns (account, status), simple verbs (forget, inspect, observe, recall, remember), and compound noun_verb (episode_start, episode_end). While readable, the lack of a unified convention (all verbs or all nouns) creates minor inconsistency.

Tool Count: 5/5

Nine tools is well-scoped for a memory management domain, covering account management (2), episode lifecycle (3), and memory CRUD operations (4). Each tool earns its place without redundancy or bloat.

Completeness: 4/5

Covers the core memory lifecycle well: episodic tracking, semantic storage, retrieval via search and entity lookup, and deletion. Minor gaps include no explicit update operation for memories and no episode listing capability, but agents can work around these by re-remembering or managing episode IDs externally.
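The re-remembering workaround mentioned above can be made concrete with a toy in-memory model. The store and the `update_fact` helper are hypothetical illustrations of the forget-then-remember pattern, not part of Pensyve's API:

```python
# Toy stand-in for the server's semantic memory store.
store: dict[str, list[str]] = {"pensyve-cloud": ["uses Postgres"]}

def update_fact(entity: str, old: str, new: str) -> None:
    """Emulate an update: forget the stale fact, remember the new one."""
    facts = store.setdefault(entity, [])
    if old in facts:
        facts.remove(old)   # analogous to a scoped pensyve_forget
    facts.append(new)       # analogous to pensyve_remember

update_fact("pensyve-cloud", "uses Postgres", "uses SQLite")
```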

Available Tools (9 tools)
pensyve_account (grade A)

Get account information including plan, usage, and quota. Returns local mode info when not connected to a remote server.

Parameters (JSON Schema): none

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the local/offline mode behavior, but does not disclose whether the operation requires authentication, if it makes network requests, or side effects like caching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with no redundancy. The first states the core function and data returned; the second provides the behavioral context about local mode. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity (zero parameters) and lack of output schema, the description adequately covers what information is retrieved (plan, usage, quota) and the offline behavior. It could be improved by mentioning authentication requirements or response format, but it meets the minimum viable threshold for completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, establishing a baseline score of 4. The description appropriately does not attempt to invent parameter semantics where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('account information'), and explicitly lists the data categories returned (plan, usage, quota). It clearly distinguishes this from sibling tools which focus on episodes and memory operations (forget, recall, remember, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context by noting it 'Returns local mode info when not connected to a remote server,' which hints at when to expect limited data. However, it lacks explicit guidance on when to prefer this over the sibling 'status' tool or prerequisites for remote data retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_episode_end (grade A)

Close an episode and extract any memories from it. Returns the count of memories created.

Parameters (JSON Schema):
- outcome (optional): Outcome of the episode: "success", "failure", or "partial".
- episode_id (required): The episode ID returned by `pensyve_episode_start`.
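The episode_id handoff documented in the schema implies a start/end lifecycle. A sketch of the two MCP tools/call request bodies an agent would send (the JSON-RPC id and framing are omitted, and the argument values are invented; only the tool and parameter names come from the schemas on this page):

```python
def tool_call(name: str, arguments: dict) -> dict:
    """Minimal MCP tools/call request body (JSON-RPC framing omitted)."""
    return {"method": "tools/call",
            "params": {"name": name, "arguments": arguments}}

start = tool_call("pensyve_episode_start",
                  {"participants": ["claude-code", "pensyve-cloud"]})
# "ep_123" is a placeholder for the episode_id that episode_start returns.
end = tool_call("pensyve_episode_end",
                {"episode_id": "ep_123", "outcome": "success"})
```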
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains that memories are 'extracted' and counted, but fails to clarify critical behavioral traits: whether 'extract' implies deletion from the episode, if closing is permanent/reversible, or what happens to the episode data post-closure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the action (close + extract), the second states the return value. Every word earns its place and the core action is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool with complete schema coverage, the description adequately covers the core operation and return value (compensating for missing output schema). However, given the lack of annotations and the destructive/transitional nature of 'closing' and 'extracting', it should disclose side effects or persistence guarantees to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both episode_id and outcome fully documented in the schema (including the episode_start reference). The description adds no parameter-specific semantics, meeting the baseline expectation when the schema is self-sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Close[s] an episode and extract[s] any memories' with specific verbs and resources. It implicitly distinguishes from pensyve_episode_start (start vs end) and pensyve_remember/observe (manual vs automatic extraction), though it could explicitly mention the episode_start pairing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (use when finishing an episode to extract memories), but lacks explicit 'when to use' guidance or named alternatives. The input schema compensates by noting episode_id comes from pensyve_episode_start, but the description text itself provides no comparative guidance against siblings like pensyve_forget or pensyve_inspect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_episode_start (grade A)

Begin tracking an interaction episode with named participants. Returns the episode_id needed to close the episode.

Parameters (JSON Schema):
- participants (required): Entity names of the participants in this episode.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adequately discloses that the tool initiates tracking and returns a required identifier, but omits mutation details: whether episodes can overlap, failure modes if already in an episode, persistence guarantees, or side effects on the memory system implied by siblings (forget, recall, etc.).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two focused sentences with zero redundancy. The first states the operation and required input context; the second states the critical return value and its purpose. Front-loaded structure puts the action first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensively covers the single parameter via high schema coverage and compensates for the missing output schema by explicitly documenting the return value (episode_id) and its functional purpose. Adequate for a simple initialization tool, though error conditions or lifecycle constraints could augment completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description references 'named participants,' which loosely maps to the parameter, but adds no syntax constraints, format examples, or semantic clarification beyond the schema's 'Entity names' description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Uses specific action verb 'Begin tracking' with clear resource 'interaction episode' and scope 'with named participants'. The mention of returning the ID 'needed to close' effectively distinguishes this from sibling pensyve_episode_end and contextualizes it within a lifecycle workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage context by stating the return value is 'needed to close the episode,' hinting at the relationship with pensyve_episode_end. However, lacks explicit 'when to use' guidance, prerequisites, or direct comparison to siblings like pensyve_observe versus starting an episode.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_forget (grade B)

Delete all memories associated with an entity. Returns the count of forgotten memories.

Parameters (JSON Schema):
- entity (required): The entity whose memories to remove.
- hard_delete (optional): If true, permanently deletes rather than soft-deleting.
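The hard_delete flag implies that the default is a soft delete. A toy model of the difference, under the assumption (which the tool's description leaves open) that soft-deleted memories are hidden rather than destroyed:

```python
class MemoryStore:
    """Toy stand-in for the server's per-entity memory store."""

    def __init__(self) -> None:
        self.memories = {"alice": [{"fact": "likes tea", "deleted": False}]}

    def forget(self, entity: str, hard_delete: bool = False) -> int:
        """Return the count of forgotten memories, like pensyve_forget."""
        items = self.memories.get(entity, [])
        count = sum(1 for m in items if not m["deleted"])
        if hard_delete:
            self.memories[entity] = []   # destroyed, unrecoverable
        else:
            for m in items:
                m["deleted"] = True      # hidden but (assumed) recoverable
        return count

s = MemoryStore()
n = s.forget("alice")  # soft delete by default
```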
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Mentions return value (count) and operation scope, but lacks disclosure of soft-delete behavior implications (recoverability?), entity validation rules, or permanence warnings. Since annotations are absent, description carries full burden and leaves gaps regarding mutation safety.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: action front-loaded, return value second. Efficient structure with every sentence earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 2-parameter tool: covers operation and return value despite absent output schema. However, lacks behavioral context critical for a destructive operation—specifically soft-delete recovery semantics and entity identification patterns—which should be present given no annotations provide safety hints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('permanently deletes rather than soft-deleting'). Description text doesn't add param-specific syntax or format details beyond well-documented schema, warranting baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Delete' with resource 'memories' and scope 'associated with an entity'. Differentiates from recall/remember siblings implicitly via 'Delete', though explicit contrast with pensyve_remember (create) and pensyve_recall (retrieve) would strengthen clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use hard_delete vs soft-delete, or when to prefer this over other memory tools. Simply states functionality without decision criteria or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_inspect (grade B)

View all memories stored for an entity, optionally filtered by type. Returns an array of memory objects with stats.

Parameters (JSON Schema):
- limit (optional): Maximum number of memories to return.
- entity (required): The entity to inspect.
- memory_type (optional): Memory type filter: "episodic", "semantic", or "procedural".
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries burden of explaining behavior. It adds valuable return-format context ('array of memory objects with stats') not in structured fields, but omits safety profile confirmation, permission requirements, or pagination behavior beyond the 'limit' parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences. First covers action, resource and filtering; second covers return shape. Zero waste, front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 3-parameter read-like tool. Compensates for missing output schema by describing return value format ('array of memory objects with stats'), though could elaborate on what 'stats' entails. No annotations present to provide safety hints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description reinforces 'memory_type' as optional filtering but adds no syntax details or semantic nuance beyond what schema already provides for 'entity' or 'limit'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('View') and resource ('memories for an entity'), plus filtering scope ('optionally filtered by type'). Distinguishes from sibling 'recall' by emphasizing 'all memories' and return of 'stats', though explicit contrast is missing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus siblings like 'pensyve_recall' or 'pensyve_observe'. No mention of prerequisites or exclusions. Only parameter-level context ('optionally filtered') is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_observe (grade A)

Record an observation within an active episode. Captures what happened, who said it, and what it's about. Returns the stored episodic memory object.

Parameters (JSON Schema):
- content (required): The observation content (max 32KB).
- episode_id (required): Episode ID from `pensyve_episode_start`.
- about_entity (required): What the observation is about (e.g. "pensyve-cloud").
- content_type (optional): Content type: "text" (default), "code", or "tool_output".
- source_entity (required): Who made the observation (e.g. "claude-code").
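The schema caps content at 32KB and enumerates three content types. A sketch of validating those constraints client-side before calling the tool (the `build_observation` helper and the example values are hypothetical; the limits come from the schema above):

```python
MAX_CONTENT_BYTES = 32 * 1024  # "max 32KB" per the content parameter
CONTENT_TYPES = {"text", "code", "tool_output"}

def build_observation(episode_id: str, content: str, source_entity: str,
                      about_entity: str, content_type: str = "text") -> dict:
    """Assemble pensyve_observe arguments, rejecting invalid input early."""
    if len(content.encode("utf-8")) > MAX_CONTENT_BYTES:
        raise ValueError("content exceeds the 32KB limit")
    if content_type not in CONTENT_TYPES:
        raise ValueError(f"unknown content_type: {content_type}")
    return {"episode_id": episode_id, "content": content,
            "source_entity": source_entity, "about_entity": about_entity,
            "content_type": content_type}

obs = build_observation("ep_123", "Deploy succeeded on retry",
                        "claude-code", "pensyve-cloud")
```

Checking the byte length (not the character count) matters here, since multi-byte UTF-8 text can exceed a byte limit well before it reaches 32,768 characters.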
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Clarifies this creates episodic memory and maps parameters to natural language concepts ('what happened, who said it, what it's about'). However, omits safety traits (idempotency, failure modes, persistence guarantees) and doesn't clarify behavioral constraints beyond the episode scope.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, zero waste. Front-loaded with core action, middle sentence explains parameter semantics, final sentence clarifies return type. Every sentence earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation given rich schema (100% coverage), but lacks output structure details (no output schema exists) and omits explicit lifecycle prerequisites (dependency on pensyve_episode_start) in the description text itself, relying on schema field descriptions instead.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). Description adds value by mapping parameters to cohesive narrative roles ('what happened'→content, 'who said it'→source_entity, 'what it's about'→about_entity), providing conceptual framing that helps the agent understand how to populate fields semantically.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Record') + resource ('observation') + scope ('within an active episode'). Explicitly distinguishes from semantic-memory siblings (remember/recall) by specifying 'episodic memory object' and episode-bound context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies prerequisite ('active episode') but lacks explicit guidance on when to use this vs. pensyve_remember (semantic memory) or pensyve_inspect. No 'when-not-to-use' or alternative recommendations provided despite clear sibling differentiation in the ecosystem.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_recall (grade A)

Search memories by semantic similarity and text matching. Returns ranked results from episodic, semantic, and procedural memory.

Parameters (JSON Schema):
- limit (optional): Maximum number of results to return.
- query (required): The search query text.
- types (optional): Memory types to include ("episodic", "semantic", "procedural").
- entity (optional): Entity name to filter by.
- min_confidence (optional): Minimum confidence threshold (0.0–1.0). Memories below this are excluded.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes that results are 'ranked' and describes the search mechanism (semantic + text matching), but fails to clarify safety aspects like whether the operation is read-only, if it logs the query as an episode, or rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. It front-loads the core action ('Search memories') and immediately follows with the mechanism and return format. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 well-documented parameters (100% schema coverage) and no output schema, the description provides sufficient high-level context by stating the tool returns 'ranked results' and operates across three memory types. It adequately supports tool selection despite lacking an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline of 3. The description adds value by explaining that the search uses 'semantic similarity' (not just text matching), providing context for the 'query' parameter beyond the schema's 'search query text'. However, it does not elaborate on the 'entity' filter or 'min_confidence' threshold mechanics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches memories using 'semantic similarity and text matching' and specifies the scope covers 'episodic, semantic, and procedural' memory types. While it implies distinction from siblings like 'remember' (store) and 'forget' (delete) through the specific verb 'Search', it does not explicitly reference sibling tools or contrast use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the verb 'Search', indicating this tool is for retrieval rather than storage or deletion. However, it lacks explicit guidance on when to use this versus siblings like 'pensyve_inspect' (likely for direct retrieval by ID) or 'pensyve_remember', and provides no prerequisites or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_remember (grade B)

Store an explicit fact about an entity as a semantic memory. Returns the stored memory object.

Parameters (JSON Schema):
- fact (required): The fact to store.
- entity (required): The entity this fact is about.
- confidence (optional): Confidence level in [0.0, 1.0].
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It successfully discloses the return value ('stored memory object') and memory classification ('semantic'), but omits mutation safety details like idempotency, overwrite behavior, or persistence guarantees that annotations would typically cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first establishes purpose, second discloses return value. Perfectly front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with 100% schema coverage and no output schema, the description adequately compensates by specifying the return object. It leverages domain terminology ('semantic memory') that aligns with sibling tool naming conventions, though it could clarify persistence scope given the lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds the qualifier 'explicit' to the fact parameter and frames the operation within 'semantic memory,' but does not elaborate on parameter interactions (e.g., how confidence affects retrieval) beyond the schema's individual field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (store), resource (explicit fact about an entity), and domain context (semantic memory). It implies distinction from episodic memory siblings (episode_start/end) via the 'semantic' qualifier, though it doesn't explicitly contrast with 'observe' or 'recall'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus siblings like 'recall' (retrieval), 'forget' (deletion), or 'observe' (potentially implicit capture). The agent must infer from the verb 'store' that this is for explicit write operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pensyve_status (grade A)

Get connection status, namespace info, and memory statistics. Free — not metered.

Parameters (JSON Schema):
- entity (optional): Entity name to get stats for a specific entity.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Adds critical cost behavior ('Free — not metered') not inferable from schema. However, lacks explicit safety guarantees (read-only nature implied by 'Get' but not stated), rate limits, or return format details despite absent output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. Front-loaded with specific retrieval purpose, second sentence adds cost property that earns its place. No redundancy with structured fields. Excellent information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for low-complexity tool (single optional parameter, no nested objects). Lists specific return data categories compensating for absent output schema. Could enhance with return structure hint, but sufficient for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for the single 'entity' parameter. Description provides no parameter-specific guidance, but baseline 3 is appropriate since schema fully documents the optional filtering capability.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' and clearly enumerates three distinct data categories retrieved: connection status, namespace info, and memory statistics. Effectively distinguishes from operational siblings (remember, forget, episode_start) by focusing on diagnostic/system metadata.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides valuable cost guidance ('Free — not metered') implying safe for frequent polling, but lacks explicit when-to-use guidance versus sibling inspection tools like pensyve_inspect or pensyve_observe. Usage context is implied by the diagnostic nature but not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
