Glama

Server Details

Klaviyo MCP Pack — wraps the Klaviyo API for email marketing

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-klaviyo
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence: B
Disambiguation: 3/5

There is a clear split between general-purpose tools (ask_pipeworx, discover_tools, memory tools) and Klaviyo-specific tools. However, ask_pipeworx and discover_tools overlap in purpose (both help find information), and the memory tools are separate but not part of the Klaviyo domain, causing potential confusion about which tools to use for Klaviyo tasks.

Naming Consistency: 3/5

Tool names are inconsistent: Klaviyo tools follow a klaviyo_verb_noun pattern, but ask_pipeworx and discover_tools use different conventions. Memory tools (remember, recall, forget) are short verbs without a clear pattern. This mixed style reduces predictability.

Tool Count: 4/5

With 10 tools, the count is reasonable for a server that combines a general-purpose query tool with Klaviyo-specific operations. However, the inclusion of memory and catalog tools expands the scope beyond Klaviyo, making it slightly more than expected for a single-domain server.

Completeness: 3/5

The Klaviyo tools cover listing and getting campaigns, profiles, and lists, but lack write operations (create, update, delete) and operations for other resources like flows or segments. The general-purpose ask_pipeworx tool might fill gaps, but it is unclear if it covers missing Klaviyo actions. Thus, the tool surface is incomplete for full lifecycle management.

Available Tools

10 tools
ask_pipeworx (A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)

Name | Required | Description | Default
question | Yes | Your question or request in natural language |
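Since the tool takes a single natural-language parameter, a call to it reduces to a standard MCP tools/call request. The following is a minimal sketch of that request under JSON-RPC 2.0 framing; the request id is arbitrary and the Streamable HTTP transport details are handled by the client, not shown here.

```python
import json

# A minimal MCP "tools/call" request for ask_pipeworx (JSON-RPC 2.0 framing).
# The id is illustrative; the transport (Streamable HTTP) wraps this payload.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            "question": "What is the US trade deficit with China?",
        },
    },
}

print(json.dumps(request, indent=2))
```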
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full burden. It discloses that Pipeworx picks the right tool and fills arguments, which is key behavioral info. It does not mention rate limits, authentication needs, or error handling, but for a query tool with simple input, the description is sufficiently transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (3 sentences) and front-loaded with purpose. Every sentence adds value: first states the function, second explains how it works, third gives examples. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single param, no output schema, no nested objects), the description is nearly complete. It lacks details on response format or error cases, but for a natural language query tool, the description adequately covers what the agent needs to know.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter described as 'Your question or request in natural language'. The description adds context that the question should be in plain English and includes examples, which adds meaning beyond the schema. Baseline 3 is appropriate as schema already covers the param well.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool accepts natural language questions and returns answers from the best available data source, distinguishing it from sibling tools that are more specific (e.g., Klaviyo tools, discover_tools, recall/remember). The verb 'ask' and resource 'answer' are precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises to 'just describe what you need' and provides examples, which implies when to use (when you have a natural language request) and not to use (when you need a specific tool action). However, it does not explicitly list alternative tools or when not to use it, so it loses one point.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools (A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Maximum number of tools to return (default 20, max 50) |
query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") |
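The schema documents a default of 20 and a maximum of 50 for limit. A cautious client can clamp the value before building the call arguments; the sketch below is an assumed client-side helper (build_discover_args is not part of the server), not behavior the server itself guarantees.

```python
def build_discover_args(query, limit=None):
    """Build an arguments dict for discover_tools.

    Clamps limit to the documented maximum (50); omits it entirely
    when not given, letting the server apply its default of 20.
    """
    args = {"query": query}
    if limit is not None:
        args["limit"] = max(1, min(limit, 50))
    return args

# Over-large limits are clamped to the documented max of 50.
print(build_discover_args("find trade data between countries", limit=80))
```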
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must cover behavioral traits. It explains that it searches and returns relevant tools with names and descriptions, which is basic. However, it does not mention any side effects, authentication needs, or rate limits. Given no annotations, a score of 3 is appropriate as it conveys core behavior but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, each earning its place: purpose, what it returns, and when to use. Front-loaded with the key action. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and low complexity (2 params, no enums, no nested objects), the description is sufficient for an agent to use the tool correctly. It covers purpose, usage, and parameters; the only minor gap is that the shape of the returned results is not specified beyond 'names and descriptions'. Score 4.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value beyond the schema by explaining the query parameter with examples and noting default/max for limit. This elevates the score to 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search', 'Returns', 'Call this FIRST') and clearly identifies the resource ('Pipeworx tool catalog'). It distinguishes from sibling tools by stating it is for searching when you have 500+ tools available, which implies other tools are for specific operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Call this FIRST' and provides context (when you have 500+ tools available and need to find the right ones). This clearly guides when to use this tool before others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget (C)

Delete a stored memory by key.

Parameters (JSON Schema)

Name | Required | Description | Default
key | Yes | Memory key to delete |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description bears full burden. It does not disclose whether deletion is permanent, requires confirmation, or affects other data. The verb 'delete' implies destructive action, but no further context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Very concise, single sentence, front-loaded with action and resource. Could be improved by mentioning permanence or side effects.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple delete tool with one param and no output schema, the description is minimal. It lacks completeness about what happens after deletion (e.g., success indicator, idempotency).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with a single parameter 'key' described as 'Memory key to delete'. The description adds no further semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete) and the resource (stored memory by key). It distinguishes from sibling tools like 'remember' (store) and 'recall' (retrieve), though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like 'recall' or 'remember'. The description implies deletion but does not mention prerequisites or scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

klaviyo_get_campaign (B)

Get a campaign's full details by ID. Returns name, status, subject line, recipient list, performance stats, and send history.

Parameters (JSON Schema)

Name | Required | Description | Default
_apiKey | Yes | Klaviyo private API key |
campaign_id | Yes | Klaviyo campaign ID |
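The _apiKey parameter suggests the server forwards a Klaviyo private key on each call. As a rough orientation, the underlying Klaviyo v3 request plausibly looks like the sketch below; the endpoint path, the Klaviyo-API-Key authorization scheme, and the pinned revision date are assumptions based on Klaviyo's public API, not anything this server confirms.

```python
def build_klaviyo_campaign_request(campaign_id, api_key):
    """Construct (url, headers) for a Klaviyo v3 campaign lookup.

    The endpoint path and the revision pin are assumptions drawn from
    Klaviyo's public API docs, not taken from this MCP server.
    """
    url = "https://a.klaviyo.com/api/campaigns/" + campaign_id
    headers = {
        "Authorization": "Klaviyo-API-Key " + api_key,
        "revision": "2024-10-15",  # assumed API revision pin
        "accept": "application/json",
    }
    return url, headers

url, headers = build_klaviyo_campaign_request("ABC123", "pk_example")
```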
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states that full campaign details are returned but does not disclose behavior like rate limits, authentication details beyond the _apiKey parameter, or what happens if the ID is invalid. The description is adequate but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose. Efficient but could combine into one sentence without loss.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple retrieval tool with full schema coverage and no output schema, the description is adequate. It lacks any mention of return structure, but for a get-by-ID operation, the description covers the essential purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents both parameters. The description adds no further meaning beyond what the schema provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a single Klaviyo campaign by ID, with a verb (Get) and resource (campaign). However, it doesn't explicitly distinguish it from siblings like klaviyo_list_campaigns, which lists campaigns.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like klaviyo_list_campaigns or other campaign tools. The description does not mention context such as prerequisites (e.g., needing a campaign ID) or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

klaviyo_get_profile (B)

Get a contact's full profile by ID. Returns email, name, phone, custom properties, list memberships, and subscription status.

Parameters (JSON Schema)

Name | Required | Description | Default
_apiKey | Yes | Klaviyo private API key |
profile_id | Yes | Klaviyo profile ID |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description states that a full profile is returned, which adds context beyond the empty annotations. The annotations provide no hints (no readOnlyHint, no destructiveHint), so the description bears the full burden. It lacks details on response shape or pagination, but since no output schema exists, the description partially compensates. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two short sentences, front-loaded with action and resource. No fluff. Could arguably merge into one sentence, but clear and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Simple tool with 2 params, no nested objects, no output schema. The description adequately covers the purpose and enumerates the returned fields. However, there is no mention of error conditions. Acceptable given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both parameters described in schema). Description does not add any parameter-level detail beyond schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get'), the resource (a single contact profile), and the identifier ('by ID'). It distinguishes this tool from siblings like klaviyo_list_profiles (single vs. list). It does not explicitly contrast with other get tools (klaviyo_get_campaign), but that is sufficient given the resource types differ.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use vs alternatives (e.g., klaviyo_list_profiles for listing). No mention of prerequisites like API key requirements (though _apiKey param exists) or that profile_id is needed. Description implies usage for fetching full details but lacks exclusions or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

klaviyo_list_campaigns (B)

Search campaigns by status (draft, scheduled, sent). Returns campaign IDs, names, status, send dates, and performance metrics.

Parameters (JSON Schema)

Name | Required | Description | Default
filter | No | Filter by status (e.g., "equals(messages.channel,\"email\")") |
_apiKey | Yes | Klaviyo private API key |
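The filter parameter embeds a quoted string inside Klaviyo's equals(...) syntax, which is easy to mangle by hand. A small helper can build it safely; this is a sketch of one plausible way to construct the documented example, assuming only that inner double quotes need backslash escaping.

```python
def equals_filter(field, value):
    """Build a Klaviyo-style equals(...) filter string.

    Escapes any double quotes inside the value so the filter
    string stays well-formed.
    """
    escaped = value.replace('"', '\\"')
    return f'equals({field},"{escaped}")'

print(equals_filter("messages.channel", "email"))
# -> equals(messages.channel,"email")
```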
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the behavioral burden. It indicates that the tool can filter by status, but does not disclose pagination, rate limits, or other side effects. A read/list operation is implied, but not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two concise sentences that state the purpose and the returned fields. No wasted words, but it could be slightly more structured (e.g., separating purpose and optional filtering).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only 2 parameters, 100% schema coverage, and no output schema, the description adequately covers the tool's main functionality. However, it lacks details on return format or expected output, which could affect an agent's invocation decision.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and the description adds 'draft, scheduled, sent' as example statuses, adding context beyond the schema's filter example. However, it does not explain the filter syntax beyond a brief example.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it lists campaigns from Klaviyo, with optional status filtering. It differentiates from siblings like klaviyo_get_campaign (single campaign) and klaviyo_list_lists, though not explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optional filtering by status, providing a usage hint. However, it does not specify when to use this tool versus other listing tools or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

klaviyo_list_lists (B)

Get all email lists in your account. Returns list IDs, names, subscriber counts, and creation dates.

Parameters (JSON Schema)

Name | Required | Description | Default
_apiKey | Yes | Klaviyo private API key |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so description carries full burden. It states the tool lists all email lists, but does not disclose any behavioral traits like rate limits, authentication requirements (beyond the API key), or whether results are paginated. Minimal but not misleading.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Front-loaded with the core purpose. Could potentially add more context, but it is concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given low complexity (1 param, no output schema), the description is minimally adequate. It lacks details on output format or pagination, but for a simple list operation it is sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single parameter (_apiKey) whose description is clear. The tool description does not add any extra meaning beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists all email lists in Klaviyo, using a specific verb and resource. It distinguishes from siblings like klaviyo_get_campaign or klaviyo_list_profiles by focusing on lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives, but the description implies it is for retrieving all lists, which is clear enough. No exclusions or alternatives mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

klaviyo_list_profiles (B)

Search contacts by email, name, or custom attributes. Returns profile IDs, emails, names, and properties with pagination support.

Parameters (JSON Schema)

Name | Required | Description | Default
filter | No | Filter string in Klaviyo filter syntax (e.g., "equals(email,\"user@example.com\")") |
_apiKey | Yes | Klaviyo private API key |
page_size | No | Number of profiles per page (default 20, max 100) |
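Putting the three parameters together, a client-side helper can assemble an arguments dict with the filter string pre-built and page_size clamped to the documented range. This is an assumed convenience wrapper (list_profiles_args is not part of the server); the key names mirror the schema above.

```python
def list_profiles_args(api_key, email=None, page_size=20):
    """Assemble arguments for klaviyo_list_profiles.

    page_size is clamped to the documented range (default 20, max 100);
    when an email is given, a Klaviyo-syntax equals filter is built for it.
    """
    args = {
        "_apiKey": api_key,
        "page_size": max(1, min(page_size, 100)),
    }
    if email is not None:
        escaped = email.replace('"', '\\"')
        args["filter"] = f'equals(email,"{escaped}")'
    return args

print(list_profiles_args("pk_example", email="user@example.com", page_size=250))
```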
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It discloses that contacts can be searched by attributes and that results are paginated, which are key behaviors. It does not explain pagination mechanics (e.g., how to fetch subsequent pages), rate limits, or data freshness, but the schema already describes the filter and page_size parameters sufficiently.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (two sentences) and front-loads the main action. It avoids unnecessary details, making it efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no output schema, the description does not explain what is returned (e.g., list of profiles, metadata). It also does not mention pagination limits beyond page_size (max 100). For a list tool with no output schema, some return structure context would be helpful. However, the schema parameters are fully documented.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description confirms that filtering and pagination are supported but does not add new semantics beyond what the schema already provides; the schema itself already describes the filter syntax and the page-size range.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches profiles (contacts) in Klaviyo, specifying the search attributes and noting pagination support. It does not explicitly distinguish itself from siblings like klaviyo_get_profile (which gets a single profile), but the intent is clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions optional filtering and page size, implying these can be used to refine results. However, it does not provide guidance on when to use this tool versus other sibling tools (e.g., klaviyo_get_profile for a single profile, or klaviyo_list_campaigns for campaigns).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall (A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)

Name | Required | Description | Default
key | No | Memory key to retrieve (omit to list all keys) |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It discloses the dual behavior (retrieve by key vs list all) and mentions persistence across sessions, which is good context for a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with the core action, no wasted words. Efficient and complete.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with no output schema and low complexity, the description is complete enough. It covers retrieval and listing, and mentions cross-session persistence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one optional parameter. The description adds context about listing all keys when omitted, which the schema's description doesn't fully convey.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a previously stored memory by key or lists all memories if key is omitted. It distinguishes itself from sibling tools like 'remember' (store) and 'forget' (delete) by focusing on retrieval and listing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use it: to retrieve context saved earlier. It doesn't explicitly mention when not to use it or list alternatives, but the context is clear for a simple memory tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)

Name | Required | Description | Default
key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") |
value | Yes | Value to store (any text — findings, addresses, preferences, notes) |
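Taken together, remember, recall, and forget form a store/retrieve/delete lifecycle. The sketch below lays out that lifecycle as the params halves of four MCP tools/call requests; the key and value are illustrative (the key name follows the schema's examples), and the omitted-key form of recall follows that tool's own description.

```python
# A sketch of the session-memory lifecycle across the three memory tools.
# Each entry is the "params" object of an MCP tools/call request.
lifecycle = [
    # 1. Store a key-value pair.
    {"name": "remember", "arguments": {"key": "target_ticker", "value": "AAPL"}},
    # 2. Retrieve it by key.
    {"name": "recall", "arguments": {"key": "target_ticker"}},
    # 3. Omit the key to list all stored keys.
    {"name": "recall", "arguments": {}},
    # 4. Delete it when no longer needed.
    {"name": "forget", "arguments": {"key": "target_ticker"}},
]

for call in lifecycle:
    print(call["name"], call["arguments"])
```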
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden. It discloses persistence behavior ('authenticated users get persistent memory; anonymous sessions last 24 hours'), which is beyond the basic 'store' action. No contradictions with annotations (none provided).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, concise and front-loaded. No wasted words, but could potentially be even more compact.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and simple input schema, the description is complete enough. It explains purpose, usage context, and persistence behavior. No obvious gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and includes examples for 'key' and a description for 'value'. The description adds no additional parameter details beyond the schema, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it stores a key-value pair in session memory, distinguishing it from siblings like 'recall' and 'forget'. The verb 'store' and resource 'key-value pair in session memory' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use it ('save intermediate findings, user preferences, or context across tool calls') and distinguishes between authenticated and anonymous sessions. However, it does not explicitly say when not to use it or name alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

