Glama

Server Details

Amplitude MCP Pack

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-amplitude
GitHub Stars: 0

Tool Descriptions: B

Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.9/5.

Server Coherence: C

Disambiguation: 1/5

The tool set mixes two entirely different domains: Amplitude analytics and Pipeworx memory/search. Within the Amplitude tools, there is some distinction, but the Pipeworx tools like 'ask_pipeworx' and 'discover_tools' have overlapping purposes (both for querying data) and the memory tools are unrelated, causing confusion for an agent.

Naming Consistency: 2/5

Amplitude tools use a consistent 'amp_' prefix, but Pipeworx tools use different prefixes ('ask_pipeworx', 'discover_tools') and memory tools have no prefix. This mixing of naming conventions is inconsistent and unpredictable.

Tool Count: 3/5

10 tools is a reasonable count, but the set feels bloated because it combines two unrelated services (Amplitude and Pipeworx) plus memory utilities. Each subset is small, but together they lack cohesion.

Completeness: 2/5

For the Amplitude analytics domain, basic querying is present but lacks CRUD operations (no update/delete for events or users). The Pipeworx side is vague and the memory tools are trivial. Overall, the surface is incomplete for either domain.

Available Tools

10 tools
amp_get_active_users (Grade: B)

Get active user counts by granularity (daily, weekly, or monthly) for a date range. Returns totals and trend data.

Parameters (JSON Schema):
- start (required): Start date (YYYYMMDD)
- end (required): End date (YYYYMMDD)
- m (optional): Metric: "active" (DAU), "new", or "returning" (default "active")
- _apiKey (required): Amplitude API key
- _secretKey (required): Amplitude secret key

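As a minimal sketch (not part of the server itself), the arguments for a call could be assembled and checked like this; the credential values are placeholders, and the commented-out `session.call_tool` line assumes a generic MCP client:

```python
import re

def valid_yyyymmdd(date: str) -> bool:
    """The Amplitude tools above expect dates as 8-digit YYYYMMDD strings."""
    return re.fullmatch(r"\d{8}", date) is not None

# Arguments per the schema above; "m" defaults to "active" (DAU) if omitted.
args = {
    "start": "20240101",
    "end": "20240131",
    "m": "new",                           # count new users instead of DAU
    "_apiKey": "YOUR_AMPLITUDE_API_KEY",  # placeholder credential
    "_secretKey": "YOUR_SECRET_KEY",      # placeholder credential
}
assert valid_yyyymmdd(args["start"]) and valid_yyyymmdd(args["end"])
# An MCP client would then send something like:
# result = await session.call_tool("amp_get_active_users", args)
```

The date check mirrors the YYYYMMDD constraint stated in the schema; nothing here calls the live API.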
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description must carry the behavioral burden. It discloses that the tool returns counts for a date range, but does not mention whether authentication is required (only implied by the required API keys), rate limits, or data freshness. The description is adequate but lacks depth for a data-access tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is one sentence, efficient and front-loaded. It conveys core purpose without extra words. Could be slightly more informative about the 'm' parameter, but overall concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters (4 required) and no output schema, the description is complete for a simple data retrieval tool. It covers the main function but omits details like return format or error cases. For a tool with required API keys, mentioning authentication in the description would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add parameter details beyond the schema (e.g., the YYYYMMDD date format is already in the schema). It implies the metric parameter exists but does not clarify the values of 'm' beyond what the schema provides. No extra meaning is added.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves active user counts (daily/weekly/monthly) for a date range. It specifies the verb 'get' and resource 'active user counts', but does not explicitly distinguish from siblings like amp_get_events or amp_get_retention, though the metric focus (active users) differentiates it implicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (for active user counts) but provides no guidance on when not to use or alternatives. Siblings exist (e.g., amp_get_retention) but no exclusions are given. The date range scope is clear, but no context on prerequisite data or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

amp_get_events (Grade: C)

Get event counts and breakdowns for a date range (e.g., "2024-01-01" to "2024-01-31"). Returns frequency, user segments, and trends by event name.

Parameters (JSON Schema):
- event_type (required): Event name to query (e.g., "Page View", "Button Click")
- start (required): Start date (YYYYMMDD)
- end (required): End date (YYYYMMDD)
- group_by (optional): Property to group by
- _apiKey (required): Amplitude API key
- _secretKey (required): Amplitude secret key

Behavior: 2/5

The description mentions it returns event counts and breakdowns, which adds some context. However, there are no annotations provided, so the description carries the full burden. It does not disclose authentication requirements (though _apiKey and _secretKey appear in the schema), rate limits, data freshness, or potential errors.

Conciseness: 4/5

The description is concise at one sentence, front-loading the main purpose. It could be slightly improved by adding a second sentence for when to use, but current structure is efficient.

Completeness: 3/5

Given no output schema, the description partially explains return values. With 6 parameters, the description is minimal but acceptable. However, it lacks context about the tool's scope (e.g., what segmentation means, how grouping works) which might be necessary for correct use.

Parameters: 3/5

Schema description coverage is 100%, so the schema already describes each parameter. The description adds 'event counts and breakdowns' which implies the output, but does not elaborate on how parameters affect results. Baseline 3 is appropriate as schema does the heavy lifting.

Purpose: 4/5

The description clearly states the tool retrieves event segmentation data from Amplitude for a date range, specifying it returns event counts and breakdowns. However, it does not explicitly distinguish it from siblings like amp_get_active_users or amp_get_retention.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool vs alternatives. It does not mention when to use amp_get_events over amp_get_active_users or amp_get_retention, nor does it specify any prerequisites or context.

amp_get_retention (Grade: B)

Get user retention metrics for a cohort over time. Returns retention percentages by time period (e.g., day 1, day 7, day 30).

Parameters (JSON Schema):
- start (required): Start date (YYYYMMDD)
- end (required): End date (YYYYMMDD)
- re (optional): Retention type: "rolling" or "bracket" (default "rolling")
- _apiKey (required): Amplitude API key
- _secretKey (required): Amplitude secret key

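Assuming the schema above, a sketch of argument construction that validates the 're' value might look like this (the helper name is illustrative):

```python
def retention_args(start: str, end: str, api_key: str, secret_key: str,
                   retention_type: str = "rolling") -> dict:
    """Build amp_get_retention arguments; 're' is "rolling" or "bracket"."""
    if retention_type not in ("rolling", "bracket"):
        raise ValueError("re must be 'rolling' or 'bracket'")
    return {
        "start": start,
        "end": end,
        "re": retention_type,
        "_apiKey": api_key,       # placeholder credential
        "_secretKey": secret_key, # placeholder credential
    }

default_args = retention_args("20240101", "20240131", "KEY", "SECRET")
bracket_args = retention_args("20240101", "20240131", "KEY", "SECRET",
                              retention_type="bracket")
```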
Behavior: 3/5

With no annotations, the description partially covers behavior: it returns time-series data, but lacks details such as whether data is aggregated, the time granularity, or any side effects. Acceptable for a read-only tool.

Conciseness: 4/5

Two concise sentences, front-loaded with purpose. No unnecessary words. Could benefit from specifying the retention type from schema.

Completeness: 3/5

Given no output schema and moderate complexity (5 params), the description is adequate but minimal. Missing details like return format, date format validation, or example values.

Parameters: 4/5

Schema description coverage is 100%, so baseline is 3. The description adds context about the overall purpose (retention data) but doesn't detail individual parameters beyond schema. However, it correctly implies date range usage.

Purpose: 4/5

The description clearly states it retrieves retention data for a date range and explains the purpose (showing user return over time). It distinguishes from siblings like amp_get_active_users which focus on active users, but could be more specific about the metric.

Usage Guidelines: 2/5

No guidance on when to use this vs alternatives like amp_get_active_users or amp_get_events. Does not specify prerequisites (e.g., need API keys) or typical use cases.

amp_get_user_activity (Grade: A)

Get recent event activity timeline for a specific user. Returns events with timestamps, properties, and interactions.

Parameters (JSON Schema):
- amplitude_id (required): Amplitude internal user ID (from amp_user_search results)
- limit (optional): Max events to return (default 100, max 1000)
- offset (optional): Pagination offset (default 0)
- _apiKey (required): Amplitude API key
- _secretKey (required): Amplitude secret key

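Since limit caps at 1000 and offset drives pagination, a hypothetical paging helper could look like this (names and structure are illustrative, not part of the server):

```python
def activity_page(amplitude_id: str, api_key: str, secret_key: str,
                  offset: int = 0, limit: int = 100) -> dict:
    """Arguments for one page of amp_get_user_activity results."""
    return {
        "amplitude_id": amplitude_id,
        "limit": min(limit, 1000),  # schema caps limit at 1000
        "offset": offset,
        "_apiKey": api_key,         # placeholder credential
        "_secretKey": secret_key,   # placeholder credential
    }

# To page through results, advance offset by the page size each call:
pages = [activity_page("12345", "KEY", "SECRET", offset=i * 100)
         for i in range(3)]
```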
Behavior: 3/5

No annotations are provided, so the description carries the burden. It discloses that the tool returns 'recent event activity' but does not describe behavioral traits such as auth requirements (though _apiKey and _secretKey are parameters), rate limits, or what 'recent' means. It adds minimal context beyond the schema.

Conciseness: 5/5

The description is a single, concise sentence that front-loads the purpose. Every word is necessary, and there is no waste.

Completeness: 3/5

Given the tool has 5 parameters (100% schema coverage) and no output schema, the description is somewhat complete but lacks behavioral context. It explains what it does but not the response format or any edge cases. With no annotations, more detail would be beneficial.

Parameters: 3/5

Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it merely mentions 'Amplitude ID' which is already described in the schema. No additional parameter guidance is provided.

Purpose: 4/5

The description uses the verb 'Get' and resource 'recent event activity for a specific user', clearly indicating what the tool does. It differentiates from siblings like amp_get_events (which may not be user-specific) and amp_get_active_users (which focuses on active users). However, it does not explicitly distinguish from all siblings.

Usage Guidelines: 3/5

The description implies usage context by specifying 'for a specific user by their Amplitude ID', but it does not provide explicit guidance on when to use this tool vs alternatives like amp_get_events or amp_user_search. It lacks when-not-to-use or alternative recommendations.

ask_pipeworx (Grade: A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema):
- question (required): Your question or request in natural language

Behavior: 4/5

Since no annotations are provided, the description carries full burden. It discloses that the tool internally selects tools and fills arguments, returning a result. This adds transparency about its orchestration behavior. However, it does not mention any limitations, potential delays, or failure modes.

Conciseness: 5/5

The description is very concise, with three sentences covering purpose, behavior, and examples. No filler. Front-loaded with the key action.

Completeness: 4/5

Given the tool's simplicity (single parameter, no output schema) and the orchestration nature, the description is quite complete. It explains what the tool does and how to use it. A slight gap is not discussing potential ambiguity or clarification mechanisms.

Parameters: 3/5

The input schema has 100% coverage for the single parameter 'question' with a description. The description adds value by explaining how to use the parameter ('describe what you need' and examples), but the schema already covers the meaning. Baseline 3 is appropriate.

Purpose: 5/5

The description uses a clear verb ('Ask a question') and specifies the resource ('get an answer from the best available data source'). It explicitly states that Pipeworx selects the right tool and fills arguments, distinguishing it from sibling tools that are direct tools. The examples provide concrete use cases.

Usage Guidelines: 4/5

The description advises 'just describe what you need' and provides examples, implying when to use this tool (when the user wants a natural language answer) vs. browsing tools directly. However, it does not explicitly state when not to use it or mention alternatives (the sibling tools themselves).

discover_tools (Grade: A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema):
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
- limit (optional): Maximum number of tools to return (default 20, max 50)

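A small illustrative helper (not part of the server) showing how a client might clamp limit to the documented default and maximum before calling discover_tools:

```python
def discover_args(query: str, limit: int = 20) -> dict:
    """Arguments for discover_tools; limit defaults to 20 and caps at 50."""
    return {"query": query, "limit": max(1, min(limit, 50))}

default_call = discover_args("find trade data between countries")
clamped_call = discover_args("analyze housing market trends", limit=200)
```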
Behavior: 4/5

The description discloses the tool's behavioral trait of returning 'most relevant tools with names and descriptions,' which is important for agent decision-making. Since no annotations are provided, the description carries the full burden, and it does so adequately by explaining the search-and-return behavior. It could mention if results are ordered by relevance or any caveats, but it's sufficiently transparent.

Conciseness: 5/5

The description is two sentences long, front-loaded with the core action, and every sentence provides value: the first explains what the tool does, the second gives explicit usage guidance. No wasted words.

Completeness: 4/5

Given the tool's simplicity (2 parameters, no output schema, no annotations), the description is nearly complete. It explains the purpose, when to use it, and what it returns. The only minor gap is not explicitly stating that it searches by semantic matching (though implied by 'natural language description'). It doesn't need to explain return values since there's no output schema, but a brief note on the result format would be ideal.

Parameters: 3/5

The input schema already provides descriptions for both parameters ('query' and 'limit'), achieving 100% schema coverage. The description adds context by mentioning the default and max for 'limit' (20 and 50), which is helpful. However, it doesn't add new semantic meaning beyond what the schema offers, so a baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the tool's purpose: searching the Pipeworx tool catalog by describing what you need. It specifies the verb ('Search'), the resource ('Pipeworx tool catalog'), and the outcome ('Returns the most relevant tools'). This effectively distinguishes it from sibling tools, which are action-specific (e.g., amp_get_active_users) or memory-related (remember/recall).

Usage Guidelines: 5/5

The description explicitly tells when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones.' It provides clear guidance on the context (large tool catalog) and the task (finding relevant tools), leaving no ambiguity about its role compared to siblings.

forget (Grade: A)

Delete a stored memory by key.

Parameters (JSON Schema):
- key (required): Memory key to delete

Behavior: 3/5

No annotations are provided, so the description must carry the full burden. It names a destructive action ('Delete'), but does not state whether the operation is irreversible, what happens to related data, or any other side effects. This is acceptable for a simple delete tool but lacks depth.

Conciseness: 5/5

The description is a single, concise sentence that directly states the action and object. No unnecessary words; every part adds value.

Completeness: 3/5

Given the tool's simplicity (single required parameter, no output schema, no nested objects), the description is adequate. However, it could mention that deletion is permanent or that the key must exactly match a stored memory.

Parameters: 3/5

Schema coverage is 100%, so the schema already documents the 'key' parameter. The description adds no additional meaning beyond the schema, meeting the baseline of 3.

Purpose: 5/5

The description uses a strong verb-resource pair ('Delete a stored memory by key') that clearly distinguishes this from siblings like 'remember' (store) and 'recall' (retrieve). It explicitly states the action and the identifier.

Usage Guidelines: 3/5

The description implies that 'forget' is for deletion, but does not specify when to use it vs. alternatives (e.g., 'recall' for reading, 'remember' for writing). No explicit exclusions or context are given.

recall (Grade: A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema):
- key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 3/5

No annotations are provided, so the description carries the burden. It discloses that omitting the key lists all memories, but does not mention what happens when the requested memory does not exist, nor any side effects. Given no annotations, a 3 is reasonable.

Conciseness: 4/5

Two sentences, clear and front-loaded. Each sentence adds value. Slightly verbose phrasing ('previously stored', 'saved earlier') could be tightened.

Completeness: 4/5

Given the tool is simple (1 optional param, no output schema), the description covers the essential use case. Could mention return format (e.g., returns memory content) but not necessary given simplicity.

Parameters: 3/5

Schema coverage is 100%, so baseline is 3. The description adds meaning beyond schema by explaining that omitting the key lists all memories, but does not provide additional detail about the key parameter (e.g., format, case-sensitivity).

Purpose: 5/5

The description clearly states the action ('Retrieve') and resource ('stored memory'), and distinguishes between retrieving by key vs listing all. This differentiates it from sibling tools like 'remember' and 'forget'.

Usage Guidelines: 4/5

The description tells when to use it ('to retrieve context you saved earlier'), and implies when not to (if you want to store, use 'remember'). However, it does not explicitly mention alternatives or exclusions.

remember (Grade: A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema):
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text: findings, addresses, preferences, notes)

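Taken together with recall and forget above, the three memory tools form a simple store/read/delete lifecycle. As an illustration, a sequence of calls might look like this (the tool names match the server; the list structure is just a sketch):

```python
# A typical session-memory lifecycle as (tool_name, arguments) pairs.
lifecycle = [
    ("remember", {"key": "target_ticker", "value": "AAPL"}),  # store
    ("recall",   {"key": "target_ticker"}),                   # read one key
    ("recall",   {}),                       # omit key to list all memories
    ("forget",   {"key": "target_ticker"}),                   # delete
]
```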
Behavior: 3/5

No annotations are provided, so the description carries the full burden. It discloses persistence behavior (authenticated vs. anonymous), which is useful. However, it does not mention overwrite behavior, memory limits, or data retrieval methods. Given the absence of annotations, a score of 3 is reasonable.

Conciseness: 4/5

The description is concise with three sentences, front-loading the core purpose. The last sentence adds useful but non-essential detail about persistence. It could be slightly more efficient by removing redundancy, but overall it is well-structured.

Completeness: 4/5

For a simple key-value store tool with 2 parameters and no output schema, the description covers the essential use cases, persistence model, and example keys. It lacks details on overwriting and limits, but given the tool's simplicity, it is largely complete.

Parameters: 4/5

Schema coverage is 100%, but the description adds meaning beyond the schema by clarifying that values can store 'findings, addresses, preferences, notes'; the schema itself supplies example keys. The description effectively complements the schema.

Purpose: 5/5

The description clearly states the tool stores a key-value pair in session memory, specifying the verb 'store' and resource 'key-value pair'. It distinguishes from sibling tools like 'forget' (which likely removes) and 'recall' (which retrieves).

Usage Guidelines: 4/5

The description provides explicit context for use: saving intermediate findings, user preferences, or context across tool calls. It also mentions persistence differences between authenticated users and anonymous sessions. However, it does not explicitly state when not to use this tool or mention alternatives.
