Amplitude
Server Details
Amplitude MCP Pack
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-amplitude
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.9/5.
The tool set mixes two entirely different domains: Amplitude analytics and Pipeworx memory/search. The Amplitude tools are reasonably distinct from one another, but the Pipeworx tools 'ask_pipeworx' and 'discover_tools' have overlapping purposes (both query data), and the memory tools are unrelated to either, which can confuse an agent.
Amplitude tools use a consistent 'amp_' prefix, but the Pipeworx tools use unrelated names ('ask_pipeworx', 'discover_tools') and the memory tools have no prefix at all, so the naming convention is inconsistent and unpredictable.
10 tools is a reasonable count, but the set feels bloated because it combines two unrelated services (Amplitude and Pipeworx) plus memory utilities. Each subset is small, but together they lack cohesion.
For the Amplitude analytics domain, basic querying is present, but there are no create, update, or delete operations for events or users. The Pipeworx side is vague and the memory tools are trivial. Overall, the surface is incomplete in both domains.
Available Tools
10 tools
amp_get_active_users (Grade: B)
Get active user counts by granularity (daily, weekly, or monthly) for a date range. Returns totals and trend data.
| Name | Required | Description | Default |
|---|---|---|---|
| m | No | Metric: "active" (DAU), "new", or "returning" (default "active") | |
| end | Yes | End date (YYYYMMDD) | |
| start | Yes | Start date (YYYYMMDD) | |
| _apiKey | Yes | Amplitude API key | |
| _secretKey | Yes | Amplitude secret key | |
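As an illustration, a minimal arguments payload for this tool, assembled from the parameter table above; the date range is arbitrary and the credential values are placeholders:

```json
{
  "start": "20240101",
  "end": "20240131",
  "m": "active",
  "_apiKey": "<amplitude-api-key>",
  "_secretKey": "<amplitude-secret-key>"
}
```

Per the schema, "m" may be omitted and defaults to "active".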
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must carry the behavioral burden. It discloses that the tool returns counts for a date range, but does not mention whether authentication is required (implied by the required API keys), rate limits, or data freshness. The description is adequate but lacks depth for a data-access tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one sentence, efficient and front-loaded. It conveys core purpose without extra words. Could be slightly more informative about the 'm' parameter, but overall concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters (4 required) and no output schema, the description is complete for a simple data retrieval tool. It covers the main function but omits details like return format or error cases. For a tool with required API keys, mentioning authentication in description would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add parameter details beyond schema (e.g., date format YYYYMMDD is in schema). It implies the metric parameter exists but does not clarify 'm' values beyond what schema provides. No extra meaning added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves active user counts (daily/weekly/monthly) for a date range. It specifies the verb 'get' and resource 'active user counts', but does not explicitly distinguish from siblings like amp_get_events or amp_get_retention, though the metric focus (active users) differentiates it implicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (for active user counts) but provides no guidance on when not to use or alternatives. Siblings exist (e.g., amp_get_retention) but no exclusions are given. The date range scope is clear, but no context on prerequisite data or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
amp_get_events (Grade: C)
Get event counts and breakdowns for a date range (e.g., "2024-01-01" to "2024-01-31"). Returns frequency, user segments, and trends by event name.
| Name | Required | Description | Default |
|---|---|---|---|
| end | Yes | End date (YYYYMMDD) | |
| start | Yes | Start date (YYYYMMDD) | |
| _apiKey | Yes | Amplitude API key | |
| group_by | No | Property to group by (optional) | |
| _secretKey | Yes | Amplitude secret key | |
| event_type | Yes | Event name to query (e.g., "Page View", "Button Click") | |
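A hedged example payload based on the parameter table; the event name is taken from the schema's examples, while the group_by property and date range are purely illustrative:

```json
{
  "event_type": "Page View",
  "start": "20240101",
  "end": "20240131",
  "group_by": "country",
  "_apiKey": "<amplitude-api-key>",
  "_secretKey": "<amplitude-secret-key>"
}
```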
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions it returns event counts and breakdowns, which adds some context. However, there are no annotations provided, so the description carries full burden. It does not disclose authentication requirements (though _apiKey and _secretKey are in schema), rate limits, data freshness, or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at one sentence, front-loading the main purpose. It could be slightly improved by adding a second sentence for when to use, but current structure is efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description partially explains return values. With 6 parameters, the description is minimal but acceptable. However, it lacks context about the tool's scope (e.g., what segmentation means, how grouping works) which might be necessary for correct use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already describes each parameter. The description adds 'event counts and breakdowns' which implies the output, but does not elaborate on how parameters affect results. Baseline 3 is appropriate as schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves event segmentation data from Amplitude for a date range, specifying it returns event counts and breakdowns. However, it does not explicitly distinguish it from siblings like amp_get_active_users or amp_get_retention.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives. It does not mention when to use amp_get_events over amp_get_active_users or amp_get_retention, nor does it specify any prerequisites or context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
amp_get_retention (Grade: B)
Get user retention metrics for a cohort over time. Returns retention percentages by time period (e.g., day 1, day 7, day 30).
| Name | Required | Description | Default |
|---|---|---|---|
| re | No | Retention type: "rolling" or "bracket" (default "rolling") | |
| end | Yes | End date (YYYYMMDD) | |
| start | Yes | Start date (YYYYMMDD) | |
| _apiKey | Yes | Amplitude API key | |
| _secretKey | Yes | Amplitude secret key | |
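A sketch of an arguments payload, assuming the default "rolling" retention type; dates and credentials are placeholders:

```json
{
  "start": "20240101",
  "end": "20240131",
  "re": "rolling",
  "_apiKey": "<amplitude-api-key>",
  "_secretKey": "<amplitude-secret-key>"
}
```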
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description partially covers behavior: it indicates the tool returns time-series retention data, but lacks details such as whether the data is aggregated, the time granularity, or any side effects. Acceptable for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences, front-loaded with purpose. No unnecessary words. Could benefit from specifying the retention type from schema.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and moderate complexity (5 params), the description is adequate but minimal. Missing details like return format, date format validation, or example values.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context about the overall purpose (retention data) but doesn't detail individual parameters beyond schema. However, it correctly implies date range usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves retention data for a date range and explains the purpose (showing user return over time). It distinguishes from siblings like amp_get_active_users which focus on active users, but could be more specific about the metric.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs alternatives like amp_get_active_users or amp_get_events. Does not specify prerequisites (e.g., need API keys) or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
amp_get_user_activity (Grade: A)
Get recent event activity timeline for a specific user. Returns events with timestamps, properties, and interactions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max events to return (default 100, max 1000) | |
| offset | No | Pagination offset (default 0) | |
| _apiKey | Yes | Amplitude API key | |
| _secretKey | Yes | Amplitude secret key | |
| amplitude_id | Yes | Amplitude internal user ID (from amp_user_search results) | |
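An example payload drawn from the parameter table; the amplitude_id value is a made-up placeholder that would normally come from amp_user_search results:

```json
{
  "amplitude_id": "123456789",
  "limit": 100,
  "offset": 0,
  "_apiKey": "<amplitude-api-key>",
  "_secretKey": "<amplitude-secret-key>"
}
```

limit and offset are optional and are shown here at their documented defaults.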
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses that the tool returns 'recent event activity' but does not describe behavioral traits such as auth requirements (though _apiKey and _secretKey are parameters), rate limits, or what 'recent' means. It adds minimal context beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the purpose. Every word is necessary, and there is no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters (100% schema coverage) and no output schema, the description is somewhat complete but lacks behavioral context. It explains what it does but not the response format or any edge cases. With no annotations, more detail would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it merely mentions 'Amplitude ID' which is already described in the schema. No additional parameter guidance is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the verb 'Get' and resource 'recent event activity for a specific user', clearly indicating what the tool does. It differentiates from siblings like amp_get_events (which may not be user-specific) and amp_get_active_users (which focuses on active users). However, it does not explicitly distinguish from all siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for a specific user by their Amplitude ID', but it does not provide explicit guidance on when to use this tool vs alternatives like amp_get_events or amp_user_search. It lacks when-not-to-use or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
amp_user_search (Grade: A)
Search for users by ID or property (e.g., email, user_id). Returns matching profiles with properties, event history, and segments.
| Name | Required | Description | Default |
|---|---|---|---|
| user | Yes | User search term (email, user_id, or Amplitude ID) | |
| _apiKey | Yes | Amplitude API key | |
| _secretKey | Yes | Amplitude secret key | |
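A minimal example payload; the email address is a placeholder for any of the supported search terms (email, user_id, or Amplitude ID):

```json
{
  "user": "jane.doe@example.com",
  "_apiKey": "<amplitude-api-key>",
  "_secretKey": "<amplitude-secret-key>"
}
```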
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It states the tool returns matching user profiles, but does not disclose any limitations, permissions required, or side effects. Since the tool likely requires API and secret keys (implied by parameters), the description does not add much beyond that.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded, clearly stating the purpose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters all described in schema, and no output schema, the description is sufficient to understand the tool's function. It could mention the return format (e.g., list of profiles) but the description implies this. Completeness is high.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds that the 'user' parameter is a search term for email, user_id, or Amplitude ID, which adds value. However, the API and secret key parameters are not elaborated on in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for a user by user property or user ID and returns matching user profiles. The verb 'search' and resource 'user' are specific, and the scope (returning profiles) distinguishes it from sibling tools like amp_get_user_activity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this tool is for finding users, but it does not explicitly state when to use this over other tools like amp_get_user_activity. However, given the sibling tools cover different functionality (active users, events, retention), usage is fairly clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
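A minimal example payload, reusing one of the sample questions from the tool's own description:

```json
{
  "question": "What is the US trade deficit with China?"
}
```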
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries full burden. It discloses that the tool internally selects tools and fills arguments, returning a result. This adds transparency about its orchestration behavior. However, it does not mention any limitations, potential delays, or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, front-loaded with the key action, and covers purpose, behavior, and concrete examples in a few short sentences with no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no output schema) and the orchestration nature, the description is quite complete. It explains what the tool does and how to use it. A slight gap is not discussing potential ambiguity or clarification mechanisms.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for the single parameter 'question' with a description. The description adds value by explaining how to use the parameter ('describe what you need' and examples), but the schema already covers the meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb ('Ask a question') and specifies the resource ('get an answer from the best available data source'). It explicitly states that Pipeworx selects the right tool and fills arguments, distinguishing it from sibling tools that are direct tools. The examples provide concrete use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises 'just describe what you need' and provides examples, implying when to use this tool (when the user wants a natural language answer) vs. browsing tools directly. However, it does not explicitly state when not to use it or mention alternatives (the sibling tools themselves).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
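An example payload using one of the schema's sample queries; the limit shown is the documented default:

```json
{
  "query": "analyze housing market trends",
  "limit": 20
}
```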
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the tool's behavioral trait of returning 'most relevant tools with names and descriptions,' which is important for agent decision-making. Since no annotations are provided, the description carries the full burden, and it does so adequately by explaining the search-and-return behavior. It could mention if results are ordered by relevance or any caveats, but it's sufficiently transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, front-loaded with the core action, and every sentence provides value: the first explains what the tool does, the second what it returns, and the third gives explicit usage guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no annotations), the description is nearly complete. It explains the purpose, when to use it, and what it returns. The only minor gap is not explicitly stating that it searches by semantic matching (though implied by 'natural language description'). It doesn't need to explain return values since there's no output schema, but a brief note on the result format would be ideal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already provides descriptions for both parameters ('query' and 'limit'), including the default and max for 'limit' (20 and 50), achieving 100% schema coverage. The description does not add new semantic meaning beyond what the schema offers, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching the Pipeworx tool catalog by describing what you need. It specifies the verb ('Search'), the resource ('Pipeworx tool catalog'), and the outcome ('Returns the most relevant tools'). This effectively distinguishes it from sibling tools, which are action-specific (e.g., amp_get_active_users) or memory-related (remember/recall).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones.' It provides clear guidance on the context (large tool catalog) and the task (finding relevant tools), leaving no ambiguity about its role compared to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
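A minimal example payload; the key name is illustrative and borrows one of the sample keys from the remember schema:

```json
{
  "key": "target_ticker"
}
```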
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It names the action ('delete a stored memory by key') but does not clarify whether the operation is irreversible, what happens to related data, or any other side effects. This is acceptable for a simple delete tool but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that directly states the action and object. No unnecessary words; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single required parameter, no output schema, no nested objects), the description is adequate. However, it could mention that deletion is permanent or that the key must exactly match a stored memory.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents the 'key' parameter. The description adds no additional meaning beyond the schema, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a strong verb-resource pair ('Delete a stored memory by key') that clearly distinguishes this from siblings like 'remember' (store) and 'recall' (retrieve). It explicitly states the action and the identifier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies that 'forget' is for deletion, but does not specify when to use it vs. alternatives (e.g., 'recall' for reading, 'remember' for writing). No explicit exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
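An example payload that retrieves a single memory; the key name is illustrative. Per the description, passing an empty arguments object ({}) instead lists all stored memories:

```json
{
  "key": "target_ticker"
}
```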
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses that omitting the key lists all memories, but does not mention what happens when the requested key does not exist, nor any side effects. Given no annotations, a 3 is reasonable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, clear and front-loaded. Each sentence adds value. Slightly verbose phrasing ('previously stored', 'saved earlier') could be tightened.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (1 optional param, no output schema), the description covers the essential use case. Could mention return format (e.g., returns memory content) but not necessary given simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds meaning beyond schema by explaining that omitting the key lists all memories, but does not provide additional detail about the key parameter (e.g., format, case-sensitivity).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and resource ('stored memory'), and distinguishes between retrieving by key vs listing all. This differentiates it from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description tells when to use it ('to retrieve context you saved earlier'), and implies when not to (if you want to store, use 'remember'). However, it does not explicitly mention alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
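An example payload built from one of the schema's sample keys; the stored value is illustrative:

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```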
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses persistence behavior (authenticated vs. anonymous), which is useful. However, it does not mention overwrite behavior, memory limits, or data retrieval methods. Given the absence of annotations, a score of 3 is reasonable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, front-loading the core purpose. The last sentence adds useful but non-essential detail about persistence. It could be slightly more efficient by removing redundancy, but overall it is well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store tool with 2 parameters and no output schema, the description covers the essential use cases, persistence model, and example keys. It lacks details on overwriting and limits, but given the tool's simplicity, it is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds meaning beyond the schema by clarifying that values can hold 'findings, addresses, preferences, notes'; the schema itself supplies example keys. The description effectively complements the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, specifying the verb 'store' and resource 'key-value pair'. It distinguishes from sibling tools like 'forget' (which likely removes) and 'recall' (which retrieves).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context for use: saving intermediate findings, user preferences, or context across tool calls. It also mentions persistence differences between authenticated users and anonymous sessions. However, it does not explicitly state when not to use this tool or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.