Dropbox
Server Details
Dropbox MCP Pack — wraps the Dropbox API v2
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-dropbox |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.6/5.
The Dropbox tools (dropbox_create_folder, dropbox_download, etc.) are well-disambiguated, but they are mixed with Pipeworx tools (ask_pipeworx, discover_tools) and memory tools (forget, recall, remember) that serve completely different purposes. This makes it unclear which tools are related to Dropbox and which are for other domains.
Tool names are inconsistent: Dropbox tools use a 'dropbox_' prefix with snake_case, Pipeworx tools use no prefix and snake_case, and memory tools use single verbs without prefix. No consistent naming convention across the set.
10 tools is a reasonable number, but the set is split across three unrelated domains (Dropbox, Pipeworx, memory). Each domain individually has a small number of tools, which might be appropriate if the server is intended to be a general-purpose assistant.
For Dropbox, basic CRUD is present (list, download, search, create folder, get metadata) but missing update, delete, and upload. Pipeworx and memory tools are minimal but serve their purposes. Overall, the Dropbox coverage has notable gaps.
Available Tools
10 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
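To make the shape of a call concrete, here is what invoking this tool could look like as an MCP `tools/call` request over JSON-RPC; the question text is taken from the description's own examples, and the request `id` is arbitrary:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```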
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' which discloses its autonomous behavior. It does not describe side effects or limitations, but given the tool's purpose (question-answering), the description is reasonably transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (three sentences) and front-loaded with the purpose. Every sentence adds value: first sentence states the action, second explains the mechanism, third provides examples. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (single parameter, no output schema, no annotations), the description is complete. It explains what the tool does, how it works, and provides examples. There is no need for additional behavioral details for this type of tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single 'question' parameter described as 'Your question or request in natural language.' The description adds value by explaining that the question should be in plain English and giving examples, which provides more context than the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Ask a question') and resource ('Pipeworx') and clearly distinguishes from siblings by stating it 'picks the right tool, fills the arguments, and returns the result,' which contrasts with sibling tools that perform specific actions like dropbox_create_folder or dropbox_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'No need to browse tools or learn schemas — just describe what you need,' and gives examples like 'What is the US trade deficit with China?' However, it does not explicitly state when not to use this tool or mention alternatives, so it loses a point.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
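Reusing the `tools/call` envelope sketched above for ask_pipeworx, the `params` object for a discovery call might look like this; the query is one of the schema's own examples and the limit value is illustrative:

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "find trade data between countries",
    "limit": 10
  }
}
```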
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description reveals that the tool performs a search and returns relevant tool names and descriptions, but does not specify any additional behavioral traits like whether it modifies data or requires authentication. However, given no annotations are provided, it adequately covers the core behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, all front-loaded with the action, and no superfluous information. Every sentence adds value: purpose, mechanism, and usage guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete. It explains what it does, when to use it, and what it returns. No additional information is necessary.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add extra meaning beyond what the schema provides for the parameters; it mentions 'Natural language description' for query, which aligns with the schema. No additional parameter details are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'Pipeworx tool catalog', and the mechanism 'by describing what you need'. It distinguishes the tool's purpose from siblings by specifying it returns tool names and descriptions, and instructs to call it first when many tools are available.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' It implies this tool is for discovery, not execution, and suggests alternatives are the specific tools returned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dropbox_create_folder (Grade C)
Create a new folder in Dropbox at a specified path. Returns folder metadata. Use to organize files or set up directory structures.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Path of the folder to create (e.g., "/New Folder") | |
| autorename | No | Auto-rename if a folder with the same name exists (default false) | false |
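A plausible `tools/call` params sketch, using the path example from the schema; setting autorename to true (an illustrative choice, not the default) asks Dropbox to rename rather than fail on a name collision:

```json
{
  "name": "dropbox_create_folder",
  "arguments": {
    "path": "/New Folder",
    "autorename": true
  }
}
```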
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose behavior like whether the folder is created under a parent path, any side effects, or error conditions (e.g., if path is invalid).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences that efficiently state the purpose, the return value, and a use case. However, it could be slightly improved by adding context about the parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 params, no output schema), the description is minimally adequate but lacks behavioral details and usage context that would help the agent decide when to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema; it repeats 'Create a new folder' without elaborating on the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Create') and resource ('folder in Dropbox'). It distinguishes from siblings like 'dropbox_download' or 'dropbox_list_folder' by specifying the create action on folders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., if the folder already exists, or using other tools). No mention of prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dropbox_download (Grade A)
Download a file from Dropbox and return its content as text plus metadata (size, type, modified date). Use to retrieve file contents for processing or inspection.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File path to download | |
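A sketch of the `tools/call` params; the file path is hypothetical:

```json
{
  "name": "dropbox_download",
  "arguments": {
    "path": "/Documents/report.txt"
  }
}
```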
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states that the tool returns file content as text and metadata, which implies a read-only operation. However, it does not mention potential limitations like file size, encoding, or authentication requirements, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, each conveying essential information without redundancy. It is front-loaded with the action and resource, and the second sentence clarifies the return value. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, no nested objects, no output schema), the description is mostly complete. It explains what is returned. However, it could benefit from mentioning that only text files are suitable, or that binary files may not be returned correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with a single parameter 'path' described as 'File path to download'. The description adds no additional meaning beyond the schema. With high schema coverage, a baseline of 3 is appropriate, but the description's clarity earns a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (download), the resource (a file from Dropbox), and what is returned (file content as text and metadata). It effectively distinguishes itself from sibling tools like dropbox_get_metadata (which only retrieves metadata) and dropbox_list_folder (which lists folder contents).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for downloading files as text. It does not explicitly state when not to use it (e.g., for binary files) or mention alternatives like dropbox_search for finding files. However, the context is clear, and with sibling tools listed, an agent can infer when to choose this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dropbox_get_metadata (Grade C)
Get detailed metadata for a file or folder: size, modified date, ID, sharing status, and revision info. Use before downloading or modifying to inspect properties.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File or folder path | |
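As above, a minimal `tools/call` params sketch with a hypothetical path; per the description, the same call works for files and folders:

```json
{
  "name": "dropbox_get_metadata",
  "arguments": {
    "path": "/Documents"
  }
}
```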
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description carries the full burden since no annotations are provided. It lists the metadata fields returned and states that it handles both files and folders, but it does not disclose the return format, any restrictions, or potential errors (e.g., a nonexistent path).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two compact sentences with no extra words. It is concise but could be slightly more informative without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description should provide enough context to understand what 'metadata' means. It enumerates the returned fields (size, modified date, ID, sharing status, revision info), but with no output schema the exact return structure is left unspecified.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers the single parameter 'path' with 100% coverage, and the description does not add additional semantics. Since coverage is high, baseline 3 is appropriate, though the description could clarify if the path must be relative or absolute.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get detailed metadata for a file or folder', which specifies the verb (get), resource (metadata), and scope (file or folder). It is clear but does not differentiate from siblings like 'dropbox_list_folder' or 'dropbox_download', which also surface metadata.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, 'dropbox_list_folder' might also return metadata for items in a folder, and there is no mention of when to prefer this tool. No exclusions or alternatives are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dropbox_list_folder (Grade B)
List files and folders in a Dropbox directory. Returns names, types, sizes, and modification dates. Use when browsing folder contents or checking what's stored at a path.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Folder path (e.g., "" for root, "/Documents") | |
| limit | No | Max entries to return (default 100) | 100 |
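A params sketch for listing the root folder, using the empty-string root convention from the schema; the limit value is illustrative:

```json
{
  "name": "dropbox_list_folder",
  "arguments": {
    "path": "",
    "limit": 50
  }
}
```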
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It does not disclose behavioral traits like pagination, error behavior, or that it only lists immediate children (not recursive). Minimal info beyond listing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, front-loaded with purpose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (folder listing) and the lack of an output schema, the description is adequate for basic listing but omits details on recursive behavior, pagination, and error cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description does not add meaning beyond the schema; it is generic, with no mention of path format or limit defaults beyond what the schema states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'List' and resource 'files and folders in a Dropbox directory'. It clearly distinguishes from siblings like dropbox_create_folder (create), dropbox_download (download), dropbox_get_metadata (metadata), and dropbox_search (search).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no explicit guidance on when to use this tool versus alternatives, but the description implies the browsing use case. Context signals show siblings with distinct purposes, so an agent can infer usage from the names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dropbox_search (Grade C)
Search Dropbox for files and folders by name or content. Returns matching paths, file types, and metadata. Use when you need to find a file without knowing its exact location.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query string | |
| max_results | No | Maximum results to return (default 20) | 20 |
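A params sketch with a hypothetical query:

```json
{
  "name": "dropbox_search",
  "arguments": {
    "query": "quarterly report",
    "max_results": 5
  }
}
```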
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits. It does not mention whether the search is case-sensitive, supports wildcards, or is subject to rate limits. The description is too brief to cover these aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences with no waste, front-loaded with the action. It could carry a bit more detail without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 2 params and no output schema, the description is minimal. It lacks details on search scope, behavior, or return format, making it incomplete for an agent to use effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes the parameters. The description adds no extra meaning beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it searches for files and folders by name or content, which is clear but lacks differentiation from sibling tools like dropbox_list_folder. It does not specify that it is a full-text search or distinguish it from listing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. For example, when to use dropbox_search vs dropbox_list_folder is not mentioned. No context on prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
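A params sketch; the key is borrowed from the examples in remember's schema:

```json
{
  "name": "forget",
  "arguments": {
    "key": "target_ticker"
  }
}
```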
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the destructive action (delete) but does not specify behavior if key is missing, whether deletion is irreversible, or any side effects. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, zero waste. Essential information front-loaded. Perfect for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 param, no output schema, no nested objects), the description is mostly sufficient. However, it lacks details on return value (e.g., success/failure indication) and error handling, which could be inferred but not explicit.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the 'key' parameter described as 'Memory key to delete'. The description adds no additional meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it deletes a stored memory by key, with a specific verb ('Delete') and resource ('stored memory'). It distinguishes from siblings like 'remember' (store) and 'recall' (retrieve), though could explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a memory needs to be removed, but lacks guidance on when not to use it (e.g., if key doesn't exist) or alternatives. No exclusions or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
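Because key is optional, the tool has two call shapes. Passing a key (e.g., `{ "key": "target_ticker" }`, reusing the example key from remember's schema) retrieves one memory; omitting it lists all stored keys, as in this params sketch:

```json
{
  "name": "recall",
  "arguments": {}
}
```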
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description clarifies behavior (retrieve vs. list, session persistence). However, it does not mention side effects or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words, essential information front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and one optional parameter, the description is sufficient. It could mention the return format, but that is not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds purpose of key ('to retrieve') and behavior when omitted, which is clear and helpful beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves a memory by key or lists all memories, distinguishing it from remember and forget.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly says when to omit the key (to list all) and provides context ('context you saved earlier'), but it does not explicitly say when not to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
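A params sketch using a key from the schema's examples; the value is an illustrative placeholder:

```json
{
  "name": "remember",
  "arguments": {
    "key": "target_ticker",
    "value": "AAPL"
  }
}
```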
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries full burden. It discloses key behavioral traits: persistence differences for authenticated vs anonymous users ('Authenticated users get persistent memory; anonymous sessions last 24 hours'). No contradictions with annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, each adding distinct value: purpose, usage guidance, and behavioral transparency. No redundant or unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters, no output schema, and no annotations, the description provides sufficient completeness: purpose, usage, behavioral persistence, and key-value semantics. It could mention that value is a string (implied by schema) but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add extra meaning beyond the schema; it only mentions 'key-value pair' generally. The schema already describes key and value with examples and usage notes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Store a key-value pair in your session memory' and provides specific use cases like saving 'intermediate findings, user preferences, or context across tool calls'. It clearly distinguishes itself from siblings like 'recall' (retrieve) and 'forget' (remove).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use this to save intermediate findings, user preferences, or context across tool calls', giving clear usage guidance. It does not explicitly mention when not to use it or alternatives, but the context signals show siblings like 'recall' for retrieval and 'forget' for deletion, which helps differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.