Freshdesk
Server Details
Freshdesk MCP Pack — helpdesk ticket and contact management via Freshdesk API v2.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-freshdesk
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 10 of 10 tools scored. Lowest: 2.8/5.
- Most tools have distinct purposes (Freshdesk CRUD vs memory vs tool discovery), but ask_pipeworx overlaps with all other tools as a meta-tool that wraps them, creating ambiguity about when to use ask_pipeworx versus individual tools.
- The Freshdesk tools follow a consistent freshdesk_verb_noun pattern, but other tools use different styles (ask_pipeworx, discover_tools, forget, recall, remember). This mixing of conventions reduces overall consistency.
- 10 tools is a reasonable count, but the set includes 5 memory/tool discovery tools alongside 5 Freshdesk-specific tools, which feels slightly imbalanced for a server named 'Freshdesk'.
- The Freshdesk tools cover basic ticket and contact retrieval and listing, but lack create/update/delete operations for both tickets and contacts, which are significant gaps for a CRM support server.
Available Tools
10 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
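As a sketch of a call payload based on the parameter table above (the question text is one of the description's own examples):

```json
{
  "question": "What is the US trade deficit with China?"
}
```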
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description transparently explains that Pipeworx selects the right tool and fills arguments, implying autonomous decision-making. It doesn't mention limitations or failure cases, but is otherwise clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, well-structured with a clear purpose statement and examples, all in a few sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single parameter and no output schema, the description provides sufficient context for an agent to use the tool. Examples enhance completeness, though it could mention potential delays or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds meaning beyond the schema by explaining that the question should be in natural language and providing examples of valid inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it answers questions in plain English using the best available data source, with specific examples. It distinguishes itself from sibling tools by acting as a general question-answering interface.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'no need to browse tools or learn schemas — just describe what you need' and provides usage examples, effectively guiding when to use this tool over siblings like freshdesk tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
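A hypothetical argument payload assembled from the schema's own example queries; the limit value is an illustrative choice within the documented max of 50:

```json
{
  "query": "look up FDA drug approvals",
  "limit": 10
}
```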
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must convey behavioral traits. It states the tool returns 'the most relevant tools with names and descriptions,' indicating a search operation without side effects. However, it doesn't detail how ranking works, whether it uses embedding search or keyword matching, or if the catalog is up-to-date. A 3 is fair as it covers basic behavior but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: the first states the purpose, the second the return shape, the third a clear when-to-use directive. Every word adds value; no fluff or repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description clarifies return type: 'most relevant tools with names and descriptions.' For a search tool with simple parameters, this is sufficient. The complexity is low, and the context signals (no nested objects, no enums) align with a straightforward interface.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add new meaning beyond the schema; the query parameter is described in schema as 'natural language description' and the tool description reinforces this. No additional usage hints for the limit parameter are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the verb ('search'), resource ('tool catalog'), and method ('natural language query'), distinguishing it from siblings like ask_pipeworx (which likely answers questions) and the recall/forget tools (which manage context).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This sets a clear precedence rule, guiding the agent to use this tool before others when many tools exist. It also implies the tool is for discovery, not execution.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: B)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
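A minimal example payload; the key shown is an illustrative placeholder borrowed from the remember tool's schema examples:

```json
{
  "key": "subject_property"
}
```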
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must cover behavioral aspects. It states 'Delete' indicating mutation but does not disclose if deletion is permanent, if authorization is needed, or any side effects. Lack of output schema also leaves return value ambiguous.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no wasted words. Clearly conveys the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (1 param, no output schema, no annotations), the description is minimally complete: it says what it does. However, missing behavioral details (e.g., confirmation, error cases) could be improved for a deletion operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and schema already describes 'key' as 'Memory key to delete'. Description does not add extra meaning beyond confirming the key identifies the memory to delete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the verb 'Delete' and the resource 'stored memory by key', distinguishing it from siblings like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs alternatives like 'recall' or 'remember'. The description implies it deletes a specific memory by key but does not mention prerequisites or caution (e.g., deletion is irreversible).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
freshdesk_get_contact (Grade: A)
Get full contact details by ID including name, email, phone, company, address, and ticket history.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Contact ID | |
| _apiKey | Yes | Freshdesk API key | |
| _domain | Yes | Freshdesk subdomain | |
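A sketch of a call payload, assuming a numeric contact ID (the table does not state the type); the credential values are placeholders:

```json
{
  "id": 123456,
  "_apiKey": "YOUR_FRESHDESK_API_KEY",
  "_domain": "mycompany"
}
```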
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states it returns 'full contact details', which is useful, but does not mention authentication requirements, rate limits, or error behavior. Since it's a read operation, the description is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that conveys the essential purpose and scope. Every word is necessary and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple read operation with no output schema, the description is minimally complete. It lacks details such as the structure of the returned contact record or error handling, but for a straightforward GET it is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no additional meaning beyond the schema for the parameters; it simply restates 'by ID' which matches the 'id' field.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Get') and resource ('Freshdesk contact by ID'), and clearly states the scope ('single'). It distinguishes itself from siblings like freshdesk_list_contacts and freshdesk_get_ticket.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use when you have a contact ID, but does not explicitly state when not to use it (e.g., for listing contacts, use freshdesk_list_contacts) or provide alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
freshdesk_get_ticket (Grade: A)
Get full ticket details by ID including subject, status, priority, description, conversations, attachments, and resolution notes.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Ticket ID | |
| _apiKey | Yes | Freshdesk API key | |
| _domain | Yes | Freshdesk subdomain | |
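An illustrative payload; per the quality notes below, id is a number, and the credential values are placeholders:

```json
{
  "id": 42,
  "_apiKey": "YOUR_FRESHDESK_API_KEY",
  "_domain": "mycompany"
}
```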
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates that the tool returns full ticket details including conversations, but does not disclose any side effects, authentication requirements beyond the obvious API key, rate limits, or error behaviors. The description is accurate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded with the essential purpose. Every word is meaningful and there is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (simple get by ID) and good schema coverage, the description is adequate but lacks details about error cases or what happens if the ticket doesn't exist. With no output schema, the description could be more helpful by describing the structure of the returned ticket details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add any extra meaning beyond the parameter names and types in the schema. The schema already documents that id is a number and the required authentication parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (Get), resource (a single Freshdesk ticket), and identifier (by its ID). It also mentions what is returned (full ticket details including conversations), which distinguishes it from sibling tools like freshdesk_list_tickets and freshdesk_search_tickets that return multiple tickets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not explicitly state when to use this tool versus alternatives. However, since the name and description clearly indicate it fetches a single ticket by ID, it is implicitly appropriate when you have a specific ticket ID. No explicit when-not or alternative tool guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
freshdesk_list_contacts (Grade: C)
List customer contacts. Returns name, email, phone, company, and contact ID for filtering and pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for pagination (default 1) | |
| _apiKey | Yes | Freshdesk API key | |
| _domain | Yes | Freshdesk subdomain | |
| per_page | No | Results per page (default 30, max 100) | |
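A sketch of a paginated call using values within the documented defaults and limits; credentials are placeholders:

```json
{
  "page": 1,
  "per_page": 50,
  "_apiKey": "YOUR_FRESHDESK_API_KEY",
  "_domain": "mycompany"
}
```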
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must carry the behavioral burden. It mentions pagination support, which is helpful, but does not disclose any other behaviors like rate limits, idempotency, or response size. With no annotations, a score of 3 is appropriate as it provides some but not full transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise at two sentences. It is front-loaded with the main purpose. However, it could include more useful information without being verbose, so not a perfect 5.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description does not explain the return value or structure. The tool has 4 parameters (2 required) and the description only covers pagination. It is minimally complete for a simple list operation but lacks detail on the output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, meaning all parameters are documented in the schema. The description adds no additional meaning beyond the schema. It mentions pagination generally but does not clarify the purpose of page vs. per_page parameters. A score of 2 reflects that the description adds minimal value over the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'List contacts from Freshdesk' which clearly indicates the verb (list) and resource (contacts). However, it does not distinguish itself from sibling tools like freshdesk_get_contact or freshdesk_list_tickets. The purpose is clear but lacks differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool vs. alternatives. For example, it does not mention that freshdesk_get_contact is for a single contact or that freshdesk_search_tickets is for searching tickets. The description only mentions pagination but no context on when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
freshdesk_list_tickets (Grade: B)
List support tickets filtered by status (e.g., "open", "closed") and priority (e.g., "1" for urgent). Returns ticket ID, subject, status, priority, and requester.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number for pagination (default 1) | |
| filter | No | Predefined filter: new_and_my_open, watching, spam, deleted (default: new_and_my_open) | |
| _apiKey | Yes | Freshdesk API key | |
| _domain | Yes | Freshdesk subdomain (e.g., "mycompany" for mycompany.freshdesk.com) | |
| order_by | No | Sort by: created_at, due_by, updated_at, status (default: created_at) | |
| per_page | No | Results per page (default 30, max 100) | |
| order_type | No | Sort order: asc or desc (default: desc) | |
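An example payload combining the documented filter, sort, and pagination options; every value comes from the defaults and enumerations in the table above, with placeholder credentials:

```json
{
  "filter": "new_and_my_open",
  "order_by": "updated_at",
  "order_type": "desc",
  "page": 1,
  "per_page": 30,
  "_apiKey": "YOUR_FRESHDESK_API_KEY",
  "_domain": "mycompany"
}
```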
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. Description is adequate but lacks disclosure of rate limits, data freshness, or error behaviors. Provides basic filtering and pagination info but not full behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences that clearly state purpose and key features. No unnecessary words, though more detail could be added without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Tool has 7 parameters (moderate complexity) and no output schema. Description covers main aspects but omits details like default values and sort options that are in the schema. Lacks explanation of response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. However, the description adds no value beyond the schema; it only mentions filtering and pagination generically, without explaining parameter specifics like order_by values or filter options.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States it lists tickets from Freshdesk and supports filtering by status, priority, and pagination. Clearly identifies the resource (tickets) and action (list). Differentiates from sibling tools like freshdesk_search_tickets by emphasizing listing with predefined filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs freshdesk_search_tickets or other listing tools. Does not specify prerequisites beyond API key and domain.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
freshdesk_search_tickets (Grade: A)
Search tickets by query (e.g., "status:2 AND priority:3" or keyword text). Returns matching ticket ID, subject, status, and priority.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query in Freshdesk syntax (e.g., "status:2", "priority:1 AND type:'Question'") | |
| _apiKey | Yes | Freshdesk API key | |
| _domain | Yes | Freshdesk subdomain | |
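An example payload using the query syntax from the tool's own description; credentials are placeholders:

```json
{
  "query": "status:2 AND priority:3",
  "_apiKey": "YOUR_FRESHDESK_API_KEY",
  "_domain": "mycompany"
}
```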
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description bears full burden. It does not disclose any behavioral traits such as rate limits, authentication requirements beyond what's in schema, or whether searches are case-sensitive. The description only explains the query syntax, missing other important behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with purpose, with no wasted words. It efficiently conveys the core functionality and gives an example of the query syntax.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema), the description is adequate but not complete. It explains the query parameter well but does not mention the return format, pagination, or any limitations. The sibling list shows similar search tools exist, but no comparison is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds minimal value beyond the schema: it mentions 'query string' and 'Freshdesk filter syntax' which is already in the schema's description. It does not explain the _apiKey and _domain parameters, but those are self-explanatory from their names and descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (search), the resource (Freshdesk tickets), and the method (using a query string). It distinguishes itself from siblings like freshdesk_list_tickets and freshdesk_get_ticket by specifying that it supports Freshdesk filter syntax for complex queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (for searching with filter syntax) but does not explicitly state when not to use it or mention alternatives among siblings. For example, it doesn't clarify that for simple listing, freshdesk_list_tickets might be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
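Two sketches covering the tool's modes: pass a key to fetch one memory, or send an empty object to list all stored keys (the key shown is an illustrative placeholder):

```json
{ "key": "subject_property" }
```

```json
{}
```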
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the two modes (by key vs list all) and indicates persistence across sessions ('saved earlier in the session or in previous sessions'). However, it doesn't mention if retrieval is destructive, requires authentication, or has any side effects. Without annotations, a 3 is fair.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. First sentence states the core action, second sentence adds usage context. Efficient and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional param, no output schema, no nested objects), the description covers the essential behavior: retrieval modes and persistence. Missing details like return format or error handling, but complexity is low enough that this is mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'key'. Description adds context that omitting the key lists all memories, but otherwise repeats what schema says (key is memory key). With full schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb ('Retrieve'/'list'), resource ('stored memory'), and dual behavior: retrieve by key or list all when key omitted. Distinguishes from sibling 'remember' (store) and 'forget' (delete) by focusing on retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes when to use (retrieve context saved earlier) and includes implicit guidance: omit key to list all. No explicit when-not or alternatives to other tools, but the purpose is narrow enough that exclusion is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
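An example payload built from one of the schema's own example keys; the value is an illustrative placeholder:

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```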
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that authenticated users get persistent memory while anonymous sessions last 24 hours. This adds behavioral context beyond annotations (none provided). It could also mention that values are overwritten on duplicate keys, but the key field description hints at unique keys.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: it front-loads the core purpose and adds useful context about persistence. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (two required string parameters, no output schema), the description is complete. It explains purpose, use cases, and persistence behavior. Could mention that values are overwritten on duplicate keys, but this is minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes both parameters (key and value) with examples, achieving 100% schema coverage. The description adds no further parameter semantics beyond what the schema provides, so a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory. It specifies the verb 'store' and the resource 'session memory', distinguishing it from sibling tools like 'recall' (retrieve) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it: to save intermediate findings, user preferences, or context across tool calls. It also provides context on persistence (authenticated vs anonymous). However, it does not explicitly say when not to use it or mention alternatives like 'recall' or 'forget'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.