ClinicalTrials
Server Details
ClinicalTrials MCP — wraps ClinicalTrials.gov API v2 (free, no auth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-clinicaltrials
- GitHub Stars: 0
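
The upstream service is the public v2 REST endpoint at https://clinicaltrials.gov/api/v2/studies. As a rough sketch of the kind of payload the wrapped API returns (field names follow the public v2 documentation; the values below are invented for illustration):

```json
{
  "studies": [
    {
      "protocolSection": {
        "identificationModule": {
          "nctId": "NCT00000000",
          "briefTitle": "Illustrative study title"
        },
        "statusModule": { "overallStatus": "RECRUITING" }
      }
    }
  ],
  "nextPageToken": "opaque-token"
}
```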
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Tools like ct_search, ct_count_by_condition, ct_sponsor_trials, and ct_get_study are mostly distinct, but ct_count_by_condition could overlap with ct_search (both can find counts). The non-clinical tools (ask_pipeworx, discover_tools, memory tools) are clearly separate, but their purpose within a clinical trials server is unclear, causing ambiguity about the server's focus.
Most clinical trial tools use the prefix 'ct_' with descriptive names (ct_search, ct_get_study), but ask_pipeworx, discover_tools, forget, recall, and remember break the pattern. This mixed convention reduces predictability.
With 10 tools, the count is reasonable, but about half are not clinical-trial-specific (ask_pipeworx, discover_tools, memory tools). This feels like padding and dilutes the server's focus.
The clinical trial tools cover basic search and retrieval but lack essential operations like comparing trials, analyzing results, or accessing historical data. The memory tools are out of place and do not fill gaps in the clinical trial domain. The presence of ask_pipeworx and discover_tools suggests an attempt at completeness, but they are generic and not tailored to clinical trials.
Available Tools
10 tools
ask_pipeworx (B)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
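
A hypothetical invocation, assuming the standard MCP tools/call request shape (the question value is borrowed from the examples above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": { "question": "Look up adverse events for ozempic" }
  }
}
```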
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions automatic tool selection and argument filling but does not disclose potential limitations, such as the scope of data sources, response format, latency, or error behavior. The description lacks important behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is concise and front-loaded with the core purpose. Each sentence adds value, though the examples could be slightly more varied. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one string param, no output schema), the description is mostly adequate but lacks detail on what happens after a question is asked (e.g., whether it returns a citation, confidence score, or raw text). Behavioral gaps reduce completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal extra meaning beyond the schema, mostly elaborating on the single parameter's purpose through examples.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answering natural language questions by automatically selecting the best data source and filling arguments. It distinguishes itself from sibling tools by acting as a general-purpose query interface rather than a specific data source tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (when you have a plain English question) but does not explicitly state when not to use it or provide alternatives among sibling tools. Examples help, but no exclusion criteria are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ct_count_by_condition (C)
Count trials for a condition (e.g., 'diabetes'). Returns breakdown by status and phase for landscape analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| phase | No | Optional phase filter: PHASE1, PHASE2, PHASE3, PHASE4 | |
| status | No | Optional status filter: RECRUITING, COMPLETED, etc. | |
| condition | Yes | Condition or disease (e.g., "breast cancer", "diabetes", "Alzheimer") | |
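
Sketched the same way, a filtered count request might look like this (argument values are illustrative; the enum spellings come from the parameter table above):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "ct_count_by_condition",
    "arguments": {
      "condition": "breast cancer",
      "phase": "PHASE3",
      "status": "RECRUITING"
    }
  }
}
```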
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description should disclose behavioral traits. It doesn't mention if the count is approximate or exact, if it includes all phases by default, or any rate limits. The description is minimal and lacks important behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences. The first sentence clearly states the primary function. However, the second sentence about use cases could be integrated or made more specific. Overall, it's appropriately sized and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given there is no output schema, the description could explain what the output looks like (e.g., just a number? also grouped by phase?). The tool has 3 optional parameters, but the description doesn't guide on how filters affect counting. It's minimally complete but leaves gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters have descriptions. The description adds no additional parameter meaning beyond the schema. Baseline 3 is appropriate as the schema already documents parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states it counts clinical trials by condition, which is clear. However, it doesn't differentiate from sibling tools like ct_search, which also deals with clinical trials. The verb 'count' helps distinguish, but more explicit distinction would improve clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'landscape analysis and competitive intelligence' as use cases, which is good. But it doesn't specify when not to use this tool or compare it to alternatives like ct_search or ct_get_study. No guidance on excluding other tools is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ct_get_study (A)
Get full trial details by NCT ID (e.g., 'NCT04567890'). Returns protocol, eligibility criteria, primary outcomes, sponsor, locations, and results.
| Name | Required | Description | Default |
|---|---|---|---|
| nct_id | Yes | ClinicalTrials.gov NCT identifier (e.g., "NCT05462717") | |
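
A minimal sketch of a lookup call, reusing the NCT ID from the schema's own example:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "ct_get_study",
    "arguments": { "nct_id": "NCT05462717" }
  }
}
```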
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that the tool returns complete protocol sections, which is helpful. However, it doesn't disclose whether the tool requires authentication, has rate limits, or what happens if the NCT ID doesn't exist (e.g., error behavior). The description adds moderate value beyond the schema but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the purpose and immediately provide scope. Every word adds value: 'full study details', 'by its NCT ID', and listing returned sections. No redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description is nearly complete. It covers what the tool does and what it returns. Minor gaps: no mention of error handling or output format, but for a straightforward lookup tool this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description need not repeat parameter details. It adds context by explaining the tool's purpose (full study details) and the content returned (eligibility, outcomes, results), which clarifies what the parameter 'nct_id' is used for. This adds meaning beyond the schema's basic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full study details for a clinical trial by NCT ID. It specifies the verb 'Get', the resource 'study details', and the identifier type 'NCT ID'. It also lists returned content (eligibility, outcomes, results), distinguishing it from siblings like ct_search or ct_count_by_condition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool: when needing full study details for a known NCT ID. It doesn't explicitly mention when not to use it or alternatives, but the context of sibling tools (e.g., ct_search for broader queries) provides implicit guidance. The clear purpose helps the agent decide.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ct_recent_updates (A)
Get recently posted or updated trials sorted by date. Returns NCT IDs, titles, status changes, and conditions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 20) | |
| query | No | Optional search term to narrow results | |
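
A plausible call for monitoring recent activity (the query value is illustrative; both parameters are optional per the table):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "ct_recent_updates",
    "arguments": { "query": "GLP-1", "limit": 20 }
  }
}
```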
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It clarifies sorting by last update date, but does not disclose other behaviors like rate limits, authentication needs, or whether results are cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no waste. Front-loaded with the action and result, then a one-sentence use case.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema with only two optional parameters and no output schema, the description adequately covers purpose and usage. It could mention return format or behavior when no results are found, but is complete enough for this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description notes that 'query' is optional and narrows results, but this restates the schema rather than adding meaning beyond it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Get' and resource 'recently updated or posted clinical trials' with clear sorting criteria. It distinguishes from siblings like ct_search by focusing on recency rather than general search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states it is 'Good for monitoring pipeline changes,' providing a clear use case. However, it does not explicitly mention when not to use it or suggest alternatives among the listed siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ct_search (A)
Search clinical trials by keyword, condition, status (e.g., 'Recruiting'), or phase (e.g., 'Phase 2'). Returns NCT IDs, titles, status, enrollment, and sponsor info.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 10) | |
| phase | No | Filter by phase: EARLY_PHASE1, PHASE1, PHASE2, PHASE3, PHASE4 | |
| query | Yes | Search term (e.g., "GLP-1 receptor agonist", "breast cancer immunotherapy") | |
| status | No | Filter by overall status: RECRUITING, ACTIVE_NOT_RECRUITING, COMPLETED, TERMINATED, WITHDRAWN, ENROLLING_BY_INVITATION, SUSPENDED, NOT_YET_RECRUITING | |
| sponsor | No | Filter by sponsor name (e.g., "Pfizer", "Novo Nordisk") | |
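
A hedged sketch combining several filters (values are illustrative and drawn from the table's examples; the envelope assumes the standard MCP tools/call shape):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "ct_search",
    "arguments": {
      "query": "breast cancer immunotherapy",
      "status": "RECRUITING",
      "phase": "PHASE2",
      "limit": 10
    }
  }
}
```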
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the search and filter behavior and the return structure, which is adequate but lacks details on pagination, rate limits, or data freshness. A neutral score is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the key information: what it searches, by which filters, and what it returns. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters, no output schema), the description is minimal but covers the basic search and filter functionality. It does not explain the response structure beyond 'study count and array of matching trials', which may be insufficient for an agent to parse results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description mentions keyword, status, phase, and sponsor as filter options, adding no semantics beyond the schema's parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches clinical trials by keyword and filters (status, phase, sponsor), and mentions the return value (count + array of trials). It distinguishes itself from siblings like ct_get_study (single study) and ct_count_by_condition (count only), though it could be more explicit about the difference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching trials with filters, but provides no explicit guidance on when to use this tool vs alternatives like ct_count_by_condition or ct_get_study. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ct_sponsor_trials (A)
List all trials by sponsor or organization name. Returns status, phase, and conditions to map research pipelines.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of results (1-100, default 20) | |
| phase | No | Optional phase filter | |
| status | No | Optional status filter | |
| sponsor | Yes | Sponsor or company name (e.g., "Pfizer", "Novo Nordisk", "Moderna") | |
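
For instance, a sponsor pipeline query might be issued as follows (values are illustrative; the status enum is assumed to match ct_search's):

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "ct_sponsor_trials",
    "arguments": { "sponsor": "Moderna", "status": "RECRUITING", "limit": 20 }
  }
}
```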
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It indicates read-only behavior (listing) but does not disclose rate limits, pagination, or data freshness, and it lacks an explicit statement that the tool is non-destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description has a concise two-sentence structure: the first sentence states the purpose, the second adds context. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 4 parameters (1 required) and no output schema. The description provides minimal additional context beyond the schema. It mentions pipeline analysis but does not explain return format, ordering, or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add parameter semantics beyond the schema. It does not explain the relationship between parameters or provide examples of how filters interact.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists clinical trials by sponsor, which is a specific verb+resource combination. It differentiates from siblings like ct_search (general search) and ct_count_by_condition (counts by condition).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'useful for pipeline analysis,' implying when to use, but does not explicitly state when not to use or mention alternatives like ct_search for broader queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
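
A sample discovery call (the query is one of the schema's own examples):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": { "query": "look up FDA drug approvals", "limit": 10 }
  }
}
```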
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description states it returns the most relevant tools but does not specify the ranking algorithm, whether it uses semantic search, or any limitations (e.g., rate limits). No annotations provided, so some behavioral detail is missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences: first states purpose, second describes output, third gives usage guidance. No wasted words, front-loaded with key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description clarifies what is returned (names and descriptions). Tool has simple inputs; description covers essential aspects. Could mention what happens on empty results or errors, but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with descriptions for both 'query' (natural language description) and 'limit' (max number). Description reinforces the natural language aspect, adding value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches a tool catalog and returns relevant tools with names and descriptions. The verb 'search' plus the resource 'Pipeworx tool catalog' makes the purpose specific and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives clear when-to-use guidance and implies it's a discovery step before using other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
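
A minimal sketch of a deletion call (the key is a hypothetical value, patterned on the key examples in the sibling remember tool's schema):

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": { "key": "target_ticker" }
  }
}
```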
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states 'Delete' (destructive) but doesn't disclose whether deletion is permanent, whether confirmation is needed, or whether it affects other data. The description is too brief to cover behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise (6 words) and front-loaded with the action. However, it is perhaps too terse, lacking context that would earn a 5.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 required param, no output schema), the description is minimal but complete enough to convey basic purpose. However, it lacks any behavioral or usage context that would help an agent decide to invoke it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'key' described as 'Memory key to delete'. The description adds no additional semantic value beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete', the resource 'stored memory', and the means 'by key'. It distinguishes from siblings like 'remember' (create) and 'recall' (retrieve), though it could explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs alternatives. The tool is for deletion, but no context is given about prerequisites (e.g., memory must exist) or consequences (irreversible?). No mention of when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
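
Per the description, omitting key lists everything stored, so a list-all call can pass empty arguments (a sketch, assuming the standard tools/call shape):

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
```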
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the behavior: retrieving by key or listing all. It does not specify the return format or error handling, but for a simple key-value retrieval the description is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Each sentence adds distinct information: retrieval method and usage context. Efficiently front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, no output schema, no nested objects), the description is complete enough. It covers the core functionality and usage hint. Minor gap: no mention of what happens if key doesn't exist.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes the 'key' parameter with 100% coverage. The description adds context that omitting the key lists all memories, which is a key behavioral insight not in the schema. This adds value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Retrieve' and the resource 'memory by key', and distinguishes between retrieving a specific key and listing all memories. This effectively differentiates it from sibling tools like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use it ('to retrieve context you saved earlier') and provides the alternative action (omit key to list all). It also implies when not to use it (e.g., for storing, use 'remember'). This gives clear guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
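
A hypothetical store operation (the key follows the schema's examples; the value is invented):

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "user_preference",
      "value": "prefers recruiting Phase 3 oncology trials"
    }
  }
}
```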
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses persistence behavior (authenticated vs. 24-hour anonymous) and implies a non-destructive nature. There is no contradiction, since annotations are absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: first explains function, second gives usage guidance. Every sentence adds value with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a simple tool with no output schema and only two parameters, the description covers purpose, usage, and persistence behavior. Minor gap: it does not mention whether the value is overwritten when the same key is reused, though the schema hints at that.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add parameter details beyond the schema's descriptions. A baseline of 3 is appropriate, as the schema does the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, specifying what it saves (intermediate findings, user preferences, context) and differentiates from siblings like 'recall' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it (save context across tool calls) and notes persistence differences (authenticated vs anonymous). However, it doesn't explicitly mention when not to use it or alternatives like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!