clinicaltrialsgov-mcp-server
Server Details
ClinicalTrials.gov MCP server. Search studies, retrieve results, match patients to eligible trials.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: cyanheads/clinicaltrialsgov-mcp-server
- GitHub Stars: 64
- Server Listing: clinicaltrialsgov-mcp-server
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
7 tools

clinicaltrials_find_eligible (Read-only, Idempotent)
Match patient demographics and conditions to eligible recruiting clinical trials. Provide age, sex, conditions, and location to find studies with matching eligibility criteria, contact information, and recruiting locations.
| Name | Required | Description | Default |
|---|---|---|---|
| age | Yes | Patient age in years. | |
| sex | Yes | Patient's biological sex. Use 'All' to include studies regardless of sex restrictions. | |
| location | Yes | Patient location. | |
| conditions | Yes | Medical conditions or diagnoses. E.g., ["Type 2 Diabetes", "Hypertension"]. | |
| maxResults | No | Maximum results to return. | |
| recruitingOnly | No | Only include actively recruiting studies. | |
| healthyVolunteer | No | Whether the patient is a healthy volunteer. When true, only studies accepting healthy volunteers are queried. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| studies | Yes | Matching studies with eligibility and location fields. |
| totalCount | No | Total matching studies from the API. |
| noMatchHints | No | Hints when no studies match, with suggestions to broaden the search. |
| searchCriteria | Yes | Search criteria used. |
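As an illustration of the parameter table above, a hypothetical argument set for this tool might look like the following Python sketch. The parameter names come from the table; the values, and the pre-flight completeness check, are assumptions for illustration only.

```python
# Hypothetical example arguments for clinicaltrials_find_eligible.
# Parameter names match the table above; values are illustrative.
args = {
    "age": 62,
    "sex": "All",
    "location": "Seattle, WA",
    "conditions": ["Type 2 Diabetes", "Hypertension"],
    "recruitingOnly": True,
    "maxResults": 10,
}

# The four required parameters per the schema.
REQUIRED = {"age", "sex", "location", "conditions"}

def is_complete(payload: dict) -> bool:
    """True when every required parameter is present in the payload."""
    return REQUIRED.issubset(payload)

print(is_complete(args))
```

A client would typically run a check like this before dispatching the call, since the server rejects payloads missing any of the four required fields.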
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent safety, so the description appropriately focuses on behavioral context: it discloses the query optimization ('Builds an optimized... query'), specifies which fields are returned ('eligibility and location fields'), and crucially sets expectations that the caller must perform final evaluation ('for the caller to evaluate').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First sentence establishes purpose and mechanism; second details input mapping and output expectations. Information is front-loaded and dense.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the output schema exists, the description appropriately summarizes the return value (studies with specific fields) without redundant detail. It covers the external API (ClinicalTrials.gov), the patient-to-trial workflow, and the necessary human-in-the-loop evaluation step. A 5 would require mentioning pagination limits or specific caveats about location matching radius.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description maps the four required parameters to the conceptual 'patient profile' framing, which adds mild semantic grouping, but individual parameter details (ranges, formats) are already well-covered by the schema itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action (Match) and resource (recruiting clinical trials), emphasizing the patient profile matching aspect that distinguishes it from generic search siblings like 'clinicaltrials_search_studies'. It explicitly mentions ClinicalTrials.gov as the target data source.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the specific use case (patient eligibility matching vs. general research) by focusing on 'patient demographics' and 'patient profile'. However, it lacks explicit guidance on when to use the sibling 'search_studies' instead, or explicit warnings about the limitations of automated eligibility matching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clinicaltrials_get_field_definitions (Read-only, Idempotent)
Get field definitions from the ClinicalTrials.gov study data model. Returns the field tree with piece names (used in the fields parameter and AREA[] filters), data types, and nesting structure. Call with no path for a top-level overview, then drill into a section with the path parameter to see its fields.
| Name | Required | Description | Default |
|---|---|---|---|
| path | No | Dot-notation path to get a subtree. E.g., "protocolSection.designModule", "protocolSection.eligibilityModule", "resultsSection". Omit for top-level overview (sections + direct children, not the full tree). | |
| includeIndexedOnly | No | Only return indexed (searchable) fields. Default: false. Has no visible effect at the top level — use with a path to filter leaf fields. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fields | Yes | Field definitions. |
| totalFields | Yes | Total fields returned. |
| resolvedPath | No | Resolved path when path param was used. |
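The drill-down pattern the description recommends can be sketched as a sequence of calls, first with no path for the overview, then narrowing into a subtree. This is a hypothetical illustration; the paths are examples taken from the parameter table.

```python
# Sketch of the drill-down exploration pattern: start with no path,
# then request specific subtrees. Paths use dot notation.
calls = [
    {},  # top-level overview: sections + direct children
    {"path": "protocolSection.eligibilityModule"},           # one module's fields
    {"path": "resultsSection", "includeIndexedOnly": True},  # only searchable leaves
]

def describe(call: dict) -> str:
    """Human-readable label for each exploration step."""
    return call.get("path", "<top-level overview>")

print([describe(c) for c in calls])
```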
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only/idempotent safety. The description adds valuable behavioral context: it explains the hierarchical/tree structure of the output, defines 'piece names' and their usage in filters, and describes the drill-down exploration pattern. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: purpose (sentence 1), return structure/relevance (sentence 2), and usage pattern (sentence 3). Front-loaded with the core action and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema and comprehensive annotations, the description appropriately focuses on usage patterns and semantic relationships (piece names in filters) rather than return value details. It fully covers the hierarchical discovery workflow implied by the parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds meaningful usage context for the 'path' parameter ('drill into a section'), explaining the interaction between empty path (overview) and specific paths (subtree). It does not add explicit semantics for 'includeIndexedOnly,' but the schema handles that.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('Get') + resource ('field definitions') + domain ('ClinicalTrials.gov study data model'). It clearly distinguishes this metadata tool from siblings like search_studies or get_field_values by emphasizing it returns the 'field tree' with 'piece names,' 'data types,' and 'nesting structure' rather than actual study data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear usage context with the progressive disclosure pattern ('Call with no path for top-level overview, then drill into a section'). Mentions the output is 'used in the fields parameter and AREA[] filters,' implying when to use it (for query construction), but does not explicitly contrast with get_field_values or state prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clinicaltrials_get_field_values (Read-only, Idempotent)
Discover valid values for ClinicalTrials.gov fields with study counts per value. Use to explore available filter options before building a search — e.g., valid OverallStatus, Phase, InterventionType, StudyType, or LeadSponsorClass values.
| Name | Required | Description | Default |
|---|---|---|---|
| fields | Yes | PascalCase piece name(s) to get values for. Common fields: OverallStatus, Phase, StudyType, InterventionType, LeadSponsorClass, Sex, StdAge, DesignAllocation, DesignPrimaryPurpose, DesignMasking. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| fieldStats | Yes | Statistics per requested field. |
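A minimal request sketch, assuming the PascalCase naming rule from the parameter table. The rough format check is an assumption added for illustration, not part of the tool's API.

```python
# Illustrative request for clinicaltrials_get_field_values, asking for the
# enumerations an agent would want before composing status/phase filters.
request = {"fields": ["OverallStatus", "Phase", "StudyType"]}

def looks_pascal_case(name: str) -> bool:
    """Rough check: starts with an uppercase letter, alphanumeric only."""
    return name[:1].isupper() and name.isalnum()

print(all(looks_pascal_case(f) for f in request["fields"]))
```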
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover readOnly/idempotent/openWorld hints. Description adds value by specifying that results include 'study counts per value,' indicating the tool returns prevalence metadata, not just enum values. Does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences. Front-loaded with purpose, followed by usage guideline and examples. No redundant or wasted words; em-dash usage effectively integrates examples without breaking flow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of output schema and comprehensive annotations, the description adequately covers the exploration use case. No gaps remain for the agent to understand when and why to invoke this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with detailed parameter documentation including PascalCase naming and common field list. Description reinforces usage with contextual examples but does not add semantic information beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (Discover) + resource (valid values for ClinicalTrials.gov fields) + scope (with study counts per value). Clearly distinguishes from sibling search tools by positioning as a field discovery mechanism rather than study retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use to explore available filter options before building a search,' establishing clear workflow context. Provides concrete field examples (OverallStatus, Phase, etc.). Lacks explicit naming of sibling search tools as the alternative for the next step.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clinicaltrials_get_study_count (Read-only, Idempotent)
Get total study count matching a query without fetching study data. Fast and lightweight. Use for quick statistics or to build breakdowns by calling multiple times with different filters (e.g., count by phase, count by status, count recruiting vs completed for a condition).
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | General full-text search across all fields. | |
| phaseFilter | No | Filter by trial phase. Values: EARLY_PHASE1, PHASE1, PHASE2, PHASE3, PHASE4, NA. | |
| sponsorQuery | No | Sponsor/collaborator name search. | |
| statusFilter | No | Filter by study status. Values: RECRUITING, COMPLETED, ACTIVE_NOT_RECRUITING, NOT_YET_RECRUITING, ENROLLING_BY_INVITATION, SUSPENDED, TERMINATED, WITHDRAWN, UNKNOWN, WITHHELD, NO_LONGER_AVAILABLE, AVAILABLE, APPROVED_FOR_MARKETING, TEMPORARILY_NOT_AVAILABLE. | |
| advancedFilter | No | Advanced filter using AREA[] Essie syntax. E.g., "AREA[StudyType]INTERVENTIONAL", "AREA[EnrollmentCount]RANGE[100, 1000]". Combine with AND/OR/NOT and parentheses. | |
| conditionQuery | No | Condition/disease-specific search. E.g., "Type 2 Diabetes", "non-small cell lung cancer". | |
| interventionQuery | No | Intervention/treatment search. E.g., "pembrolizumab", "cognitive behavioral therapy". | |
Output Schema
| Name | Required | Description |
|---|---|---|
| totalCount | Yes | Total studies matching the query/filters. |
| noMatchHints | No | Suggestions when no studies match (totalCount is 0). |
| searchCriteria | No | Echo of query/filter criteria used. |
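The "breakdown" pattern from the description can be sketched as one lightweight count call per filter value. The phase values below are the enum values from the parameter table; the condition string and the helper are illustrative assumptions.

```python
# One clinicaltrials_get_study_count argument set per phase, for a
# phase-by-phase breakdown of a single condition.
PHASES = ["PHASE1", "PHASE2", "PHASE3", "PHASE4"]

def build_calls(condition: str) -> list[dict]:
    """Build one count request per phase for the given condition."""
    return [{"conditionQuery": condition, "phaseFilter": p} for p in PHASES]

calls = build_calls("non-small cell lung cancer")
print(len(calls))
```

Each call returns only a totalCount, so iterating like this stays cheap compared to fetching full study records.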
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent, while the description adds valuable performance context ('Fast and lightweight') and behavioral usage patterns (iterative filtering for breakdowns). Does not mention rate limits or caching, but covers the essential behavioral traits beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose (sentence 1), performance (sentence 2), usage guidelines with examples (sentence 3). Front-loaded with core action and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 optional parameters, output schema existence, and rich annotations, the description adequately covers tool purpose, performance characteristics, and usage patterns without needing to describe return values (handled by output schema).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description adds semantic value by mapping parameters to usage patterns ('count by phase' references phaseFilter, 'count by status' references statusFilter), explaining the intent behind the filter parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Get' with resource 'total study count' and explicitly distinguishes from siblings by stating 'without fetching study data', clearly positioning it against clinicaltrials_search_studies and clinicaltrials_get_study_record.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('quick statistics', 'build breakdowns') and provides concrete implementation patterns ('calling multiple times with different filters') with specific examples ('count by phase', 'count by status', 'count recruiting vs completed').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clinicaltrials_get_study_record (Read-only, Idempotent)
Fetch a single clinical study by NCT ID. Returns the full study record including protocol details, eligibility criteria, outcomes, arms, interventions, contacts, and locations.
| Name | Required | Description | Default |
|---|---|---|---|
| nctId | Yes | NCT identifier (e.g., NCT03722472). | |
Output Schema
| Name | Required | Description |
|---|---|---|
| study | Yes | Full study record. |
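Since the tool takes a single NCT identifier, a cheap client-side format check avoids a wasted round trip on malformed input. This sketch assumes the standard ClinicalTrials.gov format ("NCT" followed by 8 digits, as in the example NCT03722472 above).

```python
import re

# NCT identifiers are "NCT" plus 8 digits (e.g., NCT03722472).
NCT_RE = re.compile(r"^NCT\d{8}$")

def valid_nct(nct_id: str) -> bool:
    """Pre-flight format check before calling clinicaltrials_get_study_record."""
    return bool(NCT_RE.match(nct_id))

print(valid_nct("NCT03722472"), valid_nct("12345678"))
```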
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover readOnly/idempotent hints, so the description appropriately focuses on adding scope context ('Returns the full study record') and architectural equivalence. No contradictions with annotations. Does not disclose rate limits or caching, but these are not strictly necessary.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, all earning their place: core action, return value scope, and resource equivalence. Front-loaded with the essential verb-resource pairing. No filler text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple single-parameter input, 100% schema coverage, existing output schema, and comprehensive annotations, the description provides complete context by clarifying it returns the 'full' record (differentiating from get_study_results) and handling resource equivalence.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (nctId fully documented with pattern and example), the description meets the baseline by reinforcing the parameter's purpose ('by NCT ID') without needing to add redundant syntax details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the specific action ('Fetch'), resource ('single clinical study'), and key identifier ('by NCT ID'), distinguishing it clearly from siblings like search_studies (plural/search) and get_study_results (subset of data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides valuable context about resource equivalence ('Equivalent to the clinicaltrials://{nctId} resource') indicating when to use this over MCP resources. However, it does not explicitly contrast with clinicaltrials_search_studies for cases where the user lacks a specific NCT ID.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clinicaltrials_get_study_results (Read-only, Idempotent)
Fetch trial results data for completed studies — outcome measures with statistics, adverse events, participant flow, and baseline characteristics. Only available for studies where hasResults is true. Use clinicaltrials_search_studies first to find studies with results.
| Name | Required | Description | Default |
|---|---|---|---|
| nctIds | Yes | One or more NCT IDs (max 20). E.g., "NCT12345678" or ["NCT12345678", "NCT87654321"]. Use summary=true for large batches to avoid large payloads. | |
| summary | No | Return condensed summaries instead of full data. Reduces payload from ~200KB to ~5KB per study. Summaries include outcome titles, types, timeframes, group counts, and top-level stats — omitting individual measurements, analyses, and per-group data. | |
| sections | No | Filter which sections to return. Values: outcomes, adverseEvents, participantFlow, baseline. Omit for all sections. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| results | Yes | Results per study. |
| fetchErrors | No | Studies that could not be fetched. |
| studiesWithoutResults | No | NCT IDs that do not have results data. |
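Because nctIds caps at 20 IDs per call and summary=true shrinks the payload (~5KB vs ~200KB per study, per the notes above), a bulk fetch naturally splits into batches. A hypothetical batching helper, with placeholder IDs:

```python
# Split a long NCT ID list into valid-sized clinicaltrials_get_study_results
# calls, using condensed summaries for bulk retrieval.
def batch_requests(nct_ids: list[str], max_per_call: int = 20) -> list[dict]:
    """One request dict per batch of up to max_per_call IDs, summary mode on."""
    return [
        {"nctIds": nct_ids[i:i + max_per_call], "summary": True}
        for i in range(0, len(nct_ids), max_per_call)
    ]

ids = [f"NCT{n:08d}" for n in range(45)]  # 45 placeholder IDs
print([len(b["nctIds"]) for b in batch_requests(ids)])  # → [20, 20, 5]
```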
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent). Description adds valuable behavioral context: specific data contents returned, availability constraint (hasResults check), and contrasts with search workflow. Does not mention rate limits or error states, but provides solid content characterization.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: data content front-loaded, followed by availability constraint, then workflow guidance. Each sentence serves distinct purpose (what, when available, how to prepare).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage, existing output schema, and comprehensive annotations, the description provides complete workflow context (prerequisite search step) and content characterization without redundancy.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, providing detailed descriptions for nctIds, summary, and sections. Description adds implicit context that nctIds must reference completed studies with results, but does not elaborate on parameter semantics beyond what the schema already provides. Baseline score appropriate given comprehensive schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Fetch' + resource 'trial results data' + scope 'completed studies'. Lists concrete data types (outcome measures, adverse events, participant flow, baseline characteristics) that distinguish it from sibling get_study_record.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit constraint 'Only available for studies where hasResults is true' and workflow instruction 'Use search_studies first to find studies with results' — clearly defines prerequisites and sequences with named siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
clinicaltrials_search_studies (Read-only, Idempotent)
Search for clinical trial studies from ClinicalTrials.gov. Supports full-text and field-specific queries, status/phase/geographic filters, pagination, sorting, and field selection. Use the fields parameter to reduce payload size — full study records are ~70KB each.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort order. Format: FieldName:asc or FieldName:desc. E.g., "LastUpdatePostDate:desc", "EnrollmentCount:desc". Max 2 fields comma-separated. | |
| query | No | General full-text search across all fields. | |
| fields | No | Fields to return (PascalCase piece names). Strongly recommended to reduce payload. Common: NCTId, BriefTitle, OverallStatus, Phase, LeadSponsorName, Condition, InterventionName, BriefSummary, EnrollmentCount, StartDate. | |
| nctIds | No | Filter to specific NCT IDs for batch lookups. | |
| pageSize | No | Results per page, 1–200. | |
| geoFilter | No | Geographic proximity filter. Format: distance(lat,lon,radius). E.g., "distance(47.6062,-122.3321,50mi)" for studies within 50 miles of Seattle. | |
| pageToken | No | Pagination cursor from a previous response. | |
| countTotal | No | Include total study count in response. Only computed on the first page. | |
| titleQuery | No | Search within study titles and acronyms only. | |
| phaseFilter | No | Filter by trial phase. Values: EARLY_PHASE1, PHASE1, PHASE2, PHASE3, PHASE4, NA. | |
| outcomeQuery | No | Search within outcome measure fields. | |
| sponsorQuery | No | Sponsor/collaborator name search. | |
| statusFilter | No | Filter by study status. Values: RECRUITING, COMPLETED, ACTIVE_NOT_RECRUITING, NOT_YET_RECRUITING, ENROLLING_BY_INVITATION, SUSPENDED, TERMINATED, WITHDRAWN, UNKNOWN, WITHHELD, NO_LONGER_AVAILABLE, AVAILABLE, APPROVED_FOR_MARKETING, TEMPORARILY_NOT_AVAILABLE. | |
| locationQuery | No | Location search — city, state, country, or facility name. | |
| advancedFilter | No | Advanced filter using AREA[] Essie syntax. E.g., "AREA[StudyType]INTERVENTIONAL", "AREA[EnrollmentCount]RANGE[100, 1000]". Combine with AND/OR/NOT and parentheses. | |
| conditionQuery | No | Condition/disease-specific search. E.g., "Type 2 Diabetes", "non-small cell lung cancer". | |
| interventionQuery | No | Intervention/treatment search. E.g., "pembrolizumab", "cognitive behavioral therapy". | |
Output Schema
| Name | Required | Description |
|---|---|---|
| studies | Yes | Matching studies. |
| totalCount | No | Total matching studies (first page only when countTotal=true). |
| noMatchHints | No | Suggestions for broadening the search when no results are found. |
| nextPageToken | No | Token for the next page. Absent on last page. |
| searchCriteria | No | Echo of query/filter criteria used. Present when results are empty. |
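A payload-conscious search, following the description's advice to use the fields parameter, might be assembled like this. The geoFilter string follows the distance(lat,lon,radius) format from the parameter table; the specific values and helper are illustrative assumptions.

```python
# Build a clinicaltrials_search_studies payload that limits the response
# to a few fields and filters to recruiting studies near Seattle.
def geo_filter(lat: float, lon: float, radius: str) -> str:
    """Format the geoFilter string: distance(lat,lon,radius)."""
    return f"distance({lat},{lon},{radius})"

search_args = {
    "conditionQuery": "Type 2 Diabetes",
    "statusFilter": "RECRUITING",
    "geoFilter": geo_filter(47.6062, -122.3321, "50mi"),
    "fields": ["NCTId", "BriefTitle", "OverallStatus", "Phase"],
    "pageSize": 50,
    "countTotal": True,
}

print(search_args["geoFilter"])
```

Without the fields selection, each full record runs ~70KB, so a 50-result page could approach 3.5MB; restricting to four fields keeps the response manageable.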
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint, openWorldHint, and idempotentHint. The description adds crucial behavioral context beyond these: the ~70KB payload size warning helps the agent understand performance implications and why field selection matters. It does not mention rate limits or authentication requirements, but the payload warning is significant added value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. The first sentence front-loads the core purpose and capability categories. The second sentence provides actionable optimization advice. Every word earns its place; no redundancy with the schema or title.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 17 parameters, 100% schema coverage, and existence of an output schema, the description appropriately focuses on high-level capabilities and critical usage constraints (payload size) rather than enumerating all filters. It omits return value details (appropriately, since output schema exists) but could briefly acknowledge the pagination pattern.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline is 3. The description elevates this by providing rationale for the 'fields' parameter (payload reduction) and categorizing the 17 parameters into functional groups (full-text, field-specific, status/phase/geo filters). This helps the agent understand the query model beyond individual parameter descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches ClinicalTrials.gov with specific capabilities (full-text, filters, pagination, sorting). However, it does not explicitly distinguish from siblings like 'clinicaltrials_get_study_record' (for single-record retrieval) or 'clinicaltrials_find_eligible' (for patient matching), which would help the agent select the correct tool in a suite of 7 related tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides excellent specific guidance to 'use the fields parameter to reduce payload size' and quantifies the cost (~70KB per record). However, it lacks broader guidance on when to use this search tool versus alternatives like 'get_study_record' for known NCT IDs or 'find_eligible' for patient-specific queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.