
Server Details

ClinicalTrials.gov MCP server. Search studies, retrieve results, match patients to eligible trials.

Status: Healthy
Transport: Streamable HTTP
Repository: cyanheads/clinicaltrialsgov-mcp-server
GitHub Stars: 64
Server Listing: clinicaltrialsgov-mcp-server


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

7 tools
clinicaltrials_find_eligible
Read-only · Idempotent

Match patient demographics and conditions to eligible recruiting clinical trials. Provide age, sex, conditions, and location to find studies with matching eligibility criteria, contact information, and recruiting locations.

Parameters
age (required): Patient age in years.
sex (required): Patient's biological sex. Use 'All' to include studies regardless of sex restrictions.
location (required): Patient location.
conditions (required): Medical conditions or diagnoses. E.g., ["Type 2 Diabetes", "Hypertension"].
maxResults (optional): Maximum results to return.
recruitingOnly (optional): Only include actively recruiting studies.
healthyVolunteer (optional): Whether the patient is a healthy volunteer. When true, only studies accepting healthy volunteers are queried.

Output Schema
studies (required): Matching studies with eligibility and location fields.
totalCount (optional): Total matching studies from the API.
noMatchHints (optional): Hints when no studies match, with suggestions to broaden the search.
searchCriteria (required): Search criteria used.
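As a sketch, an MCP client call to this tool might look like the following. The tools/call envelope is the standard MCP JSON-RPC shape; the patient values themselves are invented for illustration.

```python
import json

# Illustrative arguments for clinicaltrials_find_eligible. Field names follow
# the parameter table above; the patient values are invented.
arguments = {
    "age": 54,
    "sex": "Female",
    "location": "Seattle, WA",
    "conditions": ["Type 2 Diabetes", "Hypertension"],
    "maxResults": 10,
    "recruitingOnly": True,
    "healthyVolunteer": False,
}

# Wrapped in a JSON-RPC 2.0 tools/call envelope, as MCP clients send it.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "clinicaltrials_find_eligible", "arguments": arguments},
}
print(json.dumps(request, indent=2))
```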
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety, so the description appropriately focuses on behavioral context: it discloses the query optimization ('Builds an optimized... query'), specifies which fields are returned ('eligibility and location fields'), and crucially sets expectations that the caller must perform final evaluation ('for the caller to evaluate').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. First sentence establishes purpose and mechanism; second details input mapping and output expectations. Information is front-loaded and dense.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately summarizes the return value (studies with specific fields) without redundant detail. It covers the external API (ClinicalTrials.gov), the patient-to-trial workflow, and the necessary human-in-the-loop evaluation step. A 5 would require mentioning pagination limits or specific caveats about location matching radius.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description maps the four required parameters to the conceptual 'patient profile' framing, which adds mild semantic grouping, but individual parameter details (ranges, formats) are already well-covered by the schema itself.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Match) and resource (recruiting clinical trials), emphasizing the patient profile matching aspect that distinguishes it from generic search siblings like 'clinicaltrials_search_studies'. It explicitly mentions ClinicalTrials.gov as the target data source.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the specific use case (patient eligibility matching vs. general research) by focusing on 'patient demographics' and 'patient profile'. However, it lacks explicit guidance on when to use the sibling 'search_studies' instead, or explicit warnings about the limitations of automated eligibility matching.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clinicaltrials_get_field_definitions
Read-only · Idempotent

Get field definitions from the ClinicalTrials.gov study data model. Returns the field tree with piece names (used in the fields parameter and AREA[] filters), data types, and nesting structure. Call with no path for a top-level overview, then drill into a section with the path parameter to see its fields.

Parameters
path (optional): Dot-notation path to get a subtree. E.g., "protocolSection.designModule", "protocolSection.eligibilityModule", "resultsSection". Omit for top-level overview (sections + direct children, not the full tree).
includeIndexedOnly (optional): Only return indexed (searchable) fields. Default: false. Has no visible effect at the top level — use with a path to filter leaf fields.

Output Schema
fields (required): Field definitions.
totalFields (required): Total fields returned.
resolvedPath (optional): Resolved path when path param was used.
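The overview-then-drill-down pattern described above can be sketched with a small helper that builds the arguments dict. The dot-notation check is a client-side assumption for illustration, not part of the server's contract.

```python
import re

# Hypothetical validation of the dot-notation `path` parameter.
PATH_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]*(\.[A-Za-z][A-Za-z0-9]*)*$")

def field_definition_args(path=None, indexed_only=False):
    """Build arguments for clinicaltrials_get_field_definitions (sketch)."""
    args = {}
    if path is not None:
        if not PATH_RE.match(path):
            raise ValueError(f"not a dot-notation path: {path!r}")
        args["path"] = path
    if indexed_only:
        args["includeIndexedOnly"] = True
    return args

# Step 1: top-level overview; step 2: drill into one section's leaf fields.
overview = field_definition_args()
drill = field_definition_args("protocolSection.eligibilityModule", indexed_only=True)
```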
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety. The description adds valuable behavioral context: it explains the hierarchical/tree structure of the output, defines 'piece names' and their usage in filters, and describes the drill-down exploration pattern. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste: purpose (sentence 1), return structure/relevance (sentence 2), and usage pattern (sentence 3). Front-loaded with the core action and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and comprehensive annotations, the description appropriately focuses on usage patterns and semantic relationships (piece names in filters) rather than return value details. It fully covers the hierarchical discovery workflow implied by the parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds meaningful usage context for the 'path' parameter ('drill into a section'), explaining the interaction between empty path (overview) and specific paths (subtree). It does not add explicit semantics for 'includeIndexedOnly,' but the schema handles that.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('Get') + resource ('field definitions') + domain ('ClinicalTrials.gov study data model'). It clearly distinguishes this metadata tool from siblings like search_studies or get_field_values by emphasizing it returns the 'field tree' with 'piece names,' 'data types,' and 'nesting structure' rather than actual study data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context with the progressive disclosure pattern ('Call with no path for top-level overview, then drill into a section'). Mentions the output is 'used in the fields parameter and AREA[] filters,' implying when to use it (for query construction), but does not explicitly contrast with get_field_values or state prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clinicaltrials_get_field_values
Read-only · Idempotent

Discover valid values for ClinicalTrials.gov fields with study counts per value. Use to explore available filter options before building a search — e.g., valid OverallStatus, Phase, InterventionType, StudyType, or LeadSponsorClass values.

Parameters
fields (required): PascalCase piece name(s) to get values for. Common fields: OverallStatus, Phase, StudyType, InterventionType, LeadSponsorClass, Sex, StdAge, DesignAllocation, DesignPrimaryPurpose, DesignMasking.

Output Schema
fieldStats (required): Statistics per requested field.
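One way a client might consume the per-value study counts is to keep only values common enough to filter on. The nested shape of `fieldStats` (value mapped to study count) and the counts below are assumptions for illustration, not the server's exact output.

```python
# Assumed shape: field name -> {value: study count}. Counts are invented.
field_stats = {
    "OverallStatus": {"RECRUITING": 52000, "COMPLETED": 210000},
    "Phase": {"PHASE2": 4100, "PHASE3": 3200},
}

def frequent_values(stats, min_count):
    """Keep only values whose study count meets a threshold, sorted by name."""
    return {
        field: sorted(v for v, n in counts.items() if n >= min_count)
        for field, counts in stats.items()
    }

common = frequent_values(field_stats, 5000)
```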
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover readOnly/idempotent/openWorld hints. Description adds value by specifying that results include 'study counts per value,' indicating the tool returns prevalence metadata, not just enum values. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences. Front-loaded with purpose, followed by usage guideline and examples. No redundant or wasted words; em-dash usage effectively integrates examples without breaking flow.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of output schema and comprehensive annotations, the description adequately covers the exploration use case. No gaps remain for the agent to understand when and why to invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with detailed parameter documentation including PascalCase naming and common field list. Description reinforces usage with contextual examples but does not add semantic information beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (Discover) + resource (valid values for ClinicalTrials.gov fields) + scope (with study counts per value). Clearly distinguishes from sibling search tools by positioning as a field discovery mechanism rather than study retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use to explore available filter options before building a search,' establishing clear workflow context. Provides concrete field examples (OverallStatus, Phase, etc.). Lacks explicit naming of sibling search tools as the alternative for the next step.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clinicaltrials_get_study_count
Read-only · Idempotent

Get total study count matching a query without fetching study data. Fast and lightweight. Use for quick statistics or to build breakdowns by calling multiple times with different filters (e.g., count by phase, count by status, count recruiting vs completed for a condition).

Parameters
query (optional): General full-text search across all fields.
phaseFilter (optional): Filter by trial phase. Values: EARLY_PHASE1, PHASE1, PHASE2, PHASE3, PHASE4, NA.
sponsorQuery (optional): Sponsor/collaborator name search.
statusFilter (optional): Filter by study status. Values: RECRUITING, COMPLETED, ACTIVE_NOT_RECRUITING, NOT_YET_RECRUITING, ENROLLING_BY_INVITATION, SUSPENDED, TERMINATED, WITHDRAWN, UNKNOWN, WITHHELD, NO_LONGER_AVAILABLE, AVAILABLE, APPROVED_FOR_MARKETING, TEMPORARILY_NOT_AVAILABLE.
advancedFilter (optional): Advanced filter using AREA[] Essie syntax. E.g., "AREA[StudyType]INTERVENTIONAL", "AREA[EnrollmentCount]RANGE[100, 1000]". Combine with AND/OR/NOT and parentheses.
conditionQuery (optional): Condition/disease-specific search. E.g., "Type 2 Diabetes", "non-small cell lung cancer".
interventionQuery (optional): Intervention/treatment search. E.g., "pembrolizumab", "cognitive behavioral therapy".

Output Schema
totalCount (required): Total studies matching the query/filters.
noMatchHints (optional): Suggestions when no studies match (totalCount is 0).
searchCriteria (optional): Echo of query/filter criteria used.
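The "count by phase" breakdown pattern from the description can be sketched as one call per phase with a shared condition filter. The phase values come from the phaseFilter documentation above; the condition is invented.

```python
# Phase values as documented for the phaseFilter parameter.
PHASES = ["EARLY_PHASE1", "PHASE1", "PHASE2", "PHASE3", "PHASE4", "NA"]

def phase_breakdown_args(condition, recruiting_only=True):
    """One clinicaltrials_get_study_count argument dict per phase (sketch)."""
    base = {"conditionQuery": condition}
    if recruiting_only:
        base["statusFilter"] = "RECRUITING"
    return [dict(base, phaseFilter=phase) for phase in PHASES]

calls = phase_breakdown_args("Type 2 Diabetes")
```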
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent, while the description adds valuable performance context ('Fast and lightweight') and behavioral usage patterns (iterative filtering for breakdowns). Does not mention rate limits or caching, but covers the essential behavioral traits beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose (sentence 1), performance (sentence 2), usage guidelines with examples (sentence 3). Front-loaded with core action and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 optional parameters, output schema existence, and rich annotations, the description adequately covers tool purpose, performance characteristics, and usage patterns without needing to describe return values (handled by output schema).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description adds semantic value by mapping parameters to usage patterns ('count by phase' references phaseFilter, 'count by status' references statusFilter), explaining the intent behind the filter parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with resource 'total study count' and explicitly distinguishes from siblings by stating 'without fetching study data', clearly positioning it against clinicaltrials_search_studies and clinicaltrials_get_study_record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('quick statistics', 'build breakdowns') and provides concrete implementation patterns ('calling multiple times with different filters') with specific examples ('count by phase', 'count by status', 'count recruiting vs completed').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clinicaltrials_get_study_record
Read-only · Idempotent

Fetch a single clinical study by NCT ID. Returns the full study record including protocol details, eligibility criteria, outcomes, arms, interventions, contacts, and locations. Equivalent to the clinicaltrials://{nctId} resource.

Parameters
nctId (required): NCT identifier (e.g., NCT03722472).

Output Schema
study (required): Full study record.
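A client might validate the NCT identifier format before calling, as a cheap guard against malformed input. The "NCT plus eight digits" pattern follows the schema's example NCT03722472; the helper itself is hypothetical.

```python
import re

# Hypothetical client-side check: "NCT" followed by eight digits.
NCT_RE = re.compile(r"^NCT\d{8}$")

def study_record_args(nct_id):
    """Build arguments for clinicaltrials_get_study_record (sketch)."""
    if not NCT_RE.fullmatch(nct_id):
        raise ValueError(f"invalid NCT ID: {nct_id!r}")
    return {"nctId": nct_id}
```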
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover readOnly/idempotent hints, so the description appropriately focuses on adding scope context ('Returns the full study record') and architectural equivalence. No contradictions with annotations. Does not disclose rate limits or caching, but these are not strictly necessary.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, all earning their place: core action, return value scope, and resource equivalence. Front-loaded with the essential verb-resource pairing. No filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter input, 100% schema coverage, existing output schema, and comprehensive annotations, the description provides complete context by clarifying it returns the 'full' record (differentiating from get_study_results) and handling resource equivalence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (nctId fully documented with pattern and example), the description meets the baseline by reinforcing the parameter's purpose ('by NCT ID') without needing to add redundant syntax details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the specific action ('Fetch'), resource ('single clinical study'), and key identifier ('by NCT ID'), distinguishing it clearly from siblings like search_studies (plural/search) and get_study_results (subset of data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides valuable context about resource equivalence ('Equivalent to the clinicaltrials://{nctId} resource') indicating when to use this over MCP resources. However, it does not explicitly contrast with clinicaltrials_search_studies for cases where the user lacks a specific NCT ID.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clinicaltrials_get_study_results
Read-only · Idempotent

Fetch trial results data for completed studies — outcome measures with statistics, adverse events, participant flow, and baseline characteristics. Only available for studies where hasResults is true. Use clinicaltrials_search_studies first to find studies with results.

Parameters
nctIds (required): One or more NCT IDs (max 20). E.g., "NCT12345678" or ["NCT12345678", "NCT87654321"]. Use summary=true for large batches to avoid large payloads.
summary (optional): Return condensed summaries instead of full data. Reduces payload from ~200KB to ~5KB per study. Summaries include outcome titles, types, timeframes, group counts, and top-level stats — omitting individual measurements, analyses, and per-group data.
sections (optional): Filter which sections to return. Values: outcomes, adverseEvents, participantFlow, baseline. Omit for all sections.

Output Schema
results (required): Results per study.
fetchErrors (optional): Studies that could not be fetched.
studiesWithoutResults (optional): NCT IDs that do not have results data.
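A batch request builder for this tool might enforce the documented 20-ID cap and apply the summary recommendation automatically. The cap comes from the parameter table above; the threshold at which summary flips on (more than 5 IDs) is an invented heuristic.

```python
def study_results_args(nct_ids, summary=None, sections=None):
    """Build arguments for clinicaltrials_get_study_results (sketch)."""
    ids = [nct_ids] if isinstance(nct_ids, str) else list(nct_ids)
    if len(ids) > 20:  # documented cap: max 20 NCT IDs per call
        raise ValueError("at most 20 NCT IDs per call")
    args = {"nctIds": ids}
    if summary is None:
        summary = len(ids) > 5  # assumed threshold for "large batch"
    if summary:
        args["summary"] = True
    if sections:
        args["sections"] = sections
    return args
```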
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent). Description adds valuable behavioral context: specific data contents returned, availability constraint (hasResults check), and contrasts with search workflow. Does not mention rate limits or error states, but provides solid content characterization.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: data content front-loaded, followed by availability constraint, then workflow guidance. Each sentence serves distinct purpose (what, when available, how to prepare).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, existing output schema, and comprehensive annotations, the description provides complete workflow context (prerequisite search step) and content characterization without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, providing detailed descriptions for nctIds, summary, and sections. Description adds implicit context that nctIds must reference completed studies with results, but does not elaborate on parameter semantics beyond what the schema already provides. Baseline score appropriate given comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Fetch' + resource 'trial results data' + scope 'completed studies'. Lists concrete data types (outcome measures, adverse events, participant flow, baseline characteristics) that distinguish it from sibling get_study_record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit constraint 'Only available for studies where hasResults is true' and workflow instruction 'Use search_studies first to find studies with results' — clearly defines prerequisites and sequences with named siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

clinicaltrials_search_studies
Read-only · Idempotent

Search for clinical trial studies from ClinicalTrials.gov. Supports full-text and field-specific queries, status/phase/geographic filters, pagination, sorting, and field selection. Use the fields parameter to reduce payload size — full study records are ~70KB each.

Parameters
sort (optional): Sort order. Format: FieldName:asc or FieldName:desc. E.g., "LastUpdatePostDate:desc", "EnrollmentCount:desc". Max 2 fields comma-separated.
query (optional): General full-text search across all fields.
fields (optional): Fields to return (PascalCase piece names). Strongly recommended to reduce payload. Common: NCTId, BriefTitle, OverallStatus, Phase, LeadSponsorName, Condition, InterventionName, BriefSummary, EnrollmentCount, StartDate.
nctIds (optional): Filter to specific NCT IDs for batch lookups.
pageSize (optional): Results per page, 1–200.
geoFilter (optional): Geographic proximity filter. Format: distance(lat,lon,radius). E.g., "distance(47.6062,-122.3321,50mi)" for studies within 50 miles of Seattle.
pageToken (optional): Pagination cursor from a previous response.
countTotal (optional): Include total study count in response. Only computed on the first page.
titleQuery (optional): Search within study titles and acronyms only.
phaseFilter (optional): Filter by trial phase. Values: EARLY_PHASE1, PHASE1, PHASE2, PHASE3, PHASE4, NA.
outcomeQuery (optional): Search within outcome measure fields.
sponsorQuery (optional): Sponsor/collaborator name search.
statusFilter (optional): Filter by study status. Values: RECRUITING, COMPLETED, ACTIVE_NOT_RECRUITING, NOT_YET_RECRUITING, ENROLLING_BY_INVITATION, SUSPENDED, TERMINATED, WITHDRAWN, UNKNOWN, WITHHELD, NO_LONGER_AVAILABLE, AVAILABLE, APPROVED_FOR_MARKETING, TEMPORARILY_NOT_AVAILABLE.
locationQuery (optional): Location search — city, state, country, or facility name.
advancedFilter (optional): Advanced filter using AREA[] Essie syntax. E.g., "AREA[StudyType]INTERVENTIONAL", "AREA[EnrollmentCount]RANGE[100, 1000]". Combine with AND/OR/NOT and parentheses.
conditionQuery (optional): Condition/disease-specific search. E.g., "Type 2 Diabetes", "non-small cell lung cancer".
interventionQuery (optional): Intervention/treatment search. E.g., "pembrolizumab", "cognitive behavioral therapy".

Output Schema
studies (required): Matching studies.
totalCount (optional): Total matching studies (first page only when countTotal=true).
noMatchHints (optional): Suggestions for broadening the search when no results are found.
nextPageToken (optional): Token for the next page. Absent on last page.
searchCriteria (optional): Echo of query/filter criteria used. Present when results are empty.
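The geoFilter and advancedFilter string formats documented above can be assembled with small formatting helpers. The helpers and the Seattle lung-cancer query are illustrative, not part of the server; only the output string formats follow the parameter table.

```python
# Hypothetical helpers that produce the documented string formats.
def geo_filter(lat, lon, radius):
    """Format: distance(lat,lon,radius), e.g. distance(47.6062,-122.3321,50mi)."""
    return f"distance({lat},{lon},{radius})"

def area(field, expr):
    """Format: AREA[FieldName]expression, Essie syntax."""
    return f"AREA[{field}]{expr}"

args = {
    "conditionQuery": "non-small cell lung cancer",
    "statusFilter": "RECRUITING",
    "geoFilter": geo_filter(47.6062, -122.3321, "50mi"),
    "advancedFilter": area("StudyType", "INTERVENTIONAL")
                      + " AND " + area("EnrollmentCount", "RANGE[100, 1000]"),
    # Select a few fields to avoid ~70KB full records, per the description.
    "fields": ["NCTId", "BriefTitle", "OverallStatus", "Phase"],
    "pageSize": 50,
}
```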
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint, openWorldHint, and idempotentHint. The description adds crucial behavioral context beyond these: the ~70KB payload size warning helps the agent understand performance implications and why field selection matters. It does not mention rate limits or authentication requirements, but the payload warning is significant added value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. The first sentence front-loads the core purpose and capability categories. The second sentence provides actionable optimization advice. Every word earns its place; no redundancy with the schema or title.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 17 parameters, 100% schema coverage, and existence of an output schema, the description appropriately focuses on high-level capabilities and critical usage constraints (payload size) rather than enumerating all filters. It omits return value details (appropriately, since output schema exists) but could briefly acknowledge the pagination pattern.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description elevates this by providing rationale for the 'fields' parameter (payload reduction) and categorizing the 17 parameters into functional groups (full-text, field-specific, status/phase/geo filters). This helps the agent understand the query model beyond individual parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches ClinicalTrials.gov with specific capabilities (full-text, filters, pagination, sorting). However, it does not explicitly distinguish from siblings like 'clinicaltrials_get_study_record' (for single-record retrieval) or 'clinicaltrials_find_eligible' (for patient matching), which would help the agent select the correct tool in a suite of 7 related tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides excellent specific guidance to 'use the fields parameter to reduce payload size' and quantifies the cost (~70KB per record). However, it lacks broader guidance on when to use this search tool versus alternatives like 'get_study_record' for known NCT IDs or 'find_eligible' for patient-specific queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
