
clinicaltrialsgov-mcp-server

clinicaltrials_get_study_count (Clinicaltrials Get Study Count)

Read-only · Idempotent

Retrieve total trial counts without downloading full study data. Filter by condition, intervention, or phase to generate statistics and build comparative breakdowns.

Instructions

Get total study count matching a query without fetching study data. Fast and lightweight. Use for quick statistics or to build breakdowns by calling multiple times with different filters (e.g., count by phase, count by status, count recruiting vs completed for a condition).
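
To make the multi-call breakdown pattern concrete, here is a minimal TypeScript sketch that counts studies per phase for a single condition. The `ToolCaller` signature is a hypothetical stand-in for whatever MCP client invocation the agent runtime exposes; the parameter names (`conditionQuery`, `phaseFilter`) come from the input schema below.

```typescript
// Shape of the tool's structured result (only the field used here).
type CountResult = { totalCount: number };

// Hypothetical stand-in for the agent runtime's MCP tool-call mechanism;
// modelled as a function parameter so this sketch stays SDK-agnostic.
type ToolCaller = (
  name: string,
  args: Record<string, unknown>
) => Promise<CountResult>;

// Build a per-phase breakdown for one condition by calling the tool once per phase.
async function countByPhase(
  callTool: ToolCaller,
  condition: string
): Promise<Record<string, number>> {
  const phases = ["EARLY_PHASE1", "PHASE1", "PHASE2", "PHASE3", "PHASE4", "NA"];
  const breakdown: Record<string, number> = {};
  for (const phase of phases) {
    const { totalCount } = await callTool("clinicaltrials_get_study_count", {
      conditionQuery: condition,
      phaseFilter: phase,
    });
    breakdown[phase] = totalCount;
  }
  return breakdown;
}
```

Comparing the returned counts across calls yields the kind of comparative breakdown the description suggests, without ever transferring study records.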

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| query | No | General full-text search. | |
| conditionQuery | No | Condition/disease search. | |
| interventionQuery | No | Intervention/treatment search. | |
| sponsorQuery | No | Sponsor search. | |
| statusFilter | No | Filter by study status. Values: RECRUITING, COMPLETED, ACTIVE_NOT_RECRUITING, NOT_YET_RECRUITING, ENROLLING_BY_INVITATION, SUSPENDED, TERMINATED, WITHDRAWN, UNKNOWN, WITHHELD, NO_LONGER_AVAILABLE, AVAILABLE, APPROVED_FOR_MARKETING, TEMPORARILY_NOT_AVAILABLE. | |
| phaseFilter | No | Filter by trial phase. Values: EARLY_PHASE1, PHASE1, PHASE2, PHASE3, PHASE4, NA. | |
| advancedFilter | No | Advanced AREA[] filter expression. | |
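
As a quick illustration, a request combining several of these filters might look like the object below; the field names follow the schema above and the values are purely illustrative.

```typescript
// Illustrative arguments for clinicaltrials_get_study_count.
// Field names come from the input schema; the values are made up.
const countArgs = {
  conditionQuery: "type 2 diabetes", // condition/disease search
  statusFilter: "RECRUITING",        // one of the listed status values
  phaseFilter: "PHASE3",             // one of the listed phase values
};
```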

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| totalCount | Yes | Total studies matching the query/filters. | |
| searchCriteria | No | Echo of query/filter criteria used. | |
| noMatchHints | No | Suggestions when no studies match (totalCount is 0). | |
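
Assuming the fields behave as described above, a non-empty result would be expected to look roughly like this (values illustrative):

```typescript
// Illustrative result matching the output schema above (values are made up).
const countResult = {
  totalCount: 412,                  // total studies matching the filters
  searchCriteria: {                 // echo of the query/filter criteria used
    conditionQuery: "type 2 diabetes",
    statusFilter: "RECRUITING",
    phaseFilter: "PHASE3",
  },
  // noMatchHints is only populated when totalCount is 0.
};
```
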
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds 'Fast and lightweight' (a performance characteristic) and emphasizes 'without fetching study data' (a scope distinction), going beyond the structured annotations. The annotations already cover read-only/idempotent/open-world status, but the description contributes valuable performance context and clarifies that only counts are returned, not study records.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently deliver (1) the core purpose and its distinction from data-fetching tools, (2) a performance characteristic, and (3) usage patterns with examples. Every sentence earns its place, and the description is front-loaded with the critical 'without fetching study data' distinction.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich input schema (100% coverage, 7 parameters) and existence of output schema, the description is complete. It appropriately focuses on high-value usage patterns and sibling distinctions rather than repeating detailed parameter documentation already present in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 7 parameters (query, conditionQuery, statusFilter, etc.). The description references 'different filters' conceptually but does not add semantic meaning or syntax details beyond what the comprehensive schema already provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Get total study count matching a query without fetching study data,' providing a specific verb (get/count), resource (studies), and critical scope limitation. The phrase 'without fetching study data' clearly distinguishes this tool from siblings like search_studies or get_study_record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use for quick statistics or to build breakdowns by calling multiple times with different filters.' Provides concrete examples: '(e.g., count by phase, count by status, count recruiting vs completed for a condition),' which clarifies the multi-call aggregation pattern.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
