Glama

fetch_clinical_trials

Search ClinicalTrials.gov for cancer-related studies using filters like condition, status, and location, then store results in a research database for medical document management.

Instructions

Fetch clinical trials from ClinicalTrials.gov and store in research_entries.

Searches the ClinicalTrials.gov API v2 for matching studies and saves them to the research_entries table (deduplicates by NCT number).

Args:
- condition: Medical condition to search for (e.g. "colorectal cancer").
- keywords: Additional search terms (e.g. "FOLFOX", "immunotherapy").
- status: Trial status filter (RECRUITING, ACTIVE_NOT_RECRUITING, COMPLETED).
- location_country: Country filter (e.g. "United States", "Slovakia").
- phase: Phase filter (PHASE1, PHASE2, PHASE3, PHASE4).
- limit: Maximum number of trials to fetch (default 20).
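The tool's actual implementation is not shown on this page, but a minimal sketch of how these arguments could map onto a ClinicalTrials.gov API v2 request might look as follows. The parameter names (`query.cond`, `query.term`, `query.locn`, `filter.overallStatus`, `pageSize`) follow the public v2 API; the `filter.advanced` phase syntax is an assumption.

```python
# Hypothetical sketch: mapping fetch_clinical_trials arguments onto
# ClinicalTrials.gov API v2 query parameters. Not the tool's real code.
from urllib.parse import urlencode

API_URL = "https://clinicaltrials.gov/api/v2/studies"

def build_query(condition, keywords=None, status="RECRUITING",
                location_country=None, phase=None, limit=20):
    """Translate tool arguments into a v2 /studies request URL."""
    params = {"query.cond": condition, "pageSize": limit}
    if keywords:
        params["query.term"] = keywords
    if status:
        params["filter.overallStatus"] = status
    if location_country:
        params["query.locn"] = location_country
    if phase:
        # Phase filtering via an Essie expression; exact syntax is an assumption
        params["filter.advanced"] = f"AREA[Phase]{phase}"
    return f"{API_URL}?{urlencode(params)}"

url = build_query("colorectal cancer", keywords="FOLFOX", limit=5)
```

Fetching the resulting URL would return a JSON page of studies, each carrying an NCT number under `protocolSection.identificationModule.nctId`, which is what the deduplication step would key on.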

Input Schema

| Name             | Required | Description | Default    |
|------------------|----------|-------------|------------|
| condition        | Yes      |             |            |
| keywords         | No       |             |            |
| status           | No       |             | RECRUITING |
| location_country | No       |             |            |
| phase            | No       |             |            |
| limit            | No       |             |            |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses key behaviors: fetching from an external API (ClinicalTrials.gov v2), storing to a database table (research_entries), and deduplication by NCT number. However, it lacks details on error handling, rate limits, authentication needs, or what happens if storage fails. For a tool with external API calls and database writes, more behavioral context would be helpful.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
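The deduplication behavior the review credits here can be illustrated with a small upsert sketch. The real research_entries schema is not shown on this page, so the column names (nct_id, title) and the use of SQLite are assumptions for illustration only.

```python
# Illustrative sketch of dedup-by-NCT-number via SQLite upsert;
# schema and storage engine are assumptions, not the tool's real code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE research_entries (
        nct_id TEXT PRIMARY KEY,
        title  TEXT
    )
""")

def store_trial(nct_id, title):
    # ON CONFLICT keeps one row per NCT number, refreshing the stored title
    conn.execute(
        "INSERT INTO research_entries (nct_id, title) VALUES (?, ?) "
        "ON CONFLICT(nct_id) DO UPDATE SET title = excluded.title",
        (nct_id, title),
    )

store_trial("NCT01234567", "FOLFOX in colorectal cancer")
store_trial("NCT01234567", "FOLFOX in colorectal cancer (updated)")
count = conn.execute("SELECT COUNT(*) FROM research_entries").fetchone()[0]
```

Under this sketch, re-fetching the same trial updates the existing row rather than inserting a duplicate, which is the behavior "deduplicates by NCT number" implies.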

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with a high-level summary, then details the process, and finally lists parameters with explanations. Every sentence earns its place: the first sentence states the core action, the second explains the search and storage logic, and the parameter section provides essential usage details without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (external API fetch plus database storage, 6 parameters, no annotations), the description is mostly complete. It covers purpose, usage, parameters, and key behaviors like deduplication. However, since the tool declares an output schema but no annotations, the description could benefit from mentioning the response format or error cases. Still, it provides enough context for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides clear semantics for all 6 parameters: 'condition' (medical condition to search), 'keywords' (additional search terms), 'status' (trial status filter with examples), 'location_country' (country filter), 'phase' (phase filter with examples), and 'limit' (maximum number with default). This adds significant value beyond the bare schema, explaining what each parameter means and providing examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetch', 'store', 'search', 'save') and resources ('clinical trials from ClinicalTrials.gov', 'research_entries table'). It distinguishes this tool from siblings like 'search_research' or 'list_research_entries' by specifying it fetches from an external API and stores results locally with deduplication.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (to fetch and store clinical trials from ClinicalTrials.gov) but doesn't explicitly state when not to use it or name alternatives. For example, it doesn't compare to 'search_research' (which might search locally) or 'add_research_entry' (which might add manually). However, the context is clear enough for an agent to understand its primary use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
