literature_search

Search medical literature and health technology assessment sources for evidence on drugs or indications, returning structured results with audit trails for HTA submissions.

Instructions

Search PubMed, ClinicalTrials.gov, bioRxiv/medRxiv, ChEMBL, FDA Orange Book, FDA Purple Book, enterprise sources (Embase, ScienceDirect, Cochrane, Citeline, Pharmapendium, Cortellis), HTA cost reference sources (CMS NADAC, PSSRU, NHS National Cost Collection, BNF, PBS Schedule), LATAM sources (DATASUS, CONITEC, ANVISA, PAHO, IETS, FONASA), APAC sources (HITAP), and HTA appraisal/guidance sources (NICE TAs, CADTH CDR/pCODR, ICER, PBAC PSDs, G-BA AMNOG, HAS Transparency Committee, IQWiG, AIFA, TLV Sweden, INESSS Quebec) for evidence on a drug or indication. Returns structured results including HTA precedents and appraisal decisions with a full audit trail suitable for HTA submissions.
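As a concrete illustration, a minimal call to this tool from the official MCP Python SDK might look like the sketch below. The tool name and the query argument come from the schema that follows; the npx launch command is only a guess at how the server starts and will depend on how heor-agent-mcp is actually installed.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Assumption: the server runs locally over stdio; this launch command
    # is illustrative and depends on how heor-agent-mcp is installed.
    params = StdioServerParameters(command="npx", args=["heor-agent-mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Only 'query' is required; when 'sources' is omitted the schema
            # below says the tool defaults to pubmed, clinicaltrials,
            # biorxiv, and chembl.
            result = await session.call_tool(
                "literature_search",
                arguments={"query": "semaglutide type 2 diabetes cost-effectiveness"},
            )
            for block in result.content:
                print(block)


asyncio.run(main())
```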

Input Schema

query (required)
    Research question (e.g. 'semaglutide type 2 diabetes cost-effectiveness').

sources (optional)
    Data sources to query. Default: pubmed, clinicaltrials, biorxiv, chembl (+ embase if ELSEVIER_API_KEY set).
    Epidemiology and demographics: 'who_gho' and 'world_bank'. Use 'oecd' for OECD health statistics (expenditure, hospital beds, physicians, life expectancy). Use 'ihme_gbd' for Global Burden of Disease estimates (DALYs, prevalence, mortality across 204 countries).
    Regulatory: 'orange_book' for FDA drug approvals and therapeutic equivalence; 'purple_book' for FDA-licensed biologics and biosimilars.
    Enterprise (require API key): 'cochrane' (COCHRANE_API_KEY), 'citeline' (CITELINE_API_KEY), 'pharmapendium' (PHARMAPENDIUM_API_KEY), 'cortellis' (CORTELLIS_API_KEY).
    HTA cost reference sources: 'cms_nadac' (US drug acquisition costs via CMS API), 'pssru' (UK unit costs, reference links), 'nhs_costs' (NHS National Cost Collection, reference links), 'bnf' (UK drug pricing, reference links), 'pbs_schedule' (Australia PBS/MBS pricing, reference links).
    LATAM sources (explicit request only): 'datasus' (Brazil SUS hospital/ambulatory data), 'conitec' (Brazil HTA reports), 'anvisa' (Brazil drug pricing/registry), 'paho' (Pan American regional health statistics), 'iets' (Colombia HTA reports), 'fonasa' (Chile public health insurance data).
    APAC sources (explicit request only): 'hitap' (Thailand HTA reports and methodology).
    HTA appraisal/precedent sources (explicit request only): 'nice_ta' (NICE Technology Appraisals, UK), 'cadth_reviews' (CADTH CDR/pCODR, Canada), 'icer_reports' (ICER evidence reports and HBPBs, US), 'pbac_psd' (PBAC Public Summary Documents, Australia), 'gba_decisions' (G-BA AMNOG benefit assessments, Germany), 'has_tc' (HAS Transparency Committee opinions, France), 'iqwig' (IQWiG systematic reviews and dossier assessments, Germany), 'aifa' (AIFA reimbursement decisions, Italy), 'tlv' (TLV value-based pricing decisions, Sweden), 'inesss' (INESSS drug evaluations, Quebec, Canada).

max_results (optional)
    Maximum results to return (default: 20, max: 100).

date_from (optional)
    Exclude results before this date (ISO format: YYYY-MM-DD).

output_format (optional)
    Output format. 'docx' requires the hosted tier.

project (optional)
    Project ID for knowledge base persistence. When set, results are saved to ~/.heor-agent/projects/{project}/raw/literature/.
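To make the optional parameters concrete, here is a hypothetical argument set for an HTA-focused search. It assumes sources accepts an array of the source keys listed above; the specific values are illustrative, not taken from the server's documentation.

```python
# Hypothetical arguments for an HTA-focused search. Source keys, defaults,
# and the persistence path come from the schema above; all values here are
# illustrative.
arguments = {
    "query": "semaglutide type 2 diabetes cost-effectiveness",
    # HTA appraisal sources must be requested explicitly, so they are mixed
    # here with two of the default literature sources.
    "sources": ["pubmed", "clinicaltrials", "nice_ta", "cadth_reviews"],
    "max_results": 50,           # default 20, hard cap 100
    "date_from": "2019-01-01",   # ISO YYYY-MM-DD; earlier results excluded
    # Persists raw results to ~/.heor-agent/projects/sema-t2d/raw/literature/
    "project": "sema-t2d",
}
```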
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does note that the tool returns structured results with audit trails suitable for HTA submissions, which is useful context about output quality. However, it says nothing about rate limits, the API keys that the enterprise sources require, or the potential costs and limitations of the search operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly verbose, opening with a long list of sources that makes it difficult to parse quickly. While informative, it could be better front-loaded and structured for clarity, with some of the detail moved into the schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the tool (6 parameters, no output schema, no annotations), the description is mostly complete: it explains the broad purpose, the available sources, and the suitability of the output. However, it could better address behavioral aspects such as authentication and limitations to fully compensate for the missing annotations and output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds little beyond it: it implies that the query parameter should target drug or indication evidence, but provides no additional syntax or format details. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('search') and the comprehensive scope of resources (PubMed, ClinicalTrials.gov, bioRxiv/medRxiv, etc.), distinguishing it from sibling tools such as cost_effectiveness_model or hta_dossier_prep. It explicitly states that the purpose is to find evidence on drugs or indications, with output suitable for HTA submissions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (searching for evidence on drugs or indications in support of HTA submissions), but it does not explicitly state when not to use it or name specific alternatives among its siblings. It only implies a split between evidence gathering here and modeling or dossier preparation elsewhere.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

