search_pubmed
by h-lu

Search PubMed for biomedical literature including medical research, clinical trials, and peer-reviewed papers. Get abstracts and metadata to identify relevant studies.

Instructions

Search biomedical literature on PubMed (NCBI database).

USE THIS TOOL WHEN:
- Searching for medical, clinical, or biomedical research
- You need peer-reviewed published papers (not preprints)
- Searching for drug studies, clinical trials, disease research

DOMAIN: Medicine, Biology, Pharmacology, Public Health, Clinical Research, Genetics, Biochemistry.

LIMITATION: PubMed provides metadata/abstracts ONLY, not full PDFs.

WORKFLOW FOR FULL TEXT:
1. search_pubmed(query) -> get DOI from results
2. download_scihub(doi) -> download PDF (if published before 2023)

Args:
    query: Medical/scientific terms (e.g., 'cancer immunotherapy', 'COVID-19 vaccine').
    max_results: Number of results (default: 10).

Returns:
    List of paper dicts with: paper_id (PMID), title, authors, abstract, published_date, doi, url.

Example: search_pubmed("CRISPR gene therapy", max_results=5)
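A minimal sketch of the two-step full-text workflow above, assuming search_pubmed and download_scihub are available as async functions in the same server module (the import context is an assumption; the result fields follow the Returns list above):

    import asyncio

    async def fetch_full_text(topic: str) -> None:
        # Step 1: search PubMed; results carry abstracts and metadata only
        papers = await search_pubmed(topic, max_results=5)
        for paper in papers:
            print(paper["title"], "-", paper.get("doi"))
        # Step 2: use the first DOI to fetch the PDF via Sci-Hub (pre-2023 papers)
        if papers and papers[0].get("doi"):
            await download_scihub(papers[0]["doi"])

    asyncio.run(fetch_full_text("CRISPR gene therapy"))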

Input Schema

Name          Required  Description                        Default
query         Yes       Medical/scientific search terms    —
max_results   No        Number of results                  10
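A minimal sketch of the JSON input schema implied by this table and the function signature (a reconstruction, not copied from the server; FastMCP typically derives the schema from the signature automatically):

    SEARCH_PUBMED_INPUT_SCHEMA = {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Medical/scientific search terms"},
            "max_results": {"type": "integer", "default": 10},
        },
        "required": ["query"],
    }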

Implementation Reference

  • MCP tool handler for 'search_pubmed'. Decorated with @mcp.tool() for registration and schema definition via its signature and docstring. Delegates execution to the generic _search function with the 'pubmed' searcher.
    @mcp.tool()
    async def search_pubmed(query: str, max_results: int = 10) -> List[Dict]:
        """Search biomedical literature on PubMed (NCBI database).

        USE THIS TOOL WHEN:
        - Searching for medical, clinical, or biomedical research
        - You need peer-reviewed published papers (not preprints)
        - Searching for drug studies, clinical trials, disease research

        DOMAIN: Medicine, Biology, Pharmacology, Public Health, Clinical Research,
        Genetics, Biochemistry.

        LIMITATION: PubMed provides metadata/abstracts ONLY, not full PDFs.

        WORKFLOW FOR FULL TEXT:
        1. search_pubmed(query) -> get DOI from results
        2. download_scihub(doi) -> download PDF (if published before 2023)

        Args:
            query: Medical/scientific terms (e.g., 'cancer immunotherapy', 'COVID-19 vaccine').
            max_results: Number of results (default: 10).

        Returns:
            List of paper dicts with: paper_id (PMID), title, authors, abstract,
            published_date, doi, url.

        Example:
            search_pubmed("CRISPR gene therapy", max_results=5)
        """
        return await _search('pubmed', query, max_results)
  • Core implementation of the PubMed search logic in PubMedSearcher.search(). Uses the NCBI E-utilities API: esearch.fcgi to get PMIDs and efetch.fcgi to fetch details, then parses the XML to extract title, authors, abstract, DOI, etc. into Paper objects (a standalone sketch of this two-step flow follows the list below).
    def search(self, query: str, max_results: int = 10) -> List[Paper]:
        """Search PubMed for papers.

        Args:
            query: Search keywords; supports PubMed query syntax,
                e.g. "cancer[Title]", "Smith J[Author]"
            max_results: Maximum number of results (capped at 10000)

        Returns:
            List[Paper]: list of papers
        """
        # Step 1: search to get the PMIDs
        search_params = {
            **self._get_base_params(),
            'term': query,
            'retmax': min(max_results, 10000),  # NCBI limit
            'retmode': 'xml'
        }
        search_response = self._make_request(self.SEARCH_URL, search_params)
        if not search_response:
            return []
        try:
            search_root = ET.fromstring(search_response.content)
            ids = [id_elem.text for id_elem in search_root.findall('.//Id')]
        except ET.ParseError as e:
            logger.error(f"Failed to parse search response: {e}")
            return []
        if not ids:
            logger.info(f"No results found for query: {query}")
            return []

        # Step 2: fetch paper details
        fetch_params = {
            **self._get_base_params(),
            'id': ','.join(ids),
            'retmode': 'xml'
        }
        fetch_response = self._make_request(self.FETCH_URL, fetch_params)
        if not fetch_response:
            return []
        try:
            fetch_root = ET.fromstring(fetch_response.content)
        except ET.ParseError as e:
            logger.error(f"Failed to parse fetch response: {e}")
            return []

        # Step 3: parse the article data
        papers = []
        for article in fetch_root.findall('.//PubmedArticle'):
            paper = self._parse_article(article)
            if paper:
                papers.append(paper)

        logger.info(f"Found {len(papers)} papers for query: {query}")
        return papers
  • Generic helper function _search that retrieves the platform-specific searcher (PubMedSearcher for 'pubmed'), calls its search method, and converts Paper objects to dicts for the tool response.
    async def _search(
        searcher_name: str,
        query: str,
        max_results: int = 10,
        **kwargs
    ) -> List[Dict]:
        """Generic search function."""
        searcher = SEARCHERS.get(searcher_name)
        if not searcher:
            logger.error(f"Unknown searcher: {searcher_name}")
            return []
        try:
            papers = searcher.search(query, max_results=max_results, **kwargs)
            return [paper.to_dict() for paper in papers]
        except Exception as e:
            logger.error(f"Search failed for {searcher_name}: {e}")
            return []
  • The global SEARCHERS dictionary instantiates a singleton PubMedSearcher used by all PubMed tools.
    SEARCHERS = {
        'arxiv': ArxivSearcher(),
        'pubmed': PubMedSearcher(),
        'biorxiv': BioRxivSearcher(),
        'medrxiv': MedRxivSearcher(),
        'google_scholar': GoogleScholarSearcher(),
        'iacr': IACRSearcher(),
        'semantic': SemanticSearcher(),
        'crossref': CrossRefSearcher(),
        'repec': RePECSearcher(),
    }
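For comparison, a standalone sketch of the same two-step E-utilities flow referenced above, written against the public NCBI esearch/efetch endpoints with plain requests (the db parameter is assumed to come from _get_base_params() in the original, and error handling is trimmed); it mirrors Steps 1-3 of PubMedSearcher.search() but returns plain dicts instead of Paper objects:

    import requests
    import xml.etree.ElementTree as ET

    ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
    EFETCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

    def pubmed_search_sketch(query: str, max_results: int = 10) -> list[dict]:
        # Step 1: esearch returns matching PMIDs as <Id> elements
        search = requests.get(ESEARCH_URL, params={
            "db": "pubmed",
            "term": query,
            "retmax": min(max_results, 10000),  # NCBI limit
            "retmode": "xml",
        }, timeout=30)
        ids = [e.text for e in ET.fromstring(search.content).findall(".//Id")]
        if not ids:
            return []
        # Step 2: efetch returns full article records for those PMIDs
        fetch = requests.get(EFETCH_URL, params={
            "db": "pubmed",
            "id": ",".join(ids),
            "retmode": "xml",
        }, timeout=30)
        papers = []
        for article in ET.fromstring(fetch.content).findall(".//PubmedArticle"):
            papers.append({
                "paper_id": article.findtext(".//PMID"),
                "title": article.findtext(".//ArticleTitle"),
                "doi": article.findtext(".//ArticleId[@IdType='doi']"),
            })
        return papers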

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/h-lu/paper-search-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.