
Paper Search MCP Server

by h-lu

download_medrxiv

Download a PDF from medRxiv's open-access repository by providing the paper's DOI and, optionally, a directory to save it in.

Instructions

Download PDF from medRxiv (free and open access).

Args:
    paper_id: medRxiv DOI (e.g., '10.1101/2024.01.01.12345678').
    save_path: Directory to save PDF.

Returns:
    Path to downloaded PDF.

Input Schema

Name        Required  Description                                          Default
paper_id    Yes       medRxiv DOI (e.g., '10.1101/2024.01.01.12345678').   —
save_path   No        Directory to save PDF.                               get_download_path()
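Grounded in the implementation below, a minimal sketch of how a medRxiv DOI is turned into the fetched URL and the saved filename (the function names here are illustrative, not part of the server):

```python
def medrxiv_pdf_url(paper_id: str) -> str:
    """Build the PDF URL; the server always requests version 1 (v1)."""
    return f"https://www.medrxiv.org/content/{paper_id}v1.full.pdf"

def medrxiv_pdf_filename(paper_id: str) -> str:
    """Replace slashes in the DOI so the result is a valid filename."""
    return f"{paper_id.replace('/', '_')}.pdf"
```

Note that because the URL is pinned to v1, revised preprints are still fetched at their first version.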

Implementation Reference

  • The main handler function for the 'download_medrxiv' tool. It is registered via the @mcp.tool() decorator and delegates the download to the generic _download helper using the 'medrxiv' searcher instance.
    @mcp.tool()
    async def download_medrxiv(paper_id: str, save_path: Optional[str] = None) -> str:
        """Download PDF from medRxiv (free and open access).
        
        Args:
            paper_id: medRxiv DOI (e.g., '10.1101/2024.01.01.12345678').
            save_path: Directory to save PDF.
        
        Returns:
            Path to downloaded PDF.
        """
        return await _download('medrxiv', paper_id, save_path)
  • Generic download helper function that retrieves the appropriate searcher from SEARCHERS dict and calls its download_pdf method. This is called by all download_* tool handlers.
    async def _download(
        searcher_name: str, 
        paper_id: str, 
        save_path: Optional[str] = None
    ) -> str:
        """Generic download helper."""
        if save_path is None:
            save_path = get_download_path()
        
        searcher = SEARCHERS.get(searcher_name)
        if not searcher:
            return f"Error: Unknown searcher {searcher_name}"
        
        try:
            return searcher.download_pdf(paper_id, save_path)
        except NotImplementedError as e:
            return str(e)
        except Exception as e:
            logger.error(f"Download failed for {searcher_name}: {e}")
            return f"Error downloading: {str(e)}"
  • The core implementation of PDF download for medRxiv papers in the MedRxivSearcher class. Downloads the PDF from the constructed URL https://www.medrxiv.org/content/{paper_id}v1.full.pdf and saves it to the specified path.
    def download_pdf(self, paper_id: str, save_path: str) -> str:
        """Download the PDF for a medRxiv paper.
        
        Args:
            paper_id: medRxiv DOI.
            save_path: Directory to save the PDF.
            
        Returns:
            Path to the downloaded file, or an error message.
        """
        if not paper_id:
            return "Error: paper_id is empty"
        
        pdf_url = f"https://www.medrxiv.org/content/{paper_id}v1.full.pdf"
        
        try:
            response = self.session.get(
                pdf_url, 
                timeout=self.timeout,
                headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'}
            )
            response.raise_for_status()
            
            os.makedirs(save_path, exist_ok=True)
            filename = f"{paper_id.replace('/', '_')}.pdf"
            pdf_path = os.path.join(save_path, filename)
            
            with open(pdf_path, 'wb') as f:
                f.write(response.content)
            
            logger.info(f"PDF downloaded: {pdf_path}")
            return pdf_path
            
        except Exception as e:
            logger.error(f"PDF download failed: {e}")
            return f"Error downloading PDF: {e}"
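The method above buffers the entire response in memory via response.content before writing it out. For large PDFs, a streamed variant can write the file in chunks instead. This is a sketch, not part of the server: download_pdf_streamed is a hypothetical name, and the session argument is duck-typed to match requests.Session.get with stream=True.

```python
def download_pdf_streamed(session, pdf_url: str, pdf_path: str, timeout: float = 30) -> str:
    """Stream the PDF to disk in 8 KiB chunks instead of buffering the whole body."""
    with session.get(pdf_url, timeout=timeout, stream=True) as response:
        response.raise_for_status()
        with open(pdf_path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                if chunk:  # skip keep-alive chunks
                    f.write(chunk)
    return pdf_path
```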
  • Global SEARCHERS dictionary where the 'medrxiv' key is registered with a MedRxivSearcher instance, which provides the download_pdf method used by the tool.
    SEARCHERS = {
        'arxiv': ArxivSearcher(),
        'pubmed': PubMedSearcher(),
        'biorxiv': BioRxivSearcher(),
        'medrxiv': MedRxivSearcher(),
        'google_scholar': GoogleScholarSearcher(),
        'iacr': IACRSearcher(),
        'semantic': SemanticSearcher(),
        'crossref': CrossRefSearcher(),
        'repec': RePECSearcher(),
    }
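The lookup-then-delegate pattern above can be exercised in isolation. In this sketch, StubSearcher and the local download function are hypothetical stand-ins for the real searcher classes and the _download helper:

```python
class StubSearcher:
    """Hypothetical stand-in for a searcher exposing download_pdf."""
    def download_pdf(self, paper_id: str, save_path: str) -> str:
        return f"{save_path}/{paper_id.replace('/', '_')}.pdf"

SEARCHERS = {'medrxiv': StubSearcher()}

def download(searcher_name: str, paper_id: str, save_path: str) -> str:
    # Same lookup-then-delegate shape as the _download helper above:
    # unknown names produce an error string rather than an exception.
    searcher = SEARCHERS.get(searcher_name)
    if not searcher:
        return f"Error: Unknown searcher {searcher_name}"
    return searcher.download_pdf(paper_id, save_path)
```

Returning error strings instead of raising keeps every download_* tool's result a plain string, which is what the MCP tool handlers pass back to the client.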
