download_repec

Provides guidance for accessing RePEc-indexed papers by explaining that PDFs are institution-hosted and suggesting alternative download methods.

Instructions

RePEc/IDEAS does NOT support direct PDF downloads.

RePEc is a metadata index; PDFs are hosted at the original institutions. Instead, try in order:

1. Visit the paper URL - many NBER/Fed papers are freely available.
2. download_scihub(doi) - if the paper was published before 2023.

Args:
    paper_id: RePEc handle (unused).
    save_path: Unused.

Returns:
    Error message with alternatives.
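
As a client-side illustration of that order, here is a minimal sketch (not part of the server) that first tries the institution-hosted URL and only falls back to the DOI route when the response is not a PDF; the try_institution_pdf helper and the paper_url/dest names are illustrative.

    import requests

    def try_institution_pdf(paper_url: str, dest: str) -> bool:
        """Hypothetical helper: save the institution-hosted copy only if it is a PDF."""
        resp = requests.get(paper_url, timeout=30, allow_redirects=True)
        content_type = resp.headers.get("Content-Type", "")
        if resp.ok and content_type.startswith("application/pdf"):
            with open(dest, "wb") as f:
                f.write(resp.content)
            return True
        return False

    # If this returns False, fall back to the DOI-based route, e.g. the server's
    # download_scihub(doi) tool for papers published before 2023.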

Input Schema

Name         Required   Description                Default
paper_id     Yes        RePEc handle (not used)    -
save_path    No         Save path (not used)       None
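
For orientation, a call that satisfies this schema only needs the RePEc handle; the arguments below are an illustrative sketch (the handle and path are not taken from the source).

    # Illustrative arguments for a download_repec tool call.
    arguments = {
        "paper_id": "RePEc:nbr:nberwo:31000",  # RePEc handle (required)
        "save_path": "./downloads",            # optional; omit to use the default download path
    }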

Implementation Reference

  • MCP tool handler for 'download_repec'. Decorated with @mcp.tool() for registration (a minimal registration sketch follows after this list). Delegates to the generic _download using RePECSearcher.
    @mcp.tool()
    async def download_repec(paper_id: str, save_path: Optional[str] = None) -> str:
        """RePEc/IDEAS does NOT support direct PDF downloads.

        RePEc is a metadata index - PDFs are hosted at original institutions.
        INSTEAD (try in order):
        1. Visit paper URL - many NBER/Fed papers are freely available
        2. download_scihub(doi) - if published before 2023

        Args:
            paper_id: RePEc handle (unused).
            save_path: Unused.

        Returns:
            Error message with alternatives.
        """
        return await _download('repec', paper_id, save_path)
  • The platform-specific download_pdf method in RePECSearcher class, called by the generic _download. Returns an informative error message explaining limitations and alternatives.
    def download_pdf(self, paper_id: str, save_path: str) -> str:
        """RePEc/IDEAS does not support direct PDF downloads.

        RePEc is a metadata index and does not host PDF files;
        PDFs usually live on the original institution's website (e.g., NBER, central bank sites).

        Args:
            paper_id: RePEc handle (unused)
            save_path: Save path (unused)

        Returns:
            str: Error message and alternatives
        """
        return (
            "RePEc/IDEAS does not host PDF files directly. "
            "PDFs are available from the original institution's website. "
            "ALTERNATIVES:\n"
            "1. Use the paper URL to visit the source (NBER, Fed, etc.)\n"
            "2. If DOI is available, use download_scihub(doi)\n"
            "3. Many NBER/Fed working papers are freely available at source"
        )
  • Generic _download helper function used by all platform-specific download tools, including download_repec. Dispatches to searcher.download_pdf (see the dispatch sketch after this list).
    async def _download(
        searcher_name: str,
        paper_id: str,
        save_path: Optional[str] = None
    ) -> str:
        """Generic download helper shared by all platform-specific download tools."""
        if save_path is None:
            save_path = get_download_path()
        searcher = SEARCHERS.get(searcher_name)
        if not searcher:
            return f"Error: Unknown searcher {searcher_name}"
        try:
            return searcher.download_pdf(paper_id, save_path)
        except NotImplementedError as e:
            return str(e)
        except Exception as e:
            logger.error(f"Download failed for {searcher_name}: {e}")
            return f"Error downloading: {str(e)}"
  • Global SEARCHERS dictionary instantiation, registering the RePECSearcher() instance for the 'repec' platform used by download_repec.
    SEARCHERS = {
        'arxiv': ArxivSearcher(),
        'pubmed': PubMedSearcher(),
        'biorxiv': BioRxivSearcher(),
        'medrxiv': MedRxivSearcher(),
        'google_scholar': GoogleScholarSearcher(),
        'iacr': IACRSearcher(),
        'semantic': SemanticSearcher(),
        'crossref': CrossRefSearcher(),
        'repec': RePECSearcher(),
        # ... (remaining entries truncated in the source)
    }
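
For context on the @mcp.tool() decorator used above, here is a minimal FastMCP registration sketch based on the official MCP Python SDK; the server name and the placeholder tool are assumptions, not taken from paper-search-mcp.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("paper-search")  # assumed server name

    @mcp.tool()
    async def ping() -> str:
        """Placeholder tool; download_repec is registered the same way, and its
        docstring becomes the tool description shown to clients."""
        return "pong"

    if __name__ == "__main__":
        mcp.run()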
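
To illustrate the dispatch path, the sketch below adds a hypothetical searcher whose download_pdf raises NotImplementedError and notes the two outcomes _download produces: the exception message for a registered platform, and the "Unknown searcher" string for an unregistered one. The ExampleSearcher class and the 'example' key are assumptions.

    class ExampleSearcher:
        """Hypothetical platform searcher that does not support PDF downloads."""

        def download_pdf(self, paper_id: str, save_path: str) -> str:
            raise NotImplementedError(
                f"This platform does not host PDFs; cannot download {paper_id}"
            )

    SEARCHERS['example'] = ExampleSearcher()

    # _download catches NotImplementedError and returns str(e), so a client calling
    # download for 'example' sees the message above rather than a traceback.
    # For a key that is not registered at all, _download returns
    # "Error: Unknown searcher <name>" before any download is attempted.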
