paper-search-mcp
by h-lu

Server Configuration

Describes the environment variables used to configure the server; all are optional.

- NCBI_API_KEY (optional): API key to increase PubMed request limits.
- SCIHUB_MIRROR (optional): Custom Sci-Hub mirror URL.
- CROSSREF_MAILTO (optional, recommended): Email address for CrossRef polite pool access.
- PAPER_DOWNLOAD_PATH (optional): Directory for PDF downloads. Default: ~/paper_downloads.
- SEMANTIC_SCHOLAR_API_KEY (optional, recommended): API key to increase Semantic Scholar request limits.
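
For illustration, the sketch below shows how these variables might be read at startup. The variable names and the ~/paper_downloads default come from the table above; the code structure itself is only an assumption about how the server consumes them, not its actual source.

import os
from pathlib import Path

# Hypothetical configuration-reading sketch; not the server's actual source code.
ncbi_api_key = os.getenv("NCBI_API_KEY")                      # optional: raises PubMed rate limits
scihub_mirror = os.getenv("SCIHUB_MIRROR")                    # optional: custom Sci-Hub mirror URL
crossref_mailto = os.getenv("CROSSREF_MAILTO")                # recommended: CrossRef polite pool
semantic_scholar_key = os.getenv("SEMANTIC_SCHOLAR_API_KEY")  # recommended: raises Semantic Scholar rate limits
download_dir = Path(os.getenv("PAPER_DOWNLOAD_PATH", "~/paper_downloads")).expanduser()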

Tools

Functions exposed to the LLM to take actions

search_arxiv

Search preprints on arXiv - major open-access preprint server.

USE THIS TOOL WHEN:
- Searching for PREPRINTS (not peer-reviewed yet)
- You need free, immediate access to full-text PDFs
- Searching in: Physics, Mathematics, Computer Science, Statistics, Quantitative Biology, Quantitative Finance, Electrical Engineering

NOTE: arXiv is a PREPRINT server - papers may not be peer-reviewed. For peer-reviewed papers, use search_crossref or search_semantic.

WORKFLOW:
1. search_arxiv(query) -> get paper_id (e.g., '2106.12345')
2. download_arxiv(paper_id) -> get PDF (always available)
3. read_arxiv_paper(paper_id) -> get full text as Markdown

Args:
    query: Search terms in any supported field.
    max_results: Number of results (default: 10).

Returns: List of paper dicts with: paper_id, title, authors, abstract, published_date, pdf_url, categories.

Example: search_arxiv("quantum computing error correction", max_results=5)
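
A minimal sketch of the three-step workflow above, assuming the tools are available as plain Python callables with the documented signatures (an MCP client would issue them as tool calls instead):

results = search_arxiv("quantum computing error correction", max_results=5)
if results:
    paper_id = results[0]["paper_id"]       # e.g. '2106.12345'
    pdf_path = download_arxiv(paper_id)     # PDF saved under ~/paper_downloads by default
    full_text = read_arxiv_paper(paper_id)  # full text as Markdown
    print(pdf_path, full_text[:200])
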
download_arxiv

Download PDF from arXiv (always free and available).

Args:
    paper_id: arXiv ID (e.g., '2106.12345', '2312.00001v2').
    save_path: Directory to save PDF (default: ~/paper_downloads).

Returns: Path to downloaded PDF file.

Example: download_arxiv("2106.12345")
read_arxiv_paper

Download and extract full text from arXiv paper as Markdown.

Args:
    paper_id: arXiv ID (e.g., '2106.12345').
    save_path: Directory to save PDF (default: ~/paper_downloads).

Returns: Full paper text in Markdown format.

Example: read_arxiv_paper("2106.12345")
search_pubmed

Search biomedical literature on PubMed (NCBI database).

USE THIS TOOL WHEN:
- Searching for medical, clinical, or biomedical research
- You need peer-reviewed published papers (not preprints)
- Searching for drug studies, clinical trials, disease research

DOMAIN: Medicine, Biology, Pharmacology, Public Health, Clinical Research, Genetics, Biochemistry.

LIMITATION: PubMed provides metadata/abstracts ONLY, not full PDFs.

WORKFLOW FOR FULL TEXT:
1. search_pubmed(query) -> get DOI from results
2. download_scihub(doi) -> download PDF (if published before 2023)

Args:
    query: Medical/scientific terms (e.g., 'cancer immunotherapy', 'COVID-19 vaccine').
    max_results: Number of results (default: 10).

Returns: List of paper dicts with: paper_id (PMID), title, authors, abstract, published_date, doi, url.

Example: search_pubmed("CRISPR gene therapy", max_results=5)
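
A sketch of the full-text workflow described above, again assuming direct Python calls to the documented tools; the 'doi' key is one of the documented return fields:

papers = search_pubmed("CRISPR gene therapy", max_results=5)
for paper in papers:
    doi = paper.get("doi")
    if doi:
        # Per the docs, Sci-Hub coverage applies to papers published before 2023.
        pdf_path = download_scihub(doi)
        print(paper["title"], "->", pdf_path)
        break
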
download_pubmed

PubMed does NOT support direct PDF downloads.

PubMed is a metadata database - it does not host PDFs.

INSTEAD (try in order):
1. download_scihub(doi) - if published before 2023
2. download_semantic(id) - last resort

Args:
    paper_id: PMID (unused).
    save_path: Unused.

Returns: Error message with alternatives.
read_pubmed_paper

PubMed does NOT support direct paper reading.

INSTEAD (try in order):
1. read_scihub_paper(doi) - if published before 2023
2. read_semantic_paper(id) - last resort

Args:
    paper_id: PMID (unused).
    save_path: Unused.

Returns: Error message with alternatives.
search_biorxiv

Search biology preprints on bioRxiv.

USE THIS TOOL WHEN:
- Searching for cutting-edge biology research (preprints)
- You need the latest findings before peer review
- Searching by CATEGORY, not keyword (see below)

DOMAIN: Molecular Biology, Cell Biology, Genetics, Neuroscience, Bioinformatics, Evolutionary Biology, Microbiology, etc.

NOTE: bioRxiv search uses CATEGORY names, not keywords. Categories: 'neuroscience', 'cell_biology', 'genetics', 'genomics', 'bioinformatics', 'cancer_biology', 'immunology', etc.

WORKFLOW:
1. search_biorxiv(category) -> get DOI
2. download_biorxiv(doi) or read_biorxiv_paper(doi)

Args:
    query: Category name (e.g., 'neuroscience', 'cell_biology').
    max_results: Number of results (default: 10).

Returns: List of recent preprints in that category.

Example: search_biorxiv("neuroscience", max_results=5)
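
A sketch of the category-based workflow, assuming direct calls; the key holding the DOI in the search results is an assumption, since the docs only state that the search yields a DOI:

preprints = search_biorxiv("neuroscience", max_results=5)
if preprints:
    doi = preprints[0].get("doi") or preprints[0].get("paper_id")  # key name assumed
    pdf_path = download_biorxiv(doi)       # PDF on disk
    full_text = read_biorxiv_paper(doi)    # full text as Markdown
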
download_biorxiv

Download PDF from bioRxiv (free and open access).

Args:
    paper_id: bioRxiv DOI (e.g., '10.1101/2024.01.01.123456').
    save_path: Directory to save PDF.

Returns: Path to downloaded PDF.
read_biorxiv_paper

Download and extract full text from bioRxiv paper.

Args:
    paper_id: bioRxiv DOI.
    save_path: Directory to save PDF.

Returns: Full paper text in Markdown format.
search_medrxiv

Search medical preprints on medRxiv.

USE THIS TOOL WHEN:
- Searching for clinical/medical research preprints
- You need latest COVID-19, epidemiology, or clinical studies
- Searching by CATEGORY, not keyword (see below)

DOMAIN: Epidemiology, Infectious Diseases, Cardiology, Oncology, Public Health, Psychiatry, Health Informatics, etc.

NOTE: medRxiv search uses CATEGORY names, not keywords. Categories: 'infectious_diseases', 'epidemiology', 'cardiology', 'oncology', 'health_informatics', 'psychiatry', etc.

WORKFLOW:
1. search_medrxiv(category) -> get DOI
2. download_medrxiv(doi) or read_medrxiv_paper(doi)

Args:
    query: Category name (e.g., 'infectious_diseases', 'epidemiology').
    max_results: Number of results (default: 10).

Returns: List of recent preprints in that category.

Example: search_medrxiv("infectious_diseases", max_results=5)
download_medrxiv

Download PDF from medRxiv (free and open access).

Args:
    paper_id: medRxiv DOI (e.g., '10.1101/2024.01.01.12345678').
    save_path: Directory to save PDF.

Returns: Path to downloaded PDF.
read_medrxiv_paper

Download and extract full text from medRxiv paper.

Args:
    paper_id: medRxiv DOI.
    save_path: Directory to save PDF.

Returns: Full paper text in Markdown format.
search_google_scholar

Search academic papers on Google Scholar (broad coverage).

USE THIS TOOL WHEN:
- You need broad academic search across ALL disciplines
- You want citation counts and "cited by" information
- Other specialized tools don't cover your topic

COVERAGE: All academic disciplines, books, theses, patents.

LIMITATIONS:
- Uses web scraping (may be rate-limited)
- Does NOT support PDF download

FOR FULL TEXT (try in order):
1. download_arxiv(id) - if arXiv preprint
2. download_scihub(doi) - if published before 2023
3. download_semantic(id) - last resort

Args:
    query: Search terms (any academic topic).
    max_results: Number of results (default: 10, keep small to avoid blocks).

Returns: List of paper dicts with: title, authors, abstract snippet, citations count, url, source.

Example: search_google_scholar("climate change economic impact", max_results=5)
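
A small usage sketch; the exact result keys ('citations', 'url') are assumptions based on the field list above, and max_results is kept small to reduce the chance of being rate-limited:

hits = search_google_scholar("climate change economic impact", max_results=5)
for hit in hits:
    # Key names assumed from the documented field list.
    print(hit.get("title"), hit.get("citations"), hit.get("url"))
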
search_iacr

Search cryptography papers on IACR ePrint Archive.

USE THIS TOOL WHEN:
- Searching for cryptography or security research
- You need papers on encryption, blockchain, zero-knowledge proofs
- Looking for security protocols, hash functions, signatures

DOMAIN: Cryptography ONLY - encryption, signatures, protocols, blockchain, secure computation, zero-knowledge, hash functions. All papers are FREE and open access with PDF download.

WORKFLOW:
1. search_iacr(query) -> get paper_id (e.g., '2024/123')
2. download_iacr(paper_id) or read_iacr_paper(paper_id)

Args:
    query: Crypto terms (e.g., 'zero knowledge', 'homomorphic encryption').
    max_results: Number of results (default: 10).
    fetch_details: Get full metadata per paper (default: True).

Returns: List of paper dicts with: paper_id, title, authors, abstract, published_date, pdf_url.

Example: search_iacr("post-quantum cryptography", max_results=5)
download_iacr

Download PDF from IACR ePrint (always free).

Args:
    paper_id: IACR ID (e.g., '2024/123', '2009/101').
    save_path: Directory to save PDF.

Returns: Path to downloaded PDF.

Example: download_iacr("2024/123")
read_iacr_paper

Download and extract full text from IACR paper.

Args:
    paper_id: IACR ID (e.g., '2024/123').
    save_path: Directory to save PDF.

Returns: Full paper text in Markdown format.

Example: read_iacr_paper("2024/123")
search_semantic

Search papers on Semantic Scholar - general-purpose academic search engine.

USE THIS TOOL WHEN:
- You want to search across ALL academic disciplines
- You need citation counts and influence metrics
- You want to filter by publication year
- You need open-access PDF links when available

COVERAGE: ALL academic fields - sciences, humanities, medicine, etc. Indexes 200M+ papers from journals, conferences, and preprints.

WORKFLOW:
1. search_semantic(query) -> get paper_id or DOI
2. download_semantic(paper_id) -> get PDF (if open-access)
3. If no PDF: use download_scihub(doi) for older papers

Args:
    query: Search terms (any topic, any field).
    year: Optional year filter: '2023', '2020-2023', '2020-', '-2019'.
    max_results: Number of results (default: 10).

Returns: List of paper dicts with: paper_id, title, authors, abstract, published_date, doi, citations, url, pdf_url (if available).

Example: search_semantic("climate change impact agriculture", year="2020-", max_results=5)
download_semantic

Download PDF via Semantic Scholar (open-access only, use as LAST RESORT).

DOWNLOAD PRIORITY (try in order):
1. If arXiv paper -> use download_arxiv(arxiv_id) (always works)
2. If published before 2023 -> use download_scihub(doi)
3. Use this tool as last resort (may not have PDF)

Args:
    paper_id: Semantic Scholar ID, or prefixed: 'DOI:xxx', 'ARXIV:xxx', 'PMID:xxx'
    save_path: Directory to save PDF.

Returns: Path to downloaded PDF, or error if not available.
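
The priority order above can be expressed as a small helper. This is an illustrative sketch over a search_semantic result dict: only doi, published_date, paper_id, and pdf_url are documented fields, and the arXiv detection via the PDF URL is a heuristic assumption, not part of the server.

def pick_download(paper):
    # Illustrative fallback chain following the documented priority; not the server's own logic.
    pdf_url = paper.get("pdf_url") or ""
    doi = paper.get("doi")
    year_str = (paper.get("published_date") or "")[:4]
    year = int(year_str) if year_str.isdigit() else 9999
    if "arxiv.org" in pdf_url:                               # heuristic arXiv detection (assumption)
        arxiv_id = pdf_url.split("/")[-1].removesuffix(".pdf")
        return download_arxiv(arxiv_id)                      # always available per the docs
    if doi and year < 2023:
        return download_scihub(doi)                          # Sci-Hub coverage window per the docs
    return download_semantic(paper["paper_id"])              # last resort, open-access only
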
read_semantic_paper

Read paper via Semantic Scholar (open-access only, use as LAST RESORT).

DOWNLOAD PRIORITY (try in order):
1. If arXiv paper -> use read_arxiv_paper(arxiv_id)
2. If published before 2023 -> use read_scihub_paper(doi)
3. Use this tool as last resort

Args:
    paper_id: Semantic Scholar ID or prefixed ID (DOI:, ARXIV:, PMID:).
    save_path: Directory to save PDF.

Returns: Full paper text in Markdown format.
search_crossref

Search academic papers in CrossRef - the largest DOI citation database.

USE THIS TOOL WHEN:
- You need to find papers by DOI or citation metadata
- You want to search across all academic publishers (not just preprints)
- You need publication metadata like journal, volume, issue, citations
- You want to verify if a DOI exists or get its metadata

CrossRef indexes 150M+ scholarly works from thousands of publishers. Results include DOI, authors, title, abstract, citations, and publisher info.

Args:
    query: Search terms (e.g., 'machine learning', 'CRISPR gene editing').
    max_results: Number of results (default: 10, max: 1000).
    **kwargs: Optional filters:
        - filter: 'has-full-text:true,from-pub-date:2020'
        - sort: 'relevance' | 'published' | 'cited'
        - order: 'asc' | 'desc'

Returns: List of paper metadata dicts with keys: paper_id (DOI), title, authors, abstract, doi, published_date, citations, url.

Example: search_crossref("attention mechanism transformer", max_results=5)
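
A sketch of a filtered CrossRef search using the documented keyword arguments (direct Python calls assumed):

recent_cited = search_crossref(
    "attention mechanism transformer",
    max_results=5,
    filter="has-full-text:true,from-pub-date:2020",  # filter string format from the docs
    sort="cited",
    order="desc",
)
for paper in recent_cited:
    print(paper["doi"], paper.get("citations"), paper["title"])
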
get_crossref_paper_by_doi

Get paper metadata from CrossRef using its DOI.

USE THIS TOOL WHEN:
- You have a DOI and need full metadata (title, authors, journal, etc.)
- You want to verify a DOI exists
- You need citation count for a specific paper

Args:
    doi: Digital Object Identifier (e.g., '10.1038/nature12373').

Returns: Paper metadata dict, or empty dict {} if DOI not found.

Example: get_crossref_paper_by_doi("10.1038/nature12373")
download_crossref

CrossRef does NOT support direct PDF downloads.

CrossRef is a metadata/citation database only - it does not host PDFs.

INSTEAD (try in order):
1. download_arxiv(id) - if arXiv preprint (always works)
2. download_scihub(doi) - if published before 2023
3. download_semantic(id) - last resort (may not have PDF)

Args:
    paper_id: DOI (e.g., '10.1038/nature12373').
    save_path: Unused.

Returns: Error message explaining alternatives.
read_crossref_paper

CrossRef does NOT support direct paper reading.

CrossRef provides metadata only, not full-text content.

INSTEAD (try in order):
1. read_arxiv_paper(id) - if arXiv preprint
2. read_scihub_paper(doi) - if published before 2023
3. read_semantic_paper(id) - last resort

Args:
    paper_id: DOI (e.g., '10.1038/nature12373').
    save_path: Unused.

Returns: Error message explaining alternatives.
search_repec

Search economics papers on RePEc/IDEAS - the largest open economics bibliography.

USE THIS TOOL WHEN:
- Searching for ECONOMICS research (macro, micro, finance, econometrics)
- You need working papers from NBER, Federal Reserve, World Bank, etc.
- You want to find papers by JEL classification
- Searching for economic policy analysis

COVERAGE: 4.5M+ items including:
- Working Papers: NBER, Fed banks, ECB, IMF, World Bank
- Journal Articles: AER, JPE, QJE, Econometrica, etc.
- Books and Book Chapters

SEARCH SYNTAX:
- Boolean: + for AND, | for OR, ~ for NOT (e.g., 'money ~liquidity')
- Phrase: use double quotes (e.g., '"monetary policy"')
- Author(Year): e.g., 'Acemoglu (2019)' or 'Kydland Prescott (1977)'
- Synonyms: automatic (labor=labour, USA=United States)
- Word stemming: automatic (find matches finds, finding, findings)

LIMITATION: RePEc provides metadata only, not full PDFs. PDFs are hosted at original institutions (often freely available).

Args:
    query: Search terms with optional boolean operators.
    max_results: Number of results (default: 10).
    year_from: Optional start year filter (e.g., 2020).
    year_to: Optional end year filter (e.g., 2025).
    search_field: Where to search, one of:
        - 'all': Whole record (default)
        - 'abstract': Abstract only
        - 'keywords': Keywords only
        - 'title': Title only
        - 'author': Author only
    sort_by: How to sort results, one of:
        - 'relevance': Most relevant (default)
        - 'newest': Most recent first
        - 'oldest': Oldest first
        - 'citations': Most cited first
        - 'recent_relevant': Recent and relevant
        - 'relevant_cited': Relevant and cited
    doc_type: Document type filter, one of:
        - 'all': All types (default)
        - 'articles': Journal articles
        - 'papers': Working papers (NBER, Fed, etc.)
        - 'chapters': Book chapters
        - 'books': Books
        - 'software': Software components
    series: Institution/journal series to search within, one of:
        - Institutions: 'nber', 'imf', 'worldbank', 'ecb', 'bis', 'cepr', 'iza'
        - Federal Reserve: 'fed', 'fed_ny', 'fed_chicago', 'fed_stlouis'
        - Top 5 Journals: 'aer', 'jpe', 'qje', 'econometrica', 'restud'
        - Other journals: 'jfe', 'jme', 'aej_macro', 'aej_micro', 'aej_applied'

Returns: List of paper dicts with: paper_id (RePEc handle), title, authors, abstract, published_date, url, categories (JEL codes).

Example:
    search_repec('inflation', series='nber')  # Search NBER only
    search_repec('causal inference', series='aer', sort_by='newest')
    search_repec('machine learning', series='fed', year_from=2020)
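
A sketch combining the search syntax and filters above (direct Python calls assumed; the query string is only an example of the documented boolean/phrase syntax):

nber_recent = search_repec('"monetary policy" +inflation', series='nber',
                           sort_by='newest', year_from=2020, max_results=5)
for paper in nber_recent:
    print(paper["paper_id"], paper["title"], paper.get("categories"))  # categories = JEL codes
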
download_repec

RePEc/IDEAS does NOT support direct PDF downloads.

RePEc is a metadata index - PDFs are hosted at original institutions.

INSTEAD (try in order):
1. Visit paper URL - many NBER/Fed papers are freely available
2. download_scihub(doi) - if published before 2023

Args:
    paper_id: RePEc handle (unused).
    save_path: Unused.

Returns: Error message with alternatives.
read_repec_paper

RePEc/IDEAS does NOT support direct paper reading.

INSTEAD (try in order):
1. Visit paper URL - many NBER/Fed papers are freely available
2. read_scihub_paper(doi) - if published before 2023

Args:
    paper_id: RePEc handle (unused).
    save_path: Unused.

Returns: Error message with alternatives.
get_repec_paper

Get detailed paper information from RePEc/IDEAS.

Fetches complete metadata from an IDEAS paper detail page, including abstract, authors, keywords, and JEL codes that may be missing from search results.

USE THIS WHEN:
- You have a paper URL/handle from search results and need the abstract
- You want complete author information for a specific paper
- You need JEL classification codes or keywords

Args:
    url_or_handle: Paper URL or RePEc handle, e.g.:
        - URL: "https://ideas.repec.org/p/nbr/nberwo/32000.html"
        - Handle: "RePEc:nbr:nberwo:32000"

Returns: Paper dict with: paper_id, title, authors, abstract, keywords, categories (JEL codes), published_date, url, pdf_url (if available), doi (if found), and extra info like journal name.

Example: get_repec_paper("https://ideas.repec.org/a/aea/aecrev/v110y2020i1p1-40.html")
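
A sketch of the two-step pattern this tool is meant for: search first, then fetch the full record for a hit whose abstract or JEL codes are missing (direct Python calls assumed):

hits = search_repec('causal inference', series='aer', max_results=3)
if hits:
    detail = get_repec_paper(hits[0]["url"])  # a RePEc handle would also work, per the docs
    print(detail.get("abstract"))
    print(detail.get("categories"))           # JEL codes
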
download_scihub

Download paper PDF via Sci-Hub using DOI (for older papers only).

USE THIS TOOL WHEN:
- You have a DOI and need the full PDF
- The paper was published BEFORE 2023
- The paper is behind a paywall and not on arXiv
- You first searched CrossRef and got the DOI

WORKFLOW: search_crossref(query) -> get DOI -> download_scihub(doi)

Args:
    doi: Paper DOI (e.g., '10.1038/nature12373', '10.1126/science.1234567').
    save_path: Directory to save PDF (default: ~/paper_downloads).

Returns: Path to downloaded PDF file (e.g., 'downloads/scihub_10.1038_xxx.pdf'), or error message if download fails.

Example: download_scihub("10.1038/nature12373")  # 2013 Nature paper
read_scihub_paper

Download and extract full text from paper via Sci-Hub (older papers only).

USE THIS TOOL WHEN:
- You need the complete text content of a paper (not just abstract)
- The paper was published BEFORE 2023
- You want to analyze, summarize, or answer questions about a paper

This downloads the PDF and extracts text as clean Markdown format, suitable for LLM processing. Includes paper metadata at the start.

WORKFLOW: search_crossref(query) -> get DOI -> read_scihub_paper(doi)

Args:
    doi: Paper DOI (e.g., '10.1038/nature12373').
    save_path: Directory to save PDF (default: ~/paper_downloads).

Returns: Full paper text in Markdown format with metadata header, or error message if download/extraction fails.

Example: read_scihub_paper("10.1038/nature12373")
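
A sketch that verifies a DOI and its publication year via CrossRef before pulling the full text through Sci-Hub (direct Python calls assumed; the published_date value is assumed to start with the year):

meta = get_crossref_paper_by_doi("10.1038/nature12373")
year_str = (meta.get("published_date") or "")[:4]
if meta and year_str.isdigit() and int(year_str) < 2023:
    text = read_scihub_paper(meta["doi"])
    print(text[:500])  # Markdown with a metadata header, per the docs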

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/h-lu/paper-search-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.