
search_papers

Search academic papers using the OpenAlex API, with filters for title, abstract, institution, and author, and sorting by relevance, citations, or date.

Instructions

Searches for academic papers using the OpenAlex API.

Args:

  • query: The search term or keywords to look for in the papers.
  • search_by: The field to search in ("default", "title", or "title_and_abstract").
  • sort_by: The sorting criteria ("relevance_score", "cited_by_count", or "publication_date").
  • institution_name: An optional institution or affiliation name to filter search results.
  • author_id: An optional OpenAlex author ID to filter search results, e.g. "https://openalex.org/A123456789".
  • page: The page number of the results to retrieve (default: 1).

Returns: A JSON object containing a list of matching papers and their IDs, or an error message if the search fails.
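
For orientation, here is a minimal sketch of invoking the tool from Python with the FastMCP client. The server script name ("server.py") is an assumption, and the exact client API may differ between FastMCP versions.

    # Minimal sketch of calling search_papers via the FastMCP client.
    # "server.py" is an assumed script name, not taken from this repo.
    import asyncio

    from fastmcp import Client

    async def main() -> None:
        async with Client("server.py") as client:
            result = await client.call_tool(
                "search_papers",
                {
                    "query": "graph neural networks",
                    "search_by": "title_and_abstract",
                    "sort_by": "cited_by_count",
                },
            )
            print(result)

    asyncio.run(main())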

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| query | Yes | Search term or keywords to look for in the papers. | — |
| search_by | No | Field to search in: "default", "title", or "title_and_abstract". | default |
| sort_by | No | Sorting criteria: "relevance_score", "cited_by_count", or "publication_date". | relevance_score |
| institution_name | No | Optional institution or affiliation name to filter results. | — |
| author_id | No | Optional OpenAlex author ID to filter results. | — |
| page | No | Page number of the results to retrieve. | 1 |
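
As a concrete reference, a full set of arguments satisfying this schema might look like the following. Only "query" is required; all values here are illustrative, and the author ID is the placeholder from the docstring.

    # Example arguments for search_papers; every value is illustrative.
    args = {
        "query": "protein folding",
        "search_by": "title",            # "default" | "title" | "title_and_abstract"
        "sort_by": "publication_date",   # "relevance_score" | "cited_by_count" | "publication_date"
        "institution_name": "Stanford University",
        "author_id": "https://openalex.org/A123456789",
        "page": 2,
    }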

Implementation Reference

  • The core handler function for the 'search_papers' tool, decorated with @mcp.tool for registration. It constructs the API parameters, queries the OpenAlex API for papers, parses the results into Work models, and returns a PageResult.
    @mcp.tool
    async def search_papers(
        query: str,
        search_by: Literal["default", "title", "title_and_abstract"] = "default",
        sort_by: Literal["relevance_score", "cited_by_count", "publication_date"] = "relevance_score",
        institution_name: Optional[str] = None,
        author_id: Optional[str] = None,
        page: int = 1,
    ) -> PageResult:
        """
        Searches for academic papers using the OpenAlex API.

        Args:
            query: The search term or keywords to look for in the papers.
            search_by: The field to search in ("default", "title", or "title_and_abstract").
            sort_by: The sorting criteria ("relevance_score", "cited_by_count", or "publication_date").
            institution_name: An optional institution or affiliation name to filter search results.
            author_id: An optional OpenAlex Author ID to filter search results. e.g., "https://openalex.org/A123456789"
            page: The page number of the results to retrieve (default: 1).

        Returns:
            A JSON object containing a list of matching papers and their IDs, or an error message if the search fails.
        """
        query = sanitize_search_text(query)
        institution_name = sanitize_search_text(institution_name)

        params = {
            "filter": f"{search_by}.search:\"{query}\"",
            "sort": f"{sort_by}:desc",
            "page": page,
            "per_page": 10,
        }
        if institution_name:
            params["filter"] += f",raw_affiliation_strings.search:\"{institution_name}\""
        if author_id:
            params["filter"] += f",authorships.author.id:{author_id}"

        # Fetches search results from the OpenAlex API
        async with RequestAPI("https://api.openalex.org", default_params={"mailto": OPENALEX_MAILTO}) as api:
            logger.info(f"Searching for papers using: query={query}, search_by={search_by}, sort_by={sort_by}, page={page}")
            try:
                result = await api.aget("/works", params=params)

                # Returns a message for when the search results are empty
                if result is None or len(result.get("results", []) or []) == 0:
                    error_message = "No works found with the query."
                    logger.info(error_message)
                    raise ToolError(error_message)

                # Successfully returns the searched papers
                works = Work.from_list(result.get("results", []) or [])
                success_message = f"Found {len(works)} papers."
                logger.info(success_message)

                total_count = (result.get("meta", {}) or {}).get("count")
                if total_count and total_count > params["per_page"] * params["page"]:
                    has_next = True
                else:
                    has_next = None

                return PageResult(
                    data=Work.list_to_json(works),
                    total_count=total_count,
                    per_page=params["per_page"],
                    page=params["page"],
                    has_next=has_next,
                )
            except httpx.HTTPStatusError as e:
                error_message = f"Request failed with status: {e.response.status_code}"
                logger.error(error_message)
                raise ToolError(error_message)
            except httpx.RequestError as e:
                error_message = f"Network error: {str(e)}"
                logger.error(error_message)
                raise ToolError(error_message)
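
    Stripped of the MCP plumbing, the handler issues a single GET against the OpenAlex /works endpoint. The sketch below reproduces that request standalone with httpx; the query values and mailto address are made up for illustration.

    # Rough standalone equivalent of the request the handler sends for
    # query="attention is all you need", search_by="title",
    # sort_by="cited_by_count", institution_name="Google".
    import asyncio

    import httpx

    async def main() -> None:
        params = {
            "filter": 'title.search:"attention is all you need"'
                      ',raw_affiliation_strings.search:"Google"',
            "sort": "cited_by_count:desc",
            "page": 1,
            "per_page": 10,
            "mailto": "you@example.com",  # placeholder for OPENALEX_MAILTO
        }
        async with httpx.AsyncClient(base_url="https://api.openalex.org") as client:
            resp = await client.get("/works", params=params)
            resp.raise_for_status()
            meta = resp.json().get("meta", {})
            print(f"Total matches: {meta.get('count')}")

    asyncio.run(main())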
  • Pydantic model defining the output schema for search_papers (and similar paginated tools), including the data list and pagination metadata.
    class PageResult(BaseModel):
        data: List[Union[Institution, Author, Work, dict]] = Field(default_factory=list)
        total_count: Optional[int] = None
        per_page: int
        page: int
        has_next: Optional[bool] = None
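
    Since has_next is set to True only when meta.count exceeds per_page * page (and left as None otherwise), a caller can walk pages until it is falsy. The helper below is a hypothetical client-side sketch; fetch_page stands in for whatever issues the search_papers call.

    # Hypothetical client-side pagination loop over PageResult-shaped
    # responses; fetch_page is a stand-in, not part of this repo.
    from typing import Any, Awaitable, Callable, Dict, List

    async def collect_pages(
        fetch_page: Callable[[int], Awaitable[Dict[str, Any]]],
        max_pages: int = 5,
    ) -> List[dict]:
        papers: List[dict] = []
        page = 1
        while page <= max_pages:
            result = await fetch_page(page)
            papers.extend(result.get("data", []))
            # has_next is True or None; None (last page) also ends the loop.
            if not result.get("has_next"):
                break
            page += 1
        return papers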
  • Pydantic model for Work (paper) objects used in the search_papers result data. It handles parsing from OpenAlex JSON and serialization.
    class Work(BaseModel):
        model_config = ConfigDict(
            frozen=False,               # set True for immutability
            validate_assignment=True,   # runtime type safety on attribute set
            str_strip_whitespace=True,  # trims incoming strings
        )

        title: Optional[str] = None
        ids: Dict[str, str] = Field(default_factory=dict)
        cited_by_count: Optional[int] = None
        authors: List[Author] = Field(default_factory=list)
        publication_date: Optional[str] = None
        preferred_fulltext_url: Optional[str] = None

        @classmethod
        def from_json(cls, json_obj: Dict[str, Any]) -> "Work":
            # Gets title and page urls
            title = json_obj.get("title") or json_obj.get("display_name") or ""

            # Prioritize Open Access url
            preferred_fulltext_url = (json_obj.get("best_oa_location", {}) or {}).get("pdf_url")
            if preferred_fulltext_url is None:
                preferred_fulltext_url = (json_obj.get("best_oa_location", {}) or {}).get("landing_page_url")
            if preferred_fulltext_url is None:
                preferred_fulltext_url = (json_obj.get("primary_location", {}) or {}).get("pdf_url")
            if preferred_fulltext_url is None:
                preferred_fulltext_url = (json_obj.get("primary_location", {}) or {}).get("landing_page_url")

            # Gets individual authors of the work
            authors = Author.from_list(json_obj.get("authorships", []) or [])

            return cls(
                title=title,
                ids=json_obj.get("ids", {}) or {},
                authors=authors,
                cited_by_count=json_obj.get("cited_by_count"),
                publication_date=json_obj.get("publication_date"),
                preferred_fulltext_url=preferred_fulltext_url,
            )

        @classmethod
        def from_list(cls, json_list: List[dict]) -> List["Work"]:
            return [cls.from_json(item) for item in json_list]

        @staticmethod
        def list_to_json(works: List["Work"]) -> List[dict]:
            return [work.model_dump(exclude_none=True) for work in works]

        def __str__(self) -> str:
            return self.model_dump_json(exclude_none=True)
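
    The from_json precedence is easy to verify directly: a best_oa_location PDF URL wins over its landing page and over anything in primary_location. All field values in the record below are made up.

    # Demonstrates the fulltext-URL precedence in Work.from_json;
    # every value in this record is illustrative.
    record = {
        "title": "An Example Paper",
        "ids": {"openalex": "https://openalex.org/W123"},
        "cited_by_count": 42,
        "publication_date": "2021-06-01",
        "best_oa_location": {
            "pdf_url": "https://example.org/paper.pdf",
            "landing_page_url": "https://example.org/paper",
        },
        "primary_location": {"landing_page_url": "https://doi.org/10.1234/example"},
        "authorships": [],
    }

    work = Work.from_json(record)
    assert work.preferred_fulltext_url == "https://example.org/paper.pdf"
    assert work.title == "An Example Paper"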
  • Utility function to sanitize search query text by removing commas and normalizing whitespace, used in search_papers.
    def sanitize_search_text(s: Optional[str]) -> Optional[str]:
        """Remove commas and collapse whitespace so search terms work in OpenAlex filters"""
        if not s:
            return s
        s = s.replace(",", " ")
        s = re.sub(r"\s+", " ", s).strip()
        return s
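
    Commas matter because OpenAlex uses them to separate filter clauses, so a comma inside a search value would split the filter. Two quick checks of the behavior:

    # Commas are removed and whitespace runs are collapsed.
    assert sanitize_search_text("large, language   models") == "large language models"
    assert sanitize_search_text("") == ""  # falsy input is returned unchanged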
