
search_institutions

Find academic institutions using the OpenAlex API to support research: search by name, sort by relevance or citation count, and retrieve institution names and IDs.

Instructions

Searches for institutions using the OpenAlex API.

Args:
    query: The search name to look for the institutions.
    sort_by: The sorting criteria ("relevance_score" or "cited_by_count").
    page: The page number of the results to retrieve (default: 1).

Returns: A JSON object containing a list of institutions and their IDs, or an error message if the search fails.
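
As a rough sketch of how an agent could invoke this tool (assuming the server runs on FastMCP and is reachable over HTTP; the endpoint URL and all argument values below are illustrative, not part of the documented interface):

    # Hypothetical invocation sketch using the FastMCP Python client.
    import asyncio
    from fastmcp import Client

    async def main() -> None:
        # Placeholder endpoint; substitute however your MCP client connects to ScholarScope-MCP.
        async with Client("http://localhost:8000/mcp") as client:
            result = await client.call_tool(
                "search_institutions",
                {
                    "query": "Massachusetts Institute of Technology",  # institution name to search
                    "sort_by": "cited_by_count",  # or "relevance_score" (the default)
                    "page": 1,  # optional; defaults to 1
                },
            )
            print(result)

    asyncio.run(main())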

Input Schema

| Name    | Required | Description | Default         |
|---------|----------|-------------|-----------------|
| query   | Yes      |             |                 |
| sort_by | No       |             | relevance_score |
| page    | No       |             | 1               |

Output Schema

| Name        | Required | Description | Default |
|-------------|----------|-------------|---------|
| data        | No       |             |         |
| page        | Yes      |             |         |
| has_next    | No       |             |         |
| per_page    | Yes      |             |         |
| total_count | No       |             |         |
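
For orientation, a successful call produces a page object shaped like the sketch below (shown as the Python dict the PageResult model serializes; every value is a placeholder, not real OpenAlex data):

    # Illustrative PageResult payload; all values are placeholders.
    {
        "data": [
            {"name": "Example University", "id": "https://openalex.org/I0000000000"},
        ],
        "total_count": 42,   # total matches reported by OpenAlex
        "per_page": 10,      # fixed page size used by the tool
        "page": 1,           # page requested by the caller
        "has_next": True,    # True when more pages remain; otherwise left unset (None)
    }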

Implementation Reference

  • The handler function decorated with @mcp.tool that implements the core logic for searching institutions via the OpenAlex API, including query sanitization, API request, result parsing into Institution models, pagination handling, and error management.
    @mcp.tool
    async def search_institutions(
            query: str,
            sort_by: Literal["relevance_score", "cited_by_count"] = "relevance_score",
            page: int = 1,
    ) -> PageResult:
        """
        Searches for institutions using the OpenAlex API.
    
        Args:
            query: The search name to look for the institutions.
            sort_by: The sorting criteria ("relevance_score" or "cited_by_count").
            page: The page number of the results to retrieve (default: 1).
    
        Returns:
            A JSON object containing a list of institutions+ids, or an error message if the search fails.
        """
        query = sanitize_search_text(query)
    
        params = {
            "filter": f"default.search:\"{query}\"",
            "sort": f"{sort_by}:desc",
            "page": page,
            "per_page": 10,
        }
    
        # Fetches search results from the OpenAlex API
        async with RequestAPI("https://api.openalex.org", default_params={"mailto": OPENALEX_MAILTO}) as api:
            logger.info(f"Searching for authors using: query={query}, sort_by={sort_by}, page={page}")
            try:
                result = await api.aget("/institutions", params=params)
    
                # Returns a message for when the search results are empty
                if result is None or len(result.get("results", []) or []) == 0:
                    error_message = "No institutions found with the query."
                    logger.info(error_message)
                    raise ToolError(error_message)
    
                # Successfully returns the searched institutions
                institutions = Institution.from_list(result.get("results", []) or [])
                success_message = f"Found {len(institutions)} institution(s)."
                logger.info(success_message)
    
                total_count = (result.get("meta", {}) or {}).get("count")
                if total_count and total_count > params["per_page"] * params["page"]:
                    has_next = True
                else:
                    has_next = None
                return PageResult(
                    data=Institution.list_to_json(institutions),
                    total_count=total_count,
                    per_page=params["per_page"],
                    page=params["page"],
                    has_next=has_next
                )
            except httpx.HTTPStatusError as e:
                error_message = f"Request failed with status: {e.response.status_code}"
                logger.error(error_message)
                raise ToolError(error_message)
            except httpx.RequestError as e:
                error_message = f"Network error: {str(e)}"
                logger.error(error_message)
                raise ToolError(error_message)
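
    For context, the handler's parameters translate to a plain OpenAlex request roughly like the sketch below (a minimal httpx example; the mailto value stands in for the project's OPENALEX_MAILTO setting and is an assumption):

    # Minimal sketch of the equivalent raw OpenAlex call, using httpx directly.
    # "you@example.org" is a placeholder for OPENALEX_MAILTO, which is configured elsewhere.
    import asyncio
    import httpx

    async def raw_institution_search(query: str, sort_by: str = "relevance_score", page: int = 1) -> dict:
        params = {
            "filter": f'default.search:"{query}"',
            "sort": f"{sort_by}:desc",
            "page": page,
            "per_page": 10,
            "mailto": "you@example.org",
        }
        async with httpx.AsyncClient(base_url="https://api.openalex.org") as client:
            response = await client.get("/institutions", params=params)
            response.raise_for_status()  # HTTP errors here are what the handler maps to ToolError
            return response.json()

    # asyncio.run(raw_institution_search("Example University"))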
  • Pydantic BaseModel for Institution, including parsing from OpenAlex JSON responses and serialization methods used in the search_institutions output.
    class Institution(BaseModel):
        model_config = ConfigDict(
            frozen=False,  # set True for immutability
            validate_assignment=True,  # runtime type safety on attribute set
            str_strip_whitespace=True,  # trims incoming strings
        )
    
        name: str
        id: Optional[str] = None
    
        @classmethod
        def from_json(cls, json_obj: Dict[str, Any]) -> "Institution":
            inst_name = ""
            inst_id = None
    
            if "institution" in json_obj:
                institution = json_obj.get("institution", {}) or {}
                inst_name = institution.get("display_name", "") or ""
                inst_id = institution.get("id")
            elif "raw_affiliation_string" in json_obj:
                inst_name = json_obj.get("raw_affiliation_string", "") or ""
                ids = json_obj.get("institution_ids")
                if ids and len(ids) >= 1:
                    inst_id = ids[0]
            elif "id" in json_obj:
                inst_name = json_obj.get("display_name", "")
                inst_id = json_obj.get("id")
    
            return cls(name=inst_name, id=inst_id)
    
        @classmethod
        def from_list(cls, json_list: List[dict]) -> List["Institution"]:
            return [cls.from_json(item) for item in json_list]
    
        @staticmethod
        def list_to_json(institutions: List["Institution"]) -> List[dict]:
            return [institution.model_dump(exclude_none=True) for institution in institutions]
    
        def __str__(self) -> str:
            return self.model_dump_json(exclude_none=True)
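
    A hypothetical parsing example, using a minimal dict shaped like an OpenAlex /institutions record (placeholder values):

    # Hypothetical usage; the input mimics a trimmed OpenAlex institution record.
    sample = {
        "id": "https://openalex.org/I0000000000",   # placeholder OpenAlex ID
        "display_name": "Example University",
    }
    inst = Institution.from_json(sample)
    print(inst)  # -> {"name":"Example University","id":"https://openalex.org/I0000000000"}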
  • Pydantic model defining the structure of paginated results returned by the search_institutions tool.
    class PageResult(BaseModel):
        data: List[Union[Institution, Author, Work, dict]] = Field(default_factory=list)
        total_count: Optional[int] = None
        per_page: int
        page: int
        has_next: Optional[bool] = None
  • Utility function to sanitize search queries by removing commas and normalizing whitespace, called at the start of search_institutions.
    def sanitize_search_text(s: str) -> str:
        """Remove commas and collapse whitespace for API search terms for OpenAlex to work"""
        if not s:
            return s
        s = s.replace(",", " ")
        s = re.sub(r"\s+", " ", s).strip()
        return s
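
    A quick illustration of the sanitization behavior (hypothetical input):

    # Commas become spaces, then runs of whitespace collapse to single spaces.
    sanitize_search_text("Massachusetts Institute of Technology,  Cambridge")
    # -> "Massachusetts Institute of Technology Cambridge"
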
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the API source and error handling, but lacks critical details such as rate limits, authentication requirements, pagination behavior beyond the 'page' parameter, and what constitutes a 'failed' search. This leaves significant gaps in an agent's understanding of operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns) and uses efficient sentences. However, the first sentence could be more front-loaded with key details, and some redundancy exists (e.g., 'The search name to look for the institutions' could be tighter). Overall, it's concise but has minor room for improvement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no annotations, but with an output schema), the description is partially complete. It covers basic input semantics and return values, but lacks behavioral context (e.g., API limits, error conditions) and usage guidelines relative to siblings. The output schema reduces the need to detail return values, but other gaps remain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds basic semantics for all three parameters (e.g., 'query' is a search name, 'sort_by' has two criteria, 'page' retrieves results), but doesn't provide deeper context like query syntax examples, how 'cited_by_count' sorting works, or pagination limits. This partially addresses the schema gap but not fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Searches for') and resource ('institutions') using a specific API ('OpenAlex API'), making the purpose unambiguous. However, it doesn't explicitly differentiate this tool from sibling tools like 'search_authors' or 'search_papers' beyond the resource type, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'search_authors' or 'search_papers' for comparison, nor does it specify scenarios where searching institutions is appropriate over other search tools. Usage is implied by the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

