Glama

related_works_of_paper

Retrieve related academic papers for a specified work using the OpenAlex API, helping researchers explore connections and build literature reviews.

Instructions

Gets the related works for the specified paper using the OpenAlex API. Note: may return an empty result if the paper's full text is inaccessible.

Args: paper_id: An OpenAlex Work ID of the target paper. e.g., "https://openalex.org/W123456789"

Returns: A JSON object containing a list of paper IDs related to the work, or an error message if the fetch fails.
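OpenAlex accepts a Work ID either as the full URL form shown above or as the bare W-prefixed identifier, so a caller may want to normalize the value before passing it as paper_id. A minimal sketch (the normalize_work_id helper is illustrative, not part of the tool):

```python
def normalize_work_id(paper_id: str) -> str:
    """Reduce a full OpenAlex URL to the bare W-prefixed Work ID.

    Hypothetical helper: the tool itself may accept either form as-is.
    """
    prefix = "https://openalex.org/"
    if paper_id.startswith(prefix):
        return paper_id[len(prefix):]
    return paper_id

print(normalize_work_id("https://openalex.org/W123456789"))  # W123456789
print(normalize_work_id("W123456789"))                       # W123456789
```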

Input Schema

Name      Required  Description  Default
paper_id  Yes       -            -

Output Schema

Name   Required  Description  Default
data   No        -            -
count  No        -            -
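The output schema above suggests a result shape with an optional list and an optional count. A minimal stdlib sketch of that shape (the actual server likely defines ListResult differently, e.g. as a Pydantic model; field names here come from the schema, not the source):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ListResult:
    # Both fields are optional per the output schema above.
    data: Optional[list] = None
    count: Optional[int] = None

example = ListResult(
    data=["https://openalex.org/W2741809807", "https://openalex.org/W1999167944"],
    count=2,
)
print(asdict(example))
```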

Implementation Reference

  • The handler function for the 'related_works_of_paper' tool. It is decorated with @mcp.tool, which registers it with the FastMCP server. The function retrieves the 'related_works' array from the OpenAlex API for the specified paper_id and returns it wrapped in a ListResult object.
    @mcp.tool
    async def related_works_of_paper(
            paper_id: str,
    ) -> ListResult:
        """
        Gets the related works for the specified paper using the OpenAlex API.
        Note: may return an empty result if the paper's full text is inaccessible.

        Args:
            paper_id: An OpenAlex Work ID of the target paper, e.g. "https://openalex.org/W123456789"

        Returns:
            A JSON object containing a list of paper IDs related to the work, or an error message if the fetch fails.
        """

        # Fetch the work record from the OpenAlex API
        async with RequestAPI("https://api.openalex.org", default_params={"mailto": OPENALEX_MAILTO}) as api:
            logger.info(f"Fetching related works for paper_id={paper_id}")
            try:
                result = await api.aget(f"/works/{paper_id}")

                # Extract the related works once; report if the list is empty
                works = (result or {}).get("related_works") or []
                if not works:
                    error_message = f"No related works found for paper_id={paper_id}."
                    logger.info(error_message)
                    raise ToolError(error_message)

                # Successfully return the related works
                success_message = f"Retrieved {len(works)} related works for paper_id={paper_id}."
                logger.info(success_message)
                return ListResult(data=works, count=len(works))
            except httpx.HTTPStatusError as e:
                error_message = f"Request failed with status: {e.response.status_code}"
                logger.error(error_message)
                raise ToolError(error_message) from e
            except httpx.RequestError as e:
                error_message = f"Network error: {str(e)}"
                logger.error(error_message)
                raise ToolError(error_message) from e
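The handler's empty-check and wrapping logic can be exercised offline by factoring it into a pure function. A sketch under the assumption of a trimmed /works response (extract_related_works is illustrative and not in the source; the sample payload values are placeholders):

```python
def extract_related_works(result):
    """Mirror the handler's empty-check and wrapping logic, minus networking."""
    works = (result or {}).get("related_works") or []
    if not works:
        raise ValueError("No related works found.")
    return {"data": works, "count": len(works)}

# Trimmed, illustrative example of an OpenAlex /works/{id} payload
sample = {
    "id": "https://openalex.org/W123456789",
    "related_works": [
        "https://openalex.org/W2741809807",
        "https://openalex.org/W1999167944",
    ],
}
print(extract_related_works(sample))  # {'data': [...], 'count': 2}
```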
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it specifies the API source (OpenAlex), notes that it may return empty results due to accessibility issues, and describes error handling on fetch failure. This covers operational context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose in the first sentence, a critical note in the second, and structured sections for Args and Returns that add necessary detail without waste. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, no annotations, but with an output schema), the description is complete enough. It covers purpose, usage notes, parameter details, and return behavior, and since an output schema exists, it doesn't need to explain return values in depth, making it well-rounded for the context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% coverage. It explains that 'paper_id' is an OpenAlex Work ID, provides an example format ('https://openalex.org/W123456789'), and clarifies it's for the target paper, fully compensating for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Gets related works') and resource ('specified paper using the OpenAlex API'), distinguishing it from siblings like 'referenced_works_in_paper' or 'works_citing_paper' by focusing on general related works rather than specific citation directions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage through the note about empty returns when full text is inaccessible, which hints at when results might be limited. However, it lacks explicit guidance on when to use this tool versus alternatives like 'referenced_works_in_paper' or 'works_citing_paper', leaving the agent to infer based on the purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ErikNguyen20/ScholarScope-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server