Glama

get_related_articles

Find related biomedical articles in PubMed using NCBI's co-citation and text similarity algorithm. Input a PubMed ID to discover relevant research articles with brief metadata.

Instructions

Find PubMed articles related to a given article.

Uses NCBI's "similar articles" algorithm (co-citation and text similarity).

Args:

  • pmid: The PubMed ID of the reference article.
  • max_results: Number of related articles to return (1-50, default 10).

Returns: A ranked list of related articles with brief metadata. Returns an error message if the PMID is invalid or not found.
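The 1-50 bound on max_results is enforced by clamping out-of-range values rather than rejecting them, as the handler excerpt further down shows; a minimal sketch of that behavior:

```python
def clamp_max_results(n: int) -> int:
    # Clamp the requested count into the documented 1-50 range,
    # mirroring `max(1, min(max_results, 50))` in the handler.
    return max(1, min(n, 50))

print(clamp_max_results(0))    # 1
print(clamp_max_results(10))   # 10
print(clamp_max_results(500))  # 50
```

This means an agent passing max_results=0 or max_results=500 gets a valid result rather than an error.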

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| pmid | Yes | | |
| max_results | No | | |
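
Given the docstring defaults and the bare titles ('Pmid', 'Max Results') noted in the review below, the generated input schema presumably looks along these lines (a reconstruction for illustration, not taken from the server):

```python
# Reconstructed sketch of the tool's argument schema: titles only,
# no property descriptions, default of 10 for max_results.
input_schema = {
    "type": "object",
    "properties": {
        "pmid": {"title": "Pmid", "type": "string"},
        "max_results": {"title": "Max Results", "type": "integer", "default": 10},
    },
    "required": ["pmid"],
}

print(sorted(input_schema["properties"]))  # ['max_results', 'pmid']
```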

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • main.py:542-585 (handler)
    The get_related_articles tool handler implementation. It uses NCBI's elink API to fetch similar articles.
    @mcp.tool()
    async def get_related_articles(pmid: str, max_results: int = 10) -> str:
        """Find PubMed articles related to a given article.
    
        Uses NCBI's "similar articles" algorithm (co-citation and text similarity).
    
        Args:
            pmid: The PubMed ID of the reference article.
            max_results: Number of related articles to return (1-50, default 10).
    
        Returns:
            A ranked list of related articles with brief metadata.
            Returns an error message if the PMID is invalid or not found.
        """
        pmid = pmid.strip()
        if not pmid.isdigit():
            return _err(f"Invalid PMID: {pmid!r}. A PMID must be a numeric string.")
    
        max_results = max(1, min(max_results, 50))
    
        try:
            link_resp = await _get(
                "elink.fcgi",
                {
                    "dbfrom": "pubmed",
                    "db": "pubmed",
                    "id": pmid,
                    "cmd": "neighbor_score",
                    "retmode": "json",
                },
            )
    
            link_data = link_resp.json()
            related_pmids: list[str] = []
    
            try:
                for linkset in link_data.get("linksets", []):
                    for lsdb in linkset.get("linksetdbs", []):
                        if lsdb.get("linkname") == "pubmed_pubmed":
                            related_pmids = [
                                str(lid)
                                for lid in lsdb.get("links", [])
                                if str(lid) != pmid
                        ][:max_results]
            # … excerpt truncated here; the full handler continues through main.py:585
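The parsing loop in the excerpt can be exercised against a canned elink response; a minimal sketch (the extract_related_pmids helper is illustrative, not part of main.py):

```python
def extract_related_pmids(link_data: dict, pmid: str, max_results: int = 10) -> list[str]:
    """Pull related PMIDs out of a parsed elink JSON response,
    mirroring the handler's loop over linksets/linksetdbs."""
    related: list[str] = []
    for linkset in link_data.get("linksets", []):
        for lsdb in linkset.get("linksetdbs", []):
            if lsdb.get("linkname") == "pubmed_pubmed":
                for link in lsdb.get("links", []):
                    # Assumption: neighbor_score responses may wrap each link
                    # as {"id": ..., "score": ...}; handle both shapes.
                    lid = str(link["id"]) if isinstance(link, dict) else str(link)
                    if lid != pmid:  # drop the reference article itself
                        related.append(lid)
    return related[:max_results]

sample = {
    "linksets": [{
        "linksetdbs": [{
            "linkname": "pubmed_pubmed",
            "links": ["12345", "67890", "11111"],
        }]
    }]
}
print(extract_related_pmids(sample, "12345", max_results=2))  # ['67890', '11111']
```

Note how the reference PMID is filtered out before truncation, so a self-link never consumes one of the max_results slots.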

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and discloses the specific algorithm used, return format ('ranked list with brief metadata'), and error conditions ('Returns an error message if the PMID is invalid'). It could be improved by explicitly stating the read-only/safe nature of the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections: purpose statement, algorithm explanation, Args block, and Returns block. Every sentence provides unique value without repetition, and critical information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 simple parameters and no complex nested structures, the description is complete: it covers the algorithmic approach, parameter semantics, return value structure, and error handling. The presence of return value documentation in the description compensates adequately despite the lack of formal output schema in the structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage (schema only has titles 'Pmid' and 'Max Results'), the description fully compensates by documenting both parameters in the Args section: pmid is defined as 'The PubMed ID of the reference article' and max_results includes constraints '1-50, default 10' and semantic meaning 'Number of related articles to return'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Find'), a resource ('PubMed articles'), and a relationship ('related to a given article'). It further distinguishes itself from siblings by naming NCBI's 'similar articles' algorithm (co-citation and text similarity), clearly differentiating it from general search tools like search_pubmed or retrieval tools like get_article.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the algorithm explanation (co-citation/text similarity), suggesting when to use this versus keyword searching, but lacks explicit when-to-use/when-not-to-use guidance or direct comparison to sibling alternatives like search_pubmed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/benoitleq/mcp-pubmed'

If you have feedback or need assistance with the MCP directory API, please join our Discord server