get_related

Find related biomedical articles from PubMed using NCBI's relevance ranking algorithm to expand research exploration.

Instructions

Get related articles for a given PubMed article (PMID), ranked by relevance using NCBI's algorithm.

Input Schema

Name         Required  Description                             Default
pmid         Yes       PubMed ID to find related articles for  n/a
max_results  No        Maximum results                         10
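
An example argument payload (the PMID is a placeholder); max_results may be omitted to fall back to its schema default of 10:

    { "pmid": "12345678", "max_results": 5 }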

Implementation Reference

  • The getRelated handler function implements the core logic: it calls NCBI's elink API with the 'pubmed_pubmed' link type to find articles related to the given PMID, filters the query article itself out of the link list, fetches article details via efetch, and returns JSON-formatted results with related-article metadata. A sketch of the elink response shape this code navigates appears after this list.
    export async function getRelated(args: z.infer<typeof getRelatedSchema>): Promise<string> {
      const result = await client.elink([args.pmid], "pubmed_pubmed") as {
        linksets?: Array<{ linksetdbs?: Array<{ links?: string[] }> }>;
      };
    
      const links = result.linksets?.[0]?.linksetdbs?.[0]?.links || [];
    
      if (links.length === 0) {
        return JSON.stringify({ pmid: args.pmid, related_count: 0, related_articles: [] }, null, 2);
      }
    
      // Exclude the query article itself (elink usually includes it in the link list)
      const fetchIds = links.filter((id: string) => id !== args.pmid).slice(0, args.max_results);
      if (fetchIds.length === 0) {
        return JSON.stringify({ pmid: args.pmid, related_count: 0, related_articles: [] }, null, 2);
      }
    
      const xml = await client.efetch(fetchIds);
      const articles = parseArticles(xml);
    
      return JSON.stringify({
        pmid: args.pmid,
        related_count: links.length - 1,
        showing: articles.length,
        related_articles: articles.map(formatArticleSummary),
      }, null, 2);
    }
  • The getRelatedSchema defines the input validation for the tool, requiring a pmid (PubMed ID) string and accepting an optional max_results number between 1 and 100 with a default of 10; a parse example showing the default follows the list.
    export const getRelatedSchema = z.object({
      pmid: z.string().describe("PubMed ID to find related articles for"),
      max_results: z.number().min(1).max(100).default(10).describe("Maximum results"),
    });
  • src/index.ts:52-59 (registration)
    The get_related tool is registered with the MCP server, binding the schema to the handler function with a description explaining that it finds related articles ranked by relevance using NCBI's algorithm; a client-side invocation sketch follows the list.
    server.tool(
      "get_related",
      "Get related articles for a given PubMed article (PMID), ranked by relevance using NCBI's algorithm.",
      getRelatedSchema.shape,
      async (args) => ({
        content: [{ type: "text", text: await getRelated(getRelatedSchema.parse(args)) }],
      })
    );
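
For context, the nested destructuring in getRelated follows the shape of an E-utilities elink JSON response. An abbreviated sketch (IDs are placeholders):

    {
      "linksets": [
        {
          "dbfrom": "pubmed",
          "ids": ["12345678"],
          "linksetdbs": [
            { "dbto": "pubmed", "linkname": "pubmed_pubmed", "links": ["12345678", "23456789", "34567890"] }
          ]
        }
      ]
    }

Because Zod applies defaults at parse time, omitting max_results yields the documented default of 10 (the PMID is hypothetical):

    // Omitting max_results lets Zod fill in the default of 10
    const parsed = getRelatedSchema.parse({ pmid: "12345678" });
    // parsed: { pmid: "12345678", max_results: 10 }

And a minimal sketch of invoking the registered tool from an MCP client, assuming an already-connected Client from the TypeScript MCP SDK (variable names and the PMID are illustrative, not part of this repository):

    // Assumes mcpClient is a connected @modelcontextprotocol/sdk Client
    const result = await mcpClient.callTool({
      name: "get_related",
      arguments: { pmid: "12345678", max_results: 5 },
    });
    // result.content[0].text holds the JSON string produced by getRelated
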
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the ranking algorithm ('NCBI's algorithm'), which adds useful context beyond basic functionality, but it does not cover other behavioral aspects such as rate limits, error conditions, or response format. The description is adequate but lacks depth for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, and key behavioral trait (ranking algorithm). It is front-loaded with essential information and contains no redundant or unnecessary details, making it highly concise and effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is minimally complete. It covers the core functionality and ranking behavior but lacks details on output format, error handling, or integration with sibling tools. Without an output schema, the agent must infer return values, which is a gap in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: both parameters ('pmid' and 'max_results') are documented in the schema itself. The tool description adds no further semantic information beyond the schema, such as the expected format of 'pmid' or the implications of different 'max_results' values, but this meets the baseline for tools with high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get related articles') and resources ('PubMed article (PMID)'), and distinguishes it from siblings by specifying the ranking algorithm ('NCBI's algorithm'). It explicitly identifies the target resource type and the ranking methodology, making it distinct from tools like 'get_article' or 'search_articles'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning 'PubMed article (PMID)' and 'ranked by relevance', suggesting it should be used when seeking related content for a specific article. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_citations' or 'search_articles', and does not specify prerequisites or exclusions, leaving some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/PetrefiedThunder/mcp-pubmed'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.