
pubmed-mcp-server

by AIAnytime

search_pubmed

Retrieve relevant PubMed article abstracts by entering a search query and specifying the maximum number of results. Simplify biomedical research with structured, easy-to-read outputs.

Instructions

Search PubMed for articles matching the query.

Args:
    query: The search term for PubMed.
    max_results: Maximum number of articles to retrieve.

Returns:
    A string containing the abstracts of found articles, separated by two newlines.
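
For orientation, the sketch below shows one way to invoke this tool from a client built on the official MCP Python SDK. It is a hedged example only: the server.py entry point, the launch command, and the argument values are assumptions for illustration, not details published by this server.

    # Hypothetical client-side call to the search_pubmed tool.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Launch command and file name are assumptions.
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "search_pubmed",
                    {"query": "endocarditis", "max_results": 3},
                )
                # The tool returns a single text block: abstracts separated
                # by two newlines, or a "No articles found" message.
                print(result.content[0].text)


    asyncio.run(main())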

Input Schema

Name          Required   Description   Default
max_results   No         -             -
query         No         -             endocarditis
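
As a rough guide, the generated schema should look something like the dict below. The field names and defaults come from the function signature shown under Implementation Reference; the exact key layout is an assumption and may vary between SDK versions.

    # Hedged reconstruction of the derived input schema; layout may differ.
    input_schema = {
        "type": "object",
        "properties": {
            "query": {"type": "string", "default": "endocarditis"},
            "max_results": {"type": "integer", "default": 10},
        },
        "required": [],  # both parameters have defaults, so neither is required
    }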

Implementation Reference

  • The main handler function for the 'search_pubmed' tool. It uses asyncio.to_thread to call the blocking fetch_pubmed_articles helper and formats the results as a string.
    @mcp.tool()
    async def search_pubmed(query: str = "endocarditis", max_results: int = 10) -> str:
        """
        Search PubMed for articles matching the query.
    
        Args:
            query: The search term for PubMed.
            max_results: Maximum number of articles to retrieve.
    
        Returns:
            A string containing the abstracts of found articles, separated by two newlines.
        """
        # Run the blocking function in a separate thread
        articles = await asyncio.to_thread(fetch_pubmed_articles, query, max_results)
        if articles:
            return "\n\n".join(articles)
        else:
            return "No articles found for the given query."
  • Helper function that performs the actual PubMed search using Bio.Entrez, fetching abstracts for the top results.
    def fetch_pubmed_articles(query: str = "endocarditis", max_results: int = 20) -> list[str]:
        """Fetch PubMed articles for the given query without saving to a file."""
        handle = Entrez.esearch(db="pubmed", term=query, retmax=max_results)
        record = Entrez.read(handle)
        handle.close()
    
        ids = record.get('IdList', [])
        articles = []
        for pmid in ids:
            time.sleep(0.5)  # Delay to avoid overwhelming the API
            try:
                handle = Entrez.efetch(db="pubmed", id=pmid, rettype="abstract", retmode="text")
                abstract = handle.read()
                handle.close()
                if abstract:
                    articles.append(abstract.strip())
            except Exception:
                continue
        return articles
  • The @mcp.tool() decorator registers the search_pubmed function as an MCP tool.
    @mcp.tool()
  • Function signature and docstring define the input schema (query: str, max_results: int) and output (str); the docstring is identical to the one in the main handler excerpt above. A sketch of the module-level setup these excerpts assume follows at the end of this list.
    async def search_pubmed(query: str = "endocarditis", max_results: int = 10) -> str:
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It says the tool 'searches' and 'retrieves' articles, implying a read-only operation, but it does not state whether the call is safe, whether authentication or rate limits apply, or how the search behaves (e.g., relevance ranking, filters). The return format is described, but key behavioral traits such as error handling and performance are omitted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by clear sections for Args and Returns. Every sentence adds value: the first states the purpose, and the others explain parameters and output. There's no redundancy or unnecessary information, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is minimally adequate. It covers the purpose, parameters, and return format, but lacks behavioral context like error handling, search constraints, or performance details. Without annotations or output schema, more completeness would be beneficial, but it meets basic requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds basic meaning for both parameters: 'query' is described as 'the search term for PubMed,' and 'max_results' as 'maximum number of articles to retrieve.' This compensates for the 0% schema description coverage by providing semantic context. However, it lacks details on query syntax, result limits, or default behaviors, keeping it at a baseline level.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Search PubMed for articles matching the query.' This specifies the verb ('search'), resource ('PubMed articles'), and scope ('matching the query'). It's not a tautology and is unambiguous. However, with no sibling tools mentioned, there's no explicit differentiation from alternatives, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks context about prerequisites, limitations, or scenarios where it's most appropriate. While it implies usage for searching PubMed articles, there's no explicit when/when-not advice or mention of other tools, leaving the agent with minimal usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
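
To make the gaps flagged above concrete, here is a hedged sketch of how the docstring could fold in behavioral, parameter, and usage detail. The rate-limit and query-syntax claims are plausible for NCBI E-utilities but are assumptions, not documented behavior of this server.

    # Hypothetical revision of the docstring, addressing the review above.
    async def search_pubmed(query: str = "endocarditis", max_results: int = 10) -> str:
        """
        Search PubMed and return article abstracts (read-only; no side effects).

        Use this for biomedical literature questions, not general web search.
        query accepts standard PubMed syntax (boolean operators, field tags
        such as [MeSH Terms]); max_results caps how many abstracts are
        fetched. Each abstract is retrieved with a 0.5 s delay to stay within
        NCBI E-utilities rate limits, so large max_results values are
        proportionally slower. No authentication is required. Returns the
        abstracts joined by two newlines, or the message
        "No articles found for the given query." when nothing matches.
        """
        ...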


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AIAnytime/Awesome-MCP-Server'
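
The same request can be made from Python; this sketch assumes the requests package and that the endpoint returns JSON, which is not confirmed here.

    # Equivalent of the curl call above.
    import requests

    resp = requests.get("https://glama.ai/api/mcp/v1/servers/AIAnytime/Awesome-MCP-Server")
    resp.raise_for_status()
    print(resp.json())  # assumes a JSON response body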

If you have feedback or need assistance with the MCP directory API, please join our Discord server.