by DaniManas

get_paper_abstract

Retrieve complete abstracts for academic papers using OpenAlex IDs to support detailed research analysis and paper evaluation.

Instructions

Get the full abstract for a specific paper.

Args: paper_id: The OpenAlex paper ID (from search_papers results)

Returns: The paper's abstract text with title and metadata

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| paper_id | Yes | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • Primary MCP tool handler for 'get_paper_abstract'. It fetches the paper by ID via PaperFetcher, extracts the abstract with a helper method, and returns a formatted response containing the title, authors, year, citation count, and full abstract text.
    @mcp.tool()
    def get_paper_abstract(paper_id: str) -> str:
        """
        Get the full abstract for a specific paper.
    
        Args:
            paper_id: The OpenAlex paper ID (from search_papers results)
    
        Returns:
            The paper's abstract text with title and metadata
        """
        paper = fetcher.fetch_paper_by_id(paper_id)
    
        if "error" in paper:
            return paper["error"]
    
        abstract_text = fetcher.get_paper_abstract(paper)
    
        result = f"**{paper['title']}**\n"
        result += f"Authors: {paper['authors']}\n"
        result += f"Year: {paper['publication_year']}\n"
        result += f"Citations: {paper['cited_by_count']}\n\n"
        result += f"**Abstract:**\n{abstract_text}\n"
    
        return result
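To make the handler's formatting concrete, here is the same string assembly applied to a hypothetical paper record (the title, authors, and counts below are illustrative, not real OpenAlex data):

```python
# Hypothetical paper record shaped the way the handler expects;
# all values are made up for illustration.
paper = {
    "title": "A Survey of Example Methods",
    "authors": "A. Author, B. Writer",
    "publication_year": 2021,
    "cited_by_count": 42,
}
abstract_text = "We survey example methods."

# Same formatting steps as the handler above.
result = f"**{paper['title']}**\n"
result += f"Authors: {paper['authors']}\n"
result += f"Year: {paper['publication_year']}\n"
result += f"Citations: {paper['cited_by_count']}\n\n"
result += f"**Abstract:**\n{abstract_text}\n"
print(result)
```

The `**…**` markers are Markdown bold, so the returned string renders with the title and "Abstract:" label emphasized in clients that render Markdown.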
  • Supporting helper method in the PaperFetcher class that reconstructs the paper's abstract from OpenAlex's inverted-index format (word → positions) into readable plain text.
    def get_paper_abstract(self, paper_data: Dict) -> str:
        """
        Convert OpenAlex inverted index abstract to readable text.
    
        Args:
            paper_data: Paper dictionary containing abstract in inverted index format
    
        Returns:
            Readable abstract text
        """
        inverted_index = paper_data.get("abstract")
    
        if not inverted_index:
            return "No abstract available"
    
        word_positions = []
        for word, positions in inverted_index.items():
            for pos in positions:
                word_positions.append((pos, word))
    
        word_positions.sort(key=lambda x: x[0])
        abstract = " ".join([word for _, word in word_positions])
    
        return abstract
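To see the reconstruction concretely, here is the same position-sort logic applied to a tiny hand-made inverted index (the words and positions are invented for illustration):

```python
# Tiny hand-made inverted index in OpenAlex's format:
# each word maps to the list of positions where it occurs.
inverted_index = {
    "learning": [1, 3],
    "deep": [0],
    "enables": [2],
    "transfer": [4],
}

# Flatten to (position, word) pairs, sort by position,
# then join the words back into running text.
word_positions = []
for word, positions in inverted_index.items():
    for pos in positions:
        word_positions.append((pos, word))

word_positions.sort(key=lambda x: x[0])
abstract = " ".join(word for _, word in word_positions)
print(abstract)  # deep learning enables learning transfer
```

Note that a repeated word ("learning" here) contributes one pair per occurrence, which is why the flattening loop iterates over every position rather than every key.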
  • src/server.py:47 (registration)
    The @mcp.tool() decorator registers the get_paper_abstract function as an MCP tool.
    @mcp.tool()
  • Helper method that fetches detailed paper data from the OpenAlex API by ID; the tool handler uses it to retrieve paper metadata before extracting the abstract.
    def fetch_paper_by_id(self, paper_id: str) -> Dict:
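The listing cuts off at the signature. As a hedged sketch of what such a fetcher could look like against OpenAlex's works endpoint (`https://api.openalex.org/works/{id}`), the field mapping below is an assumption inferred from the fields the handler reads, not the project's actual code:

```python
import json
from urllib.request import urlopen
from urllib.error import URLError

# Real OpenAlex endpoint; the normalization below is a sketch,
# not the actual PaperFetcher implementation.
OPENALEX_WORKS = "https://api.openalex.org/works/"

def normalize_work(data: dict) -> dict:
    """Map a raw OpenAlex work record onto the flat fields the
    tool handler reads (title, authors, year, citations, abstract)."""
    return {
        "title": data.get("title") or "Untitled",
        "authors": ", ".join(
            a.get("author", {}).get("display_name", "Unknown")
            for a in data.get("authorships", [])
        ),
        "publication_year": data.get("publication_year"),
        "cited_by_count": data.get("cited_by_count", 0),
        # OpenAlex ships abstracts as an inverted index, not plain text,
        # which is why the helper above has to reconstruct them.
        "abstract": data.get("abstract_inverted_index"),
    }

def fetch_paper_by_id(paper_id: str) -> dict:
    """Fetch one work by ID, returning an {"error": ...} dict on failure
    so the handler's `if "error" in paper` check works."""
    try:
        with urlopen(OPENALEX_WORKS + paper_id, timeout=10) as resp:
            return normalize_work(json.load(resp))
    except URLError as exc:
        return {"error": f"Could not fetch paper {paper_id}: {exc}"}
```

Returning an error dict instead of raising matches the handler above, which checks `"error" in paper` rather than wrapping the call in try/except.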
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves abstract text with title and metadata, implying a read-only operation, but doesn't specify potential limitations like rate limits, authentication needs, or error conditions. It adds basic context but lacks depth for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by structured Args and Returns sections that efficiently document inputs and outputs. Every sentence adds value without redundancy, making it highly concise and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter) and the presence of an output schema (which handles return values), the description is largely complete. It covers purpose, parameter semantics, and output overview. However, without annotations, it could benefit from more behavioral details like error handling or data freshness, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaningful context for the single parameter by specifying that paper_id is 'The OpenAlex paper ID (from search_papers results)', which clarifies its source and format beyond the schema's basic string type. Since schema description coverage is 0%, this compensates well, though it doesn't detail exact ID formats or constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the full abstract') and resource ('for a specific paper'), distinguishing it from sibling tools like search_papers (which finds papers) or get_citations (which retrieves citations). It precisely defines the tool's function without ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates usage by specifying the paper_id should come 'from search_papers results', providing context for when to use this tool. However, it lacks explicit guidance on when not to use it or alternatives (e.g., vs. extract_claims for different content), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
