by DaniManas

compare_papers

Analyze research papers to identify contradictions and consensus by comparing claims across multiple academic sources.

Instructions

Compare claims across multiple papers to find contradictions and consensus.

Args: paper_ids: Comma-separated list of OpenAlex paper IDs (e.g., "W123,W456,W789")

Returns: Abstracts from all papers for comparison analysis

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| paper_ids | Yes | | |

Output Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| result | Yes | | |
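For concreteness, a minimal sketch of the argument payload a client might send, mirroring the tool's own parsing and validation; the IDs are placeholders in the documented format.

```python
# Hypothetical argument payload for compare_papers; "W123"/"W456" are
# placeholder OpenAlex work IDs in the documented comma-separated format.
arguments = {"paper_ids": "W123, W456"}

# Mirror the handler's parsing: split on commas and strip whitespace.
ids = [pid.strip() for pid in arguments["paper_ids"].split(",")]

# The handler rejects fewer than 2 or more than 5 IDs.
assert 2 <= len(ids) <= 5
print(ids)  # ['W123', 'W456']
```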

Implementation Reference

  • The core handler for the 'compare_papers' tool, registered via @mcp.tool(). It parses the comma-separated paper IDs, validates the count (2-5 papers), fetches each paper's details and abstract via PaperFetcher, and builds a comparison report (title, authors, year, citation count, and abstract per paper) followed by instructions for analyzing contradictions, consensus, and gaps.
    ```python
    @mcp.tool()
    def compare_papers(paper_ids: str) -> str:
        """
        Compare claims across multiple papers to find contradictions and consensus.

        Args:
            paper_ids: Comma-separated list of OpenAlex paper IDs (e.g., "W123,W456,W789")

        Returns:
            Abstracts from all papers for comparison analysis
        """
        ids = [pid.strip() for pid in paper_ids.split(",")]

        if len(ids) < 2:
            return "Error: Please provide at least 2 paper IDs separated by commas"

        if len(ids) > 5:
            return "Error: Maximum 5 papers can be compared at once"

        papers_data = []
        for paper_id in ids:
            paper = fetcher.fetch_paper_by_id(paper_id)

            if "error" in paper:
                papers_data.append(f"**Error fetching {paper_id}:** {paper['error']}\n")
                continue

            abstract_text = fetcher.get_paper_abstract(paper)

            paper_info = f"**Paper {len(papers_data) + 1}:**\n"
            paper_info += f"Title: {paper['title']}\n"
            paper_info += f"Authors: {paper['authors']}\n"
            paper_info += f"Year: {paper['publication_year']}\n"
            paper_info += f"Citations: {paper['cited_by_count']}\n\n"
            paper_info += f"Abstract: {abstract_text}\n"
            paper_info += f"{'-' * 80}\n\n"

            papers_data.append(paper_info)

        result = f"**Comparing {len(papers_data)} papers:**\n\n"
        result += "".join(papers_data)
        result += "\n**Analysis Instructions:**\n"
        result += "Please analyze these papers and identify:\n"
        result += "1. **Contradictions:** Where do the papers disagree or present conflicting findings?\n"
        result += "2. **Consensus:** What do the papers agree on?\n"
        result += "3. **Gaps:** What questions remain unanswered or areas need more research?\n"

        return result
    ```
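The handler depends on a module-level `fetcher` (a PaperFetcher instance) that is not shown above. A minimal stand-in, with method names and return shapes inferred only from the field accesses in the handler, not from the project's actual source:

```python
class PaperFetcher:
    """Stand-in for the real PaperFetcher; the method names and return
    shapes are inferred from how compare_papers uses them."""

    def fetch_paper_by_id(self, paper_id: str) -> dict:
        # The real implementation presumably queries the OpenAlex API; this
        # stub returns the keys the handler reads (a failed lookup would
        # instead return a dict containing an "error" key).
        return {
            "title": f"Stub title for {paper_id}",
            "authors": "A. Author",
            "publication_year": 2024,
            "cited_by_count": 0,
            "abstract_inverted_index": None,
        }

    def get_paper_abstract(self, paper: dict) -> str:
        # OpenAlex serves abstracts as an inverted index, which the real
        # method presumably reconstructs; the stub only signals absence.
        if not paper.get("abstract_inverted_index"):
            return "No abstract available"
        return ""

fetcher = PaperFetcher()
paper = fetcher.fetch_paper_by_id("W123")
print(paper["title"])  # Stub title for W123
```

With a stub like this in place, the handler above runs end to end, which is useful for testing the report formatting without network access.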
  • src/server.py:107 (registration)
    The @mcp.tool() decorator registers the compare_papers function as an MCP tool.
    ```python
    @mcp.tool()
    ```
  • The docstring provides the tool schema, describing the input (paper_ids as a comma-separated string) and the output (formatted abstracts for comparison). Type hints: paper_ids: str -> str.
    ```python
    """
    Compare claims across multiple papers to find contradictions and consensus.

    Args:
        paper_ids: Comma-separated list of OpenAlex paper IDs (e.g., "W123,W456,W789")

    Returns:
        Abstracts from all papers for comparison analysis
    """
    ```
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool compares claims to find contradictions and consensus, but doesn't describe how this analysis is performed, what the output format entails beyond abstracts, or any limitations (e.g., number of papers, processing time). For a tool with no annotation coverage, this is a significant gap in transparency.
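One way to close this gap is to declare behavioral hints alongside the description. The sketch below shows such hints as a plain dict using the MCP ToolAnnotations vocabulary; whether this server's framework accepts them in this form is an assumption, not the project's actual code.

```python
# Hypothetical behavioral annotations for compare_papers; hint names follow
# the MCP ToolAnnotations vocabulary, values reflect the handler's behavior.
annotations = {
    "readOnlyHint": True,     # only reads from OpenAlex, mutates nothing
    "destructiveHint": False,  # no irreversible effects
    "idempotentHint": True,   # the same IDs yield the same report
    "openWorldHint": True,    # depends on an external web API
}
print(sorted(annotations))
```

Even when a framework does not surface annotations, restating these facts in the description ("read-only; fetches 2-5 papers from the OpenAlex API") would address the disclosure gap.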

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first, followed by structured sections for Args and Returns. Each sentence earns its place, though the Returns section could be more informative given the output schema exists.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has one parameter with 0% schema coverage and an output schema exists, the description is partially complete. It clarifies the parameter format but lacks behavioral details (e.g., how comparison is done, limitations). The output schema reduces the need to explain return values, but more context on the tool's operation would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It adds value by explaining that 'paper_ids' is a 'comma-separated list of OpenAlex paper IDs' with an example ('W123,W456,W789'), which clarifies the format beyond the schema's basic string type. However, it doesn't detail constraints like ID validation or list length limits, leaving some semantics undocumented.
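A hedged sketch of what a more descriptive input schema could look like; the description text and the regex pattern are illustrative, not taken from the server. The pattern encodes the documented 2-5 ID limit as one ID plus one to four comma-separated repeats.

```python
import re

# Hypothetical enriched JSON schema for paper_ids; the pattern enforces
# the 2-5 ID limit and the "W"-prefixed OpenAlex work ID format.
improved_schema = {
    "type": "object",
    "properties": {
        "paper_ids": {
            "type": "string",
            "description": "Comma-separated OpenAlex work IDs (2-5), e.g. 'W123,W456'",
            "pattern": r"^W\d+(,\s*W\d+){1,4}$",
        }
    },
    "required": ["paper_ids"],
}

pattern = improved_schema["properties"]["paper_ids"]["pattern"]
print(bool(re.fullmatch(pattern, "W123,W456")))  # True: two IDs
print(bool(re.fullmatch(pattern, "W123")))       # False: below the minimum
```

Encoding the constraints in the schema lets clients reject malformed input before the call, instead of discovering the 2-5 limit from the tool's error strings.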

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Compare claims across multiple papers to find contradictions and consensus.' This specifies both the verb (compare) and resource (papers/claims). However, it doesn't explicitly differentiate from sibling tools like 'extract_claims' or 'find_research_gaps' beyond the comparison focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'extract_claims' or 'find_research_gaps' is provided. The description implies usage for comparison analysis but doesn't specify scenarios, prerequisites, or exclusions. This leaves the agent with minimal contextual direction.
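A sketch of usage guidance that could be appended to the description; the sibling tool names come from this review, and their behavior is assumed rather than verified.

```python
# Hypothetical "when to use" note; extract_claims and find_research_gaps
# are the sibling tools named in the review, not verified against the server.
usage_note = (
    "Use compare_papers when you already hold 2-5 OpenAlex work IDs and need "
    "a side-by-side reading; prefer extract_claims for a single paper, and "
    "find_research_gaps to surface open questions across a topic."
)
print("2-5" in usage_note)  # True
```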

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/DaniManas/ResearchMCP'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.