Glama
renyumeng1

mcp-scholar

paper_detail

Retrieve detailed information about a specific paper using its unique ID. Integrates with the MCP Scholar server to support research analysis and data extraction from Google Scholar.

Instructions

Get detailed information about a paper

Args:
    paper_id: Paper ID

Returns:
    Dict: Detailed paper information

Input Schema

Name | Required | Description | Default
paper_id | Yes | |
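The tool takes a single required string argument. A few hypothetical examples of `paper_id` values, one per ID form handled by the dispatch logic in the implementation below (the IDs themselves are illustrative, not guaranteed to resolve):

```python
# Hypothetical paper_id values; each form is routed differently by get_paper_detail.
examples = [
    "10.18653/v1/N19-1423",  # a DOI (starts with "10.")
    "W2741809807",           # an OpenAlex work ID (starts with "W")
    "arxiv:1810.04805",      # an arXiv ID with the "arxiv:" prefix
    "2741809807",            # bare numeric ID; "W" is prepended and it is tried as OpenAlex
]
```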

Implementation Reference

  • The MCP tool handler for 'paper_detail' that takes a paper_id, calls the helper get_paper_detail, processes the result (adding URLs), and returns success/error dict.
    @mcp.tool()
    async def paper_detail(ctx: Context, paper_id: str) -> Dict[str, Any]:
        """
        Get detailed information about a paper

        Args:
            paper_id: Paper ID

        Returns:
            Dict: Detailed paper information
        """
        try:
            # Progress display removed; log instead
            logger.info(f"Fetching details for paper with ID {paper_id}...")
            detail = await get_paper_detail(paper_id)

            if detail:
                # Make sure URL information is returned
                if "url" not in detail and detail.get("pub_url"):
                    detail["url"] = detail["pub_url"]

                # If a DOI is present, add a DOI URL
                if "doi" in detail and "doi_url" not in detail:
                    detail["doi_url"] = f"https://doi.org/{detail['doi']}"

                return {"status": "success", "detail": detail}
            else:
                # Error notification removed; log instead
                logger.warning(f"No paper found with ID {paper_id}")
                return {"status": "error", "message": f"No paper found with ID {paper_id}"}
        except Exception as e:
            # Error notification removed; log instead
            logger.error(f"Failed to fetch paper details: {str(e)}", exc_info=True)
            return {"status": "error", "message": "The paper detail service is temporarily unavailable", "error": str(e)}
  • Core helper function that queries the OpenAlex API to fetch detailed paper information based on paper_id (supports DOI, OpenAlex ID, arXiv), processes abstract from inverted index, extracts authors, venue, DOI, PDF, concepts.
    async def get_paper_detail(paper_id: str) -> Optional[Dict[str, Any]]:
        """
        Fetch paper details via the OpenAlex API

        Args:
            paper_id: Paper ID; can be an OpenAlex ID, DOI, or arXiv ID

        Returns:
            Dict: Detailed paper information
        """
        try:
            # Set the email parameter (OpenAlex "polite pool" request)
            email_param = f"?mailto={EMAIL}" if EMAIL else ""

            # Determine which ID type we were given
            if paper_id.startswith("10."):  # Looks like a DOI
                api_url = f"{OPENALEX_API}/works/doi:{paper_id}{email_param}"
            elif paper_id.startswith("W"):  # OpenAlex ID
                api_url = f"{OPENALEX_API}/works/{paper_id}{email_param}"
            elif paper_id.lower().startswith("arxiv:"):  # arXiv ID
                api_url = f"{OPENALEX_API}/works/arxiv:{paper_id.replace('arxiv:', '')}{email_param}"
            else:  # Fall back to treating it as an OpenAlex ID without the "W" prefix
                api_url = f"{OPENALEX_API}/works/W{paper_id}{email_param}"

            async with httpx.AsyncClient(timeout=10.0) as client:
                response = await client.get(api_url)

                if response.status_code == 200:
                    data = response.json()

                    # Extract the paper details
                    result = {
                        "title": data.get("title", "Unknown title"),
                        "abstract": "",  # Empty by default; filled in below
                        "citations": data.get("cited_by_count", 0),
                        "year": data.get("publication_year", "Unknown year"),
                        "venue": "",  # Extracted from journal/conference info below
                        "paper_id": data.get("id", "").replace("https://openalex.org/", ""),
                        "url": data.get("id", ""),
                    }

                    # Process the abstract (OpenAlex stores abstracts as an inverted index)
                    if data.get("abstract_inverted_index"):
                        result["abstract"] = convert_inverted_index_to_text(
                            data.get("abstract_inverted_index", {})
                        )

                    # Process author information
                    authors = data.get("authorships", [])
                    author_names = []
                    for author in authors:
                        if author.get("author", {}).get("display_name"):
                            author_names.append(author["author"]["display_name"])
                    result["authors"] = ", ".join(author_names)

                    # Process journal/conference information
                    if data.get("host_venue", {}).get("display_name"):
                        result["venue"] = data["host_venue"]["display_name"]

                    # Process DOI information
                    if data.get("doi"):
                        result["doi"] = data["doi"]
                        result["doi_url"] = f"https://doi.org/{result['doi']}"

                    # Add a PDF link if one is available
                    if data.get("open_access", {}).get("oa_url"):
                        result["pdf_url"] = data["open_access"]["oa_url"]

                    # Add key concepts
                    if data.get("concepts"):
                        concepts = []
                        for concept in data["concepts"]:
                            if (
                                concept.get("display_name")
                                and concept.get("score", 0) > 0.5
                            ):  # Only include highly relevant concepts
                                concepts.append(concept["display_name"])
                        if concepts:
                            result["concepts"] = ", ".join(concepts)

                    return result
                else:
                    print(f"Error fetching paper details: {response.status_code} - {response.text}")
                    return None

        except Exception as e:
            print(f"Error while fetching paper details: {str(e)}")
            return None
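The helper above calls `convert_inverted_index_to_text`, whose implementation is not shown on this page. A minimal sketch of one way to rebuild plain text from OpenAlex's `{word: [positions]}` inverted-index format (the function name matches the call above; the body is an assumption, not the server's actual code):

```python
from typing import Dict, List

def convert_inverted_index_to_text(inverted_index: Dict[str, List[int]]) -> str:
    """Rebuild abstract text from an OpenAlex abstract_inverted_index.

    OpenAlex stores abstracts as {word: [positions]}; emitting each word
    at its recorded positions and joining in order recovers the text.
    """
    placed = [(pos, word) for word, positions in inverted_index.items() for pos in positions]
    placed.sort()  # order words by their position in the original abstract
    return " ".join(word for _, word in placed)
```

For example, `{"deep": [0], "learning": [1], "is": [2], "fun": [3]}` yields `"deep learning is fun"`.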

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/renyumeng1/mcp_scholar'
