# semantic_scholar_bulk_papers
Retrieve multiple academic papers in a single request to efficiently access Semantic Scholar's database of 200M+ papers for research analysis.
## Instructions
Retrieve multiple papers in a single request (max 500).
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| params | Yes | Tool input as a `BulkPaperInput` object: `paper_ids` (list of paper IDs, 1–500 entries) and `response_format` (output format, defaults to JSON) | — |
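
An example `params` payload conforming to this schema. The paper IDs are illustrative; the Semantic Scholar batch endpoint accepts both raw paper IDs and prefixed forms such as `ARXIV:` or `DOI:`, and the `response_format` string is assumed here to be the lowercase enum name:

```json
{
  "params": {
    "paper_ids": [
      "649def34f8be52c8b66281af98ae884c09aef38b",
      "ARXIV:2106.15928"
    ],
    "response_format": "json"
  }
}
```
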
## Implementation Reference
- **Input model** (`BulkPaperInput`): Pydantic model defining the input schema for the `semantic_scholar_bulk_papers` tool, a list of `paper_ids` (up to 500) and a `response_format`.

  ```python
  class BulkPaperInput(BaseModel):
      model_config = ConfigDict(str_strip_whitespace=True, extra="forbid")

      paper_ids: List[str] = Field(
          ...,
          description="List of paper IDs (max 500)",
          min_length=1,
          max_length=500,
      )
      response_format: ResponseFormat = Field(
          default=ResponseFormat.JSON,
          description="Output format",
      )
  ```
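
  A minimal usage sketch of the validation this model enforces, assuming `BulkPaperInput` and `ResponseFormat` can be imported from `semantic_scholar_mcp.server` (the module referenced in the registration entry below); it is illustrative rather than part of the server code:

  ```python
  from pydantic import ValidationError

  # Assumed import path, based on the server.py location referenced below.
  from semantic_scholar_mcp.server import BulkPaperInput, ResponseFormat

  # Defaults: response_format falls back to ResponseFormat.JSON.
  ok = BulkPaperInput(paper_ids=["ARXIV:2106.15928"])
  assert ok.response_format is ResponseFormat.JSON

  # The 500-ID ceiling is enforced by the field's max_length constraint.
  try:
      BulkPaperInput(paper_ids=["id"] * 501)
  except ValidationError as exc:
      print(exc.errors()[0]["type"])  # 'too_long'

  # extra="forbid" rejects unexpected keys instead of silently ignoring them.
  try:
      BulkPaperInput(paper_ids=["id"], fields=["title"])
  except ValidationError as exc:
      print(exc.errors()[0]["type"])  # 'extra_forbidden'
  ```
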
- **Tool handler** (`get_bulk_papers`): the handler function decorated with `@mcp.tool`, implementing batch retrieval of papers from the Semantic Scholar API via the `POST /paper/batch` endpoint and handling JSON/Markdown output formatting.

  ```python
  @mcp.tool(name="semantic_scholar_bulk_papers")
  async def get_bulk_papers(params: BulkPaperInput) -> str:
      """Retrieve multiple papers in a single request (max 500)."""
      logger.info(f"Bulk retrieval: {len(params.paper_ids)} papers")
      response = await _make_request(
          "POST",
          "paper/batch",
          params={"fields": ",".join(PAPER_FIELDS)},
          json_body={"ids": params.paper_ids},
      )
      papers = response if isinstance(response, list) else response.get("data", [])
      if params.response_format == ResponseFormat.JSON:
          return json.dumps(
              {"requested": len(params.paper_ids), "retrieved": len(papers), "papers": papers},
              indent=2,
          )
      lines = [
          "## Bulk Retrieval",
          f"**Requested:** {len(params.paper_ids)} | **Retrieved:** {len(papers)}",
          "",
      ]
      for paper in papers:
          if paper:
              lines.append(_format_paper_markdown(paper))
      return "\n".join(lines)
  ```
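
  The helpers `_make_request`, `PAPER_FIELDS`, and `_format_paper_markdown` are defined elsewhere in server.py and are not shown here. Below is a minimal sketch of what `_make_request` might look like, assuming an `httpx`-based client targeting the public Semantic Scholar Graph API; the helper's real signature, retry logic, and error handling may differ:

  ```python
  import os
  from typing import Any

  import httpx

  # Public Semantic Scholar Graph API base URL.
  S2_API_BASE = "https://api.semanticscholar.org/graph/v1"


  async def _make_request(
      method: str,
      path: str,
      params: dict[str, Any] | None = None,
      json_body: dict[str, Any] | None = None,
  ) -> Any:
      # Optional API key raises rate limits; the env var name here is an assumption.
      headers = {}
      api_key = os.environ.get("SEMANTIC_SCHOLAR_API_KEY")
      if api_key:
          headers["x-api-key"] = api_key
      async with httpx.AsyncClient(timeout=30.0) as client:
          response = await client.request(
              method,
              f"{S2_API_BASE}/{path}",
              params=params,
              json=json_body,
              headers=headers,
          )
          response.raise_for_status()
          return response.json()
  ```

  For `POST /paper/batch` the API returns a JSON array of paper objects, which is why the handler first checks `isinstance(response, list)` before falling back to a `data` key.
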
- **Registration** (`src/semantic_scholar_mcp/server.py:397`): MCP tool registration via the FastMCP decorator.

  ```python
  @mcp.tool(name="semantic_scholar_bulk_papers")
  ```
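
  For context, a minimal FastMCP server skeleton showing where such a registration typically lives; the import path, server name, stand-in model, and transport below are assumptions for illustration, not details taken from server.py:

  ```python
  from mcp.server.fastmcp import FastMCP
  from pydantic import BaseModel

  mcp = FastMCP("semantic-scholar")  # server name assumed for illustration


  class BulkPaperInput(BaseModel):  # abbreviated stand-in for the real model shown above
      paper_ids: list[str]


  @mcp.tool(name="semantic_scholar_bulk_papers")
  async def get_bulk_papers(params: BulkPaperInput) -> str:
      # Real handler logic omitted; see the tool handler entry above.
      return "{}"


  if __name__ == "__main__":
      mcp.run()  # defaults to the stdio transport
  ```
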