# get_yesterday_papers
Fetch yesterday's HuggingFace daily papers to stay updated on AI research developments.
## Instructions

Get yesterday's HuggingFace daily papers.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
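Because the schema declares no properties, a client invokes this tool with an empty `arguments` object. A minimal sketch of the JSON-RPC payload an MCP client would send (the payload shape follows the MCP `tools/call` convention; the `id` value is arbitrary):

```python
import json

# Hypothetical tools/call request for this tool; since the input schema
# is an empty object, "arguments" is simply {}.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_yesterday_papers", "arguments": {}},
}
print(json.dumps(request, indent=2))
```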
## Implementation Reference
- **scraper.py:44-46** (handler) — Core handler function that calculates yesterday's date and fetches papers using the generic `get_papers_by_date` method.

  ```python
  def get_yesterday_papers(self, fetch_details: bool = True) -> List[Dict]:
      yesterday = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d")
      return self.get_papers_by_date(yesterday, fetch_details)
  ```
- **main.py:163-191** (handler) — MCP server `@server.call_tool()` handler implementation that calls the scraper and formats the response as text content.

  ```python
  elif name == "get_yesterday_papers":
      papers = scraper.get_yesterday_papers()
      yesterday = (datetime.now() - timedelta(days=1)).strftime("%Y-%m-%d")
      if not papers:
          return [
              types.TextContent(
                  type="text",
                  text=f"No papers found for yesterday ({yesterday}). There might be no papers published that day or a network issue."
              )
          ]
      return [
          types.TextContent(
              type="text",
              text=f"Yesterday's Papers ({yesterday}) - Found {len(papers)} papers:\n\n"
              + "\n".join([
                  f"Title: {paper['title']}\n"
                  f"Authors: {', '.join(paper['authors'])}\n"
                  f"Abstract: {paper['abstract']}\n"
                  f"URL: {paper['url']}\n"
                  f"PDF: {paper['pdf_url']}\n"
                  f"Votes: {paper['votes']}\n"
                  f"Submitted by: {paper['submitted_by']}\n"
                  + "-" * 50
                  for paper in papers
              ])
          )
      ]
  ```
- **main.py:83-90** (registration) — Tool registration in `@server.list_tools()` defining the tool name, description, and input schema.

  ```python
  types.Tool(
      name="get_yesterday_papers",
      description="Get yesterday's HuggingFace daily papers",
      inputSchema={
          "type": "object",
          "properties": {},
      },
  ),
  ```
- **main.py:86-89** (schema) — Input schema for the `get_yesterday_papers` tool, specifying an empty object (no required parameters).

  ```python
  inputSchema={
      "type": "object",
      "properties": {},
  },
  ```
- **scraper.py:19-39** (helper) — Helper method that `get_yesterday_papers` delegates to; it fetches and parses papers from HuggingFace for a specific date and optionally fetches per-paper details.

  ```python
  def get_papers_by_date(self, date: str, fetch_details: bool = True) -> List[Dict]:
      url = f"{self.base_url}/{date}"
      try:
          response = self.session.get(url)
          response.raise_for_status()
          papers = self._parse_papers(response.text)
          if fetch_details and papers:
              # Fetch detailed info for every paper, including full author names
              for i, paper in enumerate(papers):
                  if paper.get('url'):
                      details = self._fetch_paper_details(paper['url'])
                      if details:
                          paper.update(details)
                      time.sleep(1)  # avoid sending requests too quickly
          return papers
      except requests.RequestException as e:
          logging.error(f"Failed to fetch papers for {date}: {e}")
          return []
  ```
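The date arithmetic that both handlers repeat can be isolated into a small, testable helper. A minimal sketch — the `yesterday_str` name and its `now` parameter are assumptions introduced here for testability, not part of the original code:

```python
from datetime import datetime, timedelta


def yesterday_str(now=None):
    """Return yesterday's date as YYYY-MM-DD, the format used in the
    HuggingFace daily-papers URL path (e.g. /papers/date/2024-02-29)."""
    now = now or datetime.now()  # allow injecting a fixed "now" in tests
    return (now - timedelta(days=1)).strftime("%Y-%m-%d")


print(yesterday_str(datetime(2024, 3, 1)))  # → 2024-02-29 (leap-year aware)
```

Passing an explicit `now` makes edge cases (month boundaries, leap years) easy to verify without depending on the wall clock.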