# paperswithcode_trending
Fetch daily curated AI papers from Hugging Face's daily_papers feed. Filter by keyword, date range, and sort by upvotes to find relevant research.
## Instructions
Daily curated AI papers feed (now backed by Hugging Face's daily_papers — Papers with Code API was sunset after the 2024 HF acquisition). Empty query returns the newest curated papers. Search is client-side filtering over the daily-papers stream.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Keyword filter, matched case-insensitively against title and summary; max 300 characters. Empty returns the newest curated papers. | |
| days | No | If set, drop results published more than N days ago (1–3650). | |
| sort_by | No | `upvotes` (community votes), `comments` (discussion volume), or `recent` (publish time). | upvotes |
| max_results | No | Maximum number of papers to return (1–50). | 20 |
| response_format | No | Output format (`ResponseFormat`). | markdown |
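The constraints above come from the tool's Pydantic input model (`PwCInput`, shown in the implementation reference). A plain-Python sketch of the same validation rules, with a hypothetical `validate_params` helper standing in for the Pydantic model:

```python
ALLOWED_SORTS = {"upvotes", "comments", "recent"}


def validate_params(query=None, days=None, sort_by="upvotes", max_results=20):
    """Hypothetical stand-in for the tool's Pydantic validation (PwCInput)."""
    if query is not None and len(query) > 300:
        raise ValueError("query must be at most 300 characters")
    if days is not None and not 1 <= days <= 3650:
        raise ValueError("days must be between 1 and 3650")
    if sort_by not in ALLOWED_SORTS:
        raise ValueError(f"sort_by must be one of {sorted(ALLOWED_SORTS)}")
    if not 1 <= max_results <= 50:
        raise ValueError("max_results must be between 1 and 50")
    return {"query": query, "days": days, "sort_by": sort_by, "max_results": max_results}


print(validate_params(query="diffusion", days=7))
```

Calling it with no arguments yields the documented defaults (`sort_by="upvotes"`, `max_results=20`); out-of-range values raise `ValueError`, as the Pydantic model would raise a validation error.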
## Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | Formatted results as a single string (markdown or JSON, per `response_format`). | |
## Implementation Reference
- trends_mcp.py:701-716 (registration): Registration of the paperswithcode_trending tool via the `@_maybe_tool` decorator, gating it behind the 'paperswithcode' source.
```python
@_maybe_tool(
    source="paperswithcode",
    name="paperswithcode_trending",
    description=(
        "Daily curated AI papers feed (now backed by Hugging Face's daily_papers — "
        "Papers with Code API was sunset after the 2024 HF acquisition). "
        "Empty query returns the newest curated papers. "
        "Search is client-side filtering over the daily-papers stream."
    ),
    annotations={
        "readOnlyHint": True,
        "destructiveHint": False,
        "openWorldHint": True,
        "idempotentHint": False,
    },
)
```

- trends_mcp.py:717-770 (handler): Main handler function that fetches the Hugging Face daily_papers feed, filters client-side by query/days, sorts by upvotes/comments/recent, and returns formatted results.
```python
async def paperswithcode_trending(
    query: str | None = None,
    days: int | None = None,
    sort_by: str = "upvotes",
    max_results: int = 20,
    response_format: ResponseFormat = ResponseFormat.MARKDOWN,
) -> str:
    try:
        args = PwCInput(
            query=query,
            days=days,
            sort_by=sort_by,
            max_results=max_results,
            response_format=response_format,
        )
        # daily_papers ignores q/search params; fetch the latest stream and
        # filter client-side. The endpoint returns up to 50 entries by default.
        raw = await _http_get_json(HF_DAILY_PAPERS_API, ttl=TTL_TRENDING)
        if not isinstance(raw, list):
            raw = []
        cutoff = _utc_now() - timedelta(days=args.days) if args.days else None
        q_lower = args.query.lower() if args.query else None
        # First pass: filter by topic + date window (don't truncate yet — sort below).
        candidates: list[dict[str, Any]] = []
        for entry in raw:
            paper = entry.get("paper") or {}
            title = entry.get("title") or paper.get("title") or ""
            summary = entry.get("summary") or paper.get("summary") or ""
            if q_lower and q_lower not in title.lower() and q_lower not in summary.lower():
                continue
            pub = entry.get("publishedAt") or paper.get("publishedAt") or ""
            if cutoff and pub:
                try:
                    pub_dt = datetime.fromisoformat(pub.replace("Z", "+00:00"))
                    if pub_dt.tzinfo is None:
                        pub_dt = pub_dt.replace(tzinfo=timezone.utc)
                except ValueError:
                    continue
                if pub_dt < cutoff:
                    continue
            candidates.append(_hf_paper_to_item(entry))
        # Sort by chosen popularity signal.
        if args.sort_by == "upvotes":
            candidates.sort(key=lambda x: (x.get("upvotes") or 0, x.get("num_comments") or 0), reverse=True)
        elif args.sort_by == "comments":
            candidates.sort(key=lambda x: (x.get("num_comments") or 0, x.get("upvotes") or 0), reverse=True)
        # `recent` keeps the API's natural order (newest first).
        items = candidates[: args.max_results]
        suffix_bits: list[str] = []
        if args.days:
            suffix_bits.append(f"최근 {args.days}일")
        suffix_bits.append(f"sort={args.sort_by}")
        suffix = " · " + " · ".join(suffix_bits)
        header = f"Daily AI Papers (HF) `{args.query or '최신'}`{suffix} ({len(items)}건)"
        return _format(items, args.response_format, render_md=lambda x: _render_pwc_md(x, header))
    except Exception as e:
        return _handle_error(e, "paperswithcode_trending")
```

- trends_mcp.py:647-657 (schema): Pydantic schema for paperswithcode_trending input parameters: query, days, sort_by, max_results, response_format.
```python
class PwCInput(BaseModel):
    model_config = ConfigDict(str_strip_whitespace=True, extra="forbid")
    query: str | None = Field(None, max_length=300)
    days: int | None = Field(None, ge=1, le=3650, description="If set, drop results published more than N days ago.")
    sort_by: str = Field(
        "upvotes",
        pattern=r"^(upvotes|comments|recent)$",
        description="upvotes (community votes), comments (discussion volume), recent (publish time)",
    )
    max_results: int = Field(20, ge=1, le=50)
    response_format: ResponseFormat = ResponseFormat.MARKDOWN
```

- trends_mcp.py:677-698 (helper): Helper function mapping Hugging Face daily_papers API entries to the common paper dict shape used by the tool.
```python
def _hf_paper_to_item(entry: dict[str, Any]) -> dict[str, Any]:
    """Map a HF daily_papers entry to our common paper dict shape.

    HF returns nested {paper: {id, authors, summary, upvotes, ai_summary, ...},
    title, publishedAt, numComments, ...} where `paper.id` is the arXiv ID.
    We surface a HF papers URL for the abstract page and the canonical arXiv PDF.
    """
    paper = entry.get("paper") or {}
    arxiv_id = paper.get("id") or ""
    authors = [a.get("name", "") for a in (paper.get("authors") or []) if a.get("name")]
    return {
        "title": (entry.get("title") or paper.get("title") or "").strip(),
        "abstract": (entry.get("summary") or paper.get("summary") or "").strip(),
        # HF's auto-generated one-line lede — perfect for newspaper headlines.
        "ai_summary": (paper.get("ai_summary") or "").strip(),
        "published": entry.get("publishedAt") or paper.get("publishedAt") or "",
        "authors": authors,
        "upvotes": int(paper.get("upvotes") or 0),
        "num_comments": int(entry.get("numComments") or 0),
        "url_abs": f"https://huggingface.co/papers/{arxiv_id}" if arxiv_id else "",
        "url_pdf": f"https://arxiv.org/pdf/{arxiv_id}" if arxiv_id else "",
    }
```

- trends_mcp.py:660-674 (helper): Markdown rendering helper for paperswithcode_trending results.
```python
def _render_pwc_md(items: list[dict[str, Any]], header: str) -> str:
    if not items:
        return f"# {header}\n\n_결과 없음_"
    lines = [f"# {header}", f"_총 {len(items)}건_", ""]
    for i, p in enumerate(items, 1):
        authors = ", ".join(p["authors"][:4])
        if len(p["authors"]) > 4:
            authors += f" 외 {len(p['authors']) - 4}명"
        lines.append(
            f"## {i}. [{p['title']}]({p['url_abs']})\n"
            f"- {_fmt_date(p['published'])} · 저자: {authors}\n"
            f"- {_trim(p['abstract'], 500)}\n"
            + (f"- [PDF]({p['url_pdf']})\n" if p.get("url_pdf") else "")
        )
    return "\n".join(lines)
```
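One subtle step in the handler's date-window filter is worth isolating: HF timestamps may carry a trailing `Z`, which `datetime.fromisoformat` cannot parse before Python 3.11, hence the `replace("Z", "+00:00")` step, and naive timestamps must be coerced to UTC before comparing against the aware cutoff. A standalone sketch of that parsing step (the `parse_hf_timestamp` name is invented; the logic follows the handler):

```python
from datetime import datetime, timezone


def parse_hf_timestamp(pub: str):
    """Parse an ISO-8601 timestamp, tolerating a trailing 'Z' and naive values.

    Returns an aware UTC datetime, or None if the string is malformed,
    matching how the handler skips unparseable entries.
    """
    try:
        dt = datetime.fromisoformat(pub.replace("Z", "+00:00"))
    except ValueError:
        return None
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt


print(parse_hf_timestamp("2025-01-15T09:30:00Z"))
```

Without the UTC coercion, comparing a naive `publishedAt` against the aware cutoff would raise `TypeError: can't compare offset-naive and offset-aware datetimes`.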