salwks/mcp-techTrend

paperswithcode_trending

Read-only

Fetch daily curated AI papers from Hugging Face's daily_papers feed. Filter by keyword and date range, and sort by upvotes, comments, or recency to find relevant research.

Instructions

Daily curated AI papers feed (now backed by Hugging Face's daily_papers — Papers with Code API was sunset after the 2024 HF acquisition). Empty query returns the newest curated papers. Search is client-side filtering over the daily-papers stream.

Input Schema

| Name            | Required | Description | Default  |
|-----------------|----------|-------------|----------|
| query           | No       |             |          |
| days            | No       |             |          |
| sort_by         | No       |             | upvotes  |
| max_results     | No       |             | 20       |
| response_format | No       |             | markdown |
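
Example arguments (an illustrative sketch; the constraints come from the PwCInput model shown under Implementation Reference, and the commented call_tool line assumes an already-connected MCP client session):

    args = {
        "query": "diffusion",          # optional case-insensitive substring filter
        "days": 7,                     # optional; drop papers older than N days (1-3650)
        "sort_by": "upvotes",          # one of: upvotes | comments | recent
        "max_results": 10,             # 1-50; the handler defaults to 20
        "response_format": "markdown",
    }
    # await session.call_tool("paperswithcode_trending", arguments=args)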

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • trends_mcp.py:701-716 (registration)
    Registration of the paperswithcode_trending tool via the @_maybe_tool decorator, gating it behind the 'paperswithcode' source.
    @_maybe_tool(
        source="paperswithcode",
        name="paperswithcode_trending",
        description=(
            "Daily curated AI papers feed (now backed by Hugging Face's daily_papers — "
            "Papers with Code API was sunset after the 2024 HF acquisition). "
            "Empty query returns the newest curated papers. "
            "Search is client-side filtering over the daily-papers stream."
        ),
        annotations={
            "readOnlyHint": True,
            "destructiveHint": False,
            "openWorldHint": True,
            "idempotentHint": False,
        },
    )
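  • A minimal sketch of how such source-gated registration could work (hypothetical; this page does not show _maybe_tool itself, and both ENABLED_SOURCES and the FastMCP-style mcp.tool registrar are assumptions):
    # Hypothetical; the real _maybe_tool in trends_mcp.py may differ.
    def _maybe_tool(*, source: str, name: str, description: str, annotations: dict):
        def decorator(fn):
            if source not in ENABLED_SOURCES:  # assumed set of enabled source names
                return fn  # source disabled: leave the function unregistered
            return mcp.tool(name=name, description=description, annotations=annotations)(fn)
        return decorator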
  • Main handler function that fetches Hugging Face daily_papers feed, filters client-side by query/days, sorts by upvotes/comments/recent, and returns formatted results.
    async def paperswithcode_trending(
        query: str | None = None,
        days: int | None = None,
        sort_by: str = "upvotes",
        max_results: int = 20,
        response_format: ResponseFormat = ResponseFormat.MARKDOWN,
    ) -> str:
        try:
            args = PwCInput(
                query=query, days=days, sort_by=sort_by,
                max_results=max_results, response_format=response_format,
            )
            # daily_papers ignores q/search params; fetch the latest stream and
            # filter client-side. The endpoint returns up to 50 entries by default.
            raw = await _http_get_json(HF_DAILY_PAPERS_API, ttl=TTL_TRENDING)
            if not isinstance(raw, list):
                raw = []
            cutoff = _utc_now() - timedelta(days=args.days) if args.days else None
            q_lower = args.query.lower() if args.query else None
            # First pass: filter by topic + date window (don't truncate yet — sort below).
            candidates: list[dict[str, Any]] = []
            for entry in raw:
                paper = entry.get("paper") or {}
                title = entry.get("title") or paper.get("title") or ""
                summary = entry.get("summary") or paper.get("summary") or ""
                if q_lower and q_lower not in title.lower() and q_lower not in summary.lower():
                    continue
                pub = entry.get("publishedAt") or paper.get("publishedAt") or ""
                if cutoff and pub:
                    try:
                        pub_dt = datetime.fromisoformat(pub.replace("Z", "+00:00"))
                        if pub_dt.tzinfo is None:
                            pub_dt = pub_dt.replace(tzinfo=timezone.utc)
                    except ValueError:
                        continue
                    if pub_dt < cutoff:
                        continue
                candidates.append(_hf_paper_to_item(entry))
            # Sort by chosen popularity signal.
            if args.sort_by == "upvotes":
                candidates.sort(key=lambda x: (x.get("upvotes") or 0, x.get("num_comments") or 0), reverse=True)
            elif args.sort_by == "comments":
                candidates.sort(key=lambda x: (x.get("num_comments") or 0, x.get("upvotes") or 0), reverse=True)
            # `recent` keeps the API's natural order (newest first).
            items = candidates[: args.max_results]
            suffix_bits: list[str] = []
            if args.days:
                suffix_bits.append(f"last {args.days} days")
            suffix_bits.append(f"sort={args.sort_by}")
            suffix = " · " + " · ".join(suffix_bits)
            header = f"Daily AI Papers (HF) `{args.query or 'latest'}`{suffix} ({len(items)} results)"
            return _format(items, args.response_format, render_md=lambda x: _render_pwc_md(x, header))
        except Exception as e:
            return _handle_error(e, "paperswithcode_trending")
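  • Usage sketch: invoking the handler directly, outside the MCP layer (assumes the module's helpers and network access are available).
    import asyncio

    # Returns markdown, or an error string produced by _handle_error:
    md = asyncio.run(paperswithcode_trending(query="agents", days=3, sort_by="comments"))
    print(md)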
  • Pydantic schema for paperswithcode_trending input parameters: query, days, sort_by, max_results, response_format.
    class PwCInput(BaseModel):
        model_config = ConfigDict(str_strip_whitespace=True, extra="forbid")
        query: str | None = Field(None, max_length=300)
        days: int | None = Field(None, ge=1, le=3650, description="If set, drop results published more than N days ago.")
        sort_by: str = Field(
            "upvotes",
            pattern=r"^(upvotes|comments|recent)$",
            description="upvotes (community votes), comments (discussion volume), recent (publish time)",
        )
        max_results: int = Field(20, ge=1, le=50)
        response_format: ResponseFormat = ResponseFormat.MARKDOWN
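  • Validation sketch: Pydantic rejects malformed input before the handler makes any HTTP request.
    from pydantic import ValidationError

    try:
        PwCInput(sort_by="stars")
    except ValidationError as exc:
        print(exc)  # the pattern ^(upvotes|comments|recent)$ rejects "stars"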
  • Helper function mapping Hugging Face daily_papers API entries to the common paper dict shape used by the tool.
    def _hf_paper_to_item(entry: dict[str, Any]) -> dict[str, Any]:
        """Map a HF daily_papers entry to our common paper dict shape.
    
        HF returns nested {paper: {id, authors, summary, upvotes, ai_summary, ...},
        title, publishedAt, numComments, ...} where `paper.id` is the arXiv ID. We
        surface a HF papers URL for the abstract page and the canonical arXiv PDF.
        """
        paper = entry.get("paper") or {}
        arxiv_id = paper.get("id") or ""
        authors = [a.get("name", "") for a in (paper.get("authors") or []) if a.get("name")]
        return {
            "title": (entry.get("title") or paper.get("title") or "").strip(),
            "abstract": (entry.get("summary") or paper.get("summary") or "").strip(),
            # HF's auto-generated one-line lede — perfect for newspaper headlines.
            "ai_summary": (paper.get("ai_summary") or "").strip(),
            "published": entry.get("publishedAt") or paper.get("publishedAt") or "",
            "authors": authors,
            "upvotes": int(paper.get("upvotes") or 0),
            "num_comments": int(entry.get("numComments") or 0),
            "url_abs": f"https://huggingface.co/papers/{arxiv_id}" if arxiv_id else "",
            "url_pdf": f"https://arxiv.org/pdf/{arxiv_id}" if arxiv_id else "",
        }
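  • Mapping sketch: a hand-made daily_papers entry (not real API output) run through _hf_paper_to_item.
    entry = {
        "title": "An Example Paper",
        "publishedAt": "2025-01-15T00:00:00Z",
        "numComments": 3,
        "paper": {"id": "2501.01234", "upvotes": 42,
                  "authors": [{"name": "A. Researcher"}]},
    }
    item = _hf_paper_to_item(entry)
    assert item["url_abs"] == "https://huggingface.co/papers/2501.01234"
    assert item["url_pdf"] == "https://arxiv.org/pdf/2501.01234"
    assert item["upvotes"] == 42 and item["authors"] == ["A. Researcher"]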
  • Markdown rendering helper for paperswithcode_trending results.
    def _render_pwc_md(items: list[dict[str, Any]], header: str) -> str:
        if not items:
            return f"# {header}\n\n_No results_"
        lines = [f"# {header}", f"_{len(items)} total_", ""]
        for i, p in enumerate(items, 1):
            authors = ", ".join(p["authors"][:4])
            if len(p["authors"]) > 4:
                authors += f" and {len(p['authors']) - 4} more"
            lines.append(
                f"## {i}. [{p['title']}]({p['url_abs']})\n"
                f"- {_fmt_date(p['published'])} · 저자: {authors}\n"
                f"- {_trim(p['abstract'], 500)}\n"
                + (f"- [PDF]({p['url_pdf']})\n" if p.get("url_pdf") else "")
            )
        return "\n".join(lines)
Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant value beyond annotations: it explains the data source (Hugging Face's daily_papers after the Papers with Code API sunset), that empty query returns newest, and that search is client-side filtering. This complements the readOnlyHint and openWorldHint annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences, contains no redundant information, and front-loads the core purpose and context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the output schema exists and annotations are present, the description provides sufficient context for a read-only trending tool. The only minor gap is the lack of parameter documentation beyond 'query'.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description only partially explains the 'query' parameter (client-side filtering). It does not clarify the 'days', 'sort_by', 'max_results', or 'response_format' parameters, which remain ambiguous for an AI agent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides a daily curated AI papers feed, backed by Hugging Face's daily_papers, and distinguishes it from sibling tools which focus on other sources like arXiv or GitHub.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explains that an empty query returns the newest papers and that search is client-side filtering over the daily-papers stream, giving clear usage context. However, it does not explicitly mention when to use alternatives like arxiv_search or github_trending.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
