Glama
salwks

mcp-techTrend

huggingface_trending

Read-only

Browse Hugging Face Hub to find trending models, datasets, and spaces. Filter by sort order, time range, and query to discover popular or recent content.

Instructions

Browse Hugging Face Hub. kind selects models / datasets / spaces (default models). sort: trending / downloads / likes / recent. days filters by lastModified — CAUTION: this catches old entries with recent edits, not just newly published ones. For 'truly new' discovery prefer sort='recent' + days=N.
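As an illustration of the recommended "truly new" pattern, the snippet below shows the kind of request a `sort='recent'` call translates to against the public Hub listing API (URL shape assumed from the implementation referenced on this page; note the `days` cutoff is applied client-side, not by the API):

```python
from urllib.parse import urlencode

# Assumed public Hub API base, matching the handler shown later on this page.
HF_API = "https://huggingface.co/api"

def build_recent_request(kind: str = "models", limit: int = 20) -> str:
    """Build the listing URL for a sort='recent' call (lastModified, newest first)."""
    params = {"sort": "lastModified", "direction": -1, "limit": limit}
    return f"{HF_API}/{kind}?{urlencode(params)}"

url = build_recent_request("models", 20)
print(url)  # https://huggingface.co/api/models?sort=lastModified&direction=-1&limit=20
```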

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| kind | No | `models`, `datasets`, or `spaces` | `models` |
| sort | No | `trending`, `downloads`, `likes`, or `recent` | `trending` |
| query | No | Free-text search string (max 200 characters) | |
| tag | No | Tag filter (max 80 characters) | |
| days | No | If set, drop entries whose `lastModified` is older than N days (1–3650) | |
| max_results | No | Number of results to return (1–50) | 20 |
| response_format | No | `markdown` or `json` | `markdown` |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • HFTrendingInput Pydantic model — input schema for the huggingface_trending tool. Defines fields: kind (models/datasets/spaces), sort (trending/downloads/likes/recent), query, tag, days, max_results, response_format.
    class HFTrendingInput(BaseModel):
        model_config = ConfigDict(str_strip_whitespace=True, extra="forbid")
        kind: str = Field("models", pattern=r"^(models|datasets|spaces)$")
        sort: str = Field("trending", pattern=r"^(trending|downloads|likes|recent)$")
        query: str | None = Field(None, max_length=200)
        tag: str | None = Field(None, max_length=80)
        days: int | None = Field(None, ge=1, le=3650, description="If set, drop entries whose lastModified is older than N days.")
        max_results: int = Field(20, ge=1, le=50)
        response_format: ResponseFormat = ResponseFormat.MARKDOWN
  • Registration of huggingface_trending as an MCP tool via the @_maybe_tool decorator with source='huggingface'. This decorator conditionally registers the tool with FastMCP only if the 'huggingface' source is enabled.
    @_maybe_tool(
        source="huggingface",
        name="huggingface_trending",
        description=(
            "Browse Hugging Face Hub. `kind` selects models / datasets / spaces "
            "(default models). `sort`: trending / downloads / likes / recent. "
            "`days` filters by `lastModified` — CAUTION: this catches old entries "
            "with recent edits, not just newly published ones. For 'truly new' "
            "discovery prefer sort='recent' + days=N."
        ),
        annotations={
            "readOnlyHint": True,
            "destructiveHint": False,
            "openWorldHint": True,
            "idempotentHint": False,
        },
    )
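The internals of `_maybe_tool` are not shown on this page; as a minimal stand-in, conditional registration can be sketched with a plain dict playing the role of FastMCP's registry (the enabled-source set and stub functions below are assumptions for illustration):

```python
from typing import Any, Callable

TOOL_REGISTRY: dict[str, dict[str, Any]] = {}
ENABLED_SOURCES = {"huggingface"}  # assumption: in the real server this comes from config

def _maybe_tool(source: str, **tool_kwargs: Any) -> Callable[[Callable], Callable]:
    """Register a function as a tool only when its source is enabled.

    Stand-in for the real decorator, which forwards to FastMCP's tool
    registration instead of a dict.
    """
    def decorate(fn: Callable) -> Callable:
        if source in ENABLED_SOURCES:
            TOOL_REGISTRY[tool_kwargs.get("name", fn.__name__)] = tool_kwargs
        return fn  # the function itself is returned unchanged either way
    return decorate

@_maybe_tool(source="huggingface", name="huggingface_trending")
def hf_trending_stub() -> str:
    return "ok"

@_maybe_tool(source="github", name="github_trending")
def gh_trending_stub() -> str:
    return "ok"

print(sorted(TOOL_REGISTRY))  # only the enabled source's tool is registered
```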
  • Main handler function for huggingface_trending. Calls the Hugging Face Hub API (HF_API/{kind}) with sort, direction, limit, search, and filter params. Supports over-fetching for date filtering, constructs proper URLs per kind (models/datasets/spaces), and returns formatted markdown or JSON output.
    async def huggingface_trending(
        kind: str = "models",
        sort: str = "trending",
        query: str | None = None,
        tag: str | None = None,
        days: int | None = None,
        max_results: int = 20,
        response_format: ResponseFormat = ResponseFormat.MARKDOWN,
    ) -> str:
        try:
            args = HFTrendingInput(
                kind=kind,
                sort=sort,
                query=query,
                tag=tag,
                days=days,
                max_results=max_results,
                response_format=response_format,
            )
            url = f"{HF_API}/{args.kind}"
            # Over-fetch when filtering by days, since HF API has no date filter.
            fetch_n = min(args.max_results * (5 if args.days else 1), 200)
            params: dict[str, Any] = {
                "sort": _HF_SORT_MAP[args.sort],
                "direction": -1,
                "limit": fetch_n,
            }
            if args.query:
                params["search"] = args.query
            if args.tag:
                params["filter"] = args.tag
            headers: dict[str, str] = {}
            token = os.environ.get("HF_TOKEN")
            if token:
                headers["Authorization"] = f"Bearer {token}"
            ttl = TTL_TRENDING if args.sort == "trending" else TTL_DEFAULT
            raw = await _http_get_json(url, params=params, headers=headers or None, ttl=ttl)
            if not isinstance(raw, list):
                raw = []
            cutoff = _utc_now() - timedelta(days=args.days) if args.days else None
            items: list[dict[str, Any]] = []
            for r in raw:
                last_mod = r.get("lastModified") or r.get("last_modified")
                if cutoff:
                    if not last_mod:
                        continue
                    try:
                        lm_dt = datetime.fromisoformat(str(last_mod).replace("Z", "+00:00"))
                        if lm_dt.tzinfo is None:
                            lm_dt = lm_dt.replace(tzinfo=timezone.utc)
                    except ValueError:
                        continue
                    if lm_dt < cutoff:
                        continue
                rid = r.get("id") or r.get("modelId") or ""
                if args.kind == "models":
                    disp_url = f"https://huggingface.co/{rid}"
                elif args.kind == "datasets":
                    disp_url = f"https://huggingface.co/datasets/{rid}"
                else:
                    disp_url = f"https://huggingface.co/spaces/{rid}"
                items.append(
                    {
                        "id": rid,
                        "url": disp_url,
                        "downloads": r.get("downloads"),
                        "likes": r.get("likes", 0),
                        "lastModified": last_mod,
                        "tags": r.get("tags") or [],
                        "pipeline_tag": r.get("pipeline_tag"),
                        "library_name": r.get("library_name"),
                        "sdk": r.get("sdk"),
                        "task_categories": r.get("task_categories") or [],
                    }
                )
                if len(items) >= args.max_results:
                    break
            header = f"Hugging Face {args.kind} — {args.sort}"
            if args.query:
                header += f" · `{args.query}`"
            if args.tag:
                header += f" · #{args.tag}"
            if args.days:
                header += f" · last {args.days} days"
            return _format(items, args.response_format, render_md=lambda x: _render_hf_md(x, header, args.kind))
        except Exception as e:
            return _handle_error(e, "huggingface_trending")
  • Maps friendly sort names ('trending', 'downloads', 'likes', 'recent') to Hugging Face API sort parameters ('trendingScore', 'downloads', 'likes', 'lastModified').
    _HF_SORT_MAP = {
        "trending": "trendingScore",
        "downloads": "downloads",
        "likes": "likes",
        "recent": "lastModified",
    }
  • Renders Hugging Face items in Markdown format. Shows downloads count, likes, lastModified date, pipeline_tag for models, sdk for spaces, and up to 6 tags.
    def _render_hf_md(items: list[dict[str, Any]], header: str, kind: str) -> str:
        if not items:
            return f"# {header}\n\n_No results_"
        lines = [f"# {header}", f"_{len(items)} results_", ""]
        for i, it in enumerate(items, 1):
            bits: list[str] = []
            if it.get("downloads") is not None:
                bits.append(f"📥 {it['downloads']:,}")
            bits.append(f"❤️ {it.get('likes', 0):,}")
            if it.get("lastModified"):
                bits.append(f"updated {_fmt_date(it['lastModified'])}")
            meta = " · ".join(bits)
            sub = ""
            if kind == "models" and it.get("pipeline_tag"):
                sub = f" · `{it['pipeline_tag']}`"
            if kind == "spaces" and it.get("sdk"):
                sub = f" · `{it['sdk']}`"
            tags = it.get("tags") or []
            tag_line = ", ".join(tags[:6])
            lines.append(
                f"## {i}. [{it['id']}]({it['url']}){sub}\n"
                f"- {meta}\n"
                + (f"- tags: {tag_line}\n" if tag_line else "")
            )
        return "\n".join(lines)
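For a sense of the output shape, here is one rendered entry built the same way, using a hypothetical sample record and truncating the ISO timestamp in place of the `_fmt_date` helper (which is not shown on this page):

```python
item = {
    "id": "openai/whisper-large-v3",  # hypothetical sample record
    "url": "https://huggingface.co/openai/whisper-large-v3",
    "downloads": 1234567,
    "likes": 4321,
    "lastModified": "2025-06-14T09:00:00Z",
    "pipeline_tag": "automatic-speech-recognition",
    "tags": ["audio", "asr", "pytorch", "safetensors", "en", "zh", "ja", "ko"],
}

# Mirror the renderer: meta bits, pipeline_tag suffix for models, first 6 tags.
bits = [
    f"📥 {item['downloads']:,}",
    f"❤️ {item['likes']:,}",
    f"updated {item['lastModified'][:10]}",  # stand-in for _fmt_date
]
sub = f" · `{item['pipeline_tag']}`"
line = (
    f"## 1. [{item['id']}]({item['url']}){sub}\n"
    f"- {' · '.join(bits)}\n"
    f"- tags: {', '.join(item['tags'][:6])}"
)
print(line)
```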
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true and destructiveHint=false. The description adds behavioral nuance by warning that the days filter 'catches old entries with recent edits, not just newly published ones.' This is valuable context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at four sentences, front-loaded with the main purpose, and efficiently explains parameters and a caveat. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the annotations and output schema cover safety and return format, the description fails to mention max_results and response_format, which are important for controlling output. It also does not explain the query or tag parameters. The description is adequate but not fully complete given the tool's 7 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by explaining kind, sort, and days. However, it omits explanations for query, tag, max_results, and response_format, leaving those parameters underspecified.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Browse Hugging Face Hub' with a specific verb and resource. It further explains the kind parameter (models/datasets/spaces) making the tool's scope unambiguous. Sibling tools like arxiv_recent or github_trending are clearly different domains, so no confusion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on parameter usage, including a caution about the days filter and recommending sort='recent' + days=N for new content. It does not explicitly say when not to use this tool versus siblings, but the domain distinction is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/salwks/mcp-techTrend'
