
AI Research MCP Server

by nanyang12138

get_trending_models

Discover trending AI models from Hugging Face to identify popular tools for tasks like text-generation or image-classification.

Instructions

Get trending AI models from Hugging Face

Input Schema

| Name  | Required | Description | Default |
| ----- | -------- | ----------- | ------- |
| task  | No | Filter by task (e.g., 'text-generation', 'image-classification') | |
| sort  | No | Sort criterion | downloads |
| limit | No | Maximum number of results | 30 |

Implementation Reference

  • The primary handler for the 'get_trending_models' tool. It manages caching, calls the HuggingFace client method, and formats the output using _format_models.
    async def _get_trending_models(
        self,
        task: Optional[str] = None,
        sort: str = "downloads",
        limit: int = 30,
    ) -> str:
        """Get trending models from Hugging Face."""
        # Include limit in the key so requests for different sizes don't collide
        cache_key = f"hf_models_{task}_{sort}_{limit}"
        cached = self.cache.get(cache_key, 3600)
        if cached:
            models = cached
        else:
            models = await asyncio.to_thread(
                self.huggingface.get_trending_models,
                task=task,
                sort=sort,
                limit=limit,
            )
            self.cache.set(cache_key, models)
        
        return self._format_models(models)
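The handler assumes a cache object exposing `get(key, max_age_seconds)` and `set(key, value)`; that implementation is not shown here. A minimal TTL-cache sketch satisfying the interface the handler uses might look like:

```python
import time
from typing import Any, Dict, Optional, Tuple

class TTLCache:
    """Minimal sketch of the cache interface _get_trending_models assumes."""

    def __init__(self) -> None:
        # key -> (timestamp stored, value)
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get(self, key: str, max_age: float) -> Optional[Any]:
        """Return the cached value, or None if absent or older than max_age seconds."""
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.time() - stored_at > max_age:
            del self._store[key]  # expired; evict
            return None
        return value

    def set(self, key: str, value: Any) -> None:
        self._store[key] = (time.time(), value)
```

This is a sketch under stated assumptions, not the server's actual cache; the real class may persist to disk or enforce a size bound.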
  • Registration of the 'get_trending_models' tool, including name, description, and input schema definition.
    Tool(
        name="get_trending_models",
        description="Get trending AI models from Hugging Face",
        inputSchema={
            "type": "object",
            "properties": {
                "task": {
                    "type": "string",
                    "description": "Filter by task (e.g., 'text-generation', 'image-classification')",
                },
                "sort": {
                    "type": "string",
                    "enum": ["downloads", "likes", "trending", "created"],
                    "description": "Sort criterion",
                    "default": "downloads",
                },
                "limit": {
                    "type": "integer",
                    "description": "Maximum number of results",
                    "default": 30,
                },
            },
        },
    ),
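Because `sort` and `limit` carry schema defaults, a caller only needs to supply the fields it wants to override. A small illustrative sketch of merging the schema defaults with caller-supplied arguments (the values are examples, not required inputs):

```python
# Defaults taken from the inputSchema above
schema_defaults = {"sort": "downloads", "limit": 30}

# Hypothetical caller arguments: only the task filter is supplied
caller_args = {"task": "text-generation"}

# Later keys win, so caller-supplied values override defaults
call_arguments = {**schema_defaults, **caller_args}
# call_arguments == {"sort": "downloads", "limit": 30, "task": "text-generation"}
```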
  • Helper method in HuggingFaceClient that fetches and processes trending models using HfApi.list_models. Called by the server handler.
    def get_trending_models(
        self,
        task: Optional[str] = None,
        library: Optional[str] = None,
        sort: str = "downloads",
        limit: int = 50,
    ) -> List[Dict]:
        """Get trending models from Hugging Face.
        
        Args:
            task: Filter by task (e.g., 'text-generation', 'image-classification')
            library: Filter by library (e.g., 'pytorch', 'transformers')
            sort: Sort by 'downloads', 'likes', 'trending', or 'created'
            limit: Maximum number of results
            
        Returns:
            List of model dictionaries
        """
        try:
            models = self.api.list_models(
                filter=task,
                library=library,
                sort=sort,
                direction=-1,
                limit=limit,
            )
            
            results = []
            for model in models:
                # Build a plain dict, degrading gracefully when an attribute
                # is absent (or None) on the returned ModelInfo object
                created_at = getattr(model, "created_at", None)
                last_modified = getattr(model, "last_modified", None)
                model_info = {
                    "id": model.id,
                    "author": getattr(model, "author", None) or model.id.split("/")[0],
                    "model_name": getattr(model, "modelId", None) or model.id.split("/")[-1],
                    "url": f"https://huggingface.co/{model.id}",
                    "downloads": getattr(model, "downloads", 0),
                    "likes": getattr(model, "likes", 0),
                    "tags": getattr(model, "tags", []),
                    "pipeline_tag": getattr(model, "pipeline_tag", None),
                    "library": getattr(model, "library_name", None),
                    "created_at": created_at.isoformat() if created_at else None,
                    "last_modified": last_modified.isoformat() if last_modified else None,
                    "source": "huggingface",
                }
                results.append(model_info)
            
            return results
        except Exception as e:
            print(f"Error fetching models: {e}")
            return []
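The per-model fallbacks above (missing attributes degrade to defaults derived from the model id) can be exercised offline with a stand-in object; here `SimpleNamespace` plays the role of the `ModelInfo` returned by `HfApi.list_models`, carrying only `id` and `downloads`:

```python
from types import SimpleNamespace

# Stand-in for a ModelInfo object with most attributes missing
model = SimpleNamespace(id="meta-llama/Llama-3-8B", downloads=12345)

author = getattr(model, "author", None) or model.id.split("/")[0]
model_name = getattr(model, "modelId", None) or model.id.split("/")[-1]
downloads = getattr(model, "downloads", 0)
likes = getattr(model, "likes", 0)

# author == "meta-llama", model_name == "Llama-3-8B", likes == 0
```

No network access is needed; this only demonstrates how absent attributes fall back to id-derived values or zeros.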
  • Helper function to format the list of models into a markdown string for the tool response.
    def _format_models(self, models: List[Dict]) -> str:
        """Format models as markdown."""
        if not models:
            return "*No models found.*"
        
        lines = []
        for i, model in enumerate(models, 1):
            model_id = model.get("id", "Unknown")
            url = model.get("url", "")
            downloads = model.get("downloads", 0)
            likes = model.get("likes", 0)
            task = model.get("pipeline_tag", "")
            
            lines.append(f"### {i}. [{model_id}]({url})")
            lines.append(f"📥 {downloads:,} downloads • ❤️ {likes} likes")
            if task:
                lines.append(f"Task: `{task}`")
            lines.append("")
        
        return "\n".join(lines)
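To see the markdown the tool actually returns, the method body can be copied into a standalone function (an illustrative duplicate, not the server's code path) and run on sample data:

```python
from typing import Dict, List

def format_models(models: List[Dict]) -> str:
    """Standalone copy of _format_models, for illustration only."""
    if not models:
        return "*No models found.*"
    lines = []
    for i, model in enumerate(models, 1):
        model_id = model.get("id", "Unknown")
        url = model.get("url", "")
        downloads = model.get("downloads", 0)
        likes = model.get("likes", 0)
        task = model.get("pipeline_tag", "")
        lines.append(f"### {i}. [{model_id}]({url})")
        lines.append(f"📥 {downloads:,} downloads • ❤️ {likes} likes")
        if task:
            lines.append(f"Task: `{task}`")
        lines.append("")
    return "\n".join(lines)

# Sample record shaped like get_trending_models output (values illustrative)
sample = [{
    "id": "gpt2",
    "url": "https://huggingface.co/gpt2",
    "downloads": 1000000,
    "likes": 42,
    "pipeline_tag": "text-generation",
}]
output = format_models(sample)
print(output)
```

Each model becomes a linked `###` heading followed by thousands-separated download counts and, when present, the pipeline tag.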
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'Get' implies a read operation, the description doesn't specify whether the tool requires authentication, is subject to rate limits, returns paginated results, or handles errors. For a tool fetching external data with no annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without any wasted words. It's appropriately sized for a simple data-fetching tool and gets straight to the point.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters, 100% schema coverage, and no output schema, the description provides basic purpose but lacks context about behavioral traits, usage scenarios, or output format. It's minimally adequate but leaves gaps that could hinder effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All three parameters are documented in the schema itself (100% coverage). The tool description adds no parameter context beyond what the schema already provides, so it meets the baseline expectation without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get') and resource ('trending AI models from Hugging Face'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling tools like 'get_trending_repos' or 'search_by_area', which appear to be related but serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With siblings like 'get_trending_repos' and 'search_by_area' that might overlap in domain, there's no indication of when this specific tool is appropriate or what distinguishes it from other search/fetch tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
