
AI Research MCP Server

by nanyang12138

get_trending_models

Discover trending AI models on Hugging Face, filtered by task type and sorted by popularity metrics, to identify relevant machine learning models for your projects.

Instructions

Get trending AI models from Hugging Face

Input Schema

Name  | Required | Description                                                      | Default
task  | No       | Filter by task (e.g., 'text-generation', 'image-classification') |
sort  | No       | Sort criterion                                                   | downloads
limit | No       | Maximum number of results                                        | 30

Input Schema (JSON Schema)

{
  "type": "object",
  "properties": {
    "limit": {
      "default": 30,
      "description": "Maximum number of results",
      "type": "integer"
    },
    "sort": {
      "default": "downloads",
      "description": "Sort criterion",
      "enum": ["downloads", "likes", "trending", "created"],
      "type": "string"
    },
    "task": {
      "description": "Filter by task (e.g., 'text-generation', 'image-classification')",
      "type": "string"
    }
  }
}
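For illustration, a client can resolve the schema's defaults before invoking the tool. The sketch below is not part of the server; `with_defaults` is a hypothetical helper that fills in default values for any omitted parameters:

```python
# Illustrative only: apply the schema's declared defaults to a partial tool call.
schema = {
    "type": "object",
    "properties": {
        "task": {"type": "string"},
        "sort": {
            "type": "string",
            "enum": ["downloads", "likes", "trending", "created"],
            "default": "downloads",
        },
        "limit": {"type": "integer", "default": 30},
    },
}

def with_defaults(arguments: dict) -> dict:
    """Return a copy of arguments with schema defaults filled in."""
    resolved = dict(arguments)
    for name, spec in schema["properties"].items():
        if name not in resolved and "default" in spec:
            resolved[name] = spec["default"]
    return resolved

print(with_defaults({"task": "text-generation"}))
# → {'task': 'text-generation', 'sort': 'downloads', 'limit': 30}
```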

Implementation Reference

  • MCP tool handler: fetches trending models using HuggingFaceClient, applies caching, and formats output as markdown.
    async def _get_trending_models(
        self,
        task: Optional[str] = None,
        sort: str = "downloads",
        limit: int = 30,
    ) -> str:
        """Get trending models from Hugging Face."""
        cache_key = f"hf_models_{task}_{sort}"
        cached = self.cache.get(cache_key, 3600)
        if cached:
            models = cached
        else:
            models = await asyncio.to_thread(
                self.huggingface.get_trending_models,
                task=task,
                sort=sort,
                limit=limit,
            )
            self.cache.set(cache_key, models)
        return self._format_models(models)
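The handler above relies on `self.cache` exposing `get(key, max_age)` and `set(key, value)`. The actual cache class is not shown in the excerpt; a minimal sketch consistent with that interface (an assumption, not the server's real implementation) might look like:

```python
import time
from typing import Any, Dict, Optional, Tuple

class TTLCache:
    """Minimal sketch of a cache matching the get(key, max_age)/set(key, value)
    interface used by the handler. get() returns None for missing keys and
    for entries older than max_age_seconds."""

    def __init__(self) -> None:
        self._store: Dict[str, Tuple[float, Any]] = {}

    def get(self, key: str, max_age_seconds: float) -> Optional[Any]:
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > max_age_seconds:
            # Entry expired: drop it and report a miss.
            del self._store[key]
            return None
        return value

    def set(self, key: str, value: Any) -> None:
        self._store[key] = (time.monotonic(), value)
```

With a one-hour window (3600 seconds, as in the handler), repeated calls with the same `task` and `sort` are served from memory instead of re-querying Hugging Face.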
  • Input schema defining parameters for the get_trending_models tool: task, sort, limit.
    inputSchema={
        "type": "object",
        "properties": {
            "task": {
                "type": "string",
                "description": "Filter by task (e.g., 'text-generation', 'image-classification')",
            },
            "sort": {
                "type": "string",
                "enum": ["downloads", "likes", "trending", "created"],
                "description": "Sort criterion",
                "default": "downloads",
            },
            "limit": {
                "type": "integer",
                "description": "Maximum number of results",
                "default": 30,
            },
        },
    },
  • Tool registration in list_tools() handler, defining name, description, and schema.
    Tool(
        name="get_trending_models",
        description="Get trending AI models from Hugging Face",
        inputSchema={
            "type": "object",
            "properties": {
                "task": {
                    "type": "string",
                    "description": "Filter by task (e.g., 'text-generation', 'image-classification')",
                },
                "sort": {
                    "type": "string",
                    "enum": ["downloads", "likes", "trending", "created"],
                    "description": "Sort criterion",
                    "default": "downloads",
                },
                "limit": {
                    "type": "integer",
                    "description": "Maximum number of results",
                    "default": 30,
                },
            },
        },
    ),
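Once registered, the server routes incoming tool calls to the matching handler by name. The sketch below is illustrative only: the dispatch table, stub handler, and `call_tool` signature are assumptions for demonstration, not the server's actual code.

```python
import asyncio

async def _get_trending_models(task=None, sort="downloads", limit=30):
    # Stub standing in for the real handler shown above.
    return f"models(task={task}, sort={sort}, limit={limit})"

# Hypothetical dispatch table mapping registered tool names to handlers.
HANDLERS = {"get_trending_models": _get_trending_models}

async def call_tool(name: str, arguments: dict) -> str:
    """Look up the named tool and invoke it with the call's arguments."""
    handler = HANDLERS.get(name)
    if handler is None:
        raise ValueError(f"Unknown tool: {name}")
    return await handler(**arguments)

print(asyncio.run(call_tool("get_trending_models", {"limit": 5})))
# → models(task=None, sort=downloads, limit=5)
```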
  • Core helper function in HuggingFaceClient that queries HfApi.list_models() to retrieve trending models based on task, library, sort, and limit.
    def get_trending_models(
        self,
        task: Optional[str] = None,
        library: Optional[str] = None,
        sort: str = "downloads",
        limit: int = 50,
    ) -> List[Dict]:
        """Get trending models from Hugging Face.

        Args:
            task: Filter by task (e.g., 'text-generation', 'image-classification')
            library: Filter by library (e.g., 'pytorch', 'transformers')
            sort: Sort by 'downloads', 'likes', 'trending', or 'created'
            limit: Maximum number of results

        Returns:
            List of model dictionaries
        """
        try:
            models = self.api.list_models(
                filter=task,
                library=library,
                sort=sort,
                direction=-1,
                limit=limit,
            )
            results = []
            for model in models:
                # Get model info
                model_info = {
                    "id": model.id,
                    "author": model.author if hasattr(model, "author") else model.id.split("/")[0],
                    "model_name": model.modelId if hasattr(model, "modelId") else model.id.split("/")[-1],
                    "url": f"https://huggingface.co/{model.id}",
                    "downloads": model.downloads if hasattr(model, "downloads") else 0,
                    "likes": model.likes if hasattr(model, "likes") else 0,
                    "tags": model.tags if hasattr(model, "tags") else [],
                    "pipeline_tag": model.pipeline_tag if hasattr(model, "pipeline_tag") else None,
                    "library": model.library_name if hasattr(model, "library_name") else None,
                    "created_at": model.created_at.isoformat() if hasattr(model, "created_at") and model.created_at else None,
                    "last_modified": model.last_modified.isoformat() if hasattr(model, "last_modified") and model.last_modified else None,
                    "source": "huggingface",
                }
                results.append(model_info)
            return results
        except Exception as e:
            print(f"Error fetching models: {e}")
            return []
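The `_format_models` step is not included in the excerpt. A hypothetical markdown formatter compatible with the dictionaries produced above (an assumption about the output shape, not the server's actual code) could be:

```python
from typing import Dict, List

def format_models(models: List[Dict]) -> str:
    """Hypothetical stand-in for the server's _format_models (not shown in
    the source): renders the model dicts as a markdown bullet list."""
    if not models:
        return "No models found."
    lines = ["## Trending Models", ""]
    for m in models:
        lines.append(
            f"- [{m['id']}]({m['url']}): "
            f"{m.get('downloads', 0):,} downloads, {m.get('likes', 0):,} likes"
        )
    return "\n".join(lines)
```

Returning markdown rather than raw JSON keeps the tool output directly readable when an MCP client surfaces it in a chat interface.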


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nanyang12138/AI-Research-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.