AI Research MCP Server

by nanyang12138

get_daily_papers

Retrieve featured AI research papers from Hugging Face to stay current with daily developments. Specify how many previous days to include for comprehensive coverage.

Instructions

Get today's featured AI papers from Hugging Face

Input Schema

Name    Required    Description                          Default
days    No          Number of days to look back (1-7)    1

Input Schema (JSON Schema)

{
  "type": "object",
  "properties": {
    "days": {
      "type": "integer",
      "description": "Number of days to look back (1-7)",
      "default": 1
    }
  }
}
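Note that the 1-7 range appears only in the schema's description text; the schema sets no `minimum` or `maximum` keywords, so a value like `days=30` would pass validation. A caller-side clamp, with a hypothetical helper name, might look like:

```python
def clamp_days(value, default=1):
    """Coerce a raw 'days' argument into the documented 1-7 range.

    Hypothetical helper (not part of the server above): mirrors the
    schema's default of 1 and enforces the range the description
    documents but the schema does not.
    """
    try:
        days = int(value)
    except (TypeError, ValueError):
        return default
    return max(1, min(7, days))
```

For example, `clamp_days(30)` yields 7 and `clamp_days(None)` falls back to the default of 1.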

Implementation Reference

  • Handler function that executes the get_daily_papers tool: checks a 12-hour cache, calls HuggingFaceClient.get_daily_papers on a miss, then formats the result.
    async def _get_daily_papers(self, days: int = 1) -> str:
        """Get daily featured papers from Hugging Face."""
        cache_key = f"hf_daily_{days}"
        cached = self.cache.get(cache_key, 3600 * 12)  # 12 hour cache
        if cached:
            papers = cached
        else:
            papers = await asyncio.to_thread(
                self.huggingface.get_daily_papers,
                days=days,
            )
            self.cache.set(cache_key, papers)
        return self._format_papers(papers)
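The handler depends on a `self.cache` object exposing `get(key, ttl_seconds)` and `set(key, value)`; that interface is inferred from this call site, since the cache implementation is not shown. A minimal in-memory sketch matching it:

```python
import time


class TTLCache:
    """Minimal time-based cache matching the get(key, ttl)/set(key, value)
    interface the handler uses. Illustrative only; the server's real
    cache implementation is not shown in this reference."""

    def __init__(self):
        self._store = {}  # key -> (stored_at_timestamp, value)

    def get(self, key, ttl_seconds):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.time() - stored_at > ttl_seconds:
            del self._store[key]  # expired; drop the stale entry
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.time(), value)
```

With a 12-hour TTL (`3600 * 12`), repeated calls with the same `days` value within that window skip the network fetch entirely.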
  • MCP tool registration in list_tools(), including name, description, and input schema.
    Tool(
        name="get_daily_papers",
        description="Get today's featured AI papers from Hugging Face",
        inputSchema={
            "type": "object",
            "properties": {
                "days": {
                    "type": "integer",
                    "description": "Number of days to look back (1-7)",
                    "default": 1,
                },
            },
        },
    ),
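Once registered, an MCP client invokes the tool through a `tools/call` JSON-RPC request. A representative payload (the `id` and argument values are illustrative, not taken from this server):

```python
import json

# Illustrative MCP tools/call request targeting the tool registered
# above; id and arguments are example values.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_daily_papers",
        "arguments": {"days": 3},
    },
}
print(json.dumps(request, indent=2))
```

Omitting `"days"` from `arguments` leaves the server to apply the schema default of 1.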
  • Core helper function in HuggingFaceClient that fetches daily papers from the Hugging Face API endpoint, processes them, and returns a list of paper dicts sorted by upvotes.
    def get_daily_papers(self, days: int = 1) -> List[Dict]:
        """Get daily papers from Hugging Face.

        Args:
            days: Number of days to look back (1-7)

        Returns:
            List of paper dictionaries
        """
        papers = []
        for day_offset in range(days):
            date = datetime.now(timezone.utc) - timedelta(days=day_offset)
            date_str = date.strftime("%Y-%m-%d")
            try:
                url = f"{self.papers_base_url}?date={date_str}"
                response = requests.get(url, timeout=10)
                response.raise_for_status()
                daily_papers = response.json()
                for paper in daily_papers:
                    # Ensure published date has timezone info
                    published_date = paper.get("publishedAt", date_str)
                    if published_date and "T" not in published_date:
                        # If it's just a date, add time and timezone
                        published_date = f"{published_date}T00:00:00+00:00"
                    papers.append({
                        "title": paper.get("title", ""),
                        "authors": paper.get("authors", []),
                        "summary": paper.get("summary", ""),
                        "published": published_date,
                        "url": f"https://huggingface.co/papers/{paper.get('id', '')}",
                        "arxiv_id": paper.get("id", ""),
                        "upvotes": paper.get("upvotes", 0),
                        "num_comments": paper.get("numComments", 0),
                        "thumbnail": paper.get("thumbnail", ""),
                        "source": "huggingface",
                    })
            except requests.RequestException as e:
                print(f"Error fetching papers for {date_str}: {e}")
                continue

        # Sort by upvotes
        papers.sort(key=lambda x: x.get("upvotes", 0), reverse=True)
        return papers
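The date fix-up inside the loop can be isolated for clarity. This hypothetical helper applies the same logic as the inline code: bare YYYY-MM-DD strings get midnight UTC appended, while full timestamps pass through unchanged:

```python
def normalize_published(published_date):
    """Same logic as the inline date fix-up in get_daily_papers:
    a bare date string (no 'T') gets midnight UTC appended so every
    'published' value carries timezone information."""
    if published_date and "T" not in published_date:
        return f"{published_date}T00:00:00+00:00"
    return published_date
```

For example, `"2024-05-01"` becomes `"2024-05-01T00:00:00+00:00"`, while an already-complete timestamp is returned as-is.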
