
AI Research MCP Server

by nanyang12138

get_daily_papers

Retrieve featured AI research papers from Hugging Face to stay current on daily developments. Specify how many days to look back (1-7) for broader coverage.

Instructions

Get today's featured AI papers from Hugging Face

Input Schema

| Name | Required | Description                       | Default |
|------|----------|-----------------------------------|---------|
| days | No       | Number of days to look back (1-7) | 1       |
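As a sketch of how a client might fill this schema, the following shows illustrative `tools/call` arguments (the `request` wrapper and the range clamp are assumptions for the example, not behavior the server is shown to implement):

```python
# Hypothetical tools/call arguments for get_daily_papers.
request = {
    "name": "get_daily_papers",
    "arguments": {"days": 3},  # look back three days; defaults to 1 if omitted
}

days = request["arguments"].get("days", 1)
# Defensively clamp to the documented 1-7 range before use.
days = max(1, min(7, days))
```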

Implementation Reference

  • Handler function that executes the get_daily_papers tool logic: caches and formats papers fetched from HuggingFaceClient.get_daily_papers()
```python
async def _get_daily_papers(self, days: int = 1) -> str:
    """Get daily featured papers from Hugging Face."""
    cache_key = f"hf_daily_{days}"
    cached = self.cache.get(cache_key, 3600 * 12)  # 12 hour cache
    if cached:
        papers = cached
    else:
        papers = await asyncio.to_thread(
            self.huggingface.get_daily_papers,
            days=days,
        )
        self.cache.set(cache_key, papers)
    return self._format_papers(papers)
```
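The handler above assumes a cache object exposing `get(key, max_age_seconds)` and `set(key, value)`. The server's actual cache class is not shown here; a minimal in-memory sketch with that interface might look like:

```python
import time

class TTLCache:
    """Minimal in-memory cache with a per-read maximum age.

    get(key, max_age_seconds) returns the cached value if it was stored
    within the last max_age_seconds, else None; set(key, value) stores
    the value alongside the current timestamp.
    """

    def __init__(self):
        self._store = {}  # key -> (stored_at_timestamp, value)

    def get(self, key, max_age_seconds):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.time() - stored_at > max_age_seconds:
            return None  # entry is stale
        return value

    def set(self, key, value):
        self._store[key] = (time.time(), value)

cache = TTLCache()
cache.set("hf_daily_1", [{"title": "Example paper"}])
```

Keying the cache by `days` (as in `hf_daily_{days}`) keeps lookups for different ranges independent, at the cost of some duplicated entries.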
  • Registration of the get_daily_papers tool in list_tools(), including input schema definition
```python
Tool(
    name="get_daily_papers",
    description="Get today's featured AI papers from Hugging Face",
    inputSchema={
        "type": "object",
        "properties": {
            "days": {
                "type": "integer",
                "description": "Number of days to look back (1-7)",
                "default": 1,
            },
        },
    },
),
```
  • Core helper function in HuggingFaceClient that fetches and processes daily papers from Hugging Face API endpoints.
```python
def get_daily_papers(self, days: int = 1) -> List[Dict]:
    """Get daily papers from Hugging Face.

    Args:
        days: Number of days to look back (1-7)

    Returns:
        List of paper dictionaries
    """
    papers = []
    for day_offset in range(days):
        date = datetime.now(timezone.utc) - timedelta(days=day_offset)
        date_str = date.strftime("%Y-%m-%d")
        try:
            url = f"{self.papers_base_url}?date={date_str}"
            response = requests.get(url, timeout=10)
            response.raise_for_status()
            daily_papers = response.json()
            for paper in daily_papers:
                # Ensure published date has timezone info
                published_date = paper.get("publishedAt", date_str)
                if published_date and "T" not in published_date:
                    # If it's just a date, add time and timezone
                    published_date = f"{published_date}T00:00:00+00:00"
                papers.append({
                    "title": paper.get("title", ""),
                    "authors": paper.get("authors", []),
                    "summary": paper.get("summary", ""),
                    "published": published_date,
                    "url": f"https://huggingface.co/papers/{paper.get('id', '')}",
                    "arxiv_id": paper.get("id", ""),
                    "upvotes": paper.get("upvotes", 0),
                    "num_comments": paper.get("numComments", 0),
                    "thumbnail": paper.get("thumbnail", ""),
                    "source": "huggingface",
                })
        except requests.RequestException as e:
            print(f"Error fetching papers for {date_str}: {e}")
            continue
    # Sort by upvotes
    papers.sort(key=lambda x: x.get("upvotes", 0), reverse=True)
    return papers
```
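The per-paper normalization and upvote sort in `get_daily_papers` can be exercised offline. A sketch, assuming the raw JSON shape shown above (the `normalize_paper` helper and sample data are illustrative, not part of the server):

```python
def normalize_paper(raw, fallback_date):
    """Normalize one raw Hugging Face paper entry, as the client does."""
    published = raw.get("publishedAt", fallback_date)
    if published and "T" not in published:
        # Bare date: pin to midnight UTC, matching the client code.
        published = f"{published}T00:00:00+00:00"
    return {
        "title": raw.get("title", ""),
        "published": published,
        "upvotes": raw.get("upvotes", 0),
        "arxiv_id": raw.get("id", ""),
    }

# Hypothetical sample of the endpoint's JSON payload.
raw_papers = [
    {"title": "A", "id": "2401.00001", "upvotes": 5, "publishedAt": "2024-01-01"},
    {"title": "B", "id": "2401.00002", "upvotes": 12},
]
papers = [normalize_paper(p, "2024-01-01") for p in raw_papers]
papers.sort(key=lambda x: x.get("upvotes", 0), reverse=True)
```

Note the sort runs across all fetched days combined, so a highly upvoted older paper can outrank today's entries.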

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nanyang12138/AI-Research-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.