AI Research MCP Server

by nanyang12138

generate_daily_summary

Create daily AI research summaries by aggregating papers, GitHub repositories, and Hugging Face models to track progress across multiple research areas.

Instructions

Generate a comprehensive daily summary of AI research activity

Input Schema

Name              Required  Description                          Default
include_papers    No        Include papers section               true
include_repos     No        Include GitHub repos section         true
include_models    No        Include Hugging Face models section  true

Input Schema (JSON Schema)

{
  "type": "object",
  "properties": {
    "include_models": {
      "type": "boolean",
      "description": "Include Hugging Face models section",
      "default": true
    },
    "include_papers": {
      "type": "boolean",
      "description": "Include papers section",
      "default": true
    },
    "include_repos": {
      "type": "boolean",
      "description": "Include GitHub repos section",
      "default": true
    }
  }
}
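All three flags are optional and default to true, so a client can pass a partial arguments object. A minimal sketch of how a caller might merge the schema defaults before invoking the tool (the `with_defaults` helper is hypothetical, not part of this server):

```python
# Hypothetical helper: fill in omitted flags from the tool's
# JSON Schema defaults, as an MCP client might do before a call.
SCHEMA = {
    "type": "object",
    "properties": {
        "include_papers": {"type": "boolean", "default": True},
        "include_repos": {"type": "boolean", "default": True},
        "include_models": {"type": "boolean", "default": True},
    },
}

def with_defaults(arguments: dict) -> dict:
    """Return arguments with every omitted flag set to its schema default."""
    merged = {name: spec["default"] for name, spec in SCHEMA["properties"].items()}
    merged.update(arguments)
    return merged
```

For example, `with_defaults({"include_repos": False})` keeps papers and models enabled while skipping the repositories section.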

Implementation Reference

  • The core handler, _generate_daily_summary, fetches the latest papers from Hugging Face and arXiv, trending repositories from GitHub, and popular models, then formats them into a Markdown daily summary.
    async def _generate_daily_summary(
        self,
        include_papers: bool = True,
        include_repos: bool = True,
        include_models: bool = True,
    ) -> str:
        """Generate daily summary."""
        sections = []
        sections.append(
            f"# AI Research Daily Summary\n"
            f"*Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}*\n"
        )
        if include_papers:
            # Get papers from multiple sources
            hf_papers = await asyncio.to_thread(self.huggingface.get_daily_papers, days=1)
            arxiv_papers = await asyncio.to_thread(self.arxiv.get_latest_papers, days=1, max_results=20)
            all_papers = hf_papers + arxiv_papers
            sections.append(
                f"## 📄 Today's Featured Papers ({len(all_papers)})\n\n"
                f"{self._format_papers(all_papers[:15])}"
            )
        if include_repos:
            repos = await asyncio.to_thread(self.github.get_trending_repositories, period="daily")
            sections.append(
                f"## 🔥 Trending Repositories ({len(repos)})\n\n{self._format_repos(repos[:10])}"
            )
        if include_models:
            models = await asyncio.to_thread(self.huggingface.get_llm_models, limit=15)
            sections.append(
                f"## 🤖 Popular Models ({len(models)})\n\n{self._format_models(models[:10])}"
            )
        return "\n\n".join(sections)
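The handler's core pattern is running blocking client calls off the event loop with asyncio.to_thread and joining the resulting Markdown sections. A self-contained sketch of that pattern, with stub fetchers standing in for the real Hugging Face, arXiv, and GitHub clients:

```python
import asyncio
from datetime import datetime

# Stubs standing in for the real (blocking) API clients used by the server.
def fetch_papers() -> list[str]:
    return ["Paper A", "Paper B"]

def fetch_repos() -> list[str]:
    return ["org/repo"]

async def build_summary(include_papers: bool = True, include_repos: bool = True) -> str:
    """Assemble Markdown sections, running each blocking fetch in a worker thread."""
    sections = [f"# AI Research Daily Summary\n*Generated: {datetime.now():%Y-%m-%d %H:%M}*"]
    if include_papers:
        papers = await asyncio.to_thread(fetch_papers)
        sections.append(f"## Papers ({len(papers)})")
    if include_repos:
        repos = await asyncio.to_thread(fetch_repos)
        sections.append(f"## Repos ({len(repos)})")
    return "\n\n".join(sections)
```

Because each section is guarded by its own flag, disabling one (e.g. `build_summary(include_repos=False)`) simply omits that section from the joined output.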
  • Tool registration in the list_tools handler, defining the name, description, and input schema.
    Tool(
        name="generate_daily_summary",
        description="Generate a comprehensive daily summary of AI research activity",
        inputSchema={
            "type": "object",
            "properties": {
                "include_papers": {
                    "type": "boolean",
                    "description": "Include papers section",
                    "default": True,
                },
                "include_repos": {
                    "type": "boolean",
                    "description": "Include GitHub repos section",
                    "default": True,
                },
                "include_models": {
                    "type": "boolean",
                    "description": "Include Hugging Face models section",
                    "default": True,
                },
            },
        },
    ),
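On the server side, an MCP call_tool handler typically dispatches on the registered tool name and forwards the optional flags with their defaults. A minimal sketch, using a stub coroutine in place of the real _generate_daily_summary:

```python
import asyncio

async def generate_daily_summary(**flags) -> str:
    # Stub standing in for the server's real _generate_daily_summary handler:
    # it just reports which sections were enabled.
    return f"summary sections: {sorted(k for k, v in flags.items() if v)}"

async def call_tool(name: str, arguments: dict) -> str:
    """Route an MCP tool call by name, applying the schema defaults."""
    if name == "generate_daily_summary":
        return await generate_daily_summary(
            include_papers=arguments.get("include_papers", True),
            include_repos=arguments.get("include_repos", True),
            include_models=arguments.get("include_models", True),
        )
    raise ValueError(f"Unknown tool: {name}")
```

The `arguments.get(..., True)` lookups mirror the schema's defaults, so a caller who sends an empty arguments object still gets all three sections.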

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nanyang12138/AI-Research-MCP'
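The same request can be made from Python with the standard library. This sketch only builds the request from the URL in the curl example above; the response's JSON shape is not documented here, so nothing is assumed about it:

```python
import urllib.request

# Build the GET request against the MCP directory API endpoint shown above.
# To actually send it: urllib.request.urlopen(req)
req = urllib.request.Request(
    "https://glama.ai/api/mcp/v1/servers/nanyang12138/AI-Research-MCP",
    method="GET",
)
```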

If you have feedback or need assistance with the MCP directory API, please join our Discord server.