AI Research MCP Server

by nanyang12138

search_by_area

Search for AI research papers and repositories by specific research area to track recent developments in fields like LLM, vision, robotics, and bioinformatics.

Instructions

Search papers and repos by research area (llm, vision, robotics, bioinfo, etc.)

Input Schema

Name            Required  Description                                                      Default
area            Yes       Research area: llm, vision, robotics, bioinfo, rl, graph, etc.   —
days            No        Number of days to look back                                      7
include_papers  No        Include papers from arXiv                                        true
include_repos   No        Include GitHub repositories                                      true

Input Schema (JSON Schema)

{
  "type": "object",
  "properties": {
    "area": {
      "type": "string",
      "description": "Research area: llm, vision, robotics, bioinfo, rl, graph, etc."
    },
    "days": {
      "type": "integer",
      "default": 7,
      "description": "Number of days to look back"
    },
    "include_papers": {
      "type": "boolean",
      "default": true,
      "description": "Include papers from arXiv"
    },
    "include_repos": {
      "type": "boolean",
      "default": true,
      "description": "Include GitHub repositories"
    }
  },
  "required": ["area"]
}
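A caller can enforce the schema's required field and fill in its defaults before dispatching. A minimal sketch of that check — the `apply_defaults` helper is hypothetical, not part of the server:

```python
# Defaults taken from the input schema above.
SCHEMA_DEFAULTS = {"days": 7, "include_papers": True, "include_repos": True}

def apply_defaults(arguments: dict) -> dict:
    """Hypothetical helper: require 'area' and merge in schema defaults."""
    if "area" not in arguments:
        raise ValueError("'area' is required")
    # Caller-supplied values override the schema defaults.
    return {**SCHEMA_DEFAULTS, **arguments}

args = apply_defaults({"area": "robotics", "days": 14})
print(args["days"], args["include_papers"])  # 14 True
```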

Implementation Reference

  • MCP server handler for 'search_by_area' tool, which searches papers and repositories by area using client methods.
    async def _search_by_area(
        self,
        area: str,
        days: int = 7,
        include_papers: bool = True,
        include_repos: bool = True,
    ) -> str:
        """Search by research area."""
        results = []
        if include_papers:
            papers = await asyncio.to_thread(
                self.arxiv.get_latest_by_area,
                area=area,
                days=days,
            )
            results.append(f"## Papers ({len(papers)})\n\n{self._format_papers(papers)}")
        if include_repos:
            repos = await asyncio.to_thread(
                self.github.search_by_area,
                area=area,
                days=days,
            )
            results.append(f"## Repositories ({len(repos)})\n\n{self._format_repos(repos)}")
        return f"# AI Research: {area.upper()}\n\n" + "\n\n".join(results)
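The handler offloads the synchronous arXiv and GitHub clients to worker threads with `asyncio.to_thread`, so the event loop stays responsive while they run. A self-contained sketch of the same pattern — `fetch_papers` is a hypothetical stand-in for a blocking client call:

```python
import asyncio

# Hypothetical blocking call standing in for self.arxiv.get_latest_by_area.
def fetch_papers(area: str, days: int) -> list:
    return [f"{area}-paper-{i}" for i in range(days)]

async def main() -> str:
    # asyncio.to_thread runs the blocking function in a worker thread;
    # other coroutines can continue while it executes.
    papers = await asyncio.to_thread(fetch_papers, "llm", 3)
    return f"## Papers ({len(papers)})"

print(asyncio.run(main()))  # ## Papers (3)
```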
  • Core implementation of search_by_area in GitHubClient, retrieves keywords by area and calls search_repositories.
    def search_by_area(
        self,
        area: str,
        min_stars: int = 50,
        days: int = 30,
        max_results: int = 25,
    ) -> List[Dict]:
        """Search repositories by research area.

        Args:
            area: Research area (e.g., 'llm', 'robotics', 'bioinfo')
            min_stars: Minimum number of stars
            days: Look back this many days
            max_results: Maximum number of results

        Returns:
            List of repository dictionaries
        """
        keywords = self.AI_KEYWORDS.get(area.lower())
        if not keywords:
            raise ValueError(f"Unknown area: {area}. Valid areas: {list(self.AI_KEYWORDS.keys())}")
        return self.search_repositories(
            keywords=keywords[:3],  # Use top 3 keywords to avoid an overly restrictive search
            min_stars=min_stars,
            pushed_since=f"{days}d",
            sort_by="stars",
            max_results=max_results,
        )
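The lookup-and-fail path can be exercised in isolation. A minimal stand-in for the keyword lookup — `lookup_keywords` is a hypothetical function mirroring the method's first branch, with a trimmed copy of the keyword map:

```python
# Trimmed copy of AI_KEYWORDS for illustration; the server holds the full map.
AI_KEYWORDS = {
    "robotics": ["robotics", "robot learning", "embodied AI", "manipulation", "navigation"],
    "rl": ["reinforcement learning", "multi-agent", "game AI", "AlphaGo"],
}

def lookup_keywords(area: str) -> list:
    """Hypothetical mirror of the lookup in GitHubClient.search_by_area."""
    keywords = AI_KEYWORDS.get(area.lower())
    if not keywords:
        raise ValueError(f"Unknown area: {area}. Valid areas: {list(AI_KEYWORDS)}")
    return keywords[:3]  # top 3 keywords, as in the server

print(lookup_keywords("Robotics"))  # ['robotics', 'robot learning', 'embodied AI']
```

Note that the lowercasing makes the area argument case-insensitive, while an unmapped area fails fast with the list of valid keys.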
  • Registration of the 'search_by_area' MCP tool including name, description, and input schema.
    Tool(
        name="search_by_area",
        description="Search papers and repos by research area (llm, vision, robotics, bioinfo, etc.)",
        inputSchema={
            "type": "object",
            "properties": {
                "area": {
                    "type": "string",
                    "description": "Research area: llm, vision, robotics, bioinfo, rl, graph, etc.",
                },
                "days": {
                    "type": "integer",
                    "description": "Number of days to look back",
                    "default": 7,
                },
                "include_papers": {
                    "type": "boolean",
                    "description": "Include papers from arXiv",
                    "default": True,
                },
                "include_repos": {
                    "type": "boolean",
                    "description": "Include GitHub repositories",
                    "default": True,
                },
            },
            "required": ["area"],
        },
    ),
  • AI_KEYWORDS dictionary mapping research areas to the keyword lists used by search_by_area. Note that its keys (core_ai, multimodal, applications, infrastructure, robotics, bioinfo, science, rl, graph, recsys, timeseries, emerging) do not include the llm and vision areas advertised in the tool description; passing an unmapped area raises ValueError.
    # Keywords by AI research area
    AI_KEYWORDS = {
        "core_ai": ["LLM", "transformer", "diffusion", "GPT", "neural network", "deep learning"],
        "multimodal": ["CLIP", "stable diffusion", "text-to-image", "vision-language"],
        "applications": ["RAG", "AI agent", "langchain", "prompt engineering", "RLHF"],
        "infrastructure": ["pytorch", "tensorflow", "vLLM", "model optimization"],
        "robotics": ["robotics", "robot learning", "embodied AI", "manipulation", "navigation"],
        "bioinfo": ["bioinformatics", "protein folding", "alphafold", "drug discovery", "genomics"],
        "science": ["AI4Science", "scientific machine learning", "physics-informed neural networks"],
        "rl": ["reinforcement learning", "multi-agent", "game AI", "AlphaGo"],
        "graph": ["graph neural network", "GNN", "molecular modeling"],
        "recsys": ["recommender systems", "personalization"],
        "timeseries": ["time series forecasting", "anomaly detection"],
        "emerging": ["federated learning", "neuromorphic computing", "quantum machine learning"],
    }
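The search_repositories method itself is not shown here. One plausible sketch of how the selected keywords and filters could translate into a GitHub search query — the query format below relies on GitHub's standard `stars:` and `pushed:` qualifiers, but OR-joining the keywords is an assumption, not the server's confirmed implementation:

```python
from datetime import date, timedelta

def build_query(keywords: list, min_stars: int, days: int) -> str:
    """Hypothetical GitHub search query builder.

    stars:>=N and pushed:>YYYY-MM-DD are standard GitHub search
    qualifiers; joining keywords with OR is an assumption about
    how this server might combine them.
    """
    since = (date.today() - timedelta(days=days)).isoformat()
    # Quote multi-word keywords so they match as phrases.
    terms = " OR ".join(f'"{k}"' if " " in k else k for k in keywords)
    return f"{terms} stars:>={min_stars} pushed:>{since}"

print(build_query(["GNN", "graph neural network"], min_stars=50, days=30))
```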

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/nanyang12138/AI-Research-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.