Scholar Feed MCP Server
Search 560,000+ CS/AI/ML research papers with LLM-powered novelty analysis from Claude Code, Cursor, or any MCP client.
Scholar Feed indexes arXiv papers daily and ranks them using a multi-signal scoring system (recency, citation velocity, institutional reputation, code availability). Each paper has an LLM-generated summary and novelty score.
Quick Start
npx scholar-feed-mcp init
This interactive wizard will:
Ask for your API key (get one at scholarfeed.org/settings)
Detect your MCP client (Claude Code, Cursor, or Claude Desktop)
Write the config and verify the connection
That's it. Try asking: "Search for recent papers on test-time compute scaling"
What You Can Do
Technology scouting — "What novel research on retrieval-augmented generation was published this month?"
Literature review — "Find papers similar to 2401.04088 and export their BibTeX"
Trend monitoring — "What's trending in cs.CV this week? Summarize the top 3."
Deep dives — "Run a deep research session on 'reasoning in large language models'"
Benchmark tracking — "Show me the MMLU leaderboard and compare GPT-4 vs LLaMA-3"
Author discovery — "Who are the top researchers working on efficient LLM inference?"
Manual Installation
Claude Code
claude mcp add scholar-feed -e SF_API_KEY=sf_your_key_here -- npx -y scholar-feed-mcp
Cursor (.cursor/mcp.json)
{
"mcpServers": {
"scholar-feed": {
"command": "npx",
"args": ["-y", "scholar-feed-mcp"],
"env": { "SF_API_KEY": "sf_your_key_here" }
}
}
}
Claude Desktop (claude_desktop_config.json)
{
"mcpServers": {
"scholar-feed": {
"command": "npx",
"args": ["-y", "scholar-feed-mcp"],
"env": { "SF_API_KEY": "sf_your_key_here" }
}
}
}
Project-scoped (.mcp.json)
{
"mcpServers": {
"scholar-feed": {
"command": "npx",
"args": ["-y", "scholar-feed-mcp"],
"env": { "SF_API_KEY": "${SF_API_KEY}" }
}
}
}
Windows note: Use "command": "cmd" and "args": ["/c", "npx", "-y", "scholar-feed-mcp"].
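The project-scoped variant reads the key from the environment instead of committing it to the repo. One way to supply it, sketched with a placeholder key, is to export it in the shell that launches your MCP client:

```shell
# Export the key so the ${SF_API_KEY} placeholder in .mcp.json
# can be expanded when the client starts the server.
export SF_API_KEY=sf_your_key_here
echo "$SF_API_KEY"
```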
Available Tools (23)
Core Search & Discovery
| Tool | Description | Key Parameters |
| search_papers | Full-text keyword search with filters | q, novelty_min, limit |
| | Get full paper details by arXiv ID | |
| | Find similar papers via embedding + bibliographic coupling | |
| | Citation graph (outgoing refs or incoming citations) | |
| | Today's trending papers by composite score | |
| | Look up multiple papers at once | |
Paper Content
| Tool | Description | Key Parameters |
| | Extract results/experiments from LaTeX source | |
| | Get GitHub repo README + file tree | |
| | Export BibTeX for papers | |
| | Structured benchmark results from a paper | |
Benchmarks & Methods
| Tool | Description | Key Parameters |
| | Find datasets/benchmarks by name | |
| | SOTA leaderboard for a dataset | |
| | Score distribution stats (min, max, median, etc.) | |
| | Raw score data points over time | |
| | Search by technique name (LoRA, YOLO, DPO, etc.) | |
| | Side-by-side model comparison across benchmarks | |
Authors
| Tool | Description | Key Parameters |
| | Find researchers by topic or name | |
| | Detailed author profile (h-index, topics, top papers) | |
| | All papers by an author (paginated) | |
Research
| Tool | Description | Key Parameters |
| | Aggregated landscape stats for a topic | |
| | Multi-round research synthesis (30-120s) | |
| | Follow-up question on a completed research report | |
Utility
| Tool | Description | Key Parameters |
| check_connection | Verify API key, show plan and usage | — |
Novelty Score
Every paper has an llm_novelty_score from 0.0 to 1.0:
Range | Meaning | Example |
0.7+ | Paradigm shift or broad SOTA | New architecture that changes the field |
0.5-0.7 | Novel method with strong results | New training technique with clear gains |
0.3-0.5 | Incremental improvement | Applying known method to new domain |
<0.3 | Survey, dataset, or minor extension | Literature review, benchmark release |
Use novelty_min: 0.5 in search_papers to filter for genuinely novel work.
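For example, a tool call applying that filter might look like the following. The envelope shape is illustrative; q, novelty_min, and limit are the parameter names used elsewhere in this README:

```json
{
  "tool": "search_papers",
  "arguments": {
    "q": "retrieval-augmented generation",
    "novelty_min": 0.5,
    "limit": 10
  }
}
```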
Rate Limits
| Endpoint | Limit |
| | 60/min |
| | 30/min |
| | 60/min |
| | 20/min |
| | 30/min |
| | 30/min |
| | 10/min |
| | 20/min |
| | 10/min |
| | 20/min |
| | 5/min |
| | 5/min |
| | 30/min |
| | 30/min |
| | 30/min |
| | 30/min |
| | 30/min |
| | 20/min |
| | 20/min |
| | 60/min |
| | 30/min |
| | 10/min |
| | 30/min |
Responses include X-RateLimit-Limit, X-RateLimit-Remaining, and X-RateLimit-Reset headers.
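A minimal client-side throttling sketch against these headers, assuming X-RateLimit-Remaining is the number of calls left in the window and X-RateLimit-Reset is a Unix timestamp in seconds (the API may use a different convention):

```typescript
// Decide how long to sleep before the next call, given the parsed headers.
// remaining: X-RateLimit-Remaining; resetUnixSec: X-RateLimit-Reset.
function backoffMs(remaining: number, resetUnixSec: number, nowMs: number): number {
  if (remaining > 0) return 0;                     // budget left in this window
  return Math.max(0, resetUnixSec * 1000 - nowMs); // wait until the window resets
}
```

For instance, after a 429 response you would sleep for backoffMs(0, reset, Date.now()) milliseconds before retrying.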
Example Response
search_papers with q: "attention mechanism" returns:
{
"papers": [
{
"arxiv_id": "2401.04088",
"title": "Attention Is All You Need (But Not All You Get)",
"authors": ["A. Researcher", "B. Scientist"],
"year": 2024,
"categories": ["cs.LG", "cs.AI"],
"primary_category": "cs.LG",
"arxiv_url": "https://arxiv.org/abs/2401.04088",
"has_code": true,
"github_url": "https://github.com/example/repo",
"citation_count": 42,
"rank_score": 0.73,
"llm_summary": "Proposes a sparse attention variant that reduces compute by 60% while matching dense attention accuracy on 5 benchmarks.",
"llm_novelty_score": 0.55
}
],
"total": 1847,
"page": 1,
"limit": 20,
"next_cursor": "eyJzIjogMC43MywgImlkIjogIjI0MDEuMDQwODgifQ=="
}
Pass next_cursor back to get the next page (keyset pagination, more stable than page numbers for large result sets).
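The cursor is base64-encoded JSON carrying the last row's sort key and ID. Decoding the example cursor above shows the keyset; this is for illustration only, and real clients should treat cursors as opaque:

```typescript
// Decode the example cursor from the response above (Node.js Buffer API).
const cursor = "eyJzIjogMC43MywgImlkIjogIjI0MDEuMDQwODgifQ==";
const decoded = JSON.parse(Buffer.from(cursor, "base64").toString("utf8"));
// decoded.s is the rank_score and decoded.id the arxiv_id of the last row:
// { s: 0.73, id: "2401.04088" }
console.log(decoded);
```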
Verify Installation
After setup, ask your AI assistant to run check_connection. You should see:
{
"status": "ok",
"plan": "free",
"key_name": "my-key",
"usage_today": 0
}
Environment Variables
| Variable | Required | Default | Description |
| SF_API_KEY | Yes | — | Your Scholar Feed API key (starts with sf_) |
| | No | Production URL | Override API base URL |
Development
npm install
npm run build # Build to build/
npm run dev # Watch mode
npm run typecheck # Type check without emitting
npm test         # Run tests
Contributing
See CONTRIBUTING.md for guidelines.
Troubleshooting
"SF_API_KEY environment variable is required"
Your MCP client isn't passing the env var. Double-check the env block in your config matches the examples above.
"Authentication failed: your SF_API_KEY is invalid"
The key may have been revoked. Generate a new one at scholarfeed.org/settings.
Tool calls time out or fail silently
Ensure Node.js 18+ is installed (node --version). Older versions lack the native fetch API.
Stale npx cache
If you're stuck on an old version after an update: npx --yes scholar-feed-mcp@latest
Windows: "command not found"
Use "command": "cmd" with "args": ["/c", "npx", "-y", "scholar-feed-mcp"] in your MCP config.
License