
Scholar Feed MCP Server

by YGao2005

Server Configuration

Describes the environment variables required to run the server.

Name            | Required | Description                                 | Default
SF_API_KEY      | Yes      | Your Scholar Feed API key (starts with sf_) |
SF_API_BASE_URL | No       | Override API base URL                       |
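A typical way to supply these variables is through an MCP client configuration file. The snippet below is a sketch in the Claude-Desktop-style `mcpServers` format; the server command name and the key value are placeholders, not taken from this page.

```json
{
  "mcpServers": {
    "scholar-feed": {
      "command": "scholar-feed-mcp",
      "env": {
        "SF_API_KEY": "sf_your_key_here"
      }
    }
  }
}
```

Only SF_API_KEY is required; SF_API_BASE_URL can be added to the same env block to point the server at a different API host.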

Capabilities

Features and capabilities supported by this server

Capability | Details
tools      | { "listChanged": true }
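The listChanged capability declares that the server may change its tool list at runtime. Per the MCP specification, clients are informed of this through a standard JSON-RPC notification (shown here generically — this is protocol-level, not specific to this server):

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tools/list_changed"
}
```

A client that receives this notification should re-issue a tools/list request to refresh its view of the available tools.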

Tools

Functions exposed to the LLM to take actions

Name | Description
check_connection

Verify your Scholar Feed API key is working. Returns connection status, subscription plan, key name, and today's API usage count.

search_papers

Search Scholar Feed's 560k+ CS/AI/ML paper corpus by keyword. Returns papers with LLM-generated summaries, novelty scores, and structured extraction data (method, task, contribution type). Supports filtering by category, novelty, recency, method, task, dataset, contribution type, and whether papers have benchmark results.

get_paper

Get full details for a single paper by arXiv ID. Returns title, authors, year, LLM summary, novelty score, links, and structured extraction data (method_name, contribution_type, task_category, datasets, baselines). Use fields='abstract' to include the abstract. Use get_paper_results for benchmark scores, or fetch_fulltext with sections='all' for the full paper content.

find_similar

Find papers similar to a given paper. Uses precomputed bibliographic coupling + embedding similarity (updated daily).

get_citations

Get the citation graph for a paper. 'citing' = outgoing references this paper cites; 'cited_by' = incoming citations from other papers.

whats_trending

Get today's trending CS/AI papers ranked by a composite score of recency, citation velocity, and institutional reputation. Papers from the last 7 days.

fetch_fulltext

Extract paper content from an arXiv paper's LaTeX source. Two modes: 'results' (default) returns 800 chars of results/experiments + 3 table captions. 'all' returns full paper sections (abstract, introduction, related work, method, results, conclusion) at up to 3000 chars each + 5 table captions. ~62% of arXiv papers have LaTeX source. May take a few seconds.

batch_lookup

Look up multiple papers at once by arXiv ID. Returns details for found papers and lists not-found IDs.

fetch_repo

Get the GitHub repository summary for a paper — README content and file tree. Only works for papers with an associated code URL.

export_bibtex

Export BibTeX entries for one or more arXiv papers. Returns formatted BibTeX text ready for use in LaTeX documents or reference managers.

deep_research

Run a deep research session on a topic. Searches 512k+ CS/AI papers, synthesizes findings with an LLM into a structured report with clusters, gap analysis, and evidence chains. Takes 60-300 seconds depending on depth; the 'quick' depth (~60s) is most reliable. Returns the full structured report as JSON.

refine_research

Ask a follow-up question on a completed deep_research report. Finds new papers not seen in the original report and synthesizes a focused follow-up analysis. Requires the report_id from a previous deep_research call. Takes 20-60 seconds.

get_paper_results

Get structured benchmark results for a paper. Returns quantitative results extracted from the paper: datasets evaluated, metrics, numeric scores, model comparisons, and baselines. Use this after get_paper to see how a paper performed on benchmarks.

get_leaderboard

Get the SOTA leaderboard for a dataset/benchmark (e.g. ImageNet, MMLU, GSM8K, SWE-bench). Returns top methods/models ranked by score. Only includes papers with absolute numeric results. Powered by 59k+ extracted benchmark results across 20k+ datasets.

search_benchmarks

Search for datasets/benchmarks by name. Returns matching benchmark names with paper counts and available metrics. Use this to find the exact benchmark name before calling get_leaderboard. Covers 20k+ datasets from 24k+ papers.

search_by_method

Search papers by method or technique name (e.g. 'LoRA', 'YOLO', 'DPO', 'attention'). Unlike keyword search, this searches the structured method_name field extracted from 78k+ papers. Returns papers that introduce or evaluate the method, with benchmark result counts.

compare_methods

Compare 2-10 models/methods side-by-side across shared benchmarks. Finds datasets where at least 2 of the specified models have been evaluated, enabling direct score comparison. Example: compare GPT-4, LLaMA-3, and Mistral across MMLU, GSM8K, etc.

discover_authors

Discover researchers by topic (semantic search) or name. For research topics like 'efficient LLM inference' or 'graph neural networks', uses embedding similarity to find relevant authors. For short name queries, uses fuzzy name matching. Returns h-index, paper counts, research topics, and rank scores.

get_author

Get detailed author profile by ID (from discover_authors results). Returns h-index, total citations, global rank, research topics, novelty scores, and their top 10 papers by rank score.

get_author_papers

Get all papers by an author (paginated, sorted by rank score). Use discover_authors to find the author_id first. Returns the same paper fields as search_papers.

get_benchmark_timeline

Get raw benchmark score data points over time for a dataset+metric. Returns individual (paper, date, score, value_string) entries ordered chronologically. No trend lines or interpretation — raw scatter data. Use search_benchmarks first to find the exact dataset and metric names.

get_benchmark_stats

Get score distribution statistics for a dataset+metric across all papers. Returns min, max, median, mean, p25, p75, stddev, and count. Use this to contextualize a paper's claims — e.g., 'For MMLU accuracy, the median is 72.5% across 45 papers, range 33%-95%.' No judgment or outlier flags — just raw statistics.

get_research_landscape

Get aggregated research landscape statistics for a topic. Uses semantic search to find relevant papers, then returns count-based aggregates: methods used (with paper counts), benchmarks evaluated (with paper counts), active authors, contribution type distribution, publication velocity by month, and novelty score distribution. All data is factual counts — no rankings or editorial labels.
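All of the tools above are invoked through the standard MCP tools/call request. As a sketch of what a client sends for search_papers (the argument name "query" is an assumption — the exact parameter names are not documented on this page):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_papers",
    "arguments": {
      "query": "diffusion models for code generation"
    }
  }
}
```

Most MCP clients build this envelope automatically; the LLM only chooses the tool name and its arguments.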

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/YGao2005/scholar-feed-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.