Scholar MCP
Local MCP server that searches Google Scholar. Scrapes results with requests + BeautifulSoup -- no API keys, no paid services.
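The server's scraping is done with requests + BeautifulSoup; as a rough stdlib-only illustration of the same extraction idea (the `gs_rt` class name for result titles is taken from Scholar's current markup and may change — this is not the repo's actual code), a title extractor can be sketched like this:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collect the text inside <h3 class="gs_rt"> blocks (Scholar's result titles)."""

    def __init__(self):
        super().__init__()
        self.in_title = False   # currently inside a title <h3>?
        self.depth = 0          # nested tags inside the title
        self.titles = []
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if self.in_title:
            self.depth += 1
        elif tag == "h3" and ("class", "gs_rt") in attrs:
            self.in_title = True
            self.depth = 0
            self._buf = []

    def handle_endtag(self, tag):
        if self.in_title:
            if self.depth == 0:
                self.titles.append("".join(self._buf).strip())
                self.in_title = False
            else:
                self.depth -= 1

    def handle_data(self, data):
        if self.in_title:
            self._buf.append(data)

sample = '<div><h3 class="gs_rt"><a href="#">Attention Is All You Need</a></h3></div>'
parser = TitleExtractor()
parser.feed(sample)
# parser.titles -> ["Attention Is All You Need"]
```

The real server uses BeautifulSoup's CSS selectors instead, which is why it needs no API key: everything comes from parsing the public results page.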
Tools
search_papers_by_topic -- search by keywords, optional year range, paginated
get_author_papers -- find papers by author name, paginated
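The year-range and pagination options map naturally onto Google Scholar's public URL parameters (`as_ylo`/`as_yhi` for year bounds, `start` for the result offset). This hedged sketch shows how such a query URL can be built — the function name and parameter handling are illustrative assumptions, not the server's actual code:

```python
from urllib.parse import urlencode

BASE = "https://scholar.google.com/scholar"

def build_search_url(query, year_start=None, year_end=None, page=0, page_size=10):
    """Build a Google Scholar search URL (illustrative sketch only).

    Scholar paginates with a `start` offset and filters years with
    `as_ylo` / `as_yhi`.
    """
    params = {"q": query, "hl": "en", "start": page * page_size}
    if year_start is not None:
        params["as_ylo"] = year_start  # lower bound on publication year
    if year_end is not None:
        params["as_yhi"] = year_end    # upper bound on publication year
    return f"{BASE}?{urlencode(params)}"

# Example: second page of results for LLM papers from 2024 onward
url = build_search_url("large language models", year_start=2024, page=1)
```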
Install
Clone and install:
git clone https://github.com/ProPriyam/Scholar-MCP.git
cd Scholar-MCP
pip install -e .

Or run directly without cloning (needs uv):

uvx --from git+https://github.com/ProPriyam/Scholar-MCP scholar-mcp

Client setup
All configs use python -m scholar_mcp.server to start the server. This avoids PATH issues that pip install can cause on Windows.
VS Code
Add to .vscode/mcp.json:
{
"servers": {
"scholarMcp": {
"type": "stdio",
"command": "python",
"args": ["-m", "scholar_mcp.server"],
"env": {
"PYTHONUNBUFFERED": "1"
}
}
}
}

OpenCode
Add to opencode.json in your project root:
{
"$schema": "https://opencode.ai/config.json",
"mcp": {
"scholar_mcp": {
"type": "local",
"command": ["python", "-m", "scholar_mcp.server"],
"enabled": true,
"environment": {
"PYTHONUNBUFFERED": "1"
}
}
}
}

Claude Code
claude mcp add --transport stdio --scope project scholar-mcp -- python -m scholar_mcp.server

Configuration
All optional. Set as environment variables.
| Variable | Default | Description |
| --- | --- | --- |
| | Chrome-like UA | User-Agent header for requests |
| | | HTTP timeout in seconds |
| | | Minimum delay between requests (seconds) |
| | | Retry attempts on failure |
| | | Backoff multiplier between retries |
| | none | HTTP/HTTPS proxy URL |
| | | Max results per request |
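The retry and backoff settings suggest a standard exponential-backoff loop around each scrape. A minimal sketch of that pattern (function and parameter names here are illustrative assumptions, not the server's actual code):

```python
import time

def fetch_with_retry(fetch, retries=3, backoff=2.0, base_delay=1.0, sleep=time.sleep):
    """Call fetch() until it succeeds, waiting base_delay * backoff**attempt
    seconds between failures. Illustrative only; not the repo's real code."""
    last_exc = None
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as exc:
            last_exc = exc
            if attempt < retries - 1:
                sleep(base_delay * (backoff ** attempt))
    raise last_exc  # all attempts failed

# Example: a request that succeeds on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("blocked")
    return "ok"

result = fetch_with_retry(flaky, retries=3, sleep=lambda s: None)
```

Multiplying the delay by the backoff factor on each failure is what keeps repeated retries from hammering Scholar while it is rate-limiting you.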
Notes
This server scrapes Google Scholar's HTML, so it can break whenever Google changes the markup or blocks automated requests.
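One practical consequence: a blocked request often still returns HTTP 200, just with a CAPTCHA or "unusual traffic" page instead of results. A heuristic check like the following can distinguish the two — the marker strings are assumptions based on commonly reported Google block pages, not guaranteed behavior:

```python
def looks_blocked(status_code, body):
    """Heuristically detect a Google block page.

    The 429 status and the marker strings are assumptions drawn from
    commonly reported block responses; Google may change them at any time.
    """
    if status_code == 429:  # explicit rate limiting
        return True
    markers = ("unusual traffic", "/sorry/", "captcha")
    text = body.lower()
    return any(m in text for m in markers)

# A normal results page passes the check:
# looks_blocked(200, '<h3 class="gs_rt">...</h3>') -> False
```

When this trips, backing off for longer (or routing through the proxy variable above) is usually the only remedy.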