# screaming-frog-mcp

MCP server that lets Claude run Screaming Frog SEO Spider headless crawls, export data, and manage crawl storage — without anyone opening the GUI.
Type a URL into Claude. Screaming Frog runs in the background. You get the data back. That's it.
Forked from bzsasson/screaming-frog-mcp v0.1.0 with bug fixes. The original had issues that made it unusable in practice — pipe deadlocks that hung crawls, false GUI detection that blocked everything after the first run, a delete command that could wipe your entire crawl database. All fixed.
## What's fixed

| Bug | Fix |
| --- | --- |
| Pipe deadlock | stdout/stderr redirected to log files instead of `PIPE`. Crawls no longer hang when SF produces large output. |
| GUI detection | False-positive GUI detection no longer blocks every crawl after the first run. |
| Stale crawl cleanup | SF leaves a temp `crawl.seospider` file behind after interrupted crawls. Now cleaned up automatically. |
| Delete safety | The delete command no longer risks wiping the entire crawl database. |
| Export dir leak | Failed exports left temp directories on disk. Now cleaned up. |
| Input validation | Stricter character allowlists for CLI arguments and `db_id`. |
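The pipe-deadlock fix follows a standard pattern: when a child process writes more output than the OS pipe buffer holds and the parent isn't draining the pipe, the child blocks forever. Redirecting to a real file avoids this entirely. A minimal sketch of the pattern — `run_logged` is an illustrative name, not the server's actual function:

```python
import subprocess
from pathlib import Path

def run_logged(cmd: list[str], log_path: Path) -> int:
    """Run a command with stdout/stderr streamed to a log file.

    Unlike subprocess.PIPE, a real file has no fixed buffer the parent
    must drain, so the child can emit unbounded output without hanging.
    """
    with open(log_path, "wb") as log:
        return subprocess.call(cmd, stdout=log, stderr=subprocess.STDOUT)
```

In the server this would wrap the Screaming Frog CLI invocation; the crawl log stays on disk and can be tailed afterwards for diagnostics.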
## Requirements

- Screaming Frog SEO Spider with a paid license (headless crawls require a license)
- Python 3.10+
- uv (recommended) or pip
## Installation

### Mac

```bash
uvx --from git+https://github.com/marykovziridze/screaming-frog-mcp screaming-frog-mcp
```

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "screaming-frog": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/marykovziridze/screaming-frog-mcp", "screaming-frog-mcp"]
    }
  }
}
```

### Windows
Install uv first:
```powershell
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
```

Add to `C:\Users\[name]\AppData\Roaming\Claude\claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "screaming-frog": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/marykovziridze/screaming-frog-mcp", "screaming-frog-mcp"],
      "env": {
        "SF_CLI_PATH": "C:\\Program Files (x86)\\Screaming Frog SEO Spider\\ScreamingFrogSEOSpiderCli.exe"
      }
    }
  }
}
```

Restart Claude Desktop after editing the config.
## Tools

The server exposes tools to:

- Verify SF is installed and licensed
- Start a headless crawl
- Check crawl progress
- List saved crawls in SF's database
- Export crawl data as CSV
- Read and filter exported CSV data
- Delete a saved crawl
- Show disk usage of crawl storage
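Reading and filtering an exported CSV boils down to streaming rows and matching a column against a substring. A minimal sketch of that idea — function and parameter names are assumptions, not the server's actual tool signature:

```python
import csv
from pathlib import Path

def read_csv_filtered(path: Path, column: str, contains: str,
                      limit: int = 100) -> list[dict]:
    """Return up to `limit` rows whose `column` contains the substring.

    Streams the file instead of loading it whole, so large exports
    stay cheap to filter. utf-8-sig tolerates a leading BOM, which
    CSV exports sometimes carry.
    """
    rows: list[dict] = []
    with open(path, newline="", encoding="utf-8-sig") as f:
        for row in csv.DictReader(f):
            if contains.lower() in row.get(column, "").lower():
                rows.append(row)
                if len(rows) >= limit:
                    break
    return rows
```

For example, filtering an `internal_all.csv` export on `Status Code` containing `404` would return only the broken pages.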
## Configuration

| Variable | Default | Notes |
| --- | --- | --- |
| `SF_CLI_PATH` | Mac: auto-detected | Set manually on Windows or custom installs |
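On a Mac with a non-standard install, `SF_CLI_PATH` can be set the same way as in the Windows config above. The path below is an assumption about a typical app-bundle layout, not a verified default — check your own install location:

```json
{
  "mcpServers": {
    "screaming-frog": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/marykovziridze/screaming-frog-mcp", "screaming-frog-mcp"],
      "env": {
        "SF_CLI_PATH": "/Applications/Screaming Frog SEO Spider.app/Contents/MacOS/ScreamingFrogSEOSpiderLauncher"
      }
    }
  }
}
```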
## Known limitations

- **Windows stale crawl path**: auto-cleanup works on Mac. On Windows, if crawls fail after an interruption, check for a `crawl.seospider` file in your SF install directory and delete it manually.
- **No crawl progress percentage**: SF's headless CLI doesn't report progress mid-crawl. You know when it starts and when it finishes.
- **Large sites**: tested on sites up to ~160 pages. Not stress-tested on 10k+ page sites.
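The manual Windows cleanup described above can be scripted. A minimal sketch, assuming the stale file sits directly inside the install directory (the filename comes from this doc; the helper name is illustrative):

```python
from pathlib import Path

def clear_stale_crawl(install_dir: Path) -> bool:
    """Delete a leftover crawl.seospider temp file if present.

    Returns True if a file was removed, False if there was nothing
    to clean up.
    """
    stale = install_dir / "crawl.seospider"
    if stale.is_file():
        stale.unlink()
        return True
    return False
```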
## License
MIT — see LICENSE
## Credits
Original MCP server by Boaz Sasson.