
Server Configuration

Describes the environment variables used to configure the server.

Name: SF_CLI_PATH
Required: No
Description: Path to the Screaming Frog CLI executable. Auto-detected on macOS; set manually on Windows or custom installs.
Default: auto-detected (macOS)
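The override-or-auto-detect behavior can be sketched as below. This is a hypothetical sketch, not the server's actual detection logic, and the macOS executable path shown is an assumption about a typical install location:

```python
import os

# Assumed default install location on macOS; the server's real
# auto-detection may probe other paths.
DEFAULT_MAC_CLI = (
    "/Applications/Screaming Frog SEO Spider.app"
    "/Contents/MacOS/ScreamingFrogSEOSpiderCli"
)

def resolve_sf_cli() -> str:
    # SF_CLI_PATH, when set, takes precedence over auto-detection.
    return os.environ.get("SF_CLI_PATH", DEFAULT_MAC_CLI)
```

On Windows or a non-standard install, setting SF_CLI_PATH explicitly skips detection entirely.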

Capabilities

Features and capabilities supported by this server

tools:
{
  "listChanged": false
}
prompts:
{
  "listChanged": false
}
resources:
{
  "subscribe": false,
  "listChanged": false
}
experimental:
{}

Tools

Functions exposed to the LLM to take actions

sf_check

Verify that Screaming Frog SEO Spider is installed and the CLI is accessible. Returns version info and license status.

crawl_site

Start a background Screaming Frog crawl that saves to SF's internal database.

Args:
  url: The URL to crawl (e.g. https://example.com)
  config_file: Optional path to a .seospiderconfig file for crawl settings (including crawl limits)
  label: Optional label for identifying this crawl (e.g. 'freshgovjobs')

Returns: A crawl_id for use with crawl_status. The crawl runs in the background; poll crawl_status to check progress.

Note: To limit the number of URLs crawled, export a .seospiderconfig from the SF GUI with the desired crawl limit, then pass it via config_file.
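An MCP client invokes a tool like crawl_site with a tools/call JSON-RPC request. A sketch of such a payload follows; the argument values are illustrative, and the config path and label are hypothetical:

```python
import json

# Illustrative tools/call request for crawl_site.
# Only "url" is required; the other arguments are optional.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crawl_site",
        "arguments": {
            "url": "https://example.com",
            "config_file": "/path/to/limits.seospiderconfig",  # optional
            "label": "example-crawl",                           # optional
        },
    },
}
print(json.dumps(request, indent=2))
```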

crawl_status

Check the status of a running or completed crawl.

Args:
  crawl_id: The crawl_id returned by crawl_site
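Since crawl_site returns immediately, a client typically polls crawl_status until the crawl finishes. A minimal polling sketch, where `call_tool(name, arguments)` stands in for your MCP client's tool-call helper and the "status" field and its values are assumptions about the tool's output shape:

```python
import time

def poll_crawl(call_tool, crawl_id: str, interval: float = 30.0,
               timeout: float = 3600.0):
    """Poll crawl_status until the crawl completes or fails."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = call_tool("crawl_status", {"crawl_id": crawl_id})
        # Assumed terminal states; adjust to the tool's actual output.
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"crawl {crawl_id} did not finish within {timeout}s")
```

A generous interval (tens of seconds) is reasonable, since full-site crawls can run for a long time.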

list_crawls

List all crawls saved in Screaming Frog's internal database. Returns crawl names, Database IDs, and sizes. Use the Database ID with export_crawl or delete_crawl.

export_crawl

Load a saved crawl from SF's database and export data as CSV files.

Args:
  db_id: The Database ID from list_crawls (e.g. '1234' or a crawl identifier)
  export_tabs: Comma-separated export tabs (default: Internal:All,Response Codes:All,Page Titles:All,Meta Description:All,H1:All,H2:All,Images:All,Canonicals:All,Directives:All). See the export-reference resource for all options.
  bulk_export: Optional bulk export types (e.g. 'All Inlinks,All Outlinks')
  save_report: Optional reports to save (e.g. 'Crawl Overview')

Returns: An export_id and list of generated CSV files. Use read_crawl_data to read them.
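A sketch of a tools/call request for export_crawl; the db_id is illustrative (it comes from list_crawls), and the tab and report names mirror the examples listed above:

```python
# Illustrative arguments for export_crawl; only db_id is required.
export_args = {
    "db_id": "1234",
    "export_tabs": "Internal:All,Response Codes:All,Page Titles:All",
    "bulk_export": "All Inlinks,All Outlinks",  # optional
    "save_report": "Crawl Overview",            # optional
}
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "export_crawl", "arguments": export_args},
}
```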

read_crawl_data

Read CSV data from an export. Use after export_crawl.

Args:
  export_id: The export_id from export_crawl
  file: CSV filename to read (from the file list in export_crawl output)
  limit: Max rows to return (default 100)
  offset: Number of rows to skip (for pagination)
  filter_column: Optional column name to filter by
  filter_value: Optional value to match in the filter column (case-insensitive substring)

Returns: CSV data as formatted text with column headers.
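The limit/offset pair supports paging through large exports. A sketch of a pagination loop, where `call_tool` again stands in for your MCP client's helper; for simplicity it assumes a structured "rows" key in the result, whereas the real tool returns formatted text:

```python
def read_all_rows(call_tool, export_id: str, file: str,
                  page_size: int = 100):
    """Page through read_crawl_data until a short page signals the end."""
    offset = 0
    rows = []
    while True:
        result = call_tool("read_crawl_data", {
            "export_id": export_id,
            "file": file,
            "limit": page_size,
            "offset": offset,
        })
        page = result.get("rows", [])
        rows.extend(page)
        if len(page) < page_size:
            return rows
        offset += page_size
```

For very large crawls, prefer filter_column/filter_value over reading every row into the conversation.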

delete_crawl

Delete a crawl from Screaming Frog's internal database to free disk space.

Args:
  db_id: The Database ID from list_crawls

WARNING: This permanently deletes the crawl data. It cannot be undone.

storage_summary

Show disk usage of Screaming Frog's internal crawl storage. Returns total size and per-crawl breakdown of ProjectInstanceData.

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

get_export_reference

Complete reference of all Screaming Frog export options.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/marykovziridze/screaming-frog-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.