ScrapeGraph MCP Server (Official)

smartcrawler_initiate

Initiate intelligent web crawling to extract structured data or convert pages to markdown. Set parameters like URL, crawl depth, and domain constraints for multi-page extraction tasks.

Instructions

Initiate a SmartCrawler request for intelligent multi-page web crawling.

SmartCrawler supports two modes:

- AI Extraction Mode (10 credits per page): extracts structured data based on your prompt
- Markdown Conversion Mode (2 credits per page): converts pages to clean markdown

Args:
    url: Starting URL to crawl
    prompt: AI prompt for data extraction (required for AI mode)
    extraction_mode: "ai" for AI extraction or "markdown" for markdown conversion (default: "ai")
    depth: Maximum link traversal depth (optional)
    max_pages: Maximum number of pages to crawl (optional)
    same_domain_only: Whether to crawl only within the same domain (optional)

Returns:
    Dictionary containing the request ID for async processing
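
For illustration, the argument objects below sketch what a call might look like in each mode. The field names follow the input schema further down; the URLs, prompt text, and numeric values are made-up placeholders, not values from the source.

AI extraction mode (structured data, 10 credits per page):

{
  "url": "https://example.com/blog",
  "prompt": "Extract the title, author, and publication date of each post",
  "extraction_mode": "ai",
  "depth": 2,
  "max_pages": 10,
  "same_domain_only": true
}

Markdown conversion mode (clean markdown, 2 credits per page; no prompt needed):

{
  "url": "https://example.com/docs",
  "extraction_mode": "markdown",
  "max_pages": 5,
  "same_domain_only": true
}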

Input Schema

| Name             | Required | Description                                                   | Default |
|------------------|----------|---------------------------------------------------------------|---------|
| depth            | No       | Maximum link traversal depth                                  |         |
| extraction_mode  | No       | "ai" for AI extraction or "markdown" for markdown conversion  | ai      |
| max_pages        | No       | Maximum number of pages to crawl                              |         |
| prompt           | No       | AI prompt for data extraction (required for AI mode)          |         |
| same_domain_only | No       | Whether to crawl only within the same domain                  |         |
| url              | Yes      | Starting URL to crawl                                         |         |

Input Schema (JSON Schema)

{ "properties": { "depth": { "default": null, "title": "Depth", "type": "integer" }, "extraction_mode": { "default": "ai", "title": "Extraction Mode", "type": "string" }, "max_pages": { "default": null, "title": "Max Pages", "type": "integer" }, "prompt": { "default": null, "title": "Prompt", "type": "string" }, "same_domain_only": { "default": null, "title": "Same Domain Only", "type": "boolean" }, "url": { "title": "Url", "type": "string" } }, "required": [ "url" ], "title": "smartcrawler_initiateArguments", "type": "object" }

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ScrapeGraphAI/scrapegraph-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.