scrape_multiple_webpages

Concurrently scrape multiple URLs using the Scrapy MCP Server. Extract data efficiently with configurable scraping methods (auto, simple, Scrapy, Selenium) and an optional per-field extraction configuration applied to every page.

Instructions

Scrape multiple webpages concurrently.

This tool allows you to scrape multiple URLs at once, which is much faster than scraping them one by one. All URLs will be processed concurrently.
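
For illustration, a minimal client-side sketch using the official Python MCP SDK over stdio is shown below. The launch command (python -m scrapy_mcp) is an assumption, and the arguments are wrapped in a "request" object to match the Input Schema below; note that the handler under Implementation Reference takes urls, method, and extract_config directly, so depending on the server version the flat form may be expected instead.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Assumed launch command for the Scrapy MCP Server; adjust to your install.
        server = StdioServerParameters(command="python", args=["-m", "scrapy_mcp"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "scrape_multiple_webpages",
                    arguments={
                        "request": {
                            "urls": ["https://example.com", "https://example.org"],
                            "method": "auto",
                        }
                    },
                )
                # result.content holds the serialized BatchScrapeResponse payload.
                print(result.content)

    asyncio.run(main())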

Input Schema

Name    | Required | Description | Default
request | Yes      |             |

Input Schema (JSON Schema)

{
  "$defs": {
    "MultipleScrapeRequest": {
      "description": "Request model for scraping multiple URLs.",
      "properties": {
        "extract_config": {
          "anyOf": [
            { "additionalProperties": true, "type": "object" },
            { "type": "null" }
          ],
          "default": null,
          "description": "Configuration for data extraction",
          "title": "Extract Config"
        },
        "method": {
          "default": "auto",
          "description": "Scraping method: auto, simple, scrapy, selenium",
          "title": "Method",
          "type": "string"
        },
        "urls": {
          "description": "List of URLs to scrape",
          "items": { "type": "string" },
          "title": "Urls",
          "type": "array"
        }
      },
      "required": ["urls"],
      "title": "MultipleScrapeRequest",
      "type": "object"
    }
  },
  "properties": {
    "request": {
      "$ref": "#/$defs/MultipleScrapeRequest",
      "title": "Request"
    }
  },
  "required": ["request"],
  "type": "object"
}
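
For illustration only, a payload conforming to MultipleScrapeRequest might look like the following Python dict. The URLs and CSS selectors are placeholders, and the extract_config keys (selector, multiple, attr) follow the example given in the handler's parameter description below.

    request = {
        "urls": ["https://example.com", "https://example.org/news"],
        "method": "auto",  # one of: auto, simple, scrapy, selenium
        "extract_config": {
            # Shorthand form: field name mapped to a selector string.
            "title": "h1",
            # Config-object form: selector plus options
            # (here, collect the href attribute of every match).
            "links": {"selector": "a", "multiple": True, "attr": "href"},
        },
    }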

Implementation Reference

  • Main handler function decorated with @app.tool(). Validates inputs, calls web_scraper.scrape_multiple_urls for concurrent scraping, processes results into BatchScrapeResponse with individual ScrapeResponse objects.
    @app.tool()
    async def scrape_multiple_webpages(
        urls: Annotated[
            List[str],
            Field(
                ...,
                description="""List of URLs to scrape. Each URL must include a protocol prefix
                (http:// or https://). URLs are scraped concurrently for efficiency.
                Example: ["https://example.com", "https://another.com"]""",
            ),
        ],
        method: Annotated[
            str,
            Field(
                default="auto",
                description="""Scraping method:
                auto (automatically choose the best method),
                simple (fast HTTP requests, suited to static pages),
                scrapy (Scrapy framework, suited to batch and large-scale crawling),
                selenium (browser rendering, supports JavaScript and dynamic content)""",
            ),
        ],
        extract_config: Annotated[
            Optional[Dict[str, Any]],
            Field(
                default=None,
                description="""Unified data-extraction configuration applied to all URLs.
                Format: a dict whose keys are field names and whose values are selectors
                or config objects.
                Example: {"title": "h1", "links": {"selector": "a", "multiple": true, "attr": "href"}}""",
            ),
        ],
    ) -> BatchScrapeResponse:
        """
        Scrape multiple webpages concurrently.

        This tool allows you to scrape multiple URLs at once, which is much faster
        than scraping them one by one. All URLs will be processed concurrently.
        """
        try:
            # Validate inputs
            if not urls:
                raise ValueError("URLs list cannot be empty")

            for url in urls:
                parsed = urlparse(url)
                if not parsed.scheme or not parsed.netloc:
                    raise ValueError(f"Invalid URL format: {url}")

            if method not in ["auto", "simple", "scrapy", "selenium"]:
                raise ValueError("Method must be one of: auto, simple, scrapy, selenium")

            logger.info(f"Scraping {len(urls)} webpages with method: {method}")

            # Validate extract_config if provided
            parsed_extract_config = None
            if extract_config:
                if not isinstance(extract_config, dict):
                    return BatchScrapeResponse(
                        success=False,
                        total_urls=len(urls),
                        successful_count=0,
                        failed_count=len(urls),
                        results=[],
                        summary={"error": "extract_config must be a dictionary"},
                    )
                parsed_extract_config = extract_config

            results = await web_scraper.scrape_multiple_urls(
                urls=urls,
                method=method,
                extract_config=parsed_extract_config,
            )

            # Convert results to ScrapeResponse objects
            scrape_responses = []
            for i, result in enumerate(results):
                url = urls[i]
                if "error" in result:
                    response = ScrapeResponse(
                        success=False, url=url, method=method, error=result["error"]
                    )
                else:
                    response = ScrapeResponse(
                        success=True, url=url, method=method, data=result
                    )
                scrape_responses.append(response)

            successful_count = sum(1 for r in scrape_responses if r.success)
            failed_count = len(scrape_responses) - successful_count

            return BatchScrapeResponse(
                success=True,
                total_urls=len(urls),
                successful_count=successful_count,
                failed_count=failed_count,
                results=scrape_responses,
                summary={
                    "total": len(urls),
                    "successful": successful_count,
                    "failed": failed_count,
                    "method_used": method,
                },
            )

        except Exception as e:
            logger.error(f"Error scraping multiple webpages: {str(e)}")
            return BatchScrapeResponse(
                success=False,
                total_urls=len(urls),
                successful_count=0,
                failed_count=len(urls),
                results=[],
                summary={"error": str(e)},
            )
  • Pydantic model defining the output schema for batch scraping response, including overall success, counts, individual results, and summary.
    class BatchScrapeResponse(BaseModel):
        """Response model for batch scraping operations."""

        success: bool = Field(..., description="Whether the overall operation succeeded")
        total_urls: int = Field(..., description="Total number of URLs")
        successful_count: int = Field(..., description="Number of successfully scraped URLs")
        failed_count: int = Field(..., description="Number of failed URLs")
        results: List[ScrapeResponse] = Field(..., description="Scrape result for each URL")
        summary: Dict[str, Any] = Field(..., description="Summary information for the batch operation")
  • Pydantic model used for individual scrape results within the batch response (a short consumption sketch follows this list).
    class ScrapeResponse(BaseModel):
        """Response model for scraping operations."""

        success: bool = Field(..., description="Whether the operation succeeded")
        url: str = Field(..., description="The URL that was scraped")
        method: str = Field(..., description="Scraping method used")
        data: Optional[Dict[str, Any]] = Field(default=None, description="Scraped data")
        metadata: Optional[Dict[str, Any]] = Field(default=None, description="Page metadata")
        error: Optional[str] = Field(default=None, description="Error message (if any)")
        timestamp: datetime = Field(default_factory=datetime.now, description="Timestamp of the scrape")
  • FastMCP tool registration decorator applied to the scrape_multiple_webpages handler function.
    @app.tool()
  • Call to the underlying web_scraper.scrape_multiple_urls helper, imported from .scraper, which performs the actual concurrent scraping (a hypothetical concurrency sketch follows this list).
    results = await web_scraper.scrape_multiple_urls(
        urls=urls,
        method=method,
        extract_config=parsed_extract_config,
    )
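
The helper's implementation is not shown on this page. As a rough sketch only, a concurrent implementation could follow the asyncio.gather pattern below; scrape_url is a hypothetical stand-in for the per-URL scraper, not the actual .scraper code, and the {"error": ...} convention mirrors how the handler above inspects each result.

    import asyncio
    from typing import Any, Dict, List, Optional

    async def scrape_url(
        url: str, method: str, extract_config: Optional[Dict[str, Any]]
    ) -> Dict[str, Any]:
        # Stand-in for the real per-URL scraper (simple HTTP / Scrapy / Selenium dispatch).
        raise NotImplementedError

    async def scrape_multiple_urls(
        urls: List[str],
        method: str = "auto",
        extract_config: Optional[Dict[str, Any]] = None,
    ) -> List[Dict[str, Any]]:
        """Hypothetical sketch: scrape every URL concurrently without letting one
        failure abort the whole batch."""

        async def scrape_one(url: str) -> Dict[str, Any]:
            try:
                return await scrape_url(url, method=method, extract_config=extract_config)
            except Exception as exc:
                # Surface failures as {"error": ...} so the caller can tell them apart.
                return {"error": str(exc)}

        return await asyncio.gather(*(scrape_one(url) for url in urls))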
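
As a usage note for the two response models, a caller that receives a BatchScrapeResponse might inspect it along these lines; the field names come from the models above, everything else is illustrative.

    def summarize(batch: BatchScrapeResponse) -> None:
        # Print overall counts followed by the per-URL outcome.
        method_used = batch.summary.get("method_used", "unknown")
        print(f"{batch.successful_count}/{batch.total_urls} pages scraped (method: {method_used})")
        for item in batch.results:
            if item.success:
                print(f"OK   {item.url}: fields={sorted((item.data or {}).keys())}")
            else:
                print(f"FAIL {item.url}: {item.error}")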

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ThreeFish-AI/scrapy-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.