scrape_multiple_webpages

Concurrently scrape multiple URLs using the Scrapy MCP Server. Extract data efficiently with configurable methods (auto, simple, Scrapy, Selenium) and support for complex page structures.

Instructions

Scrape multiple webpages concurrently.

This tool allows you to scrape multiple URLs at once, which is much faster than scraping them one by one. All URLs will be processed concurrently.
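
For illustration, the arguments for one call might look like the sketch below. The field names follow the handler signature shown under Implementation Reference; the published input schema lists a single required `request` parameter, so a client may need to nest these fields inside it.

    # Illustrative arguments only -- not taken from the server's own docs.
    arguments = {
        "urls": ["https://example.com", "https://another.com"],
        "method": "auto",  # auto | simple | scrapy | selenium
        "extract_config": {
            "title": "h1",  # plain string: a CSS selector
            "links": {"selector": "a", "multiple": True, "attr": "href"},
        },
    }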

Input Schema

Name      Required   Description   Default
request   Yes        —             —

Output Schema

No arguments

Implementation Reference

  • Main handler function decorated with @app.tool(). Validates inputs, calls web_scraper.scrape_multiple_urls for concurrent scraping, processes results into BatchScrapeResponse with individual ScrapeResponse objects.
    # Imports as used by this handler; `app`, `logger`, `web_scraper`, and the
    # response models are module-level objects defined elsewhere in the server.
    from typing import Annotated, Any, Dict, List, Optional
    from urllib.parse import urlparse

    from pydantic import Field

    @app.tool()
    async def scrape_multiple_webpages(
        urls: Annotated[
            List[str],
            Field(
                ...,
                description="""要抓取的 URL 列表,每个 URL 必须包含协议前缀(http://或https://),支持并发抓取以提高效率。
                    示例:["https://example.com", "https://another.com"]""",
            ),
        ],
        method: Annotated[
            str,
            Field(
                default="auto",
                description="""抓取方法选择:
                    auto(自动选择最佳方法)、
                    simple(快速HTTP请求,适合静态页面)、
                    scrapy(Scrapy框架,适合批量处理和大规模抓取)、
                    selenium(浏览器渲染,支持JavaScript和动态内容)""",
            ),
        ],
        extract_config: Annotated[
            Optional[Dict[str, Any]],
            Field(
                default=None,
                description="""统一的数据提取配置字典,应用于所有URL。
                    格式:配置字典,键为字段名,值为选择器或配置对象。
                    示例:{"title": "h1", "links": {"selector": "a", "multiple": true, "attr": "href"}}""",
            ),
        ],
    ) -> BatchScrapeResponse:
        """
        Scrape multiple webpages concurrently.
    
        This tool allows you to scrape multiple URLs at once, which is much faster
        than scraping them one by one. All URLs will be processed concurrently.
        """
        try:
            # Validate inputs
            if not urls:
                raise ValueError("URLs list cannot be empty")
    
            for url in urls:
                parsed = urlparse(url)
                if not parsed.scheme or not parsed.netloc:
                    raise ValueError(f"Invalid URL format: {url}")
    
            if method not in ["auto", "simple", "scrapy", "selenium"]:
                raise ValueError("Method must be one of: auto, simple, scrapy, selenium")
    
            logger.info(f"Scraping {len(urls)} webpages with method: {method}")
    
            # Validate extract_config if provided
            parsed_extract_config = None
            if extract_config:
                if not isinstance(extract_config, dict):
                    return BatchScrapeResponse(
                        success=False,
                        total_urls=len(urls),
                        successful_count=0,
                        failed_count=len(urls),
                        results=[],
                        summary={"error": "extract_config must be a dictionary"},
                    )
                parsed_extract_config = extract_config
    
            results = await web_scraper.scrape_multiple_urls(
                urls=urls,
                method=method,
                extract_config=parsed_extract_config,
            )
    
            # Convert results to ScrapeResponse objects
            scrape_responses = []
            for i, result in enumerate(results):
                url = urls[i]
                if "error" in result:
                    response = ScrapeResponse(
                        success=False, url=url, method=method, error=result["error"]
                    )
                else:
                    response = ScrapeResponse(
                        success=True, url=url, method=method, data=result
                    )
                scrape_responses.append(response)
    
            successful_count = sum(1 for r in scrape_responses if r.success)
            failed_count = len(scrape_responses) - successful_count
    
            return BatchScrapeResponse(
                success=True,
                total_urls=len(urls),
                successful_count=successful_count,
                failed_count=failed_count,
                results=scrape_responses,
                summary={
                    "total": len(urls),
                    "successful": successful_count,
                    "failed": failed_count,
                    "method_used": method,
                },
            )
    
        except Exception as e:
            logger.error(f"Error scraping multiple webpages: {str(e)}")
            return BatchScrapeResponse(
                success=False,
                total_urls=len(urls),
                successful_count=0,
                failed_count=len(urls),
                results=[],
                summary={"error": str(e)},
            )
  • Pydantic model defining the output schema for batch scraping response, including overall success, counts, individual results, and summary.
    class BatchScrapeResponse(BaseModel):
        """Response model for batch scraping operations."""
    
        success: bool = Field(..., description="整体操作是否成功")
        total_urls: int = Field(..., description="总URL数量")
        successful_count: int = Field(..., description="成功抓取的数量")
        failed_count: int = Field(..., description="失败的数量")
        results: List[ScrapeResponse] = Field(..., description="每个URL的抓取结果")
        summary: Dict[str, Any] = Field(..., description="批量操作摘要信息")
  • Pydantic model used for individual scrape results within batch response.
    class ScrapeResponse(BaseModel):
        """Response model for scraping operations."""
    
        success: bool = Field(..., description="操作是否成功")
        url: str = Field(..., description="被抓取的URL")
        method: str = Field(..., description="使用的抓取方法")
        data: Optional[Dict[str, Any]] = Field(default=None, description="抓取到的数据")
        metadata: Optional[Dict[str, Any]] = Field(default=None, description="页面元数据")
        error: Optional[str] = Field(default=None, description="错误信息(如果有)")
        timestamp: datetime = Field(default_factory=datetime.now, description="抓取时间戳")
  • FastMCP tool registration decorator applied to the scrape_multiple_webpages handler function.
    @app.tool()
  • Call to underlying web_scraper.scrape_multiple_urls helper for the actual concurrent scraping logic (imported from .scraper); a hedged sketch of such a helper follows this list.
    results = await web_scraper.scrape_multiple_urls(
        urls=urls,
        method=method,
        extract_config=parsed_extract_config,
    )
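
The `web_scraper.scrape_multiple_urls` helper itself is not shown on this page. As a rough, hypothetical sketch of the concurrency it advertises — assuming aiohttp, a configurable concurrency cap, and the "error"-key result contract the handler relies on; none of these details are confirmed by the source:

    # Hypothetical sketch (NOT the actual .scraper implementation): concurrent
    # fetching with asyncio.gather, capped by a semaphore. `method` and
    # `extract_config` are accepted only to mirror the call signature above.
    import asyncio
    from typing import Any, Dict, List, Optional

    import aiohttp

    async def scrape_multiple_urls_sketch(
        urls: List[str],
        method: str = "auto",
        extract_config: Optional[Dict[str, Any]] = None,
        max_concurrency: int = 10,
    ) -> List[Dict[str, Any]]:
        semaphore = asyncio.Semaphore(max_concurrency)

        async def fetch(session: aiohttp.ClientSession, url: str) -> Dict[str, Any]:
            async with semaphore:  # cap simultaneous requests
                try:
                    timeout = aiohttp.ClientTimeout(total=30)
                    async with session.get(url, timeout=timeout) as resp:
                        html = await resp.text()
                        # A real implementation would apply extract_config
                        # selectors here instead of returning raw HTML.
                        return {"url": url, "status": resp.status, "html": html}
                except Exception as exc:
                    # Failures carry an "error" key, matching the handler's check.
                    return {"url": url, "error": str(exc)}

        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(fetch(session, u) for u in urls))

Whatever the real helper does, order preservation matters: the handler pairs results back to URLs by index (results[i] with urls[i]), and asyncio.gather provides exactly that guarantee.
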
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions concurrency and speed but lacks critical behavioral details such as rate limits, error handling (e.g., if some URLs fail), authentication needs, or what 'processed concurrently' entails (e.g., thread count, timeouts). For a tool with potential complexity and no annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, and the following sentences add useful context about concurrency and speed. Every sentence earns its place with no wasted words, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (concurrent web scraping with configurable methods), the absence of annotations, and 0% schema description coverage, the description is incomplete even though an output schema exists. It covers the high-level purpose and benefit but misses details on parameters, behavioral traits, and error handling. The output schema reduces the need to explain return values, but the other gaps remain significant.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'URLs' but doesn't explain the 'extract_config' or 'method' parameters beyond what the schema provides (e.g., what 'auto' means, how configuration works). With 1 parameter (a nested object with 3 sub-parameters) and no schema descriptions, the description adds minimal value over the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Scrape multiple webpages concurrently.' It specifies the verb (scrape) and resource (multiple webpages) with the key feature of concurrency. However, it doesn't explicitly differentiate from sibling tools like 'scrape_webpage' (singular) or 'scrape_with_stealth' (stealth-focused), missing full sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it's 'much faster than scraping them one by one,' suggesting it should be used for batch scraping. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'scrape_webpage' for single URLs or 'scrape_with_stealth' for stealth needs, nor does it mention any exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
