fetch_batch

Fetch multiple URLs concurrently, using HTTP/2 multiplexing and connection pooling to retrieve web content efficiently; per-URL timing is included in the results.

Instructions

Fetch multiple URLs in parallel with HTTP/2 multiplexing.

Uses connection pooling and multiplexing for maximum efficiency. All URLs are fetched concurrently.

Returns: Results for each URL with timing.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| urls | Yes      |             |         |

Implementation Reference

  • The core implementation of `fetch_batch`, which uses a `ThreadPoolExecutor` for parallel URL fetching.
    from concurrent.futures import ThreadPoolExecutor, as_completed
    from typing import List

    def fetch_batch(self, urls: List[str], parallel: int = 5) -> List[NabResult]:
        """Fetch multiple URLs in parallel.
    
        Args:
            urls: List of URLs to fetch.
            parallel: Maximum concurrent fetches.
    
        Returns:
            List of NabResult in the same order as input URLs.
            Failed fetches are included with empty markdown and status=0.
        """
        results: dict[int, NabResult] = {}
    
        with ThreadPoolExecutor(max_workers=parallel) as pool:
            future_to_idx = {pool.submit(self._safe_fetch, url): i for i, url in enumerate(urls)}
            for future in as_completed(future_to_idx):
                idx = future_to_idx[future]
                # _safe_fetch catches its own exceptions, so result() does not
                # raise; failures arrive as a NabResult with status=0.
                results[idx] = future.result()
    
        # Re-index by input position so output order matches input order,
        # regardless of completion order.
        return [results[i] for i in range(len(urls))]
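The excerpt above depends on the project's `NabResult` type and `_safe_fetch` method. As a minimal runnable sketch of the same pattern, the following substitutes a stand-in dataclass and a stub fetcher (both hypothetical; the real ones live in the nab codebase) to show the two documented guarantees: output order matches input order, and failures occupy their slot with `status=0` and empty markdown.

```python
# Runnable sketch of the fetch_batch pattern with stand-in types.
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from typing import List


@dataclass
class NabResult:  # stand-in for the project's result type
    url: str
    status: int
    markdown: str


class BatchFetcher:
    def _safe_fetch(self, url: str) -> NabResult:
        # Stub: a real implementation performs the HTTP request and catches
        # exceptions, returning status=0 with empty markdown on failure.
        if url.startswith("https://"):
            return NabResult(url, 200, f"# {url}")
        return NabResult(url, 0, "")

    def fetch_batch(self, urls: List[str], parallel: int = 5) -> List[NabResult]:
        results: dict[int, NabResult] = {}
        with ThreadPoolExecutor(max_workers=parallel) as pool:
            future_to_idx = {pool.submit(self._safe_fetch, url): i
                             for i, url in enumerate(urls)}
            for future in as_completed(future_to_idx):
                results[future_to_idx[future]] = future.result()
        # Re-index so output order matches input order.
        return [results[i] for i in range(len(urls))]


fetcher = BatchFetcher()
out = fetcher.fetch_batch(["https://a.example/", "not-a-url", "https://b.example/"])
```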

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/MikkoParkkola/nab'

If you have feedback or need assistance with the MCP directory API, please join our Discord server