
Multi Fetch MCP Server

by alexyangjie

fetch_multi

Fetch multiple URLs in parallel; each result contains either the fetched web content or an error message, enabling efficient batched collection of web data.

Instructions

Fetches multiple URLs in parallel and returns an array of results. Each element corresponds to an input fetch request and includes either the fetched content or an error message.
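The key property of this design is per-request error isolation: each request catches its own failure and reports it as data, so one bad URL never aborts the batch. A minimal sketch of that pattern (with the network fetch simulated by a stand-in coroutine):

```python
import asyncio

async def fake_fetch(url: str) -> str:
    # Stand-in for a real HTTP fetch; fails for "bad" URLs.
    if "bad" in url:
        raise ValueError(f"cannot fetch {url}")
    return f"<html>content of {url}</html>"

async def fetch_single(url: str) -> dict:
    # Catch the error inside the task so gather() never sees an exception.
    try:
        content = await fake_fetch(url)
        return {"url": url, "content": content}
    except Exception as e:
        return {"url": url, "error": str(e)}

async def fetch_multi(urls: list[str]) -> list[dict]:
    # asyncio.gather preserves input order, so results[i] matches urls[i].
    return await asyncio.gather(*(fetch_single(u) for u in urls))

results = asyncio.run(fetch_multi(["https://ok.example", "https://bad.example"]))
```

Because ordering is preserved by `asyncio.gather`, callers can zip results back to their requests by position, exactly as the tool description promises.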

Input Schema

Name: requests
Required: Yes
Description: List of fetch requests to process in parallel
Default: (none)
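A hypothetical arguments payload for a fetch_multi call might look like the following. The per-request fields (`url`, `max_length`, `start_index`, `raw`) are taken from the underlying `Fetch` model used by the handler; the URLs and values here are placeholders:

```python
import json

# Illustrative arguments for the fetch_multi tool; each entry is one
# Fetch request processed in parallel with the others.
arguments = {
    "requests": [
        {"url": "https://example.com/a", "max_length": 5000, "start_index": 0, "raw": False},
        {"url": "https://example.com/b", "max_length": 5000, "start_index": 5000, "raw": False},
    ]
}
payload = json.dumps(arguments)
```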

Implementation Reference

  • The handler for the 'fetch_multi' tool, which processes a list of fetch requests in parallel.
    if name == "fetch_multi":
        try:
            multi = FetchMulti.model_validate(arguments)
        except Exception as e:
            raise McpError(ErrorData(code=INVALID_PARAMS, message=str(e)))
    
        async def fetch_single(req: Fetch) -> dict:
            url = str(req.url)
            try:
                if not ignore_robots_txt:
                    await check_may_autonomously_fetch_url(url, user_agent_autonomous, proxy_url)
                content, prefix = await fetch_url(
                    url, user_agent_autonomous, force_raw=req.raw, proxy_url=proxy_url
                )
                original_length = len(content)
                if req.start_index >= original_length:
                    content_text = "<error>No more content available.</error>"
                else:
                    truncated = content[req.start_index : req.start_index + req.max_length]
                    if not truncated:
                        content_text = "<error>No more content available.</error>"
                    else:
                        content_text = truncated
                        actual_content_length = len(truncated)
                        remaining_content = original_length - (req.start_index + actual_content_length)
                        if actual_content_length == req.max_length and remaining_content > 0:
                            next_start = req.start_index + actual_content_length
                            content_text += f"\n\n<error>Content truncated. Call the fetch tool with a start_index of {next_start} to get more content.</error>"
                return {"url": url, "prefix": prefix, "content": content_text}
            except McpError as e:
                return {"url": url, "error": str(e)}
    
        tasks = [fetch_single(req) for req in multi.requests]
        results = await asyncio.gather(*tasks)
        return [TextContent(type="text", text=json.dumps(results))]
  • Pydantic model defining the input schema for the 'fetch_multi' tool.
    class FetchMulti(BaseModel):
        """Parameters for fetching multiple URLs in parallel."""
        requests: list[Fetch] = Field(
            ..., description="List of fetch requests to process in parallel"
        )
  • Tool registration for 'fetch_multi' in the server tool list.
    Tool(
        name="fetch_multi",
        description="""Fetches multiple URLs in parallel and returns an array of results. Each element corresponds to an input fetch request and includes either the fetched content or an error message.""",
        inputSchema=FetchMulti.model_json_schema(),
    ),
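The truncation and pagination behaviour inside the handler can be hard to follow inline. Here it is extracted as a standalone pure function (mirroring the handler's logic, not part of the server's API) so the windowing is easy to trace:

```python
def paginate(content: str, start_index: int, max_length: int) -> str:
    """Return the [start_index, start_index + max_length) window of content,
    with the same error/continuation markers the fetch_multi handler emits."""
    original_length = len(content)
    if start_index >= original_length:
        return "<error>No more content available.</error>"
    truncated = content[start_index : start_index + max_length]
    if not truncated:
        return "<error>No more content available.</error>"
    text = truncated
    remaining = original_length - (start_index + len(truncated))
    # Only a full window with content left over gets a continuation hint.
    if len(truncated) == max_length and remaining > 0:
        next_start = start_index + len(truncated)
        text += (
            f"\n\n<error>Content truncated. Call the fetch tool with a "
            f"start_index of {next_start} to get more content.</error>"
        )
    return text

page = paginate("x" * 120, 0, 50)
```

Calling `paginate("x" * 120, 0, 50)` returns the first 50 characters plus a hint to resume at `start_index=50`; a `start_index` past the end yields only the "No more content" marker.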

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/alexyangjie/mcp-server-multi-fetch'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.