# fetch_multi
Fetches multiple URLs simultaneously and returns an array of contents or error messages for each request.
## Instructions
Fetches multiple URLs in parallel and returns an array of results. Each element corresponds to an input fetch request and includes either the fetched content or an error message.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| requests | Yes | List of fetch requests to process in parallel | (none) |
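A minimal sketch of an arguments payload for this schema. The per-request fields (`url`, `max_length`, `start_index`, `raw`) follow the field names listed in the prompt registration below; the URLs and values are made-up examples, not part of the source:

```python
import json

# Hypothetical fetch_multi arguments: one entry per URL to fetch in parallel.
arguments = {
    "requests": [
        {"url": "https://example.com/a", "max_length": 5000, "start_index": 0, "raw": False},
        {"url": "https://example.com/b", "max_length": 5000, "start_index": 0, "raw": True},
    ]
}

# The payload is plain JSON, as validated by FetchMulti.model_validate(arguments).
payload = json.dumps(arguments)
```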
## Implementation Reference
- **Pydantic schema**: defines the `fetch_multi` tool's input as a list of `Fetch` requests.

```python
class FetchMulti(BaseModel):
    """Parameters for fetching multiple URLs in parallel."""

    requests: list[Fetch] = Field(
        ..., description="List of fetch requests to process in parallel"
    )
```

- `src/mcp_server_multi_fetch/server.py:242-246` (registration): registers `fetch_multi` as a `Tool` with its schema, inside `list_tools()`.
```python
Tool(
    name="fetch_multi",
    description="""Fetches multiple URLs in parallel and returns an array of results. Each element corresponds to an input fetch request and includes either the fetched content or an error message.""",
    inputSchema=FetchMulti.model_json_schema(),
),
```

- **Handler**: validates the input, runs parallel `fetch_single` tasks via `asyncio.gather`, handles truncation and errors per URL, and returns a JSON array of results.
```python
if name == "fetch_multi":
    try:
        multi = FetchMulti.model_validate(arguments)
    except Exception as e:
        raise McpError(ErrorData(code=INVALID_PARAMS, message=str(e)))

    async def fetch_single(req: Fetch) -> dict:
        url = str(req.url)
        try:
            if not ignore_robots_txt:
                await check_may_autonomously_fetch_url(url, user_agent_autonomous, proxy_url)
            content, prefix = await fetch_url(
                url, user_agent_autonomous, force_raw=req.raw, proxy_url=proxy_url
            )
            original_length = len(content)
            if req.start_index >= original_length:
                content_text = "<error>No more content available.</error>"
            else:
                truncated = content[req.start_index : req.start_index + req.max_length]
                if not truncated:
                    content_text = "<error>No more content available.</error>"
                else:
                    content_text = truncated
                    actual_content_length = len(truncated)
                    remaining_content = original_length - (req.start_index + actual_content_length)
                    if actual_content_length == req.max_length and remaining_content > 0:
                        next_start = req.start_index + actual_content_length
                        content_text += f"\n\n<error>Content truncated. Call the fetch tool with a start_index of {next_start} to get more content.</error>"
            return {"url": url, "prefix": prefix, "content": content_text}
        except McpError as e:
            return {"url": url, "error": str(e)}

    tasks = [fetch_single(req) for req in multi.requests]
    results = await asyncio.gather(*tasks)
    return [TextContent(type="text", text=json.dumps(results))]
```

- `src/mcp_server_multi_fetch/server.py:266-276` (registration): registers `fetch_multi` as a `Prompt` in `list_prompts()`.
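The handler's truncation and pagination arithmetic can be isolated as a small pure function. This is a re-implementation sketch for illustration (the server does this inline, and `truncate` is not one of its helpers); it returns the selected slice plus a flag indicating whether a follow-up call with a larger `start_index` would yield more content:

```python
def truncate(content: str, start_index: int, max_length: int) -> tuple[str, bool]:
    """Sketch of the handler's slicing logic: return (text, has_more)."""
    original_length = len(content)
    # Past the end of the document: nothing left to return.
    if start_index >= original_length:
        return "<error>No more content available.</error>", False
    truncated = content[start_index : start_index + max_length]
    if not truncated:
        return "<error>No more content available.</error>", False
    remaining = original_length - (start_index + len(truncated))
    # More content exists only if the slice was filled AND bytes remain after it.
    return truncated, len(truncated) == max_length and remaining > 0
```

For an 8-character document with `max_length=3`, `truncate(content, 0, 3)` returns `("abc", True)`, signalling the caller to retry with `start_index=3`; `truncate(content, 6, 3)` returns `("gh", False)` because the final slice is short.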
```python
Prompt(
    name="fetch_multi",
    description="Fetch multiple URLs in parallel and return their contents as an array of results",
    arguments=[
        PromptArgument(
            name="requests",
            description="JSON array of fetch requests, each with url, max_length, start_index, and raw",
            required=True,
        ),
    ],
),
```
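Per the handler above, the tool's result is a single `TextContent` whose `text` is a JSON array with one element per input request, carrying either fetched content or an `error` key. A hypothetical client-side sketch of splitting that array into successes and failures (the sample values are invented):

```python
import json

# Made-up example of the JSON text a fetch_multi call might return.
result_text = json.dumps([
    {"url": "https://example.com/a", "prefix": "", "content": "Contents of page A"},
    {"url": "https://example.com/b", "error": "Failed to fetch robots.txt"},
])

results = json.loads(result_text)
# Elements with an "error" key failed; all others carry fetched content.
succeeded = [r for r in results if "error" not in r]
failed = [r for r in results if "error" in r]
```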