# fetch_batch
Fetch multiple URLs in parallel, returning each page as a Markdown section or JSON entry. Failures per URL are reported inline without stopping the batch. Use for reading 2+ URLs in one round-trip, such as top search results.
## Instructions
Fetch a list of URLs in parallel. Per-URL failures do not raise.
Best for:
- 2+ URLs you want to read in one round-trip.
- Reading the top N results of a previous `search` call.
Not recommended for:
- A single URL -> `fetch` (no list-wrapping overhead).
- "Search and then read" -> `research` collapses both into one tool call.
- PDFs/DOCX -> `read_doc` per file.
Returns:
- markdown (default): each page rendered as a Markdown section, separated by horizontal rules; failed URLs become inline error notes.
- json: list[dict], one entry per URL, with `error` set on failures.
Common mistakes:
- Passing a single URL inside a 1-element list — use `fetch` directly.
- Assuming an exception means the whole batch failed; check each item's `error` field instead.
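As a sketch of how to consume the `format="json"` return shape described above: the `results` list below is a hypothetical response (the field names other than `error`, such as `url` and `markdown`, are assumptions for illustration), and the point is that each entry is checked individually rather than treating one failure as a batch failure.

```python
# Hypothetical json-format response from fetch_batch.
# Only the `error` field is documented; `url` and `markdown` are assumed names.
results = [
    {"url": "https://example.com/a", "markdown": "# Page A", "error": None},
    {"url": "https://example.com/b", "markdown": None, "error": "HTTP 404"},
]

# Per-URL failures do not raise: inspect each item's `error` field.
ok = [r for r in results if not r["error"]]
failed = [r["url"] for r in results if r["error"]]

print(f"{len(ok)} page(s) fetched, failed: {failed}")
```

The successful entries in `ok` can then be read normally, while `failed` can be retried or reported without discarding the rest of the batch.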
Args:
urls: List of absolute http(s) URLs.
render: Same as `fetch`.
format: "markdown" or "json".

## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | List of absolute http(s) URLs. | |
| render | No | Same as `fetch`. | auto |
| format | No | "markdown" or "json". | markdown |
## Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | | |