parallel_read_url
Extracts clean content from multiple web pages in parallel so you can compare or gather information efficiently. Provide up to five URLs per call for optimal performance, enabling simultaneous analysis of diverse sources.
Instructions
Read multiple web pages in parallel to extract clean content efficiently. For best results, pass all the URLs you need in a single call so they are fetched simultaneously. This is useful for comparing content across sources or gathering information from several pages at once. 💡 Use this when you need to analyze multiple sources at the same time.
Input Schema
| Name | Required | Description | Default |
| --- | --- | --- | --- |
| timeout | No | Timeout in milliseconds for all URL reads | 30000 |
| urls | Yes | Array of URL configurations to read in parallel (maximum 5 URLs for optimal performance) | |
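
For example, a minimal call (the URLs below are placeholders) supplies only the required urls array and omits timeout, so the 30000 ms default applies:

{
  "urls": [
    { "url": "https://example.com/pricing" },
    { "url": "https://example.org/pricing" }
  ]
}

Per-URL options such as withAllImages and withAllLinks are shown in the fuller example after the JSON Schema below.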
Input Schema (JSON Schema)
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "additionalProperties": false,
  "properties": {
    "timeout": {
      "default": 30000,
      "description": "Timeout in milliseconds for all URL reads",
      "type": "number"
    },
    "urls": {
      "description": "Array of URL configurations to read in parallel (maximum 5 URLs for optimal performance)",
      "items": {
        "additionalProperties": false,
        "properties": {
          "url": {
            "description": "The complete URL of the webpage or PDF file to read and convert",
            "format": "uri",
            "type": "string"
          },
          "withAllImages": {
            "default": false,
            "description": "Set to true to extract and return all images found on the page as structured data",
            "type": "boolean"
          },
          "withAllLinks": {
            "default": false,
            "description": "Set to true to extract and return all hyperlinks found on the page as structured data",
            "type": "boolean"
          }
        },
        "required": [
          "url"
        ],
        "type": "object"
      },
      "maxItems": 5,
      "type": "array"
    }
  },
  "required": [
    "urls"
  ],
  "type": "object"
}
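
As an illustration only (the URLs are placeholders), the argument object below exercises every field defined in the schema: a custom timeout in milliseconds, one page with image and link extraction enabled, and a PDF, which the url field also accepts:

{
  "timeout": 45000,
  "urls": [
    {
      "url": "https://example.com/docs/overview",
      "withAllImages": true,
      "withAllLinks": true
    },
    {
      "url": "https://example.org/reports/annual-report.pdf"
    }
  ]
}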