extract_images
Extract all image URLs from web pages to collect visual assets for development projects. Handles both static and dynamic content.
Instructions
Extract all image URLs from a web page
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL to scrape | |
| useBrowser | No | Use browser for dynamic content | false |
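Arguments passed to the tool must match this schema. A minimal, illustrative example follows; the URL is a placeholder and the exact call mechanism depends on your MCP client.

```typescript
// Example arguments for extract_images. Only `url` is required;
// `useBrowser` defaults to false (static HTML scraping).
const args = {
  url: 'https://example.com/gallery',
  useBrowser: true, // set true for pages that render images client-side
};
```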
Implementation Reference
- `src/tools/web-scraping.ts:74-92` (registration): Registration of the `extract_images` tool, including its name, description, and input schema, in the `webScrapingTools` array.

  ```typescript
  {
    name: 'extract_images',
    description: 'Extract all image URLs from a web page',
    inputSchema: {
      type: 'object',
      properties: {
        url: {
          type: 'string',
          description: 'URL to scrape',
        },
        useBrowser: {
          type: 'boolean',
          description: 'Use browser for dynamic content',
          default: false,
        },
      },
      required: ['url'],
    },
  },
  ```
- `src/tools/web-scraping.ts:311-318` (handler): Dispatcher case in `handleWebScrapingTool` for `extract_images`, delegating to the dynamic or static scraper based on the `useBrowser` flag (a hedged sketch of the browser-based path appears after this list).

  ```typescript
  case 'extract_images': {
    if (config.useBrowser) {
      const data = await dynamicScraper.scrapeDynamicContent(config);
      return data.images;
    } else {
      return await staticScraper.extractImages(config);
    }
  }
  ```
- `src/scrapers/static-scraper.ts:131-137` (handler): `StaticScraper.extractImages`, which calls `scrapeHTML` and returns the images from the result.

  ```typescript
  /**
   * Extract images from HTML
   */
  async extractImages(config: ScrapingConfig): Promise<string[]> {
    const data = await this.scrapeHTML(config);
    return data.images || [];
  }
  ```
- `src/scrapers/static-scraper.ts:46-57` (helper): Core logic inside `scrapeHTML` that extracts image URLs with Cheerio, resolving each `src` against the page URL and skipping malformed values.

  ```typescript
  const images: string[] = [];
  $('img[src]').each((_, element) => {
    const src = $(element).attr('src');
    if (src) {
      try {
        const url = new URL(src, config.url);
        images.push(url.href);
      } catch {
        // Invalid URL, skip
      }
    }
  });
  ```
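The static path boils down to fetching the HTML, loading it into Cheerio, and resolving every `img` `src` against the page URL. The sketch below is a standalone reproduction of that logic under the assumption that only the URL from the config matters here; the real `scrapeHTML` also extracts other fields.

```typescript
import * as cheerio from 'cheerio';

// Standalone sketch of the static extraction logic above.
// Assumes Node 18+ for the global fetch; error handling kept minimal.
async function extractImageUrls(pageUrl: string): Promise<string[]> {
  const html = await (await fetch(pageUrl)).text();
  const $ = cheerio.load(html);

  const images: string[] = [];
  $('img[src]').each((_, element) => {
    const src = $(element).attr('src');
    if (src) {
      try {
        // Resolve relative paths against the page URL, skip malformed values.
        images.push(new URL(src, pageUrl).href);
      } catch {
        // Invalid URL, skip
      }
    }
  });
  return images;
}
```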
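The browser-based path (`scrapeDynamicContent`) is not shown in this reference. As a rough illustration only, and assuming a Puppeteer-style headless browser, collecting image URLs after client-side rendering could look like this; the actual dynamic scraper may differ.

```typescript
import puppeteer from 'puppeteer';

// Hypothetical sketch of the useBrowser path: render the page, then read
// the fully resolved src of every <img> element from the live DOM.
async function collectImagesWithBrowser(url: string): Promise<string[]> {
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle0' });
    // Evaluate in the page so images injected by JavaScript are included.
    return await page.$$eval('img[src]', (imgs) =>
      imgs.map((img) => (img as HTMLImageElement).src)
    );
  } finally {
    await browser.close();
  }
}
```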