get_webpage_source
Fetch the raw HTML source code and page information from any webpage by providing its URL. Part of the Spider MCP server for crawler-based web search and scraping.
Instructions
Fetch the raw HTML source code and page information of a webpage.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL of the webpage to get source from. Must be a valid HTTP/HTTPS link. | — |
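For example, a connected MCP client can invoke the tool as sketched below. This is a minimal illustration; the `client` variable and its setup (e.g. via @modelcontextprotocol/sdk) are assumptions, not part of the Spider server code.

```javascript
// Minimal sketch: assumes `client` is an MCP client already
// connected to the Spider server.
const result = await client.callTool({
  name: 'get_webpage_source',
  arguments: { url: 'https://example.com' }
});
// The response carries the scraped title, description, keywords,
// truncated text content, and extracted links for the page.
```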
Implementation Reference
- src/mcp/server.js:254-280 (handler): MCP tool handler for 'get_webpage_source'. Validates the URL input, fetches webpage data via searchService.scrapeWebpage, and returns structured page information.

```javascript
async function handleGetWebpageSource(args) {
  const { url } = args;

  if (!url || typeof url !== 'string') {
    throw new Error('URL parameter is required and must be a string');
  }

  try {
    new URL(url);
  } catch (error) {
    throw new Error('Invalid URL format');
  }

  const searchService = (await import('../services/searchService.js')).default;
  const result = await searchService.scrapeWebpage(url);

  return {
    tool: 'get_webpage_source',
    url,
    title: result.title,
    description: result.description,
    keywords: result.keywords,
    content: result.content,
    links: result.links,
    timestamp: result.timestamp
  };
}
```
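The URL check relies on the WHATWG `URL` constructor throwing on malformed input. A hypothetical direct invocation (the excerpt does not show this handler being exported) would behave as follows:

```javascript
// Hypothetical calls illustrating the handler's validation branches.
await handleGetWebpageSource({});                             // Error: URL parameter is required and must be a string
await handleGetWebpageSource({ url: 'not-a-url' });           // Error: Invalid URL format (new URL() throws)
await handleGetWebpageSource({ url: 'https://example.com' }); // proceeds to searchService.scrapeWebpage
```

Note that `new URL()` accepts any scheme, so the "valid HTTP/HTTPS link" wording in the schema describes intent rather than an enforced check.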
- src/services/searchService.js (scrapeWebpage): Core scraping logic. Uses axios to fetch the HTML and cheerio to parse the title, meta description/keywords, plain-text content (truncated to 2,000 characters), and up to 50 absolute links.

```javascript
async scrapeWebpage(url) {
  try {
    const response = await axios.get(url, {
      headers: {
        'User-Agent': this.getRandomUserAgent(),
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
        'Accept-Language': 'en-US,en;q=0.5',
        'Accept-Encoding': 'gzip, deflate',
        'Connection': 'keep-alive'
      },
      timeout: 15000
    });

    const $ = cheerio.load(response.data);

    // Extract page info
    const title = $('title').text().trim();
    const description = $('meta[name="description"]').attr('content') || '';
    const keywords = $('meta[name="keywords"]').attr('content') || '';

    // Extract main content
    const content = $('body').text()
      .replace(/\s+/g, ' ')
      .trim()
      .substring(0, 2000); // limit content length

    // Extract links
    const links = [];
    $('a[href]').each((index, element) => {
      if (index < 50) { // limit number of links
        const href = $(element).attr('href');
        const text = $(element).text().trim();
        if (href && text && href.startsWith('http')) {
          links.push({ url: href, text });
        }
      }
    });

    logger.info(`Webpage scraped successfully: ${url}`);

    return {
      url,
      title,
      description,
      keywords,
      content,
      links,
      timestamp: new Date().toISOString()
    };
  } catch (error) {
    logger.error(`Webpage scraping error for ${url}:`, error);
    throw new Error(`Failed to scrape webpage: ${error.message}`);
  }
}
```
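A small standalone sketch of the link-extraction rule above, showing that relative links are dropped by the `startsWith('http')` guard:

```javascript
import * as cheerio from 'cheerio';

const html = '<a href="https://example.com/a">Absolute</a><a href="/b">Relative</a>';
const $ = cheerio.load(html);

const links = [];
$('a[href]').each((index, element) => {
  const href = $(element).attr('href');
  const text = $(element).text().trim();
  if (href && text && href.startsWith('http')) {
    links.push({ url: href, text });
  }
});

console.log(links); // [{ url: 'https://example.com/a', text: 'Absolute' }]
```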
- src/mcp/server.js:60-73 (schema): Tool schema defining the 'get_webpage_source' name, description, and required 'url' input parameter.

```javascript
{
  name: 'get_webpage_source',
  description: 'Fetch the raw HTML source code and page information of a webpage.',
  inputSchema: {
    type: 'object',
    properties: {
      url: {
        type: 'string',
        description: 'The URL of the webpage to get source from. Must be a valid HTTP/HTTPS link.'
      }
    },
    required: ['url']
  }
},
```
- src/mcp/server.js:117-121 (registration): Registers the ListToolsRequest handler, which returns the tool definitions, including 'get_webpage_source', via generateTools().

```javascript
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: generateTools()
  };
});
```
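generateTools() itself is not included in this excerpt; presumably it assembles the tool definition array, along the lines of this hypothetical sketch:

```javascript
// Hypothetical: the real generateTools() is not shown in this excerpt.
// It presumably returns the tool definitions, including the
// 'get_webpage_source' schema above.
function generateTools() {
  return [
    {
      name: 'get_webpage_source',
      description: 'Fetch the raw HTML source code and page information of a webpage.',
      inputSchema: {
        type: 'object',
        properties: {
          url: { type: 'string', description: 'The URL of the webpage to get source from.' }
        },
        required: ['url']
      }
    }
    // ...definitions for the other Spider tools
  ];
}
```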
- src/mcp/server.js:138-140 (registration): Dispatch case in the CallToolRequest handler that routes 'get_webpage_source' calls to the handler above.

```javascript
case 'get_webpage_source':
  result = await handleGetWebpageSource(args);
  break;
```
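For context, this case presumably sits inside a CallToolRequest handler following the standard MCP server pattern; only the 'get_webpage_source' case appears verbatim in the excerpt, so the rest of this sketch is an assumption:

```javascript
// Sketch only: everything except the 'get_webpage_source' case is assumed.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  let result;

  switch (name) {
    case 'get_webpage_source':
      result = await handleGetWebpageSource(args);
      break;
    // ...cases for the other Spider tools
    default:
      throw new Error(`Unknown tool: ${name}`);
  }

  // MCP tool results are returned as content blocks; JSON serialized as text.
  return {
    content: [{ type: 'text', text: JSON.stringify(result, null, 2) }]
  };
});
```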