get_webpage_source

Fetch a webpage and extract its page information by providing a valid URL. The server retrieves the raw HTML, then returns the page title, meta description and keywords, truncated body text, and outbound links. This tool enables web scraping and content extraction without requiring official APIs.

Instructions

Fetch the raw HTML source code and page information of a webpage.

Input Schema

Name: url
Type: string
Required: Yes
Default: none
Description: The URL of the webpage to get source from. Must be a valid HTTP/HTTPS link.
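
For example, a minimal arguments object that satisfies this schema carries only the required url field (the target URL here is arbitrary):

    // Illustrative arguments for a get_webpage_source call.
    const args = { url: 'https://example.com' };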

Implementation Reference

  • The main handler function for the 'get_webpage_source' tool. It validates the input URL, dynamically imports the SearchService, and calls scrapeWebpage, returning structured data including title, description, keywords, extracted text content, links, and a timestamp (an illustrative response object appears after this list).
    async function handleGetWebpageSource(args) {
      const { url } = args;

      if (!url || typeof url !== 'string') {
        throw new Error('URL parameter is required and must be a string');
      }

      try {
        new URL(url);
      } catch (error) {
        throw new Error('Invalid URL format');
      }

      const searchService = (await import('../services/searchService.js')).default;
      const result = await searchService.scrapeWebpage(url);

      return {
        tool: 'get_webpage_source',
        url,
        title: result.title,
        description: result.description,
        keywords: result.keywords,
        content: result.content,
        links: result.links,
        timestamp: result.timestamp
      };
    }
  • Input schema definition for the 'get_webpage_source' tool, specifying the required 'url' parameter as a string.
    {
      name: 'get_webpage_source',
      description: 'Fetch the raw HTML source code and page information of a webpage.',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The URL of the webpage to get source from. Must be a valid HTTP/HTTPS link.'
          }
        },
        required: ['url']
      }
    },
  • Tool dispatch/registration in the CallToolRequestSchema handler switch statement, routing calls to 'get_webpage_source' to the handleGetWebpageSource function (a sketch of the surrounding handler follows this list).
    case 'get_webpage_source':
      result = await handleGetWebpageSource(args);
      break;
  • Supporting utility method in the SearchService class that fetches raw HTML with axios, parses it with cheerio, and extracts the title, meta tags, truncated body text, and links. Called by the tool handler to perform the actual scraping.
    async scrapeWebpage(url) {
      try {
        const response = await axios.get(url, {
          headers: {
            'User-Agent': this.getRandomUserAgent(),
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate',
            'Connection': 'keep-alive'
          },
          timeout: 15000
        });

        const $ = cheerio.load(response.data);

        // Extract page info
        const title = $('title').text().trim();
        const description = $('meta[name="description"]').attr('content') || '';
        const keywords = $('meta[name="keywords"]').attr('content') || '';

        // Extract main content
        const content = $('body').text()
          .replace(/\s+/g, ' ')
          .trim()
          .substring(0, 2000); // limit content length

        // Extract links
        const links = [];
        $('a[href]').each((index, element) => {
          if (index < 50) { // limit number of links
            const href = $(element).attr('href');
            const text = $(element).text().trim();
            if (href && text && href.startsWith('http')) {
              links.push({ url: href, text });
            }
          }
        });

        logger.info(`Webpage scraped successfully: ${url}`);

        return {
          url,
          title,
          description,
          keywords,
          content,
          links,
          timestamp: new Date().toISOString()
        };
      } catch (error) {
        logger.error(`Webpage scraping error for ${url}:`, error);
        throw new Error(`Failed to scrape webpage: ${error.message}`);
      }
    }
  • Registration for listing tools via ListToolsRequestSchema, which returns the tools array, including the 'get_webpage_source' schema, from generateTools().
    server.setRequestHandler(ListToolsRequestSchema, async () => {
      return { tools: generateTools() };
    });
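
As noted above, here is an illustrative response from handleGetWebpageSource. The field names come from the handler's return statement (the scrapeWebpage result plus a tool field); the values are made up for https://example.com:

    {
      tool: 'get_webpage_source',
      url: 'https://example.com',
      title: 'Example Domain',
      description: '',
      keywords: '',
      content: 'Example Domain This domain is for use in illustrative examples in documents...',
      links: [
        { url: 'https://www.iana.org/domains/example', text: 'More information...' }
      ],
      timestamp: '2025-01-01T00:00:00.000Z'
    }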
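
And a minimal sketch of the CallToolRequestSchema handler that the dispatch case above sits inside. Only the case itself appears in the source; the request destructuring, the default branch, and the text-content wrapping are assumptions based on the standard MCP SDK pattern:

    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name, arguments: args } = request.params;
      let result;
      switch (name) {
        case 'get_webpage_source':
          result = await handleGetWebpageSource(args);
          break;
        // ...other spider-mcp tools would dispatch here...
        default:
          throw new Error(`Unknown tool: ${name}`);
      }
      // MCP tool responses conventionally wrap results as text content.
      return {
        content: [{ type: 'text', text: JSON.stringify(result, null, 2) }]
      };
    });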

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Bosegluon2/spider-mcp'
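
The same lookup from Node 18+ using the built-in fetch, as a minimal sketch (assuming the endpoint returns JSON):

    const res = await fetch('https://glama.ai/api/mcp/v1/servers/Bosegluon2/spider-mcp');
    if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
    console.log(await res.json());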

If you have feedback or need assistance with the MCP directory API, please join our Discord server.