
FetchSERP MCP Server

Official
by fetchSERP

scrape_webpage_js

Extract data from web pages by executing custom JavaScript on the target URL. Ideal for scraping dynamic content and for tailored data-retrieval tasks.

Instructions

Scrape a web page with custom JS

Input Schema

Name        Required   Description                                   Default
js_script   Yes        The JavaScript code to execute on the page    —
url         Yes        The URL to scrape                             —
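For reference, a hypothetical arguments object satisfying this schema — the URL and script below are illustrative placeholders, not values from the source:

```javascript
// Hypothetical arguments for the scrape_webpage_js tool.
// Both fields are required by the input schema above.
const args = {
  url: 'https://example.com',
  js_script: '[...document.querySelectorAll("a")].map(a => a.href)',
};

// Verify the schema's required fields are present
const required = ['url', 'js_script'];
const missing = required.filter((k) => !(k in args));
console.log(missing); // []
```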

Implementation Reference

  • Input schema definition for the 'scrape_webpage_js' tool, specifying required parameters 'url' and 'js_script'.
    {
      name: 'scrape_webpage_js',
      description: 'Scrape a web page with custom JS',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The url to scrape',
          },
          js_script: {
            type: 'string',
            description: 'The javascript code to execute on the page',
          },
        },
        required: ['url', 'js_script'],
      },
    },
  • index.js:284-301 (registration)
    Registration of the 'scrape_webpage_js' tool in the listTools response (identical to the schema object shown above).
  • Handler logic for 'scrape_webpage_js': destructures arguments, calls makeRequest to FetchSERP API endpoint '/api/v1/scrape_js' with POST method, passing URL and JS script in body.
    case 'scrape_webpage_js':
      const { url, js_script, ...jsParams } = args;
      return await this.makeRequest('/api/v1/scrape_js', 'POST', { url, ...jsParams }, { url, js_script }, token);
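As a standalone sketch, the destructuring in the handler above pulls the two schema fields out of the arguments and collects anything else into jsParams (the extra field here is hypothetical):

```javascript
// Mirrors the handler's destructuring: url and js_script are extracted,
// and any remaining keys are gathered into jsParams via rest syntax.
const args = { url: 'https://example.com', js_script: 'document.title', extra: 1 };
const { url, js_script, ...jsParams } = args;

console.log(url);                      // https://example.com
console.log(js_script);                // document.title
console.log(JSON.stringify(jsParams)); // {"extra":1}
```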
  • Helper method 'makeRequest' that performs authenticated HTTP requests to the FetchSERP API, used by all tools including scrape_webpage_js.
    async makeRequest(endpoint, method = 'GET', params = {}, body = null, token = null) {
      const fetchserpToken = token || process.env.FETCHSERP_API_TOKEN;
      
      if (!fetchserpToken) {
        throw new McpError(
          ErrorCode.InvalidRequest,
          'FETCHSERP_API_TOKEN is required'
        );
      }
    
      const url = new URL(`${API_BASE_URL}${endpoint}`);
      
      // Add query parameters for GET requests
      if (method === 'GET' && Object.keys(params).length > 0) {
        Object.entries(params).forEach(([key, value]) => {
          if (value !== undefined && value !== null) {
            if (Array.isArray(value)) {
              value.forEach(v => url.searchParams.append(`${key}[]`, v));
            } else {
              url.searchParams.append(key, value.toString());
            }
          }
        });
      }
    
      const fetchOptions = {
        method,
        headers: {
          'Authorization': `Bearer ${fetchserpToken}`,
          'Content-Type': 'application/json',
        },
      };
    
      if (body && method !== 'GET') {
        fetchOptions.body = JSON.stringify(body);
      }
    
      const response = await fetch(url.toString(), fetchOptions);
      
      if (!response.ok) {
        const errorText = await response.text();
        throw new McpError(
          ErrorCode.InternalError,
          `API request failed: ${response.status} ${response.statusText} - ${errorText}`
        );
      }
    
      return await response.json();
    }
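The GET branch of makeRequest above can be isolated into a small sketch of how query parameters are built — note how arrays become repeated `key[]` entries and null/undefined values are skipped. The base URL and endpoint below are placeholders, not confirmed FetchSERP values:

```javascript
// Standalone version of the query-string logic from makeRequest.
// Array values are appended as key[]=v pairs; null/undefined are dropped.
function buildUrl(base, endpoint, params) {
  const url = new URL(`${base}${endpoint}`);
  Object.entries(params).forEach(([key, value]) => {
    if (value !== undefined && value !== null) {
      if (Array.isArray(value)) {
        value.forEach((v) => url.searchParams.append(`${key}[]`, v));
      } else {
        url.searchParams.append(key, value.toString());
      }
    }
  });
  return url.toString();
}

// Placeholder base URL and endpoint for illustration only
console.log(buildUrl('https://api.example.com', '/api/v1/serp', { query: 'mcp', pages: [1, 2], skip: null }));
// → https://api.example.com/api/v1/serp?query=mcp&pages%5B%5D=1&pages%5B%5D=2
```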

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/fetchSERP/fetchserp-mcp-server-node'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.