hasdata-mcp

Official

web_scraping_web_scraping: POST /

hasdata_web_scraping_web_scraping_scrapeWebPage

Scrape any public URL using managed proxies, JS rendering, custom headers, and wait conditions. Extract structured data via CSS or AI rules, capture screenshots, and block resources. Returns HTML, markdown, or JSON for direct integration.

Instructions

Scrape Web Page

Universal web scraper that fetches any public URL through managed proxies (datacenter or residential, geo-targeted) with optional JS rendering, custom headers, wait conditions, jsScenario actions (click, scroll, fill, waitFor), screenshots, resource/ad/URL blocking, and extractRules/aiExtractRules for LLM-driven structured extraction. Returns HTML, text, markdown, and/or JSON along with status code, extracted emails and links, CSS-selector extractions, and AI-structured fields per schema. Use as a fallback/universal fetcher for sites without a dedicated API, for scraping JS-heavy SPAs, bypassing bot protections, capturing screenshots, or producing clean markdown/structured JSON to feed downstream parsers, RAG pipelines, or data warehouses.
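
For orientation, a minimal call might send a request body like the following sketch. The URL is a placeholder, the parameter names come from the input schema below, and the `proxyType` value is assumed from the datacenter/residential options mentioned above:

```json
{
  "url": "https://example.com/products",
  "proxyType": "datacenter",
  "jsRendering": true,
  "outputFormat": "markdown"
}
```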

Input Schema

- `url` (required): The URL of the web page to scrape.
- `headers` (optional): Custom headers to send with the request.
- `proxyType` (optional): Type of proxy to use.
- `proxyCountry` (optional): Proxy country code.
- `blockResources` (optional): Whether to block loading of resources like images and stylesheets.
- `blockAds` (optional): Whether to block ads.
- `blockUrls` (optional): List of URLs to block.
- `wait` (optional): Time in milliseconds to wait after the page loads.
- `waitFor` (optional): CSS selector to wait for before scraping.
- `jsScenario` (optional): Enables custom JavaScript interactions on the target webpage during scraping. It is an array where each object defines a specific action or step; actions are executed sequentially (see the combined example after this list). Supported actions:
  - `evaluate`: run custom JavaScript code on the page.
  - `click`: click an element specified by a CSS selector.
  - `wait`: pause for a set duration (in milliseconds).
  - `waitFor`: wait until a specific element appears.
  - `waitForAndClick`: wait for an element, then click it.
  - `scrollX`, `scrollY`: scroll to specified positions on the page.
  - `fill`: enter values into input fields identified by CSS selectors.
- `extractRules` (optional): Rules for extracting specific data from the page. For example: `{ "title": "h1", "link_href": "a#link @href", "page_text": "body" }`
- `screenshot` (optional): Whether to take a screenshot of the page.
- `jsRendering` (optional): Enable JavaScript rendering.
- `extractEmails` (optional): Extract emails from the page.
- `extractLinks` (optional): Extract links from the page.
- `includeOnlyTags` (optional): An array of valid CSS selectors. When specified, only the elements matching these selectors are included in the response content. Each value must be a valid `querySelectorAll` selector. Useful for extracting specific parts of the document.
- `excludeTags` (optional): An array of valid CSS selectors. Elements matching these selectors are removed from the final output. Each value must be a valid `querySelectorAll` selector. Use it to remove ads, scripts, or other unwanted sections.
- `removeBase64Images` (optional): If set to `true`, any images embedded as base64-encoded strings are removed from the output. Useful for reducing response size or when base64 images are not needed.
- `outputFormat` (optional): The desired response format: `html`, `text`, `markdown`, or `json`. If only one of `html`, `text`, or `markdown` is requested, the API returns the response in that format. If multiple formats are requested, the API returns a JSON response with a key for each requested format. If `json` is included alongside other formats, the API returns a JSON response with keys for the other specified formats.
- `aiExtractRules` (optional): Custom rules for AI-based data extraction using LLMs, letting the system extract structured data directly from the page HTML (see the combined example after this list). Each key in the object is a desired output field name; its value specifies the field's type and an optional description to guide the AI. Supported types:
  - `string`: plain text value
  - `number`: numeric value
  - `boolean`: true/false
  - `list`: an array of values
  - `item`: a nested object with its own structure defined under `output`

All parameters except `url` are optional; no default values are documented.
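
To show how the richer parameters compose, here is a combined request sketch. The per-action object shapes in `jsScenario` and the exact nesting of `aiExtractRules` are assumptions inferred from the descriptions above, not confirmed API syntax; check HasData's documentation before relying on them:

```json
{
  "url": "https://example.com/search?q=laptops",
  "jsRendering": true,
  "jsScenario": [
    { "waitFor": "#results" },
    { "click": "#load-more" },
    { "wait": 1500 },
    { "scrollY": 2000 }
  ],
  "extractRules": {
    "title": "h1",
    "first_link": "a#link @href"
  },
  "aiExtractRules": {
    "products": {
      "type": "list",
      "description": "Every product card visible on the page",
      "output": {
        "name": { "type": "string", "description": "Product name" },
        "price": { "type": "number", "description": "Numeric price" }
      }
    }
  },
  "outputFormat": "json"
}
```

With `outputFormat` set to `json`, the status code, CSS-selector extractions, and AI-structured fields described above should arrive together in a single JSON response.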
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It comprehensively details behavior: proxy types, geo-targeting, JS rendering, custom headers, wait conditions, jsScenario actions, blocking capabilities, multiple output formats, and AI extraction. It lacks specifics on error handling and rate limits, but is overall transparent for a scraping tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense paragraph that covers many features and use cases. It is front-loaded with the core purpose. While not extremely concise, it efficiently conveys essential information without unnecessary repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (20 parameters, no output schema, no annotations), the description is quite complete. It covers proxy options, JS rendering, blocking, extraction rules, output formats, and use cases. It is missing details on error handling and rate limits, but for a scraping tool with this many parameters it provides sufficient context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions. The description adds high-level context (e.g., 'Universal web scraper', 'LLM-driven structured extraction') but does not significantly enhance understanding of individual parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as a 'Universal web scraper' that fetches any public URL, with specific capabilities like managed proxies, JS rendering, and extraction. It distinguishes itself from sibling tools (dedicated site scrapers) by serving as a fallback/universal fetcher.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use this tool: 'Use as a fallback/universal fetcher for sites without a dedicated API, for scraping JS-heavy SPAs, bypassing bot protections, capturing screenshots, or producing clean markdown/structured JSON.' This contrasts with sibling tools that target specific sites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
