ScrapeGraph MCP Server

Official

scrape

Read-only · Idempotent

Fetch raw HTML content from any URL with optional JavaScript rendering for dynamic websites and Single Page Applications.

Instructions

Fetch raw page content from any URL with optional JavaScript rendering.

This tool performs basic web scraping to retrieve the raw HTML content of a webpage. Optionally enable JavaScript rendering for Single Page Applications (SPAs) and sites with heavy client-side rendering. Lower cost than AI extraction (1 credit/page). Read-only operation with no side effects.

Args:

website_url (str): The complete URL of the webpage to scrape.
    - Must include protocol (http:// or https://)
    - Returns raw HTML content of the page
    - Works with both static and dynamic websites
    - Examples:
      * https://example.com/page
      * https://api.example.com/docs
      * https://news.site.com/article/123
      * https://app.example.com/dashboard (may need render_heavy_js=true)
    - Supported protocols: HTTP, HTTPS
    - Invalid examples:
      * example.com (missing protocol)
      * ftp://example.com (unsupported protocol)

render_heavy_js (Optional[bool]): Enable full JavaScript rendering for dynamic content.
    - Default: false (faster, lower cost, works for most static sites)
    - Set to true for sites that require JavaScript execution to display content
    - When to use true:
      * Single Page Applications (React, Angular, Vue.js)
      * Sites with dynamic content loading via AJAX
      * Content that appears only after JavaScript execution
      * Interactive web applications
      * Sites where initial HTML is mostly empty
    - When to use false (default):
      * Static websites and blogs
      * Server-side rendered content
      * Traditional HTML pages
      * News articles and documentation
      * When you need faster processing
    - Performance impact:
      * false: 2-5 seconds processing time
      * true: 15-30 seconds processing time (waits for JS execution)
    - Cost: Same (1 credit) regardless of render_heavy_js setting
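
For concreteness, the sketch below makes the equivalent direct HTTP call for both settings. It mirrors the client code shown under Implementation Reference; the base URL and the SGAI-APIKEY header name are assumptions about the hosted ScrapeGraphAI API, not guaranteed by this page.

    from typing import Any, Dict, Optional

    import httpx

    # Assumed values: neither the base URL nor the API-key header name is
    # stated on this page.
    BASE_URL = "https://api.scrapegraphai.com/v1"
    API_KEY = "sgai-..."  # placeholder

    def scrape_url(website_url: str, render_heavy_js: Optional[bool] = None) -> Dict[str, Any]:
        """POST to /scrape, mirroring the client method shown further down."""
        payload: Dict[str, Any] = {"website_url": website_url}
        if render_heavy_js is not None:
            payload["render_heavy_js"] = render_heavy_js
        response = httpx.post(
            f"{BASE_URL}/scrape",
            headers={"SGAI-APIKEY": API_KEY},
            json=payload,
            timeout=60.0,  # generous: JS rendering can take 15-30 seconds
        )
        response.raise_for_status()
        return response.json()

    # Static article: default fast path (2-5 seconds).
    article = scrape_url("https://news.site.com/article/123")

    # SPA dashboard: enable JS rendering (15-30 seconds, same 1-credit cost).
    dashboard = scrape_url("https://app.example.com/dashboard", render_heavy_js=True)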

Returns:

Dictionary containing:
    - html_content: The raw HTML content of the page as a string
    - page_title: Extracted page title if available
    - status_code: HTTP response status code (200 for success)
    - final_url: Final URL after any redirects
    - content_length: Size of the HTML content in bytes
    - processing_time: Time taken to fetch and process the page
    - javascript_rendered: Whether JavaScript rendering was used
    - credits_used: Number of credits consumed (always 1)
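
For orientation, a successful response might look like the dictionary below. The field names come from the list above; every value is invented for illustration.

    result = {
        "html_content": "<!DOCTYPE html><html>...</html>",
        "page_title": "Example Domain",
        "status_code": 200,
        "final_url": "https://example.com/page",
        "content_length": 1256,      # bytes
        "processing_time": 3.2,      # seconds (fast path, no JS rendering)
        "javascript_rendered": False,
        "credits_used": 1,
    }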

Raises:

    - ValueError: If website_url is malformed or missing protocol
    - HTTPError: If the webpage returns an error status (404, 500, etc.)
    - TimeoutError: If the page takes too long to load
    - ConnectionError: If the website cannot be reached
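
Below is a sketch of how a caller might map these failure modes onto httpx's exception hierarchy, reusing the hypothetical scrape_url helper from the earlier example. The mapping is an assumption drawn from the list above; the MCP handler itself catches errors and returns {"error": ...} dictionaries, as shown under Implementation Reference.

    import httpx

    try:
        result = scrape_url("https://example.com/page")  # helper from the earlier sketch
    except httpx.HTTPStatusError as err:  # error status: 404, 500, etc.
        print(f"HTTP {err.response.status_code} from {err.request.url}")
    except httpx.TimeoutException:  # page took too long to load
        print("Timed out; a retry without render_heavy_js may help")
    except httpx.ConnectError:  # website cannot be reached
        print("Connection failed")
    except ValueError as err:  # malformed URL / missing protocol
        print(f"Bad input: {err}")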

Use Cases:

    - Getting raw HTML for custom parsing
    - Checking page structure before using other tools
    - Fetching content for offline processing
    - Debugging website content issues
    - Pre-processing before AI extraction

Note:

    - This tool returns raw HTML without any AI processing
    - Use smartscraper for structured data extraction
    - Use markdownify for clean, readable content
    - Consider render_heavy_js=true if initial results seem incomplete

Input Schema

Name            | Required | Description | Default
website_url     | Yes      |             |
render_heavy_js | No       |             |
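
Since the table carries no descriptions or defaults, a plausible reconstruction of the input JSON Schema is shown below as a Python literal; it is inferred from the table, not the server's verbatim schema.

    INPUT_SCHEMA = {
        "type": "object",
        "properties": {
            "website_url": {"type": "string"},
            "render_heavy_js": {"type": "boolean"},
        },
        "required": ["website_url"],
    }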

Output Schema

No output schema is defined for this tool.

Implementation Reference

  • MCP tool handler for the 'scrape' tool. Fetches raw page content via the ScrapeGraph API, handling API-key retrieval and error wrapping.
    @mcp.tool(annotations={"readOnlyHint": True, "destructiveHint": False, "idempotentHint": True})
    def scrape(
        website_url: str,
        ctx: Context,
        render_heavy_js: Optional[bool] = None
    ) -> Dict[str, Any]:
        """
        Fetch raw page content from any URL with optional JavaScript rendering.
    
        This tool performs basic web scraping to retrieve the raw HTML content of a webpage.
        Optionally enable JavaScript rendering for Single Page Applications (SPAs) and sites with
        heavy client-side rendering. Lower cost than AI extraction (1 credit/page).
        Read-only operation with no side effects.
    
        Args:
            website_url (str): The complete URL of the webpage to scrape.
                - Must include protocol (http:// or https://)
                - Returns raw HTML content of the page
                - Works with both static and dynamic websites
                - Examples:
                  * https://example.com/page
                  * https://api.example.com/docs
                  * https://news.site.com/article/123
                  * https://app.example.com/dashboard (may need render_heavy_js=true)
                - Supported protocols: HTTP, HTTPS
                - Invalid examples:
                  * example.com (missing protocol)
                  * ftp://example.com (unsupported protocol)
    
            render_heavy_js (Optional[bool]): Enable full JavaScript rendering for dynamic content.
                - Default: false (faster, lower cost, works for most static sites)
                - Set to true for sites that require JavaScript execution to display content
                - When to use true:
                  * Single Page Applications (React, Angular, Vue.js)
                  * Sites with dynamic content loading via AJAX
                  * Content that appears only after JavaScript execution
                  * Interactive web applications
                  * Sites where initial HTML is mostly empty
                - When to use false (default):
                  * Static websites and blogs
                  * Server-side rendered content
                  * Traditional HTML pages
                  * News articles and documentation
                  * When you need faster processing
                - Performance impact:
                  * false: 2-5 seconds processing time
                  * true: 15-30 seconds processing time (waits for JS execution)
                - Cost: Same (1 credit) regardless of render_heavy_js setting
    
        Returns:
            Dictionary containing:
            - html_content: The raw HTML content of the page as a string
            - page_title: Extracted page title if available
            - status_code: HTTP response status code (200 for success)
            - final_url: Final URL after any redirects
            - content_length: Size of the HTML content in bytes
            - processing_time: Time taken to fetch and process the page
            - javascript_rendered: Whether JavaScript rendering was used
            - credits_used: Number of credits consumed (always 1)
    
        Raises:
            ValueError: If website_url is malformed or missing protocol
            HTTPError: If the webpage returns an error status (404, 500, etc.)
            TimeoutError: If the page takes too long to load
            ConnectionError: If the website cannot be reached
    
        Use Cases:
            - Getting raw HTML for custom parsing
            - Checking page structure before using other tools
            - Fetching content for offline processing
            - Debugging website content issues
            - Pre-processing before AI extraction
    
        Note:
            - This tool returns raw HTML without any AI processing
            - Use smartscraper for structured data extraction
            - Use markdownify for clean, readable content
            - Consider render_heavy_js=true if initial results seem incomplete
        """
        try:
            api_key = get_api_key(ctx)
            client = ScapeGraphClient(api_key)
            return client.scrape(website_url=website_url, render_heavy_js=render_heavy_js)
        except httpx.HTTPError as http_err:
            return {"error": str(http_err)}
        except ValueError as val_err:
            return {"error": str(val_err)}
  • Core implementation in the ScapeGraphClient class; makes an HTTP POST request to the ScrapeGraph API /scrape endpoint to fetch raw page content.
    def scrape(self, website_url: str, render_heavy_js: Optional[bool] = None) -> Dict[str, Any]:
        """
        Basic scrape endpoint to fetch page content.
    
        Args:
            website_url: URL to scrape
            render_heavy_js: Whether to render heavy JS (optional)
    
        Returns:
            Dictionary containing the scraped result
        """
        url = f"{self.BASE_URL}/scrape"
        payload: Dict[str, Any] = {"website_url": website_url}
        if render_heavy_js is not None:
            payload["render_heavy_js"] = render_heavy_js
    
        response = self.client.post(url, headers=self.headers, json=payload)
        response.raise_for_status()
        return response.json()
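
Putting the two pieces together, here is a minimal usage sketch for the client method above, assuming (as in the tool handler) that ScapeGraphClient is constructed from an API key alone:

    client = ScapeGraphClient(api_key="sgai-...")  # placeholder key

    result = client.scrape(
        website_url="https://app.example.com/dashboard",
        render_heavy_js=True,  # SPA: wait for client-side rendering
    )
    print(result.get("page_title"), result.get("credits_used"))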
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable context beyond this: it specifies cost (1 credit/page), performance impact (2-5 sec vs. 15-30 sec), and that it's 'lower cost than AI extraction,' which helps the agent make informed decisions without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (Args, Returns, Raises, Use Cases, Note) and front-loads key information. It is lengthy, however, with detailed examples and lists; some redundancy (e.g., the cost information is repeated) slightly reduces efficiency, even though nearly every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (web scraping with JS rendering), the description is highly complete. It covers purpose, usage, parameters, returns (though output schema exists), error handling, and sibling differentiation. With annotations and output schema provided, the description adds comprehensive context without gaps, making it fully adequate for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by detailing both parameters. For website_url, it explains requirements (protocol inclusion), examples, supported/unsupported protocols, and behavior. For render_heavy_js, it provides default values, usage scenarios, performance impacts, and cost implications, adding significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Fetch raw page content from any URL with optional JavaScript rendering' and 'performs basic web scraping to retrieve the raw HTML content of a webpage.' It distinguishes from siblings by mentioning alternatives like smartscraper for structured data extraction and markdownify for clean content, making the scope specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives: 'Use smartscraper for structured data extraction' and 'Use markdownify for clean, readable content.' It also details when to enable JavaScript rendering for SPAs vs. static sites, offering clear context and exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
