
MCP Server Fetch Python

by tatn

get-rendered-html

Fetch fully rendered HTML from a URL, including JavaScript-generated content, for web pages requiring client-side rendering. Ideal for SPAs and dynamic web applications.

Instructions

Fetches fully rendered HTML content using a headless browser, including JavaScript-generated content. Essential for modern web applications, single-page applications (SPAs), or any content that requires client-side rendering to be complete.

Input Schema

Name | Required | Description                                                             | Default
url  | Yes      | URL of the target web page (ordinary HTML including JavaScript, etc.). | —
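For context, a tool call carrying these arguments arrives as an MCP JSON-RPC `tools/call` request, roughly shaped like the sketch below (the `id` and URL are illustrative):

```
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-rendered-html",
    "arguments": { "url": "https://example.com" }
  }
}
```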

Implementation Reference

  • Core handler function that executes the tool logic: launches a headless Chromium browser using Playwright, navigates to the provided URL, retrieves the fully rendered HTML content, and returns it as a string.
    from playwright.async_api import async_playwright

    async def get_parsed_html_string_by_playwright(url: str) -> str:
        # Render the page in headless Chromium and return the final DOM as HTML.
        async with async_playwright() as p:
            browser = await p.chromium.launch()
            page = await browser.new_page()
            await page.goto(url)
            parsed_html = await page.content()
            await browser.close()
            return parsed_html
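  • Illustrative hardening (not in the source): the handler above applies no explicit timeout, so a slow page can stall the tool call indefinitely. One way to bound it is `asyncio.wait_for`; the stub fetcher below stands in for the Playwright call so the pattern runs without a browser.

```python
import asyncio

async def stub_fetcher(url: str) -> str:
    # Stand-in for get_parsed_html_string_by_playwright (no browser needed).
    return f"<html><body>rendered {url}</body></html>"

async def fetch_with_timeout(fetcher, url: str, timeout_s: float = 30.0) -> str:
    # Bound a potentially slow headless-browser fetch with a deadline.
    return await asyncio.wait_for(fetcher(url), timeout=timeout_s)

html = asyncio.run(fetch_with_timeout(stub_fetcher, "https://example.com"))
print(html)
```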
  • Dispatch logic in the main @server.call_tool() handler that invokes the specific implementation for 'get-rendered-html'.
    elif name == "get-rendered-html":
        parsed_html = await get_parsed_html_string_by_playwright(url)
        result_string = str(parsed_html)
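  • The dispatch branch above can be exercised end to end with a stub fetcher (names here are illustrative, not the server's actual code), showing how the `call_tool` handler routes by tool name:

```python
import asyncio

async def stub_fetch(url: str) -> str:
    # Stand-in for the Playwright-backed fetcher.
    return f"<html>{url}</html>"

async def call_tool(name: str, arguments: dict) -> str:
    # Minimal dispatch mirroring the elif branch shown above.
    if name == "get-rendered-html":
        parsed_html = await stub_fetch(arguments["url"])
        return str(parsed_html)
    raise ValueError(f"unknown tool: {name}")

result = asyncio.run(call_tool("get-rendered-html", {"url": "https://example.com"}))
print(result)  # → <html>https://example.com</html>
```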
  • Registers the 'get-rendered-html' tool in the MCP server's list_tools(), including name, description, and input schema.
     types.Tool(
        name="get-rendered-html",
        description="Fetches fully rendered HTML content using a headless browser, including JavaScript-generated content. Essential for modern web applications, single-page applications (SPAs), or any content that requires client-side rendering to be complete.",  # noqa: E501
        inputSchema={
            "type": "object",
            "properties": {
                "url": {"type": "string", "description":"URL of the target web page (ordinary HTML including JavaScript, etc.)."}  # noqa: E501
            },
            "required": ["url"],
        },
    ),
  • JSON schema defining the tool's input: an object with a required 'url' string property.
    inputSchema={
        "type": "object",
        "properties": {
            "url": {"type": "string", "description":"URL of the target web page (ordinary HTML including JavaScript, etc.)."}  # noqa: E501
        },
        "required": ["url"],
    },
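  • Because the schema marks `url` as required, arguments can be checked before dispatch. A minimal stdlib sketch (`validate_arguments` is a hypothetical helper, not part of the server shown above):

```python
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "url": {
            "type": "string",
            "description": "URL of the target web page (ordinary HTML including JavaScript, etc.).",
        }
    },
    "required": ["url"],
}

def validate_arguments(arguments: dict, schema: dict) -> list:
    """Return a list of error strings; an empty list means the arguments pass."""
    errors = []
    for key in schema.get("required", []):
        if key not in arguments:
            errors.append(f"missing required property: {key}")
    for key, value in arguments.items():
        expected = schema.get("properties", {}).get(key, {}).get("type")
        if expected == "string" and not isinstance(value, str):
            errors.append(f"property {key!r} must be a string")
    return errors

print(validate_arguments({"url": "https://example.com"}, INPUT_SCHEMA))  # → []
print(validate_arguments({}, INPUT_SCHEMA))  # → ['missing required property: url']
```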
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that the tool uses a headless browser and fetches JavaScript-generated content, which adds some context beyond the basic 'fetch' operation. However, it lacks details on performance characteristics (e.g., speed, timeouts), error handling, or output format, leaving significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized, with two sentences front-loaded with the key information (purpose and method). Every sentence contributes meaningfully, though the second could trim its overlapping use-case examples.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (involving headless browsers and dynamic content) and the absence of both annotations and an output schema, the description is incomplete. It explains what the tool does and when to use it but lacks details on behavioral traits (e.g., performance, errors) and output format, which are critical for effective use. However, it covers the core purpose adequately for a basic understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the single parameter 'url' clearly documented in the schema. The description adds no parameter-specific information beyond the schema (no examples, constraints, or format details), so it meets the baseline set by full schema coverage but contributes no extra value on top.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetches fully rendered HTML content') and resources ('using a headless browser'), and distinguishes it from likely siblings by emphasizing JavaScript-generated content and client-side rendering. However, it doesn't explicitly name or differentiate from the actual sibling tools (get-markdown, get-markdown-from-media, get-raw-text).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('modern web applications, single-page applications (SPAs), or any content that requires client-side rendering'), which implicitly suggests alternatives for static content. It doesn't explicitly state when not to use it or name specific alternative tools, but the context is sufficiently detailed to guide usage decisions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

