MCP Server Fetch TypeScript

by tatn

get_rendered_html

Fetch fully rendered HTML content from web pages requiring JavaScript execution, including single-page applications and dynamic content.

Instructions

Fetches fully rendered HTML content using a headless browser, including JavaScript-generated content. Essential for modern web applications, single-page applications (SPAs), or any content that requires client-side rendering to be complete.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| url | Yes | URL of the target web page that requires JavaScript execution or dynamic content rendering. | (none) |
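Assuming the standard MCP `tools/call` request shape, an agent invoking this tool would send a payload like the following sketch (the URL value is illustrative):

```typescript
// Hypothetical tools/call request body for get_rendered_html.
// Field names follow the MCP JSON-RPC convention; only 'url' is required
// by the input schema above.
const callToolRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_rendered_html",
    arguments: {
      url: "https://example.com/spa-page",
    },
  },
};
```

The server responds with a `content` array containing a single `text` item holding the rendered HTML.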

Implementation Reference

  • src/index.ts:68-81 (registration)
    Registers the get_rendered_html tool with its description and input schema (requires 'url').
    {
      name: "get_rendered_html",
      description: "Fetches fully rendered HTML content using a headless browser, including JavaScript-generated content. Essential for modern web applications, single-page applications (SPAs), or any content that requires client-side rendering to be complete.",
      inputSchema: {
        type: "object",
        properties: {
          url: {
            type: "string",
            description: "URL of the target web page that requires JavaScript execution or dynamic content rendering."
          }
        },
        required: ["url"]
      }
    },
  • Tool handler case in CallToolRequestSchema that fetches rendered HTML via getHtmlString and returns it as text content.
    case "get_rendered_html": {
      return {
        content: [{
          type: "text",
          text: (await getHtmlString(url))
        }]
      };
    }
  • Executes the tool logic: launches headless Chromium browser with Playwright, navigates to URL, waits for DOM content loaded, retrieves full HTML content, handles errors and cleanup.
    import { chromium, Browser, Page } from 'playwright';

    async function getHtmlString(request_url: string): Promise<string> {
      let browser: Browser | null = null;
      let page: Page | null = null;
      try {
        browser = await chromium.launch({
          headless: true,
          // args: ['--single-process'], 
        });
        const context = await browser.newContext();
        page = await context.newPage();
    
        await page.goto(request_url, {
          waitUntil: 'domcontentloaded',
          timeout: TIMEOUT,
        });
        const htmlString = await page.content();
        return htmlString;
      } catch (error) {
        console.error(`Failed to fetch HTML for ${request_url}:`, error);
        return ""; 
      } finally {
        if (page) {
          try {
            await page.close();
          } catch (e) {
            console.error("Error closing page:", e);
          }
        }
        if (browser) {
          try {
            await browser.close();
          } catch (error) {
            console.error('Error closing browser:', error);
          }
        }
      }
    }
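Because `getHtmlString` swallows errors and returns an empty string, callers that need resilience may want a retry wrapper. A minimal sketch, written generically over any string-producing fetcher so it can be exercised without launching a browser (the wrapper is an illustration, not part of the source):

```typescript
// Sketch: retry a fetcher that signals failure with an empty string.
// The empty-string convention mirrors getHtmlString's error path above.
async function fetchWithRetry(
  fetcher: (url: string) => Promise<string>,
  url: string,
  attempts = 3,
): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const html = await fetcher(url);
    if (html !== "") return html; // non-empty result means success
  }
  return ""; // all attempts failed
}
```

Usage would be `await fetchWithRetry(getHtmlString, url)`. A design note: returning `""` on failure keeps the tool from crashing the server, but it also makes failures indistinguishable from genuinely empty pages; surfacing an MCP error result would be more informative to the agent.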
  • Input schema defining the required 'url' parameter as a string.
    inputSchema: {
      type: "object",
      properties: {
        url: {
          type: "string",
          description: "URL of the target web page that requires JavaScript execution or dynamic content rendering."
        }
      },
      required: ["url"]
    }
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the tool's approach ('using a headless browser') and scope ('including JavaScript-generated content'), which adds value beyond the input schema. However, it omits details like performance characteristics, error handling, and output format, leaving gaps for an operation that launches a browser and makes external network requests.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized: two front-loaded sentences with no wasted words. The first states the core functionality and the second provides essential usage context, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (rendering dynamic content) and lack of annotations and output schema, the description is adequate but incomplete. It covers the purpose and usage context but misses details like what the returned HTML includes (e.g., full DOM, specific elements), potential limitations, or error scenarios, which are crucial for such an operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'url' parameter well-documented in the schema itself. The description adds marginal context by implying the URL must target pages needing JavaScript execution, but doesn't provide additional syntax, format, or validation details beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('fetches fully rendered HTML content') and resources ('using a headless browser'), distinguishing it from sibling tools that fetch markdown or raw text. However, it doesn't explicitly differentiate from potential non-sibling alternatives like basic HTML fetchers, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('modern web applications, single-page applications (SPAs), or any content that requires client-side rendering'), which implicitly distinguishes it from sibling tools that handle markdown or raw text. It lacks explicit exclusions or named alternatives, preventing a score of 5.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

