
Fetcher MCP

fetch_url

Retrieve web page content from a specified URL using a headless browser, with options to extract main content, disable media, and manage timeouts for efficient data collection.

Instructions

Retrieve web page content from a specified URL

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `debug` | No | Whether to enable debug mode (showing the browser window); overrides the `--debug` command-line flag if specified | unset |
| `disableMedia` | No | Whether to disable media resources (images, stylesheets, fonts, media) | `true` |
| `extractContent` | No | Whether to intelligently extract the main content | `true` |
| `maxLength` | No | Maximum length of returned content (in characters) | no limit |
| `navigationTimeout` | No | Maximum time to wait for additional navigation, in milliseconds | `10000` (10 seconds) |
| `returnHtml` | No | Whether to return HTML content instead of Markdown | `false` |
| `timeout` | No | Page loading timeout, in milliseconds | `30000` (30 seconds) |
| `url` | Yes | URL to fetch | (none) |
| `waitForNavigation` | No | Whether to wait for additional navigation after the initial page load (useful for sites with anti-bot verification) | `false` |
| `waitUntil` | No | When navigation is considered complete: `'load'`, `'domcontentloaded'`, `'networkidle'`, or `'commit'` | `'load'` |
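Under the schema above, only `url` is required; everything else falls back to the documented defaults. A hypothetical arguments object overriding a few of them might look like this (the URL and chosen values are illustrative, not from the source):

```typescript
// Hypothetical fetch_url arguments; only `url` is required.
const args = {
  url: "https://example.com",     // required
  timeout: 15000,                 // override the 30000 ms page-load default
  waitUntil: "domcontentloaded",  // 'load' | 'domcontentloaded' | 'networkidle' | 'commit'
  returnHtml: false,              // default: return Markdown rather than HTML
};
```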

Implementation Reference

  • The main handler function for the fetch_url tool. It launches a Chromium browser using Playwright, navigates to the URL with specified options, processes the content using WebContentProcessor, and returns the content.
    export async function fetchUrl(args: any) {
      const url = String(args?.url || "");
      if (!url) {
        console.error(`[Error] URL parameter missing`);
        throw new Error("URL parameter is required");
      }
    
      const options: FetchOptions = {
        timeout: Number(args?.timeout) || 30000,
        waitUntil: String(args?.waitUntil || "load") as 'load' | 'domcontentloaded' | 'networkidle' | 'commit',
        extractContent: args?.extractContent !== false,
        maxLength: Number(args?.maxLength) || 0,
        returnHtml: args?.returnHtml === true,
        waitForNavigation: args?.waitForNavigation === true,
        navigationTimeout: Number(args?.navigationTimeout) || 10000,
        disableMedia: args?.disableMedia !== false,
        debug: args?.debug
      };
    
  // Determine whether debug mode is enabled (a value passed in the
  // arguments takes precedence; otherwise fall back to the --debug
  // command-line flag)
  const useDebugMode = options.debug !== undefined ? options.debug : isDebugMode;
      
      if (useDebugMode) {
        console.log(`[Debug] Debug mode enabled for URL: ${url}`);
      }
    
      const processor = new WebContentProcessor(options, '[FetchURL]');
      let browser = null;
      let page = null;
    
      try {
        browser = await chromium.launch({ headless: !useDebugMode });
        const context = await browser.newContext({
          javaScriptEnabled: true,
          ignoreHTTPSErrors: true,
          userAgent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
        });
    
        await context.route('**/*', async (route) => {
          const resourceType = route.request().resourceType();
          if (options.disableMedia && ['image', 'stylesheet', 'font', 'media'].includes(resourceType)) {
            await route.abort();
          } else {
            await route.continue();
          }
        });
    
        page = await context.newPage();
        
        const result = await processor.processPageContent(page, url);
        
        return {
          content: [{ type: "text", text: result.content }]
        };
      } finally {
        if (!useDebugMode) {
          if (page) await page.close().catch(e => console.error(`[Error] Failed to close page: ${e.message}`));
          if (browser) await browser.close().catch(e => console.error(`[Error] Failed to close browser: ${e.message}`));
        } else {
          console.log(`[Debug] Browser and page kept open for debugging. URL: ${url}`);
        }
      }
    }
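The defaulting rules at the top of that handler can be isolated as a small pure function. This is a sketch for illustration (`FetchOptions` and the field names follow the code above, but this standalone helper is not part of the source):

```typescript
type WaitUntil = "load" | "domcontentloaded" | "networkidle" | "commit";

interface FetchOptions {
  timeout: number;
  waitUntil: WaitUntil;
  extractContent: boolean;
  maxLength: number;
  returnHtml: boolean;
  waitForNavigation: boolean;
  navigationTimeout: number;
  disableMedia: boolean;
  debug?: boolean;
}

// Mirrors the coercion in fetchUrl: numeric fields fall back to their
// defaults on NaN or 0; booleans default via strict comparisons, so
// extractContent/disableMedia default to true and the rest to false.
function normalizeOptions(args: any): FetchOptions {
  return {
    timeout: Number(args?.timeout) || 30000,
    waitUntil: String(args?.waitUntil || "load") as WaitUntil,
    extractContent: args?.extractContent !== false,
    maxLength: Number(args?.maxLength) || 0,
    returnHtml: args?.returnHtml === true,
    waitForNavigation: args?.waitForNavigation === true,
    navigationTimeout: Number(args?.navigationTimeout) || 10000,
    disableMedia: args?.disableMedia !== false,
    debug: args?.debug,
  };
}
```

One consequence of the `!== false` pattern is that any non-`false` value, including omission, enables content extraction and media blocking, while `returnHtml` and `waitForNavigation` require an explicit `true`.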
  • The schema definition for the fetch_url tool, including inputSchema with parameters like url, timeout, etc.
    export const fetchUrlTool = {
      name: "fetch_url",
      description: "Retrieve web page content from a specified URL",
      inputSchema: {
        type: "object",
        properties: {
          url: {
            type: "string",
            description: "URL to fetch",
          },
          timeout: {
            type: "number",
            description:
              "Page loading timeout in milliseconds, default is 30000 (30 seconds)",
          },
          waitUntil: {
            type: "string",
            description:
              "Specifies when navigation is considered complete, options: 'load', 'domcontentloaded', 'networkidle', 'commit', default is 'load'",
          },
          extractContent: {
            type: "boolean",
            description:
              "Whether to intelligently extract the main content, default is true",
          },
          maxLength: {
            type: "number",
            description:
              "Maximum length of returned content (in characters), default is no limit",
          },
          returnHtml: {
            type: "boolean",
            description:
              "Whether to return HTML content instead of Markdown, default is false",
          },
          waitForNavigation: {
            type: "boolean",
            description:
              "Whether to wait for additional navigation after initial page load (useful for sites with anti-bot verification), default is false",
          },
          navigationTimeout: {
            type: "number",
            description:
              "Maximum time to wait for additional navigation in milliseconds, default is 10000 (10 seconds)",
          },
          disableMedia: {
            type: "boolean",
            description:
              "Whether to disable media resources (images, stylesheets, fonts, media), default is true",
          },
          debug: {
            type: "boolean",
            description:
              "Whether to enable debug mode (showing browser window), overrides the --debug command line flag if specified",
          },
        },
        required: ["url"],
      }
    };
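A host can enforce the schema's `required` list before dispatching a call. The following is a minimal hand-rolled sketch (not from the source; a real server would typically delegate this to a full JSON Schema validator):

```typescript
// Report which required keys are absent from an arguments object,
// given an inputSchema-shaped definition like fetchUrlTool.inputSchema.
function missingRequired(
  schema: { required?: string[] },
  args: Record<string, unknown>
): string[] {
  return (schema.required ?? []).filter((key) => args[key] === undefined);
}

const inputSchema = { required: ["url"] };
```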
  • Registration of the fetch_url tool: exported in tools array and mapped in toolHandlers object by name to its handler function.
    import { fetchUrlTool, fetchUrl } from './fetchUrl.js';
    import { fetchUrlsTool, fetchUrls } from './fetchUrls.js';
    
    // Export tool definitions
    export const tools = [
      fetchUrlTool,
      fetchUrlsTool
    ];
    
    // Export tool implementations
    export const toolHandlers = {
      [fetchUrlTool.name]: fetchUrl,
      [fetchUrlsTool.name]: fetchUrls
    };
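Given that registration pattern, a host resolves a tool call by looking its name up in the handlers map. A minimal sketch with a stub handler (the dispatch helper and stub are illustrative; the real `fetchUrl` needs a live browser):

```typescript
type ToolHandler = (args: any) => Promise<{ content: { type: string; text: string }[] }>;

// Stub standing in for fetchUrl, which requires Playwright and Chromium.
const stubFetchUrl: ToolHandler = async (args) => ({
  content: [{ type: "text", text: `fetched ${args.url}` }],
});

const handlers: Record<string, ToolHandler> = { fetch_url: stubFetchUrl };

// Look up the handler by tool name and invoke it, as a server's
// call-tool request handler would.
function callTool(name: string, args: any) {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```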
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only states the basic action. It fails to mention critical traits such as rate limits, authentication needs, potential for blocking or CAPTCHAs, error handling, or what 'retrieve' entails (e.g., using a headless browser, returning structured data). The description is too minimal for a tool with 10 parameters and complex web interactions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It earns its place by clearly stating what the tool does, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, web scraping functionality) and lack of annotations and output schema, the description is incomplete. It doesn't address behavioral aspects, error cases, or return values, leaving significant gaps for an agent to understand how to use it effectively in real-world scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what the input schema provides. Since schema description coverage is 100%, with detailed descriptions for all 10 parameters, the baseline score of 3 is appropriate. The description doesn't compensate but doesn't need to, as the schema fully documents parameters like 'debug', 'extractContent', and 'waitUntil'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as retrieving web page content from a URL, using specific verbs ('retrieve') and resources ('web page content', 'specified URL'). It distinguishes the core function but doesn't explicitly differentiate from the sibling tool 'fetch_urls', which appears to be a plural/multiple URL version.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'fetch_urls' or other web scraping methods. It lacks context about prerequisites, limitations, or typical use cases, leaving the agent with no usage direction beyond the basic purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
