code-alchemist01

Development Tools MCP Server

extract_links

Extract all links from a web page to analyze content structure, discover resources, or gather URLs for development workflows. Supports both static and dynamic content extraction.

Instructions

Extract all links from a web page

Input Schema

Name        Required  Description                      Default
url         Yes       URL to scrape                    -
useBrowser  No        Use browser for dynamic content  false
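
A tools/call request for this tool carries a name and an arguments object matching the schema above. A minimal sketch of that params payload, assuming the standard MCP wire format (the example URL is arbitrary):

    // Sketch of the `params` object an MCP client sends in a `tools/call`
    // request for this tool; the example URL is arbitrary.
    const params = {
      name: 'extract_links',
      arguments: {
        url: 'https://example.com',
        useBrowser: false, // optional; set true for JavaScript-rendered pages
      },
    };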

Implementation Reference

  • Main handler dispatch for the 'extract_links' tool within the handleWebScrapingTool function. It chooses between the dynamic and static scrapers based on the useBrowser flag and returns the extracted links.
    case 'extract_links': {
      if (config.useBrowser) {
        const data = await dynamicScraper.scrapeDynamicContent(config);
        return data.links;
      } else {
        return await staticScraper.extractLinks(config);
      }
    }
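  • The dispatch above implies a ScrapingConfig type shared by both scrapers. A hypothetical reconstruction of its minimal shape, inferred from the handler's usage rather than taken from the repository:
    // Hypothetical: only `url` and `useBrowser` are confirmed by the input
    // schema; the real ScrapingConfig may carry additional fields.
    interface ScrapingConfig {
      url: string;          // page to fetch (required)
      useBrowser?: boolean; // route to the browser-based scraper; defaults to false
    }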
  • Tool definition for 'extract_links' (name, description, and input schema), registered in the webScrapingTools array, which is later included in the MCP server's allTools.
    {
      name: 'extract_links',
      description: 'Extract all links from a web page',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'URL to scrape',
          },
          useBrowser: {
            type: 'boolean',
            description: 'Use browser for dynamic content',
            default: false,
          },
        },
        required: ['url'],
      },
    },
  • Static scraper implementation of extractLinks: invokes scrapeHTML to parse the page and returns its links array (empty if none were found).
    async extractLinks(config: ScrapingConfig): Promise<string[]> {
      const data = await this.scrapeHTML(config);
      return data.links || [];
    }
  • Core logic for extracting links using Cheerio: iterates over <a> tags, resolves relative URLs against the page URL, and collects the resulting hrefs, skipping any that fail to parse.
    const links: string[] = [];
    $('a[href]').each((_, element) => {
      const href = $(element).attr('href');
      if (href) {
        try {
          const url = new URL(href, config.url);
          links.push(url.href);
        } catch {
          // Invalid URL, skip
        }
      }
    });
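  • To experiment with this pattern outside the server, here is a self-contained sketch using the same Cheerio calls (assumes Node 18+ for the global fetch; deduplication via a Set is an added design choice, not from the source):
    import * as cheerio from 'cheerio';

    // Fetch a page, collect every <a href>, resolve relative hrefs against
    // the page URL, and skip anything that fails to parse as a URL.
    async function extractAllLinks(pageUrl: string): Promise<string[]> {
      const html = await (await fetch(pageUrl)).text();
      const $ = cheerio.load(html);
      const links = new Set<string>();
      $('a[href]').each((_, element) => {
        const href = $(element).attr('href');
        if (!href) return;
        try {
          links.add(new URL(href, pageUrl).href); // resolves relative hrefs
        } catch {
          // Malformed href, skip
        }
      });
      return [...links];
    }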
  • src/server.ts:18-25 (registration)
    Top-level registration: includes webScrapingTools (containing extract_links) into the allTools list provided to the MCP server for tool listing.
    const allTools = [
      ...codeAnalysisTools,
      ...codeQualityTools,
      ...dependencyAnalysisTools,
      ...lintingTools,
      ...webScrapingTools,
      ...apiDiscoveryTools,
    ];
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'extract' implies a read operation, the description doesn't state that the tool makes network requests, what happens with dynamic content (only hinted at by the 'useBrowser' parameter), whether rate limits or authentication apply, or what format the extracted links are returned in. The description is minimal and lacks important operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that states exactly what the tool does without any unnecessary words. It's perfectly front-loaded and wastes no space on redundant information, making it highly efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a web scraping tool with no annotations and no output schema, the description is insufficient. It doesn't explain what constitutes a 'link', how results are structured, whether there are limits (such as a maximum number of links extracted), or how different types of web pages are handled. Given the complexity of web scraping and the lack of structured behavioral information, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter information beyond what's already in the schema (which has 100% coverage). It doesn't explain what 'extract all links' means in practice or how links are identified, and it gives no context on the implications of the 'useBrowser' parameter. With complete schema coverage the baseline is 3, but the description doesn't enhance understanding of parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('extract') and the target resource ('all links from a web page'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from siblings like 'extract_images', 'extract_tables', or 'extract_text', which all perform extraction on web pages but target different content types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With multiple sibling extraction tools (extract_images, extract_tables, extract_text, extract_after_click, extract_api_schema) and scraping tools (scrape_by_selector, scrape_dynamic_content, scrape_html), there's no indication of when link extraction is appropriate versus other extraction methods or when to choose this over general scraping tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
