
Tavily MCP Load Balancer

by yatotm

tavily-map

Create structured website maps to analyze site structure, discover content, and understand navigation paths for site audits and architecture reviews.

Instructions

A powerful web mapping tool that creates a structured map of website URLs, allowing you to discover and analyze site structure, content organization, and navigation paths. Perfect for site audits, content discovery, and understanding website architecture.

Input Schema

Name (required?, default): Description

allow_external (optional, default: false): Whether to allow following links that go to external domains
categories (optional, default: []): Filter URLs using predefined categories such as Documentation, Blog, or Careers
instructions (optional): Natural language instructions for the crawler
limit (optional, default: 50): Total number of links the crawler will process before stopping
max_breadth (optional, default: 20): Max number of links to follow per level of the tree (i.e., per page)
max_depth (optional, default: 1): Max depth of the mapping. Defines how far from the base URL the crawler can explore
select_domains (optional, default: []): Regex patterns to restrict crawling to specific domains or subdomains (e.g., ^docs\.example\.com$)
select_paths (optional, default: []): Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)
url (required): The root URL to begin the mapping
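Putting the schema together, a minimal MCP tools/call request for this tool could look like the sketch below. The target URL and parameter values are illustrative, not required choices; only url is mandatory.

```typescript
// Illustrative MCP tools/call payload for tavily-map.
// Only `url` is required; the other fields show typical overrides.
const mapCall = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "tavily-map",
    arguments: {
      url: "https://docs.example.com", // required root URL to begin the mapping
      max_depth: 2,                    // explore two levels from the root
      select_paths: ["/docs/.*"],      // keep only documentation paths
      allow_external: false,           // stay on the root domain
    },
  },
};
```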

Implementation Reference

  • The main handler for the 'tavily-map' tool call in the MCP stdio server. Parses arguments, invokes TavilyClient.map(), formats output with formatMapResults, and returns MCP content.
    case 'tavily-map': {
      const mapParams: TavilyMapParams = {
        url: args?.url as string,
        max_depth: (args?.max_depth as number) || 1,
        max_breadth: (args?.max_breadth as number) || 20,
        limit: (args?.limit as number) || 50,
        instructions: args?.instructions as string,
        select_paths: Array.isArray(args?.select_paths) ? args.select_paths : [],
        select_domains: Array.isArray(args?.select_domains) ? args.select_domains : [],
        allow_external: (args?.allow_external as boolean) || false,
        categories: Array.isArray(args?.categories) ? args.categories : []
      };
    
      const mapResult = await this.tavilyClient.map(mapParams);
    
      return {
        content: [
          {
            type: 'text',
            text: formatMapResults(mapResult),
          },
        ],
      };
    }
  • src/index.ts:557-635 (registration)
    Tool registration in listTools handler, defining name, description, and input schema for 'tavily-map'.
    {
      name: "tavily-map",
      description: "A powerful web mapping tool that creates a structured map of website URLs, allowing you to discover and analyze site structure, content organization, and navigation paths. Perfect for site audits, content discovery, and understanding website architecture.",
      inputSchema: {
        type: "object",
        properties: {
          url: {
            type: "string",
            description: "The root URL to begin the mapping"
          },
          max_depth: {
            type: "integer",
            description: "Max depth of the mapping. Defines how far from the base URL the crawler can explore",
            default: 1,
            minimum: 1
          },
          max_breadth: {
            type: "integer",
            description: "Max number of links to follow per level of the tree (i.e., per page)",
            default: 20,
            minimum: 1
          },
          limit: {
            type: "integer",
            description: "Total number of links the crawler will process before stopping",
            default: 50,
            minimum: 1
          },
          instructions: {
            type: "string",
            description: "Natural language instructions for the crawler"
          },
          select_paths: {
            type: "array",
            items: { type: "string" },
            description: "Regex patterns to select only URLs with specific path patterns (e.g., /docs/.*, /api/v1.*)",
            default: []
          },
          select_domains: {
            type: "array",
            items: { type: "string" },
            description: "Regex patterns to select crawling to specific domains or subdomains (e.g., ^docs\\.example\\.com$)",
            default: []
          },
          allow_external: {
            type: "boolean",
            description: "Whether to allow following links that go to external domains",
            default: false
          },
          categories: {
            type: "array",
            items: {
              type: "string",
              enum: ["Careers", "Blog", "Documentation", "About", "Pricing", "Community", "Developers", "Contact", "Media"]
            },
            description: "Filter URLs using predefined categories like documentation, blog, api, etc",
            default: []
          },
          extract_depth: {
            type: "string",
            enum: ["basic", "advanced"],
            description: "Advanced extraction retrieves more data, including tables and embedded content, with higher success but may increase latency",
            default: "basic"
          },
          format: {
            type: "string",
            enum: ["markdown", "text"],
            description: "The format of the extracted web page content. markdown returns content in markdown format. text returns plain text and may increase latency.",
            default: "markdown"
          },
          include_favicon: {
            type: "boolean",
            description: "Whether to include the favicon URL for each result",
            default: false
          }
        },
        required: ["url"]
      }
    } as Tool,
  • Core implementation in TavilyClient: sends HTTP POST request to Tavily API's /map endpoint with parameters and API key.
    async map(params: TavilyMapParams): Promise<any> {
      return this.makeRequest(this.baseURLs.map, params);
    }
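makeRequest itself is not shown in this excerpt. Assuming a conventional JSON POST (the endpoint URL and Bearer auth scheme below are assumptions, not taken from the source), the request it builds could be sketched as:

```typescript
// Hypothetical sketch of the request makeRequest() might issue for the /map
// endpoint. The auth header placement is an assumption about the Tavily API.
interface MapRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildMapRequest(baseURL: string, apiKey: string, params: object): MapRequest {
  return {
    url: baseURL,
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // auth scheme is an assumption
    },
    body: JSON.stringify(params), // TavilyMapParams serialized as the JSON body
  };
}

const req = buildMapRequest("https://api.tavily.com/map", "tvly-xxx", {
  url: "https://example.com",
});
```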
  • Type definition for input parameters of the tavily-map tool.
    export interface TavilyMapParams {
      url: string;
      max_depth?: number;
      max_breadth?: number;
      limit?: number;
      instructions?: string;
      select_paths?: string[];
      select_domains?: string[];
      allow_external?: boolean;
      categories?: string[];
    }
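The optional fields above are filled in by the handler before the API call. A pure helper makes that defaulting explicit (withMapDefaults is a hypothetical name; the real handler inlines this logic):

```typescript
// Same shape as the source's TavilyMapParams, repeated for self-containment.
interface TavilyMapParams {
  url: string;
  max_depth?: number;
  max_breadth?: number;
  limit?: number;
  instructions?: string;
  select_paths?: string[];
  select_domains?: string[];
  allow_external?: boolean;
  categories?: string[];
}

// Apply the schema defaults to any fields the caller omitted.
function withMapDefaults(p: TavilyMapParams): TavilyMapParams {
  return {
    url: p.url,
    max_depth: p.max_depth ?? 1,
    max_breadth: p.max_breadth ?? 20,
    limit: p.limit ?? 50,
    instructions: p.instructions,
    select_paths: p.select_paths ?? [],
    select_domains: p.select_domains ?? [],
    allow_external: p.allow_external ?? false,
    categories: p.categories ?? [],
  };
}
```

Note that the handler above uses ||, which would also replace an explicit falsy value such as 0 with the default; ?? only substitutes when the value is null or undefined.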
  • Formats the raw API response into a human-readable list of mapped URLs for the tool output.
    export function formatMapResults(response: any): string {
      try {
        if (!response || typeof response !== 'object') {
          return 'Error: Invalid map response format';
        }
    
        const output = [];
        output.push(`Site Map Results:`);
        output.push(`Base URL: ${sanitizeText(response.base_url, 500)}`);
        output.push('\nMapped Pages:');
    
        if (Array.isArray(response.results)) {
          response.results.slice(0, 50).forEach((page: any, index: number) => {
            const pageUrl = typeof page === 'string' ? page : page.url || page;
            output.push(`\n[${index + 1}] URL: ${sanitizeText(pageUrl, 500)}`);
          });
        }
    
        const result = output.join('\n');
    
        // Limit the total output size
        if (result.length > 20000) {
          return result.substring(0, 20000) + '\n\n... [site map too long, truncated]';
        }
    
        return result;
      } catch (error) {
        console.error('Error formatting map results:', error);
        return 'Error: Failed to format map results';
      }
    }
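For a well-formed response, the formatter reduces to the sketch below. Error handling and the 20,000-character truncation are dropped, and sanitizeText is stood in by a plain truncation, which is an assumption about its behavior.

```typescript
// Minimal stand-in for sanitizeText (assumed to truncate to maxLen).
const sanitizeText = (s: string, maxLen: number): string => String(s).slice(0, maxLen);

// Reduced formatMapResults for a well-formed response with string results.
function formatMapResults(response: { base_url: string; results: string[] }): string {
  const output: string[] = [
    `Site Map Results:`,
    `Base URL: ${sanitizeText(response.base_url, 500)}`,
    "\nMapped Pages:",
  ];
  response.results.slice(0, 50).forEach((url, index) => {
    output.push(`\n[${index + 1}] URL: ${sanitizeText(url, 500)}`);
  });
  return output.join("\n");
}

const sample = formatMapResults({
  base_url: "https://example.com",
  results: ["https://example.com/docs", "https://example.com/blog"],
});
```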
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the tool as 'powerful' and for 'discovering and analyzing,' which implies it performs read-only operations, but doesn't specify behavioral traits like rate limits, authentication needs, or potential impacts on target websites. The description adds value by explaining the mapping purpose but lacks detailed behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then elaborates on use cases. Every sentence earns its place by adding value without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (9 parameters, no annotations, no output schema), the description is somewhat complete but has gaps. It explains the tool's purpose and use cases well, but without annotations or output schema, it doesn't cover behavioral aspects or return values, leaving the agent to infer details from the schema alone.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description doesn't add specific parameter semantics beyond what the schema provides, such as explaining how 'categories' interact with mapping or the implications of 'max_depth.' Baseline 3 is appropriate since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'creates a structured map of website URLs' with specific verbs like 'discover and analyze site structure, content organization, and navigation paths.' It distinguishes from siblings like 'search' or 'extract' by focusing on mapping and structural analysis rather than general search or content extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Perfect for site audits, content discovery, and understanding website architecture.' It doesn't explicitly mention when not to use it or name alternatives among siblings, but the context strongly implies it's for structural mapping rather than other tasks like searching or crawling without mapping.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
