tavily-extract

Extract and process raw web content from URLs for data collection, content analysis, and research tasks with configurable depth and image inclusion.

Instructions

A powerful web content extraction tool that retrieves and processes raw content from specified URLs, ideal for data collection, content analysis, and research tasks.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| urls | Yes | List of URLs to extract content from | |
| extract_depth | No | Depth of extraction, 'basic' or 'advanced'; use 'advanced' if the URLs are LinkedIn pages or if explicitly told to | basic |
| include_images | No | Include a list of images extracted from the URLs in the response | false |
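A hypothetical example of the arguments an agent might pass when calling this tool (the URL is a placeholder, not from the source):

```typescript
// Illustrative tavily-extract call arguments (URL is a placeholder).
const args = {
  urls: ["https://example.com/article"], // required: list of URLs
  extract_depth: "basic",                // "advanced" for LinkedIn URLs
  include_images: false,                 // omit the image list from the response
};
console.log(JSON.stringify(args));
```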

Implementation Reference

  • The core handler function that executes the tavily-extract tool logic by making a POST request to Tavily's extract API endpoint using axios, handling parameters like urls and extract_depth, and managing errors such as invalid API key or rate limits.
    async extract(params: any): Promise<TavilyResponse> {
      try {
        const response = await this.axiosInstance.post(this.baseURLs.extract, {
          ...params,
          api_key: API_KEY
        });
        return response.data;
      } catch (error: any) {
        if (error.response?.status === 401) {
          throw new Error('Invalid API key');
        } else if (error.response?.status === 429) {
          throw new Error('Usage limit exceeded');
        }
        throw error;
      }
    }
  • Defines the input schema for validation of tavily-extract tool arguments, specifying required 'urls' array and optional 'extract_depth' and 'include_images' parameters.
    inputSchema: {
      type: "object",
      properties: {
        urls: { 
          type: "array",
          items: { type: "string" },
          description: "List of URLs to extract content from"
        },
        extract_depth: { 
          type: "string",
          enum: ["basic","advanced"],
          description: "Depth of extraction - 'basic' or 'advanced'; use 'advanced' if the URLs are LinkedIn pages or if explicitly told to",
          default: "basic"
        },
        include_images: { 
          type: "boolean", 
          description: "Include a list of images extracted from the urls in the response",
          default: false,
        }
      },
      required: ["urls"]
    }
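A minimal sketch of what validation against this schema amounts to. `validateExtractArgs` is hypothetical; the server may rely on an SDK validator instead:

```typescript
// Hypothetical check of tool arguments against the schema above:
// "urls" is a required array of strings; "extract_depth" defaults
// to "basic" and must be one of "basic" | "advanced".
function validateExtractArgs(args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  const urls = args.urls;
  if (!Array.isArray(urls) || !urls.every((u) => typeof u === "string")) {
    errors.push("urls must be an array of strings");
  }
  const depth = args.extract_depth ?? "basic";
  if (depth !== "basic" && depth !== "advanced") {
    errors.push("extract_depth must be 'basic' or 'advanced'");
  }
  return errors;
}
console.log(validateExtractArgs({ urls: ["https://example.com"] }).length); // prints 0
```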
  • src/index.ts:191-216 (registration)
    Registers the tavily-extract tool in the ListTools response, providing name, description, and input schema.
    {
      name: "tavily-extract",
      description: "A powerful web content extraction tool that retrieves and processes raw content from specified URLs, ideal for data collection, content analysis, and research tasks.",
      inputSchema: {
        type: "object",
        properties: {
          urls: { 
            type: "array",
            items: { type: "string" },
            description: "List of URLs to extract content from"
          },
          extract_depth: { 
            type: "string",
            enum: ["basic","advanced"],
            description: "Depth of extraction - 'basic' or 'advanced'; use 'advanced' if the URLs are LinkedIn pages or if explicitly told to",
            default: "basic"
          },
          include_images: { 
            type: "boolean", 
            description: "Include a list of images extracted from the urls in the response",
            default: false,
          }
        },
        required: ["urls"]
      }
    },
  • Dispatch case in the CallToolRequest handler that routes tavily-extract calls to the extract method with parsed arguments.
    case "tavily-extract":
      response = await this.extract({
        urls: args.urls,
        extract_depth: args.extract_depth,
        include_images: args.include_images
      });
      break;
  • Defines the API base URLs, including the extract endpoint used by the tavily-extract tool.
    private baseURLs = {
      search: 'https://api.tavily.com/search',
      extract: 'https://api.tavily.com/extract',
      crawl: 'https://api.tavily.com/crawl',
      map: 'https://api.tavily.com/map'
    };

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'retrieves and processes raw content' but lacks details on rate limits, authentication needs, error handling, or what 'processes' entails (e.g., formatting, cleaning). For a web extraction tool with potential complexities, this leaves significant gaps in understanding its behavior beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('retrieves and processes raw content from specified URLs') and adds value with ideal use cases. There's no wasted wording, though it could be slightly more structured by separating functional description from usage contexts.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (web content extraction with processing), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what 'processes' means, the format or structure of returned content, potential limitations (e.g., site restrictions), or how it differs from siblings. For a tool with three parameters and no structured behavioral hints, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, clearly documenting all three parameters. The description adds no parameter-specific information beyond what the schema provides, such as examples or contextual usage tips. However, since the schema is comprehensive, a baseline score of 3 is appropriate as the description doesn't need to compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'retrieves and processes raw content from specified URLs' with specific verbs and resources. It distinguishes itself from potential siblings by focusing on extraction rather than crawling, mapping, or searching, though it doesn't explicitly name alternatives. The description is specific but could be more precise about what 'processes' entails.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance with 'ideal for data collection, content analysis, and research tasks,' but offers no explicit when-to-use rules, exclusions, or comparisons to sibling tools like tavily-crawl, tavily-map, or tavily-search. There's no mention of prerequisites, limitations, or scenarios where this tool is preferred over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
