Search1API MCP Server

crawl

Extract content from any URL. Retrieve parsed text and structured data for further use.

Instructions

Extract content from URL

Input Schema

Name | Required | Description  | Default
---- | -------- | ------------ | -------
url  | Yes      | URL to crawl | —
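
For orientation, here is a minimal sketch of a call and its result, inferred from the handler shown below; the field values are placeholders:

    // Arguments matching the input schema: "url" is the only field and is required.
    const args = { url: "https://example.com/article" };

    // Shape of a successful response, per handleCrawl below: one text content
    // item whose text is the JSON-serialized crawl results. The title/link/content
    // fields follow the CrawlResult type from the implementation reference.
    const exampleResult = {
      content: [{
        type: "text",
        mimeType: "application/json",
        text: JSON.stringify(
          { title: "Example Article", link: "https://example.com/article", content: "..." },
          null,
          2
        )
      }]
    };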

Implementation Reference

  • The main handler function for the crawl tool. It validates the arguments, makes an API request to the CRAWL endpoint through the makeRequest helper (a hedged sketch of that helper follows this reference list), and returns the crawled results as JSON.
    export async function handleCrawl(args: unknown, apiKey?: string) {
      if (!isValidCrawlArgs(args)) {
        throw new McpError(
          ErrorCode.InvalidParams,
          "Invalid crawl arguments"
        );
      }
    
      const { url } = args;
    
      log("Starting crawl for:", url);
    
      try {
        const startTime = Date.now();
        const response = await makeRequest<CrawlResponse>(
          API_CONFIG.ENDPOINTS.CRAWL,
          { url },
          apiKey
        );
        const endTime = Date.now();
        log(`Crawl completed successfully in ${endTime - startTime}ms`);
    
        return {
          content: [{
            type: "text",
            mimeType: "application/json",
            text: JSON.stringify(response.results, null, 2)
          }]
        };
      } catch (error) {
        log("Crawl error:", error);
        return {
          content: [{
            type: "text",
            mimeType: "text/plain",
            text: `Crawl API error: ${formatError(error)}`
          }],
          isError: true
        };
      }
    }
  • Type definitions: CrawlResult (title, link, content), CrawlResponse (crawlParameters + results), and CrawlArgs (url).
    export interface CrawlResult {
      title: string;
      link: string;
      content: string;
    }
    
    export interface CrawlResponse {
      crawlParameters: {
        url: string;
      };
      results: CrawlResult;
    }
    
    export interface CrawlArgs {
      url: string;
    }
  • Validation function isValidCrawlArgs that checks args is a non-null object with a non-empty url string.
    export function isValidCrawlArgs(args: unknown): args is CrawlArgs {
      if (typeof args !== 'object' || args === null) {
        return false;
      }
    
      const { url } = args as CrawlArgs;
    
      if (typeof url !== 'string' || url.trim().length === 0) {
        return false;
      }
    
      return true;
    }
  • Tool registration: CRAWL_TOOL constant with name 'crawl', description 'Extract content from URL', and input schema requiring a url string.
    // Crawl tool definition
    export const CRAWL_TOOL: Tool = {
      name: "crawl",
      description: "Extract content from URL",
      inputSchema: {
        type: "object",
        properties: {
          url: {
            type: "string",
            description: "URL to crawl"
          }
        },
        required: ["url"]
      }
    };
  • Dispatcher function handleToolCall that routes 'crawl' tool name to handleCrawl.
    export async function handleToolCall(toolName: string, args: unknown, apiKey?: string) {
      log(`Handling tool call: ${toolName}`);
    
      switch (toolName) {
        case SEARCH_TOOL.name:
          return await handleSearch(args, apiKey);
    
        case CRAWL_TOOL.name:
          return await handleCrawl(args, apiKey);
    
        case SITEMAP_TOOL.name:
          return await handleSitemap(args, apiKey);
    
        case NEWS_TOOL.name:
          return await handleNews(args, apiKey);
    
        case REASONING_TOOL.name:
          return await handleReasoning(args, apiKey);
    
        case TRENDING_TOOL.name:
          return await handleTrending(args, apiKey);
    
        default:
          log(`Unknown tool: ${toolName}`);
          throw new McpError(
            ErrorCode.InvalidParams,
            `Unknown tool: ${toolName}`
          );
      }
    }
  • API config: ENDPOINTS.CRAWL is set to '/crawl' on the base API URL.
    export const API_CONFIG = {
      BASE_URL: 'https://api.search1api.com',
      DEFAULT_QUERY: 'latest news in the world',
      ENDPOINTS: {
        SEARCH: '/search',
        CRAWL: '/crawl',
        SITEMAP: '/sitemap',
        NEWS: '/news',
        REASONING: '/v1/chat/completions',
        TRENDING: '/trending'
      }
    } as const;
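
The excerpt references a makeRequest helper but does not include its implementation. Below is a minimal sketch of what it might look like, assuming a JSON POST with an optional Bearer-token Authorization header; both are assumptions, since only the call signature makeRequest<T>(endpoint, body, apiKey) appears above:

    // Hedged sketch, not the project's actual helper. Assumes Node 18+ fetch.
    async function makeRequest<T>(
      endpoint: string,
      body: Record<string, unknown>,
      apiKey?: string
    ): Promise<T> {
      const response = await fetch(`${API_CONFIG.BASE_URL}${endpoint}`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          // Assumption: the API authenticates via a Bearer token.
          ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {})
        },
        body: JSON.stringify(body)
      });
    
      if (!response.ok) {
        throw new Error(`Request to ${endpoint} failed: HTTP ${response.status}`);
      }
    
      return (await response.json()) as T;
    }

Under these assumptions, handleToolCall("crawl", { url: "https://example.com" }, apiKey) would resolve to the JSON content block described in the handler above.
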
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears full responsibility for behavioral disclosure. It only says 'Extract content from URL', omitting important details like content type (HTML, text), error handling, rate limits, or output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
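
MCP also defines optional tool annotations that can carry this kind of disclosure in structured form. A sketch of annotations that would fit this tool; the values are judgments, not from the upstream project:

    // Hedged sketch: MCP ToolAnnotations fields that could accompany the
    // registration to disclose behavior alongside the description.
    const crawlAnnotations = {
      readOnlyHint: true,     // fetching a page does not mutate server state
      destructiveHint: false,
      idempotentHint: true,   // repeat calls with the same URL are safe
      openWorldHint: true     // the tool reaches arbitrary external URLs
    };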

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (four words) but at the cost of missing crucial information. It is appropriately sized for a simple tool but could include more context without sacrificing brevity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and the tool's complexity, the description is incomplete. It fails to specify what content is extracted, how results are returned, or any limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter, which already describes 'url' as 'URL to crawl'. The description adds no additional semantics beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (extract content) and the resource (URL). It is specific enough to identify the tool's purpose, though it does not explicitly differentiate from siblings like search or sitemap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool over alternatives such as search or sitemap. The description lacks any contextual advice or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
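
Taken together, these critiques point toward a fuller registration. One illustrative rewrite; the wording below is a suggestion, not the upstream project's:

    // Illustrative only: a description addressing the behavior, completeness,
    // and usage-guidance gaps noted above.
    export const CRAWL_TOOL: Tool = {
      name: "crawl",
      description:
        "Extract the parsed text content of a single web page. " +
        "Returns the page title, link, and content as JSON. " +
        "Use when you already have a specific URL; use search to discover " +
        "URLs or sitemap to enumerate a site's pages. Read-only; requires " +
        "a Search1API key and is subject to the API's rate limits.",
      inputSchema: {
        type: "object",
        properties: {
          url: {
            type: "string",
            description: "Fully qualified URL of the page to crawl, e.g. https://example.com/article"
          }
        },
        required: ["url"]
      }
    };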
