Glama
127,390 tools. Last updated 2026-05-05 15:21

"A search for information about deep crawl web crawling and site auditing tools" matching MCP tools:

  • Submit a sitemap URL to Google Search Console to register or refresh a sitemap entry, triggering Google to re-crawl the feedpath. Safe to call repeatedly; re-submitting re-queues a crawl without duplicates. Requires verified site ownership.
    Apache 2.0
  • Retrieve detailed information about a specific company in Teamwork Desk by its ID. Use for auditing records, troubleshooting ticket associations, or integrating company data into automation workflows.
    MIT
  • Search the web for current information, news, articles, and websites to find up-to-date content, research topics, or answer questions about recent events.
    Apache 2.0
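The sitemap-submission tool above corresponds to the Google Search Console API's `sitemaps.submit` method. A minimal sketch using `google-api-python-client` (the `credentials` object is an assumption here; the call requires OAuth credentials for a verified property, and `sitemap_feedpath` is an illustrative helper, not part of the API):

```python
def sitemap_feedpath(site_url: str, name: str = "sitemap.xml") -> str:
    """Join a site URL and a sitemap filename into an absolute feedpath (illustrative helper)."""
    return site_url.rstrip("/") + "/" + name


def submit_sitemap(site_url: str, feedpath: str, credentials) -> None:
    """Submit (or re-submit) a sitemap to Google Search Console.

    Re-submitting the same feedpath re-queues a crawl without creating
    duplicate entries, so calling this repeatedly is safe.
    """
    # Imported lazily so sitemap_feedpath() works without the client library.
    from googleapiclient.discovery import build

    service = build("searchconsole", "v1", credentials=credentials)
    service.sitemaps().submit(siteUrl=site_url, feedpath=feedpath).execute()
```

Because submission is idempotent per the tool description, a scheduler can safely call `submit_sitemap` after every content deploy.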

Matching MCP Servers

  • Enables deep web search across multiple providers including Google, Bing, Brave, DuckDuckGo, and Perplexity, with support for comprehensive AI-powered research using intelligent multi-engine queries.
    License: A · Quality: B · Maintenance: C
    MIT

Matching MCP Connectors

  • Retrieve detailed information about a specific Teamwork Desk ticket type by its ID. Use for auditing type usage, troubleshooting categorization, or automation integration.
    MIT
  • Get detailed information about a support agent by their ID. Use for auditing records, troubleshooting ticket assignments, or integrating agent data into workflows.
    MIT
  • Search Jina AI's official blog for articles about AI, machine learning, neural search, embeddings, and Jina products to find documentation, tutorials, announcements, and technical deep-dives.
    Apache 2.0
  • Get detailed information about a specific status in Teamwork Desk by its ID for auditing, troubleshooting, or automation integration.
    MIT
  • Crawl websites from a starting URL to extract structured content, controlling depth, breadth, and focus areas for targeted data collection.
  • Fetch and parse a target domain's robots.txt to retrieve sitemaps, per-user-agent allow/disallow rules, crawl-delay, and host directive. Use before crawling to honor published site rules.
    MIT
  • Crawl a website to gather content from multiple pages. Returns a job ID for async polling. Best for whole-site extraction; for single pages use scrape, for URL discovery use map.
  • Initiate web or Google Drive searches to find new sources for research topics. Choose between fast or deep search modes to gather relevant information for your notebook.
    MIT
  • Search web or Google Drive to find new sources for research topics, supporting both fast and deep research modes to gather information efficiently.
    MIT
  • Discover all URLs on a website quickly and cheaply. Use this tool to survey a site before deciding what to crawl deeply. Optionally filter results by search term, limit output, or control sitemap usage. Returns URLs with optional titles and descriptions.
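The robots.txt connector above maps directly onto Python's standard-library `urllib.robotparser`. A minimal sketch with the file content inlined for illustration (a real crawler would fetch `https://<domain>/robots.txt` first):

```python
from urllib.robotparser import RobotFileParser

# Sample robots.txt content; inlined so the sketch runs offline.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 5

Sitemap: https://example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Per-user-agent allow/disallow rules, crawl-delay, and sitemap discovery.
print(parser.can_fetch("*", "https://example.com/private/page"))  # False
print(parser.can_fetch("*", "https://example.com/public/page"))   # True
print(parser.site_maps())
print(parser.crawl_delay("*"))
```

Checking `crawl_delay` before each request and `can_fetch` before each URL is the standard way to honor published site rules.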
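Several crawl connectors above return a job ID for async polling rather than blocking on a whole-site crawl. The poll loop pattern can be sketched as follows; `start_crawl` and `get_status` are hypothetical stand-ins for a crawl service's API, stubbed in memory so the sketch runs offline:

```python
import time

# Hypothetical in-memory job store standing in for a crawl service that
# returns a job ID immediately and finishes the crawl in the background.
_JOBS = {}


def start_crawl(url: str) -> str:
    """Kick off a crawl and return a job ID right away (stubbed)."""
    job_id = f"job-{len(_JOBS) + 1}"
    # The stub "completes" after two status checks to exercise the poll loop.
    _JOBS[job_id] = {"checks_left": 2, "url": url}
    return job_id


def get_status(job_id: str) -> dict:
    """Report job status; flips to 'completed' once the work is done (stubbed)."""
    job = _JOBS[job_id]
    if job["checks_left"] > 0:
        job["checks_left"] -= 1
        return {"status": "running"}
    return {"status": "completed", "pages": [job["url"]]}


def wait_for_crawl(job_id: str, interval: float = 0.01, timeout: float = 5.0) -> list:
    """Poll until the job completes, sleeping between checks; fail past the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status["status"] == "completed":
            return status["pages"]
        time.sleep(interval)
    raise TimeoutError(f"crawl {job_id} did not finish in {timeout}s")


pages = wait_for_crawl(start_crawl("https://example.com"))
```

A real client would use a longer `interval` (often with exponential backoff) to avoid hammering the status endpoint.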
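The depth- and breadth-controlled crawling that the crawl and map connectors describe is a bounded breadth-first traversal. A self-contained sketch over a toy in-memory link graph (an assumption standing in for fetching each page and extracting its links):

```python
from collections import deque

# Toy link graph: page path -> outgoing links, standing in for fetched pages.
LINKS = {
    "/": ["/docs", "/blog", "/about"],
    "/docs": ["/docs/api", "/docs/guide"],
    "/blog": ["/blog/post-1"],
    "/about": [],
    "/docs/api": [],
    "/docs/guide": [],
    "/blog/post-1": [],
}


def crawl(start: str, max_depth: int = 1, max_breadth: int = 2) -> list:
    """Breadth-first crawl bounded by depth and by links followed per page."""
    seen = {start}
    order = []
    queue = deque([(start, 0)])
    while queue:
        url, depth = queue.popleft()
        order.append(url)
        if depth == max_depth:
            continue  # depth limit: do not expand this page's links
        for link in LINKS.get(url, [])[:max_breadth]:  # breadth limit per page
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
    return order


# Depth 1, breadth 2: the start page plus its first two links.
print(crawl("/", max_depth=1, max_breadth=2))
```

Surveying with a cheap URL-discovery pass first (as the map connector suggests) lets you pick `start`, `max_depth`, and `max_breadth` before committing to a deep crawl.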