MCP Webscan Server

by bsmi021

A Model Context Protocol (MCP) server for web content scanning and analysis. This server provides tools for fetching, analyzing, and extracting information from web pages.

Features

  • Page Fetching: Convert web pages to Markdown for easy analysis
  • Link Extraction: Extract and analyze links from web pages
  • Site Crawling: Recursively crawl websites to discover content
  • Link Checking: Identify broken links on web pages
  • Pattern Matching: Find URLs matching specific patterns
  • Sitemap Generation: Generate XML sitemaps for websites

Installation

Installing via Smithery

To install Webscan for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install mcp-server-webscan --client claude

Manual Installation

# Clone the repository
git clone <repository-url>
cd mcp-server-webscan

# Install dependencies
npm install

# Build the project
npm run build

Usage

Starting the Server

npm start

The server runs on stdio transport, making it compatible with MCP clients like Claude Desktop.
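
You can also exercise the server outside of Claude Desktop by driving it from a short script. The sketch below assumes the @modelcontextprotocol/sdk client package and a completed build at build/index.js; the client name is arbitrary:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the built server as a child process and connect over stdio.
const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"],
});

const client = new Client({ name: "webscan-smoke-test", version: "0.1.0" });
await client.connect(transport);

// List the tools the server advertises (fetch-page, extract-links, ...).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();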

Available Tools

  1. fetch-page
    • Fetches a web page and converts it to Markdown.
    • Parameters:
      • url (required): URL of the page to fetch.
      • selector (optional): CSS selector to target specific content.
  2. extract-links
    • Extracts all links from a web page with their text.
    • Parameters:
      • url (required): URL of the page to analyze.
      • baseUrl (optional): Base URL to filter links.
      • limit (optional, default: 100): Maximum number of links to return.
  3. crawl-site
    • Recursively crawls a website up to a specified depth.
    • Parameters:
      • url (required): Starting URL to crawl.
      • maxDepth (optional, default: 2): Maximum crawl depth (0-5).
  4. check-links
    • Checks for broken links on a page.
    • Parameters:
      • url (required): URL to check links for.
  5. find-patterns
    • Finds URLs matching a specific pattern.
    • Parameters:
      • url (required): URL to search in.
      • pattern (required): JavaScript-compatible regex pattern to match URLs against.
  6. generate-site-map
    • Generates a simple XML sitemap by crawling.
    • Parameters:
      • url (required): Root URL for sitemap crawl.
      • maxDepth (optional, default: 2): Maximum crawl depth for discovering URLs (0-5).
      • limit (optional, default: 1000): Maximum number of URLs to include in the sitemap.
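
Any MCP client can invoke these tools by name. A minimal sketch of a find-patterns call, reusing the SDK client from the Starting the Server section; the URL and regex pattern are illustrative:

// Find links to PDF files on a page (URL and pattern are illustrative).
const result = await client.callTool({
  name: "find-patterns",
  arguments: {
    url: "https://example.com",
    pattern: "\\.pdf$",
  },
});
console.log(result.content);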

Example Usage with Claude Desktop

  1. Configure the server in your Claude Desktop settings:
{ "mcpServers": { "webscan": { "command": "node", "args": ["path/to/mcp-server-webscan/build/index.js"], // Corrected path "env": { "NODE_ENV": "development", "LOG_LEVEL": "info" // Example: Set log level via env var } } } }
  2. Use the tools in your conversations:
Could you fetch the content from https://example.com and convert it to Markdown?

Development

Prerequisites

  • Node.js >= 18
  • npm

Project Structure (Post-Refactor)

mcp-server-webscan/
├── src/
│   ├── config/
│   │   └── ConfigurationManager.ts
│   ├── services/
│   │   ├── CheckLinksService.ts
│   │   ├── CrawlSiteService.ts
│   │   ├── ExtractLinksService.ts
│   │   ├── FetchPageService.ts
│   │   ├── FindPatternsService.ts
│   │   ├── GenerateSitemapService.ts
│   │   └── index.ts
│   ├── tools/
│   │   ├── checkLinksTool.ts
│   │   ├── checkLinksToolParams.ts
│   │   ├── crawlSiteTool.ts
│   │   ├── crawlSiteToolParams.ts
│   │   ├── extractLinksTool.ts
│   │   ├── extractLinksToolParams.ts
│   │   ├── fetchPageTool.ts
│   │   ├── fetchPageToolParams.ts
│   │   ├── findPatterns.ts
│   │   ├── findPatternsToolParams.ts
│   │   ├── generateSitemapTool.ts
│   │   ├── generateSitemapToolParams.ts
│   │   └── index.ts
│   ├── types/
│   │   ├── checkLinksTypes.ts
│   │   ├── crawlSiteTypes.ts
│   │   ├── extractLinksTypes.ts
│   │   ├── fetchPageTypes.ts
│   │   ├── findPatternsTypes.ts
│   │   ├── generateSitemapTypes.ts
│   │   └── index.ts
│   ├── utils/
│   │   ├── errors.ts
│   │   ├── index.ts
│   │   ├── logger.ts
│   │   ├── markdownConverter.ts
│   │   └── webUtils.ts
│   ├── initialize.ts
│   └── index.ts          # Main server entry point
├── build/                # Compiled JavaScript
├── node_modules/
├── .clinerules
├── .gitignore
├── Dockerfile
├── LICENSE
├── mcp-consistant-servers-guide.md
├── package.json
├── package-lock.json
├── README.md
├── RFC-2025-001-Refactor.md
├── smithery.yaml
└── tsconfig.json

Building

npm run build

Development Mode

npm run dev

Running Evals

The evals package loads an MCP client that runs the index.ts file directly, so there is no need to rebuild between tests. You can load environment variables by prefixing the npx command. Full documentation can be found here.

OPENAI_API_KEY=your-key npx mcp-eval src/evals/evals.ts src/tools/extractLinksTool.ts

Error Handling

The server implements comprehensive error handling:

  • Invalid parameters
  • Network errors
  • Content parsing errors
  • URL validation

All errors are properly formatted according to the MCP specification.
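
Per the MCP specification, tool-level failures are returned as an ordinary tool result with the isError flag set, rather than as a protocol-level error. A sketch of that shape in TypeScript, with an illustrative message:

// Shape of a tool-level error result under the MCP spec.
// The message text is illustrative, not the server's exact wording.
const errorResult = {
  isError: true,
  content: [
    { type: "text", text: "Invalid URL: 'not-a-url' could not be parsed" },
  ],
};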

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT License - see the LICENSE file for details
