🔍 Prysm MCP Server
The Prysm MCP (Model Context Protocol) Server enables AI assistants such as Claude to scrape web content with high accuracy and flexibility.
✨ Features
- 🎯 Multiple Scraping Modes: Choose from focused (fast), balanced (default), or deep (thorough) modes
- 🧠 Content Analysis: Analyze URLs to determine the best scraping approach
- 📄 Format Flexibility: Format results as markdown, HTML, or JSON
- 🖼️ Image Support: Optionally extract and even download images
- 🔍 Smart Scrolling: Configure scroll behavior for single-page applications
- 📱 Responsive: Adapts to different website layouts and structures
- 💾 File Output: Save formatted results to your preferred directory
🚀 Quick Start
Installation
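A minimal install sketch, assuming the server is published to npm under a name like `@pinkpixel/prysm-mcp` (the package name here is a placeholder; check the integration guides below for the exact one):

```bash
# Hypothetical package name - replace with the published Prysm MCP package
npm install -g @pinkpixel/prysm-mcp
```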
Integration Guides
We provide detailed integration guides for popular MCP-compatible applications:
- Cursor Integration Guide
- Claude Desktop Integration Guide
- Windsurf Integration Guide
- Cline Integration Guide
- Roo Code Integration Guide
- Open WebUI Integration Guide
Usage
There are multiple ways to set up Prysm MCP Server:
Using mcp.json Configuration
Create an `mcp.json` file in the appropriate location according to the guides above.
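A minimal `mcp.json` sketch, assuming the server is launched via npx with the same hypothetical package name as above:

```json
{
  "mcpServers": {
    "prysm": {
      "command": "npx",
      "args": ["-y", "@pinkpixel/prysm-mcp"]
    }
  }
}
```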
🛠️ Tools
The server provides the following tools:
scrapeFocused
Web scraping optimized for speed (fewer scrolls, main content only).
Available Parameters:
- `url` (required): URL to scrape
- `maxScrolls` (optional): Maximum number of scroll attempts (default: 5)
- `scrollDelay` (optional): Delay between scrolls in ms (default: 1000)
- `scrapeImages` (optional): Whether to include images in results
- `downloadImages` (optional): Whether to download images locally
- `maxImages` (optional): Maximum images to extract
- `output` (optional): Output directory for downloaded images
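For illustration, a `scrapeFocused` call might receive an argument object like this (the URL and values below are placeholders, not defaults):

```json
{
  "url": "https://example.com/articles",
  "maxScrolls": 3,
  "scrapeImages": true,
  "maxImages": 10
}
```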
scrapeBalanced
Balanced web scraping approach with good coverage and reasonable speed.
Available Parameters:
- Same as `scrapeFocused` with different defaults:
  - `maxScrolls` default: 10
  - `scrollDelay` default: 2000
- Adds a `timeout` parameter to limit total scraping time (default: 30000ms)
scrapeDeep
Maximum extraction web scraping (slower but thorough).
Available Parameters:
- Same as `scrapeFocused` with different defaults:
  - `maxScrolls` default: 20
  - `scrollDelay` default: 3000
  - `maxImages` default: 100
formatResult
Format scraped data into different structured formats (markdown, HTML, JSON).
Available Parameters:
- `data` (required): The scraped data to format
- `format` (required): Output format - "markdown", "html", or "json"
- `includeImages` (optional): Whether to include images in output (default: true)
- `output` (optional): File path to save the formatted result
You can also save formatted results to a file by specifying an output path:
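For example, a `formatResult` call that writes markdown to a relative path might look like this (illustrative only; `data` stands in for whatever one of the scrape tools returned):

```json
{
  "data": "<result object from a scrape tool>",
  "format": "markdown",
  "includeImages": true,
  "output": "articles/example.md"
}
```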
⚙️ Configuration
Output Directory
By default, when saving formatted results, files will be saved to `~/prysm-mcp/output/`. You can customize this in the following ways:
- Environment Variables: Set `PRYSM_OUTPUT_DIR` (and optionally `PRYSM_IMAGE_OUTPUT_DIR`) to your preferred directories
- Tool Parameter: Specify an output path directly when calling the tools, as in the `formatResult` example above
- MCP Configuration: Set these environment variables in your MCP configuration file (e.g., `.cursor/mcp.json`), as shown in the sketch below
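A sketch of the MCP configuration approach, reusing the hypothetical server entry from the `mcp.json` example above with placeholder paths:

```json
{
  "mcpServers": {
    "prysm": {
      "command": "npx",
      "args": ["-y", "@pinkpixel/prysm-mcp"],
      "env": {
        "PRYSM_OUTPUT_DIR": "/home/user/prysm-output",
        "PRYSM_IMAGE_OUTPUT_DIR": "/home/user/prysm-output/images"
      }
    }
  }
}
```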
If `PRYSM_IMAGE_OUTPUT_DIR` is not specified, it will default to a subfolder named `images` inside `PRYSM_OUTPUT_DIR`.
If you provide only a relative path or filename, it will be saved relative to the configured output directory.
Path Handling Rules
The `formatResult` tool handles paths in the following ways:
- Absolute paths: Used exactly as provided (`/home/user/file.md`)
- Relative paths: Saved relative to the configured output directory (`subfolder/file.md`)
- Filename only: Saved in the configured output directory (`output.md`)
- Directory path: If the path points to a directory, a filename is auto-generated based on content and timestamp
🏗️ Development
Running via npx
You can run the server directly with npx without installing:
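For example, assuming the same hypothetical package name used earlier:

```bash
# Runs the server without a global install (package name is a placeholder)
npx -y @pinkpixel/prysm-mcp
```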
📋 License
MIT
🙏 Credits
Developed by Pink Pixel
Powered by the Model Context Protocol and Puppeteer