The mcp-reddit server enables scraping and querying Reddit content without API keys, with local data persistence and media downloads.
Scraping Capabilities:
Scrape posts, comments, and media from subreddits with configurable limits (up to 100 posts by default)
Scrape user profile post history and activity
Fetch specific posts by URL with complete comment threads
Download images, videos, and galleries (videos with audio require ffmpeg)
Querying & Search:
Query previously scraped posts and comments with filters for post type (text, image, video, gallery, link), minimum score/upvotes, and keywords
Full-text search across all stored posts and comments
Retrieve top-scoring posts from scraped sources
List all scraped subreddits and users
Data Storage:
All content persists locally in ~/.mcp-reddit/data/ (customizable) for offline access and repeated queries
Deployment:
Run locally via stdio (for Claude Desktop/Code) or expose it as an HTTP/SSE server for remote clients
No authentication required—works by scraping old.reddit.com and Libreddit mirrors
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type @ followed by the MCP server name and your instructions, e.g., "@Reddit Scraper scrape the top 20 posts from r/technology"
That's it! The server will respond to your query, and you can continue using it as needed.
mcp-reddit
MCP server for scraping Reddit - no API keys required.
Scrapes posts, comments, and media from subreddits and user profiles using old.reddit.com and Libreddit mirrors.
Features
No API keys - Scrapes directly, no Reddit API credentials needed
Media downloads - Images, videos with audio (requires ffmpeg)
Local persistence - Query scraped data offline
Rich filtering - By post type, score, keywords
Comments included - Full thread scraping
Installation
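A minimal sketch, assuming the package is published on PyPI under the name mcp-reddit:

```bash
pip install mcp-reddit
```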
Or with uvx:
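For example, under the same package-name assumption (uvx fetches and runs the tool without a separate install step):

```bash
uvx mcp-reddit
```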
Usage Modes
Local (stdio) - Default
For local MCP clients like Claude Desktop and Claude Code:
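A minimal sketch, assuming the installed console script is named mcp-reddit; the MCP client spawns this process and talks to it over stdio:

```bash
# Runs in stdio mode by default
mcp-reddit
```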
Remote (HTTP/SSE)
For remote MCP clients that connect via URL:
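A sketch using the flags documented under Options below; the mcp-reddit command name is the same assumption as above:

```bash
# Bind the HTTP/SSE server to all interfaces on port 8000
mcp-reddit --http --host 0.0.0.0 --port 8000
```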
Options:
--http - Run in HTTP/SSE mode instead of stdio
--host - Host to bind to (default: 0.0.0.0)
--port - Port to listen on (default: 8000, or PORT env var)
The server exposes:
GET /sse - SSE endpoint for MCP connection
POST /messages/ - Message endpoint
GET /health - Health check
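Once the HTTP/SSE server is running, the health endpoint gives a quick liveness check (localhost and port 8000 assume the defaults above):

```bash
curl http://localhost:8000/health
```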
Configuration
Add to your Claude Desktop or Claude Code settings:
Claude Desktop (~/Library/Application Support/Claude/claude_desktop_config.json)
Claude Desktop doesn't inherit your shell PATH, so you need the full path to uvx:
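Locate the absolute path first; the output shown in the comment is only an example:

```bash
which uvx
# e.g. /Users/YOUR_USERNAME/.local/bin/uvx
```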
Then use the full path in your config:
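A sketch of the claude_desktop_config.json entry; the "reddit" server key and the mcp-reddit package name are assumptions, and the uvx path should come from which uvx:

```json
{
  "mcpServers": {
    "reddit": {
      "command": "/Users/YOUR_USERNAME/.local/bin/uvx",
      "args": ["mcp-reddit"]
    }
  }
}
```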
Replace /Users/YOUR_USERNAME/.local/bin/uvx with the output from which uvx.
Claude Code
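One way to register the server with the Claude Code CLI is claude mcp add; the "reddit" name and the uvx invocation are assumptions:

```bash
claude mcp add reddit -- uvx mcp-reddit
```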
Or manually in ~/.claude.json:
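A minimal sketch of the manual entry, under the same assumptions:

```json
{
  "mcpServers": {
    "reddit": {
      "command": "uvx",
      "args": ["mcp-reddit"]
    }
  }
}
```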
Available Tools
| Tool | Description |
| --- | --- |
|  | Scrape posts from a subreddit |
|  | Scrape posts from a user's profile |
|  | Fetch a specific post by URL (supports media download) |
|  | Query stored posts with filters |
|  | Query stored comments |
|  | Search across all scraped data |
|  | Get highest scoring posts |
|  | List all scraped subreddits/users |
Example Usage
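A few illustrative prompts (wording is hypothetical) that exercise the tools listed above:

```text
Scrape the top 50 posts from r/technology, including comments
Show me the highest scoring posts scraped so far
Search all scraped content for mentions of "ffmpeg"
```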
Data Storage
Data is stored in ~/.mcp-reddit/data/ by default.
Set MCP_REDDIT_DATA_DIR environment variable to customize:
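For example (the target path is arbitrary):

```bash
export MCP_REDDIT_DATA_DIR=/path/to/reddit-data
```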
Optional: Video with Audio
To download Reddit videos with audio, install ffmpeg:
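Typical install commands on macOS (Homebrew) and Debian/Ubuntu:

```bash
# macOS
brew install ffmpeg

# Debian/Ubuntu
sudo apt install ffmpeg
```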
Credits
Built on top of reddit-universal-scraper by @ksanjeev284 - a full-featured Reddit scraper with analytics dashboard, REST API, and plugin system.
License
MIT