Research Powerpack MCP


🧭 Quick Navigation

⚡ Get Started · ✨ Key Features · 🎮 Usage & Examples · ⚙️ API Key Setup · 🆚 Why This Slaps


research-powerpack-mcp is the research assistant your AI wishes it had. Stop asking your LLM to guess about things it doesn't know. This MCP server acts like a senior researcher, searching the web, mining Reddit discussions, scraping documentation, and synthesizing everything into perfectly structured context so your AI can actually give you answers worth a damn.

How it slaps:

  • You: "What's the best database for my use case?"

  • AI + Powerpack: Searches Google, mines Reddit threads, scrapes docs, synthesizes findings.

  • You: Get an actually informed answer with real community opinions and citations.

  • Result: Ship better decisions. Skip the 47 browser tabs.


💥 Why This Slaps Other Methods

Manually researching is a vibe-killer. research-powerpack-mcp makes other methods look ancient.

We're not just fetching random pages. We're building high-signal, low-noise context with CTR-weighted ranking, smart comment allocation, and intelligent token distribution that prevents massive responses from breaking your LLM's context window.
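For intuition, here is a minimal sketch of what CTR-weighted cross-query ranking can look like. The weights, names, and types below are assumptions for illustration, not the server's actual implementation:

```typescript
// Hypothetical sketch of CTR-weighted ranking: each URL earns a score based
// on a typical click-through rate for its rank in each query's results, so a
// URL that ranks well across many related searches floats to the top.

// Approximate organic CTR by result position (assumed values).
const CTR_BY_POSITION = [0.28, 0.15, 0.11, 0.08, 0.06, 0.05, 0.04, 0.03, 0.03, 0.02];

interface SerpResult { url: string; position: number } // 1-based position

function rankAcrossQueries(resultsPerQuery: SerpResult[][]): [string, number][] {
  const scores = new Map<string, number>();
  for (const results of resultsPerQuery) {
    for (const { url, position } of results) {
      const weight = CTR_BY_POSITION[position - 1] ?? 0.01;
      scores.set(url, (scores.get(url) ?? 0) + weight);
    }
  }
  // Highest cross-query consensus first.
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}
```

A URL appearing at position 1 in three separate searches would score roughly 0.84, beating a URL that topped only one; that is the "high-consensus" signal.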


🚀 Get Started in 60 Seconds

1. Install

```bash
npm install research-powerpack-mcp
```

2. Configure Your MCP Client

| Client | Config File | Docs |
| --- | --- | --- |
| 🖥️ Claude Desktop | `claude_desktop_config.json` | Setup |
| ⌨️ Claude Code | `~/.claude.json` or CLI | Setup |
| 🎯 Cursor | `.cursor/mcp.json` | Setup |
| 🏄 Windsurf | MCP settings | Setup |

Claude Desktop

Add to your claude_desktop_config.json:

{ "mcpServers": { "research-powerpack": { "command": "npx", "args": ["research-powerpack-mcp"], "env": { "SERPER_API_KEY": "your_key", "REDDIT_CLIENT_ID": "your_id", "REDDIT_CLIENT_SECRET": "your_secret", "SCRAPEDO_API_KEY": "your_key", "OPENROUTER_API_KEY": "your_key" } } } }

Or quick-install from the command line (macOS):

```bash
CONFIG=~/Library/Application\ Support/Claude/claude_desktop_config.json
# Write to a temp file first: piping jq output back into the same file it is
# still reading (e.g. via tee) can truncate the config mid-read.
jq '.mcpServers["research-powerpack"] = {
  "command": "npx",
  "args": ["research-powerpack-mcp@latest"],
  "disabled": false,
  "env": {
    "OPENROUTER_API_KEY": "xxx",
    "REDDIT_CLIENT_ID": "xxx",
    "REDDIT_CLIENT_SECRET": "xxx",
    "RESEARCH_MODEL": "xxxx",
    "SCRAPEDO_API_KEY": "xxx",
    "SERPER_API_KEY": "xxxx"
  }
}' "$CONFIG" > "$CONFIG.tmp" && mv "$CONFIG.tmp" "$CONFIG"
```

Claude Code (CLI)

One command to rule them all:

```bash
claude mcp add research-powerpack npx \
  --scope user \
  --env SERPER_API_KEY=your_key \
  --env REDDIT_CLIENT_ID=your_id \
  --env REDDIT_CLIENT_SECRET=your_secret \
  --env OPENROUTER_API_KEY=your_key \
  --env OPENROUTER_BASE_URL=https://openrouter.ai/api/v1 \
  --env RESEARCH_MODEL=x-ai/grok-4.1-fast \
  -- research-powerpack-mcp
```

Or manually add to ~/.claude.json:

{ "mcpServers": { "research-powerpack": { "command": "npx", "args": ["research-powerpack-mcp"], "env": { "SERPER_API_KEY": "your_key", "REDDIT_CLIENT_ID": "your_id", "REDDIT_CLIENT_SECRET": "your_secret", "OPENROUTER_API_KEY": "your_key", "OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1", "RESEARCH_MODEL": "x-ai/grok-4.1-fast" } } } }

Cursor/Windsurf

Add to .cursor/mcp.json or equivalent:

{ "mcpServers": { "research-powerpack": { "command": "npx", "args": ["research-powerpack-mcp"], "env": { "SERPER_API_KEY": "your_key" } } } }

✨ Zero Crash Promise: Missing API keys? No problem. The server always starts. Tools just return helpful setup instructions instead of exploding.
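The pattern behind this promise is simple. Here is a hedged sketch (function names and message text are illustrative, not the server's actual code):

```typescript
// Illustrative sketch of the "zero crash" pattern: every tool registers at
// startup, and a tool whose API key is missing returns setup guidance
// instead of throwing and taking the server down.
async function runWebSearch(keywords: string[]): Promise<string> {
  const apiKey = process.env.SERPER_API_KEY;
  if (!apiKey) {
    // Fail soft: tell the caller exactly which key to configure.
    return [
      "web_search is not configured.",
      "Set SERPER_API_KEY in your MCP client config.",
      "Get a free key (2,500 queries/month) at https://serper.dev.",
    ].join("\n");
  }
  // ...perform the actual Serper API calls here...
  return `Searched ${keywords.length} keywords.`;
}
```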


✨ Feature Breakdown: The Secret Sauce

| Feature | What It Does | Why You Care |
| --- | --- | --- |
| 🔍 **Batch Search** (100 keywords in parallel) | Search Google for up to 100 queries simultaneously | Cover every angle of a topic in one shot |
| 📊 **CTR Ranking** (smart URL scoring) | Identifies URLs that appear across multiple searches | Surfaces high-consensus authoritative sources |
| 💬 **Reddit Mining** (real human opinions) | Google-powered Reddit search + native API fetching | Get actual user experiences, not marketing fluff |
| 🎯 **Smart Allocation** (token-aware budgets) | 1,000-comment budget distributed across posts | Deep dive on 2 posts or quick scan on 50 |
| 🌐 **Universal Scraping** (works on everything) | Auto-fallback: basic → JS render → geo-targeting | Handles SPAs, paywalls, and geo-restricted content |
| 🧠 **Deep Research** (AI-powered synthesis) | Batch research with web search and citations | Get comprehensive answers to complex questions |
| 🧩 **Modular Design** (use what you need) | Each tool works independently | Pay only for the APIs you actually use |


🎮 Tool Reference

web_search

Batch web search using Google via the Serper API. Search up to 100 keywords in parallel.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `keywords` | `string[]` | Yes | Search queries (1-100). Use distinct keywords for maximum coverage. |

Supports Google operators: `site:`, `-exclusion`, `"exact phrase"`, `filetype:`

```json
{
  "keywords": [
    "best IDE 2025",
    "VS Code alternatives",
    "Cursor vs Windsurf comparison"
  ]
}
```

search_reddit

Search Reddit via Google with automatic site:reddit.com filtering.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `queries` | `string[]` | Yes | Search queries (max 10) |
| `date_after` | `string` | No | Filter results after date (YYYY-MM-DD) |

Search operators: `intitle:keyword`, `"exact phrase"`, `OR`, `-exclude`

```json
{
  "queries": [
    "best mechanical keyboard 2025",
    "intitle:keyboard recommendation"
  ],
  "date_after": "2024-01-01"
}
```

get_reddit_post

Fetch Reddit posts with smart comment allocation (1,000 comment budget distributed automatically).

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `urls` | `string[]` | Yes | | Reddit post URLs (2-50) |
| `fetch_comments` | `boolean` | No | `true` | Whether to fetch comments |
| `max_comments` | `number` | No | auto | Override the automatic comment allocation |

Smart Allocation:

  • 2 posts → ~500 comments/post (deep dive)

  • 10 posts → ~100 comments/post

  • 50 posts → ~20 comments/post (quick scan)

{ "urls": [ "https://reddit.com/r/programming/comments/abc123/post_title", "https://reddit.com/r/webdev/comments/def456/another_post" ] }

scrape_links

Universal URL content extraction with automatic fallback modes.

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `urls` | `string[]` | Yes | | URLs to scrape (3-50) |
| `timeout` | `number` | No | 30 | Timeout per URL (seconds) |
| `use_llm` | `boolean` | No | `false` | Enable AI extraction |
| `what_to_extract` | `string` | No | | Extraction instructions for the AI |

Automatic Fallback: Basic → JS rendering → JS + US geo-targeting

{ "urls": ["https://example.com/article1", "https://example.com/article2"], "use_llm": true, "what_to_extract": "Extract the main arguments and key statistics" }

deep_research

AI-powered batch research with web search and citations.

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `questions` | `object[]` | Yes | Research questions (2-10) |
| `questions[].question` | `string` | Yes | The research question |
| `questions[].file_attachments` | `object[]` | No | Files to include as context |

Token Allocation: 32,000 tokens distributed across questions:

  • 2 questions → 16,000 tokens/question (deep dive)

  • 10 questions → 3,200 tokens/question (rapid multi-topic)

{ "questions": [ { "question": "What are the current best practices for React Server Components in 2025?" }, { "question": "Compare Bun vs Node.js for production workloads with benchmarks." } ] }

⚙️ Environment Variables & Tool Availability

Research Powerpack uses a modular architecture. Tools are automatically enabled based on which API keys you provide:

| ENV Variable | Enables / Purpose | Free Tier / Default |
| --- | --- | --- |
| `SERPER_API_KEY` | `web_search`, `search_reddit` | 2,500 queries/mo |
| `REDDIT_CLIENT_ID` + `REDDIT_CLIENT_SECRET` | `get_reddit_post` | Unlimited |
| `SCRAPEDO_API_KEY` | `scrape_links` | 1,000 credits/mo |
| `OPENROUTER_API_KEY` | `deep_research` + AI extraction in `scrape_links` | Pay-as-you-go |
| `RESEARCH_MODEL` | Model for `deep_research` | Default: `perplexity/sonar-deep-research` |
| `LLM_EXTRACTION_MODEL` | Model for AI extraction in `scrape_links` | Default: `openrouter/gpt-oss-120b:nitro` |

Configuration Examples

```bash
# Search-only mode (just web_search and search_reddit)
SERPER_API_KEY=xxx

# Reddit research mode (search + fetch posts)
SERPER_API_KEY=xxx
REDDIT_CLIENT_ID=xxx
REDDIT_CLIENT_SECRET=xxx

# Full research mode (all 5 tools)
SERPER_API_KEY=xxx
REDDIT_CLIENT_ID=xxx
REDDIT_CLIENT_SECRET=xxx
SCRAPEDO_API_KEY=xxx
OPENROUTER_API_KEY=xxx
```
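Conceptually, availability reduces to presence checks on these variables. A hedged sketch (helper names invented, not the server's actual code):

```typescript
// Hypothetical sketch of key-based tool gating. The real server keeps every
// tool registered and returns setup instructions instead (see the Zero Crash
// Promise above), but the availability logic reduces to checks like these.
const hasEnv = (...keys: string[]) => keys.every((k) => !!process.env[k]);

const availability = {
  web_search:      hasEnv("SERPER_API_KEY"),
  search_reddit:   hasEnv("SERPER_API_KEY"),
  get_reddit_post: hasEnv("REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET"),
  scrape_links:    hasEnv("SCRAPEDO_API_KEY"),
  deep_research:   hasEnv("OPENROUTER_API_KEY"),
};
```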

🔑 API Key Setup Guides

Serper (Google Search)

What you get

  • Fast Google search results via API

  • Enables web_search and search_reddit tools

Setup Steps

  1. Go to serper.dev

  2. Click "Get API Key" (top right)

  3. Sign up with email or Google

  4. Copy your API key from the dashboard

  5. Add to your config:

    SERPER_API_KEY=your_key_here

Pricing

  • Free: 2,500 queries/month

  • Paid: $50/month for 50,000 queries

Reddit API

What you get

  • Full Reddit API access

  • Fetch posts and comments with upvote sorting

  • Enables get_reddit_post tool

Setup Steps

  1. Go to reddit.com/prefs/apps

  2. Scroll down and click "create another app..."

  3. Fill in:

    • Name: research-powerpack (or any name)

    • App type: Select "script" (important!)

    • Redirect URI: http://localhost:8080

  4. Click "create app"

  5. Copy your credentials:

    • Client ID: The string under your app name

    • Client Secret: The "secret" field

  6. Add to your config:

    REDDIT_CLIENT_ID=your_client_id
    REDDIT_CLIENT_SECRET=your_client_secret

Scrape.do

What you get

  • JavaScript rendering support

  • Geo-targeting and CAPTCHA handling

  • Enables scrape_links tool

Setup Steps

  1. Go to scrape.do

  2. Click "Start Free"

  3. Sign up with email

  4. Copy your API key from the dashboard

  5. Add to your config:

    SCRAPEDO_API_KEY=your_key_here

Credit Usage

  • Basic scrape: 1 credit

  • JavaScript rendering: 5 credits

  • Geo-targeting: +25 credits

OpenRouter

What you get

  • Access to 100+ AI models via one API

  • Enables deep_research tool

  • Enables AI extraction in scrape_links

Setup Steps

  1. Go to openrouter.ai

  2. Sign up with Google/GitHub/email

  3. Go to openrouter.ai/keys

  4. Click "Create Key"

  5. Copy the key (starts with sk-or-...)

  6. Add to your config:

    OPENROUTER_API_KEY=sk-or-v1-xxxxx
Research model options (`RESEARCH_MODEL`):

```bash
# Default (optimized for research)
RESEARCH_MODEL=perplexity/sonar-deep-research

# Fast and capable
RESEARCH_MODEL=x-ai/grok-4.1-fast

# High quality
RESEARCH_MODEL=anthropic/claude-3.5-sonnet

# Budget-friendly
RESEARCH_MODEL=openai/gpt-4o-mini
```

Extraction model options (`LLM_EXTRACTION_MODEL`):

```bash
# Default (fast and cost-effective for extraction)
LLM_EXTRACTION_MODEL=openrouter/gpt-oss-120b:nitro

# High quality extraction
LLM_EXTRACTION_MODEL=anthropic/claude-3.5-sonnet

# Budget-friendly
LLM_EXTRACTION_MODEL=openai/gpt-4o-mini
```

Note: RESEARCH_MODEL and LLM_EXTRACTION_MODEL are independent. You can use a powerful model for deep research and a faster/cheaper model for content extraction, or vice versa.


🎮 Usage & Examples

Research a Technology Decision

1. `web_search` → ["React vs Vue 2025", "Next.js vs Nuxt comparison"]
2. `search_reddit` → ["best frontend framework 2025", "Next.js production experience"]
3. `get_reddit_post` → [URLs from step 2]
4. `scrape_links` → [Documentation and blog URLs from step 1]
5. `deep_research` → [Synthesize findings into specific questions]

Competitive Analysis

1. `web_search` → ["competitor name review", "competitor vs alternatives"]
2. `scrape_links` → [Competitor websites, review sites]
3. `search_reddit` → ["competitor name experience", "switching from competitor"]
4. `get_reddit_post` → [URLs from step 3]

Debug an Obscure Error

1. `web_search` → ["exact error message", "error + framework name"]
2. `search_reddit` → ["error message", "framework + error type"]
3. `get_reddit_post` → [URLs with solutions]
4. `scrape_links` → [Stack Overflow answers, GitHub issues]

🔥 Enable Full Power Mode

For the best research experience, configure all four API keys:

```bash
SERPER_API_KEY=your_serper_key          # Free: 2,500 queries/month
REDDIT_CLIENT_ID=your_reddit_id         # Free: unlimited
REDDIT_CLIENT_SECRET=your_reddit_secret
SCRAPEDO_API_KEY=your_scrapedo_key      # Free: 1,000 credits/month
OPENROUTER_API_KEY=your_openrouter_key  # Pay-as-you-go
```

This unlocks:

  • 5 research tools working together

  • AI-powered content extraction in scrape_links

  • Deep research with web search and citations

  • Complete Reddit mining (search → fetch → analyze)

Total setup time: ~10 minutes. Total free tier value: ~$50/month equivalent.


🛠️ Development

```bash
# Clone
git clone https://github.com/yigitkonur/research-powerpack-mcp.git
cd research-powerpack-mcp

# Install
npm install

# Development
npm run dev

# Build
npm run build

# Type check
npm run typecheck
```

🏗️ Architecture (v3.4.0+)

The codebase uses a YAML-driven configuration system with aggressive LLM optimization (v3.5.0+):

Core Architecture

| Component | File | Purpose |
| --- | --- | --- |
| Tool Definitions | `src/config/yaml/tools.yaml` | Single source of truth for all tool metadata |
| Handler Registry | `src/tools/registry.ts` | Declarative tool registration + `executeTool` wrapper |
| YAML Loader | `src/config/loader.ts` | Parses YAML, generates MCP-compatible definitions (cached) |
| Concurrency Utils | `src/utils/concurrency.ts` | Bounded parallel execution (`pMap`/`pMapSettled`) |
| Shared Utils | `src/tools/utils.ts` | Common utility functions |

Adding a new tool:

  1. Add tool definition to tools.yaml

  2. Create handler in src/tools/

  3. Register it in src/tools/registry.ts (see the sketch below)

See docs/refactoring/04-migration-guide.md for detailed instructions.
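As a sketch of step 3, registration might look roughly like this. The registry's actual shape lives in `src/tools/registry.ts`; every name below is illustrative only:

```typescript
// Hypothetical sketch: wiring a new handler into the registry so the tool
// name declared in tools.yaml resolves to an executable function.
import { handleMyNewTool } from "./my-new-tool.js"; // hypothetical handler

export const toolHandlers: Record<string, (args: unknown) => Promise<unknown>> = {
  // ...existing tools...
  my_new_tool: handleMyNewTool,
};
```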

Performance & Stability (v3.5.1+)

All parallel operations use bounded concurrency to prevent CPU spikes and API rate limits:

| Operation | Before | After |
| --- | --- | --- |
| Reddit search queries | 50 concurrent | 8 concurrent |
| Web scraping batches | 30 concurrent | 10 concurrent |
| Deep research questions | Unbounded | 3 concurrent |
| Reddit post fetching | 10 concurrent | 5 concurrent |
| File attachments | Unbounded | 5 concurrent |
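For reference, a minimal bounded-concurrency mapper in the spirit of the `pMap` utility named above; this sketch is illustrative, the real implementation lives in `src/utils/concurrency.ts`:

```typescript
// Run `mapper` over `items` with at most `limit` promises in flight.
async function pMap<T, R>(
  items: readonly T[],
  mapper: (item: T) => Promise<R>,
  limit: number,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unclaimed index until none remain.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await mapper(items[i]);
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// e.g. fetch 50 Reddit posts, 5 at a time:
// const posts = await pMap(urls, fetchPost, 5);
```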

Additional optimizations:

  • YAML config cached in memory (no repeated disk reads)

  • Async file I/O (no event loop blocking)

  • Pre-compiled regex patterns for hot paths

  • Reddit auth token deduplication (prevents concurrent token requests; sketched below)
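Token deduplication is the classic share-the-in-flight-promise pattern. A hedged sketch (names invented, expiry handling omitted):

```typescript
// Hypothetical sketch: concurrent callers share one in-flight token request
// instead of each hitting Reddit's OAuth endpoint separately.
let tokenPromise: Promise<string> | null = null;

function getRedditToken(): Promise<string> {
  if (!tokenPromise) {
    tokenPromise = fetchFreshToken().catch((err) => {
      tokenPromise = null; // clear the cache so a later call can retry
      throw err;
    });
  }
  return tokenPromise;
}

declare function fetchFreshToken(): Promise<string>; // assumed OAuth call
```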

LLM Optimization (v3.5.0+)

All tools include aggressive guidance to force LLMs to use them optimally:

| Feature | Description |
| --- | --- |
| Configurable Limits | All min/max values in YAML (`limits` section) |
| BAD vs GOOD Examples | Every tool shows anti-patterns and perfect usage |
| Aggressive Phrasing | Changed from "you can" to "you MUST" |
| Visual Formatting | Emoji headers, section dividers, icons for visual scanning |
| Templates | Structured formats for questions, extractions, file descriptions |

Key Enhancements:

  • search_reddit: Minimum 10 queries (was 3), 10-category formula

  • deep_research: 7-section question template, file attachment requirements

  • scrape_links: Extraction template with OR statements, use_llm=true push

  • web_search: Minimum 3 keywords, search operator examples

  • file_attachments: Numbered 5-section description template

See docs/refactoring/07-llm-optimization-summary.md for full details.


🔥 Common Issues & Quick Fixes

| Problem | Solution |
| --- | --- |
| Tool returns "API key not configured" | Add the required ENV variable to your MCP config. The error message tells you exactly which key is missing. |
| Reddit posts returning empty | Check your REDDIT_CLIENT_ID and REDDIT_CLIENT_SECRET, and make sure you created a "script" type app. |
| Scraping fails on JavaScript sites | Expected on the first attempt: the tool auto-retries with JS rendering. If it still fails, the site may be blocking scrapers. |
| Deep research taking too long | Use a faster model like x-ai/grok-4.1-fast instead of perplexity/sonar-deep-research. |
| Token limit errors | Reduce the number of URLs/questions per request; the tool distributes a fixed token budget. |


Built with 🔥 because manually researching for your AI is a soul-crushing waste of time.

MIT © Yiğit Konur
