
Research Powerpack MCP

by yigitkonur

Server Configuration

Describes the environment variables required to run the server.

All variables are optional (Required: No).

  • RESEARCH_MODEL: AI model to use for the deep_research tool. Recommended: perplexity/sonar-deep-research (default), x-ai/grok-4.1-fast, anthropic/claude-3.5-sonnet, or openai/gpt-4o-mini. Default: perplexity/sonar-deep-research

  • SERPER_API_KEY: Serper API key for Google search functionality. Enables the web_search and search_reddit tools. Free tier: 2,500 queries/month.

  • REDDIT_CLIENT_ID: Reddit OAuth client ID for Reddit API access. Enables the get_reddit_post tool. Free and unlimited.

  • SCRAPEDO_API_KEY: Scrape.do API key for web scraping with JavaScript rendering and geo-targeting. Enables the scrape_links tool. Free tier: 1,000 credits/month.

  • OPENROUTER_API_KEY: OpenRouter API key for AI model access. Enables the deep_research tool and AI extraction in scrape_links. Pay-as-you-go pricing.

  • OPENROUTER_BASE_URL: Base URL for the OpenRouter API. Default: https://openrouter.ai/api/v1

  • LLM_EXTRACTION_MODEL: AI model used for content extraction in scrape_links when use_llm is enabled. Recommended: openrouter/gpt-oss-120b:nitro (default), anthropic/claude-3.5-sonnet, or openai/gpt-4o-mini. Default: openrouter/gpt-oss-120b:nitro

  • REDDIT_CLIENT_SECRET: Reddit OAuth client secret for Reddit API access. Enables the get_reddit_post tool. Free and unlimited.
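Since every variable is optional, a server in this style would typically fall back to the documented defaults when a variable is unset. A minimal sketch of that resolution (the function and dict here are illustrative, not the server's actual code):

```python
import os

# Illustrative sketch only: how a server like this might resolve its
# configuration from the table above. Names match the documented variables;
# the resolution logic itself is an assumption.
DEFAULTS = {
    "RESEARCH_MODEL": "perplexity/sonar-deep-research",
    "OPENROUTER_BASE_URL": "https://openrouter.ai/api/v1",
    "LLM_EXTRACTION_MODEL": "openrouter/gpt-oss-120b:nitro",
}

def resolve(name):
    """Return the environment variable if set, else its documented default (or None)."""
    return os.environ.get(name, DEFAULTS.get(name))
```

Tools whose key resolves to None (for example SERPER_API_KEY) would simply be disabled rather than causing startup failure.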

Capabilities

Server capabilities have not been inspected yet.

Tools

Functions exposed to the LLM to take actions

search_reddit

🔥 REDDIT SEARCH - MINIMUM 10 QUERIES, RECOMMENDED 20+

This tool is designed for consensus analysis through MULTIPLE diverse queries. Using 1-3 queries = wasting the tool's power. You MUST use 10+ queries minimum.

Budget: 10 results per query, all run in parallel.

  • 10 queries = 100 results

  • 20 queries = 200 results (RECOMMENDED)

  • 50 queries = 500 results (comprehensive)

10-Category Query Formula - Each query targets a DIFFERENT angle. NO OVERLAP!

  1. Direct topic: "[topic] [platform]". Example: "YouTube Music Mac app"

  2. Recommendations: "best/recommended [topic]". Example: "best YouTube Music client Mac"

  3. Specific tools: project names, GitHub repos. Example: "YTMDesktop", "th-ch youtube-music"

  4. Comparisons: "[A] vs [B]". Example: "YouTube Music vs Spotify Mac desktop"

  5. Alternatives: "[topic] alternative/replacement". Example: "YouTube Music Mac alternative"

  6. Subreddits: "r/[subreddit] [topic]" (different communities have different perspectives). Example: "r/macapps YouTube Music", "r/opensource YouTube Music"

  7. Problems/Issues: "[topic] issues/crashes/problems". Example: "YouTube Music Mac crashes", "YTM desktop performance problems"

  8. Year-specific: add "2024" or "2025" for recent discussions. Example: "best YouTube Music Mac 2024"

  9. Features: "[topic] [specific feature]". Example: "YouTube Music offline Mac", "YTM lyrics desktop"

  10. Developer/GitHub: "[topic] GitHub/open source/electron". Example: "youtube-music electron GitHub", "YTM desktop open source"
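The 10-category formula above can be sketched as a small helper. This function is purely illustrative (it is not part of the server); every parameter name is hypothetical, and in practice you would expand each category to reach 20+ queries:

```python
def build_reddit_queries(topic, platform, year="2024",
                         tools=(), subreddits=(), features=(), rival=""):
    """Expand a topic into one query per category of the 10-category formula.

    Illustrative only: categories 3, 6, and 9 expand per supplied item, so
    passing several tools/subreddits/features grows the list toward 20+.
    """
    return [
        f"{topic} {platform}",                               # 1. direct topic
        f"best {topic} {platform}",                          # 2. recommendations
        *tools,                                              # 3. specific tools / repos
        f"{topic} vs {rival} {platform}" if rival
            else f"{topic} comparison",                      # 4. comparisons
        f"{topic} {platform} alternative",                   # 5. alternatives
        *[f"r/{s} {topic}" for s in subreddits],             # 6. subreddits
        f"{topic} {platform} crashes problems",              # 7. problems / issues
        f"best {topic} {platform} {year}",                   # 8. year-specific
        *[f"{topic} {f} {platform}" for f in features],      # 9. features
        f"{topic} GitHub open source",                       # 10. developer / GitHub
    ]
```

For the YouTube Music example, build_reddit_queries("YouTube Music", "Mac", tools=["YTMDesktop"], subreddits=["macapps"], rival="Spotify") yields one distinct, non-overlapping angle per entry.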

Search Operators:

  • intitle: - Search in post titles only

  • "exact phrase" - Match exact phrase

  • OR - Match either term

  • -exclude - Exclude term

  • All queries auto-add site:reddit.com

Example showing all 10 categories:

❌ BAD: {"queries": ["best YouTube Music app"]} → 1 vague query, misses 90% of consensus

✅ GOOD: {"queries": ["YouTube Music Mac app", "best YTM client Mac", "YTMDesktop Mac", "YouTube Music vs Spotify Mac", "YouTube Music Mac alternative", "r/macapps YouTube Music", "YTM Mac crashes", "YouTube Music Mac 2024", "YTM offline Mac", "youtube-music GitHub", ...expand to 20 queries]} → comprehensive multi-angle coverage

Pro Tips:

  1. Use ALL 10 categories - Each reveals different community perspectives

  2. Target specific subreddits - Different communities have different expertise

  3. Include year numbers - "2024", "2025" filters for recent discussions

  4. Add comparison keywords - "vs", "versus" find decision threads

  5. Include problem keywords - "issue", "bug", "crash" find real experiences

  6. Vary phrasing - "best", "top", "recommended" capture different discussions

  7. Use technical terms - "electron", "GitHub", "API" find developer perspectives

  8. NO DUPLICATES - Each query must target a unique angle

Workflow: search_reddit → sequentialthinking (evaluate results) → get_reddit_post OR search again → sequentialthinking → synthesize

REMEMBER: More queries = better consensus detection = higher quality results!

get_reddit_post

🔥 FETCH REDDIT POSTS - 2-50 URLs, RECOMMENDED 10-20+

This tool fetches Reddit posts with smart comment allocation. Using 2-5 posts = missing community consensus. Use 10-20+ for broad perspective.

Comment Budget: 1,000 total comments distributed automatically across posts.

  • 2 posts: ~500 comments/post (deep dive)

  • 10 posts: ~100 comments/post (balanced)

  • 20 posts: ~50 comments/post (RECOMMENDED: broad)

  • 50 posts: ~20 comments/post (max coverage)

Comment allocation is AUTOMATIC - you don't need to calculate!
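The automatic allocation appears to amount to dividing the 1,000-comment budget evenly across posts. A sketch of that arithmetic (illustrative, not the server's actual allocator):

```python
COMMENT_BUDGET = 1000  # total comments distributed across all posts

def comments_per_post(n_posts):
    """Approximate per-post comment share for a given post count."""
    return COMMENT_BUDGET // n_posts
```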

When to use different post counts:

2-5 posts: Deep dive on specific discussions

  • Use when: You found THE perfect thread and want all comments

  • Trade-off: Deep but narrow perspective

10-15 posts: Balanced depth + breadth (GOOD)

  • Use when: Want good comment depth across multiple discussions

  • Trade-off: Good balance of depth and coverage

20-30 posts: Broad community perspective (RECOMMENDED)

  • Use when: Want to see consensus across many discussions

  • Trade-off: Fewer comments per post but more diverse opinions

40-50 posts: Maximum coverage

  • Use when: Researching controversial topic, need all perspectives

  • Trade-off: Fewer comments per post but comprehensive coverage

Example: โŒ BAD: {"urls": ["single_url"]} โ†’ 1 perspective, could be biased/outdated โœ… GOOD: {"urls": [20 URLs from diverse subreddits: programming, webdev, node, golang, devops, etc.]} โ†’ comprehensive community perspective

Pro Tips:

  1. Use 10-20+ posts - More posts = broader community perspective

  2. Mix subreddits - Different communities have different expertise and perspectives

  3. Include various discussion types - Best practices, comparisons, problems, solutions

  4. Let comment allocation auto-adjust - Don't override max_comments unless needed

  5. Use after search_reddit - Get URLs from search, then fetch full content here

CRITICAL: Comments often contain the BEST insights, solutions, and real-world experiences. Always set fetch_comments=true unless you only need post titles.

Workflow: search_reddit (find posts) → get_reddit_post (fetch full content + comments)

deep_research

🔥 DEEP RESEARCH - 2-10 QUESTIONS, RECOMMENDED 5+

This tool runs 2-10 questions IN PARALLEL with AI-powered research. Using 1-2 questions = wasting the parallel research capability!

Token Budget: 32,000 tokens distributed across questions.

  • 2 questions: 16,000 tokens each (deep dive)

  • 5 questions: 6,400 tokens each (RECOMMENDED: balanced)

  • 10 questions: 3,200 tokens each (comprehensive multi-topic)

All questions research in PARALLEL - no time penalty for more questions!
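The token figures above follow from splitting the 32,000-token budget evenly. A sketch of the arithmetic, including the tool's 2-10 question bounds (illustrative, not the server's actual allocator):

```python
TOKEN_BUDGET = 32_000  # total research tokens shared by all parallel questions

def tokens_per_question(n_questions):
    """Per-question token share for deep_research (illustrative arithmetic)."""
    if not 2 <= n_questions <= 10:
        raise ValueError("deep_research accepts 2-10 questions")
    return TOKEN_BUDGET // n_questions
```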

When to use this tool:

  • Multi-perspective analysis on related topics

  • Researching a domain from multiple angles

  • Validating understanding across different aspects

  • Comparing approaches/technologies side-by-side

  • Deep technical questions requiring comprehensive research

Question Template - Each question MUST include these sections:

  1. 🎯 WHAT I NEED: Clearly state what you're trying to achieve or understand

  2. 🤔 WHY I'M RESEARCHING: What decision does this inform? What problem are you solving?

  3. 📚 WHAT I ALREADY KNOW: Share current understanding so research fills gaps instead of repeating basics

  4. 🔧 HOW I'LL USE THIS: Practical application - implementation, debugging, architecture

  5. ❓ SPECIFIC QUESTIONS (2-5): Break down into specific, pointed sub-questions

  6. 🌐 PRIORITY SOURCES: (optional) Preferred docs/sites to prioritize

  7. ⚡ FOCUS AREAS: (optional) What matters most - performance, security, etc.

ATTACH FILES when asking about code - THIS IS MANDATORY:

  • ๐Ÿ› Bugs/errors โ†’ Attach the failing code

  • โšก Performance issues โ†’ Attach the slow code paths

  • โ™ป๏ธ Refactoring โ†’ Attach current implementation

  • ๐Ÿ” Code review โ†’ Attach code to review

  • ๐Ÿ—๏ธ Architecture โ†’ Attach relevant modules

Research without code context for code questions is generic and unhelpful!

Example: โŒ BAD: {"questions": [{"question": "Research React hooks"}]} โ†’ 1 vague question, no template, no context, wastes 90% capacity

โœ… GOOD:

{"questions": [{ "question": "๐ŸŽฏ WHAT I NEED: Understand when to use useCallback vs useMemo in React 18\n\n๐Ÿค” WHY: Optimizing a data-heavy dashboard with 50+ components, seeing performance issues\n\n๐Ÿ“š WHAT I KNOW: Both memoize values, useCallback for functions, useMemo for computed values. Unclear when each actually prevents re-renders.\n\n๐Ÿ”ง HOW I'LL USE THIS: Refactor Dashboard.tsx to eliminate unnecessary re-renders\n\nโ“ SPECIFIC QUESTIONS:\n1. When does useCallback actually prevent re-renders vs when it doesn't?\n2. Performance benchmarks: useCallback vs useMemo vs neither in React 18?\n3. Common anti-patterns that negate their benefits?\n4. How to measure if they're actually helping?\n\n๐ŸŒ PRIORITY: Official React docs, React team blog posts\nโšก FOCUS: Patterns for frequently updating state" }, ...add 4 more questions for comprehensive coverage]}
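A small hypothetical helper can assemble a question string from the 7 template sections (the function and its parameter names are illustrative, not part of the tool's API):

```python
def build_question(need, why, known, usage, sub_questions,
                   sources=None, focus=None):
    """Assemble one deep_research question following the 7-section template.

    Illustrative only: sections 6 and 7 are optional, matching the template.
    """
    parts = [
        f"🎯 WHAT I NEED: {need}",
        f"🤔 WHY I'M RESEARCHING: {why}",
        f"📚 WHAT I ALREADY KNOW: {known}",
        f"🔧 HOW I'LL USE THIS: {usage}",
        "❓ SPECIFIC QUESTIONS:\n" + "\n".join(
            f"{i}. {q}" for i, q in enumerate(sub_questions, 1)),
    ]
    if sources:
        parts.append(f"🌐 PRIORITY SOURCES: {sources}")
    if focus:
        parts.append(f"⚡ FOCUS AREAS: {focus}")
    return "\n\n".join(parts)
```

Building questions this way guarantees every required section is present before the request is sent.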

Pro Tips:

  1. Use 5-10 questions - Maximize parallel research capacity

  2. Follow the template - Include all 7 sections for each question

  3. Be specific - Include version numbers, error codes, library names

  4. Add 2-5 sub-questions - Break down what you need to know

  5. Attach files for code questions - MANDATORY for bugs/performance/refactoring

  6. Describe files thoroughly - Explain what the file is and what to focus on

  7. Specify focus areas - "Focus on X, Y, Z" for prioritization

  8. Group related questions - Research a domain from multiple angles

Scope Expansion Triggers - Iterate when:

  • Results mention concepts you didn't research

  • Answers raise new questions you should explore

  • You realize initial scope was too narrow

  • You discover related topics that matter

Workflow: deep_research (3-5 questions) → sequentialthinking (evaluate, identify gaps) → OPTIONAL: deep_research AGAIN with NEW questions based on learnings → sequentialthinking (synthesize) → final decision

REMEMBER:

  • ALWAYS think after getting results (digest and identify gaps!)

  • DON'T assume first research is complete (iterate based on findings!)

  • USE learnings to ask better questions (results = feedback!)

  • EXPAND scope when results reveal new important areas!

scrape_links

🔥 WEB SCRAPING - 1-50 URLs, RECOMMENDED 3-5. ALWAYS use_llm=true

This tool has TWO modes:

  1. Basic scraping (use_llm=false) - Gets raw HTML/text - messy, requires manual parsing

  2. AI-powered extraction (use_llm=true) - Intelligently extracts what you need ⭐ USE THIS!

⚡ ALWAYS SET use_llm=true FOR INTELLIGENT EXTRACTION ⚡

Why use AI extraction (use_llm=true):

  • Filters out navigation, ads, footers automatically

  • Extracts ONLY what you specify in what_to_extract

  • Handles complex page structures intelligently

  • Returns clean, structured content ready to use

  • Saves hours of manual HTML parsing

  • Cost: pennies (~$0.01 per 10 pages)

Token Budget: 32,000 tokens distributed across URLs.

  • 3 URLs: ~10,666 tokens each (deep extraction)

  • 5 URLs: ~6,400 tokens each (RECOMMENDED: balanced)

  • 10 URLs: ~3,200 tokens each (detailed)

  • 50 URLs: ~640 tokens each (quick scan)

Extraction Prompt Formula:

Extract [target1] | [target2] | [target3] | [target4] | [target5] with focus on [aspect1], [aspect2], [aspect3]

Extraction Rules:

  • Use pipe | to separate extraction targets

  • Minimum 3 targets required

  • Be SPECIFIC about what you want ("pricing tiers" not "pricing")

  • Include "with focus on" to prioritize certain aspects

  • More targets = more comprehensive extraction

  • Aim for 5-10 extraction targets
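These rules can be enforced with a tiny helper when composing the prompt (the function is hypothetical, shown only to make the formula concrete):

```python
def build_extraction_prompt(targets, focus_areas=()):
    """Compose a pipe-separated extraction prompt per the formula above.

    Illustrative only: enforces the minimum of 3 targets and appends the
    optional "with focus on" clause when focus areas are given.
    """
    if len(targets) < 3:
        raise ValueError("use at least 3 extraction targets")
    prompt = "Extract " + " | ".join(targets)
    if focus_areas:
        prompt += " with focus on " + ", ".join(focus_areas)
    return prompt
```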

Extraction Templates by Domain:

Product Research:

Extract pricing details | feature comparisons | user reviews | technical specifications | integration options | support channels | deployment models | security features with focus on enterprise capabilities, pricing transparency, and integration complexity

Technical Documentation:

Extract API endpoints | authentication methods | rate limits | error codes | request examples | response schemas | SDK availability | webhook support with focus on authentication flow, rate limiting policies, and error handling patterns

Competitive Analysis:

Extract product features | pricing models | target customers | unique selling points | technology stack | customer testimonials | case studies | market positioning with focus on differentiators, pricing strategy, and customer satisfaction

Example: โŒ BAD: {"urls": ["url"], "use_llm": false, "what_to_extract": "get pricing"} โ†’ raw HTML, vague prompt, 1 target, no focus areas

โœ… GOOD: {"urls": [5 URLs], "use_llm": true, "what_to_extract": "Extract pricing tiers | plan features | API rate limits | enterprise options | integration capabilities | user testimonials with focus on enterprise features, API limitations, and real-world performance data"} โ†’ clean structured extraction

Pro Tips:

  1. ALWAYS use use_llm=true - The AI extraction is the tool's superpower

  2. Use 3-10 URLs - Balance between depth and breadth

  3. Specify 5-10 extraction targets - More targets = more comprehensive

  4. Use pipe - Clearly separate each target

  5. Add focus areas - "with focus on X, Y, Z" for prioritization

  6. Be specific - "pricing tiers" not "pricing", "API rate limits" not "API info"

  7. Cover multiple aspects - Features, pricing, technical, social proof

Automatic Fallback: Basic → JavaScript rendering → JavaScript + US geo-targeting

Batching: Max 30 concurrent requests (50 URLs = [30] then [20] batches)
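The batching behavior amounts to simple chunking, sketched here for clarity (illustrative, not the server's actual scheduler):

```python
def batches(urls, max_concurrent=30):
    """Split a URL list into sequential batches of at most max_concurrent."""
    return [urls[i:i + max_concurrent]
            for i in range(0, len(urls), max_concurrent)]
```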

REMEMBER: AI extraction costs pennies but saves hours of manual parsing!

web_search

🔥 WEB SEARCH - MINIMUM 3 KEYWORDS, RECOMMENDED 5-7

This tool searches up to 100 keywords IN PARALLEL via Google. Using 1-2 keywords = wasting the tool's parallel search power!

Results Budget: 10 results per keyword, all searches run in parallel.

  • 3 keywords = 30 results (minimum)

  • 7 keywords = 70 results (RECOMMENDED)

  • 100 keywords = 1000 results (comprehensive)

7-Perspective Keyword Formula - Each keyword targets a DIFFERENT angle:

  1. Direct/Broad: "[topic]". Example: "React state management"

  2. Specific/Technical: "[topic] [technical term]". Example: "React useReducer vs Redux"

  3. Problem-Focused: "[topic] issues/debugging/problems". Example: "React state management performance issues"

  4. Best Practices: "[topic] best practices [year]". Example: "React state management best practices 2024"

  5. Comparison: "[A] vs [B]". Example: "React state management libraries comparison"

  6. Tutorial/Guide: "[topic] tutorial/guide". Example: "React state management tutorial"

  7. Advanced: "[topic] patterns/architecture large applications". Example: "React state management patterns large applications"
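The 7-perspective formula above can be sketched as a helper that expands one topic into all seven angles (purely illustrative; parameter names are hypothetical):

```python
def build_keywords(topic, technical, problem, year, comparison):
    """Expand a topic into the 7 keyword perspectives (illustrative only)."""
    return [
        topic,                                   # 1. direct/broad
        f"{topic} {technical}",                  # 2. specific/technical
        f"{topic} {problem} issues",             # 3. problem-focused
        f"{topic} best practices {year}",        # 4. best practices
        comparison,                              # 5. comparison
        f"{topic} tutorial",                     # 6. tutorial/guide
        f"{topic} patterns large applications",  # 7. advanced
    ]
```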

Search Operators with Examples:

  • site:domain.com - Search within a specific site. Example: "React hooks" site:github.com → React hooks repos on GitHub

  • "exact phrase" - Match exact phrase. Example: "Docker OOM" site:stackoverflow.com → exact error discussions

  • -exclude - Exclude term from results. Example: React state management -Redux → find alternatives to Redux

  • filetype:pdf - Find specific file types. Example: React tutorial filetype:pdf → downloadable guides

  • OR - Match either term. Example: React OR Vue state management → compare frameworks

Keyword Patterns by Use Case:

Technology Research: ["PostgreSQL vs MySQL performance 2024", "PostgreSQL best practices production", "\"PostgreSQL\" site:github.com stars:>1000", "PostgreSQL connection pooling", "PostgreSQL vs MongoDB use cases"]

Problem Solving: ["Docker container memory leak debugging", "Docker memory limit not working", "\"Docker OOM\" site:stackoverflow.com", "Docker memory optimization best practices"]

Comparison Research: ["Next.js vs Remix performance", "Next.js 14 vs Remix 2024", "\"Next.js\" OR \"Remix\" benchmarks", "Next.js vs Remix developer experience"]

Example: โŒ BAD: {"keywords": ["React"]} โ†’ 1 vague keyword, no operators, no diversity

โœ… GOOD: {"keywords": ["React state management best practices", "React useReducer vs Redux 2024", "React Context API performance", "Zustand React state library", "\"React state\" site:github.com", "React state management large applications", "React global state alternatives -Redux"]} โ†’ 7 diverse angles with operators

Pro Tips:

  1. Use 5-7 keywords minimum - Each reveals different perspective

  2. Add year numbers - "2024", "2025" for recent content

  3. Use search operators - site:, "exact", -exclude, filetype:

  4. Vary specificity - Mix broad + specific keywords

  5. Include comparisons - "vs", "versus", "compared to", "OR"

  6. Target sources - site:github.com, site:stackoverflow.com

  7. Add context - "best practices", "tutorial", "production", "performance"

  8. Think parallel - Each keyword searches independently

Workflow: web_search → sequentialthinking (evaluate which URLs look promising) → scrape_links (MUST scrape promising URLs - that's where real content is!) → sequentialthinking (evaluate scraped content) → OPTIONAL: web_search again if gaps found → synthesize

Why this workflow works:

  • Search results reveal new keywords you didn't think of

  • Scraped content shows what's actually useful vs what looked good

  • Thinking between tool calls prevents tunnel vision

  • Iterative refinement = comprehensive coverage

CRITICAL:

  • ALWAYS scrape after web_search - that's where the real content is!

  • ALWAYS think between tool calls - evaluate and refine!

  • DON'T stop after one search - iterate based on learnings!

FOLLOW-UP: Use scrape_links to extract full content from promising URLs!

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/yigitkonur/research-powerpack-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server