# Research Powerpack MCP - Tool Configuration
# Optimized for LLM consumption - <1K tokens total
version: "1.0"
metadata:
  name: "mcp-researchpowerpack"
  description: "Parallel research tools with AI extraction"
# CORE PRINCIPLES (ALL TOOLS)
shared:
  philosophy: "MAXIMIZE inputs for parallel processing - NO time penalty for more items! More diverse inputs = better coverage = higher quality. Each input MUST target a DIFFERENT angle - NO overlap/duplicates. ALWAYS use sequentialthinking between tool calls."
  workflow: |
    MANDATORY: THINK → EXECUTE (max diversity) → THINK (evaluate: gaps? claims to verify? links to follow?) → CROSS-REFERENCE (Reddit claims → web search, web results → scrape) → ITERATE → SYNTHESIZE.
    NEVER stop after one tool call. Every search result has URLs to scrape. Every Reddit discussion has claims to verify. Every scraped page has links to follow.
  iteration_triggers: "Iterate when: Reddit comments mention specific tools/links (scrape them) | research cites sources (verify them) | community claims contradict each other (web search for the truth) | results mention concepts you didn't research | the first pass is ALWAYS incomplete"
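The mandated THINK → EXECUTE → ITERATE loop can be sketched as a small Python harness. This is purely illustrative and not part of the server: `call_tool` and `evaluate` are hypothetical caller-supplied hooks standing in for the real MCP tool call and the THINK step, and the seed `"search"` tool name is an assumption.

```python
def research_loop(topic, call_tool, evaluate, max_rounds=4):
    """THINK -> EXECUTE -> THINK -> CROSS-REFERENCE -> ITERATE -> SYNTHESIZE.

    call_tool(name, args) runs one tool; evaluate(findings) is the THINK
    step and returns follow-up (tool, args) pairs, empty when done.
    """
    findings = []
    # First pass: a search-style call seeded from the topic.
    pending = [("search", {"query": topic})]
    for _ in range(max_rounds):
        if not pending:
            break  # evaluation found no gaps, claims, or links left to chase
        # EXECUTE every queued call (conceptually in parallel).
        findings.extend(call_tool(name, args) for name, args in pending)
        # THINK: gaps? claims to verify? links to follow?
        pending = evaluate(findings)
    return findings
```

The loop never stops after the first tool call unless the evaluation hook explicitly returns nothing to follow up on, which matches the "first pass is ALWAYS incomplete" rule above.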
tools:
  - name: search_reddit
    category: reddit
    capability: search
    limits: {min_queries: 10, max_queries: 50, recommended: 20}
    description: |
      MIN 10 queries, REC 20+ (10q=100 results, 20q=200 results). Each query MUST target a DIFFERENT angle - NO overlap! 10 categories: 1) direct topic 2) recommendations (best/top) 3) specific tools/repos/names 4) comparisons (vs) 5) alternatives 6) subreddit targeting (r/macapps, r/opensource) 7) problems/issues/crashes 8) year-specific (2024/2025) 9) features 10) dev/GitHub/electron. Operators: intitle:, "exact", OR, -exclude. Using 1-3 queries wastes parallel power. Automatically appends site:reddit.com.
    parameters:
      queries:
        type: array
        required: true
        items: {type: string}
        validation: {minItems: 10, maxItems: 50}
        description: "10-50 diverse queries. Each targets a different angle: direct|best|tools|vs|alternative|r/sub|issues|2024|features|GitHub. NO overlap."
      date_after:
        type: string
        required: false
        description: "Only return results after this date (YYYY-MM-DD)"
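An illustrative payload builder for search_reddit, with one template query per documented category. The query strings themselves are hypothetical examples; the tool only requires 10-50 non-overlapping queries.

```python
def build_reddit_queries(topic, year="2025"):
    """One query per angle from the search_reddit spec - no overlap."""
    return [
        topic,                          # 1) direct topic
        f"best {topic}",                # 2) recommendations (best/top)
        f"{topic} tools",               # 3) specific tools/repos/names
        f"{topic} vs",                  # 4) comparisons
        f"{topic} alternative",         # 5) alternatives
        f"{topic} r/opensource",        # 6) subreddit targeting
        f"{topic} issues OR crashes",   # 7) problems/issues/crashes
        f"{topic} {year}",              # 8) year-specific
        f"{topic} features",            # 9) features
        f"{topic} github",              # 10) dev/GitHub
    ]

payload = {"queries": build_reddit_queries("markdown editor")}
```

The list satisfies the minItems: 10 validation and the no-duplicates rule by construction.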
  - name: get_reddit_post
    category: reddit
    capability: reddit
    limits:
      min_urls: 2
      max_urls: 50
      recommended: 20
      default_max_comments: 1000
    extraction_suffix: "Extract key insights: consensus, recommendations with reasoning, contrasting views, real experiences, technical details. Be comprehensive + concise, prioritize actionable info."
    description: |
      MIN 2 URLs, REC 10-20+. Automatic comment budget: 1000 total (2 posts=500 each for depth, 10=100 balanced, 20=50 for broad coverage). Comments hold the BEST insights - always keep fetch_comments=true unless you only need titles. Leave use_llm at its default (false): users need the raw, exact comments to read verbatim (quotes, code snippets, specific recommendations), and LLM summarization loses critical detail and nuance from discussions. Only set use_llm=true if explicitly asked to synthesize across 20+ posts. Mix subreddits for diverse perspectives. 2-5 posts = narrow; 20-30 posts = comprehensive.
    parameters:
      urls:
        type: array
        required: true
        items: {type: string}
        validation: {minItems: 2, maxItems: 50}
        description: "2-50 Reddit URLs. More = broader consensus. Get them from search_reddit."
      fetch_comments:
        type: boolean
        required: false
        default: true
        description: "Fetch comments (true recommended - the best insights are in comments)"
      max_comments:
        type: number
        required: false
        description: "Per-post override of the automatic 1000-total allocation. Leave empty for smart allocation."
      use_llm:
        type: boolean
        required: false
        default: false
        description: "Default false - DO NOT enable unless the user explicitly requests synthesis. Raw comments preserve exact quotes, code snippets, nuanced opinions, and specific recommendations that LLM summarization loses. Only consider use_llm=true when processing 20+ posts and the user explicitly wants a synthesized overview rather than reading individual comments."
      what_to_extract:
        type: string
        required: false
        description: "Only used when use_llm=true. Extraction instructions for AI synthesis. Be specific: 'Extract recommendations for X with pros/cons'."
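The automatic comment allocation described above (1000 total, split evenly) can be sketched as a one-line helper. This mirrors the worked examples in the description; the actual server-side allocation logic is not shown in this file.

```python
def allocate_comments(n_posts, total_budget=1000):
    """Spread the automatic 1000-comment budget evenly across posts.

    Per the spec: 2 posts -> 500 each (deep), 10 -> 100 (balanced),
    20 -> 50 (broad coverage).
    """
    if not 2 <= n_posts <= 50:
        raise ValueError("get_reddit_post accepts 2-50 URLs")
    return total_budget // n_posts
```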
  - name: deep_research
    category: research
    capability: deepResearch
    useZodSchema: true
    zodSchemaRef: "deepResearchParamsSchema"
    limits:
      min_questions: 1
      max_questions: 10
      recommended: 5
      min_length: 200
      min_specific: 2
    research_suffix: "CONSTRAINTS: No restating the question. No hedging preambles. Cite sources inline [source]. NEVER hallucinate - only report what sources confirm."
    description: |
      MIN 1 (using only 1-2 wastes parallel capacity), REC 5-10 questions in parallel. 32K tokens distributed (2q=16K each for depth, 5q=6.4K balanced, 10q=3.2K comprehensive). A compression suffix is auto-appended to each question (anti-hallucination, cite-inline, no filler). Output uses tables for comparisons/structured data and tight bullets for explanations. MANDATORY template per question:
      "🎯 WHAT I NEED: [clear goal]
      🤔 WHY: [decision/problem]
      📚 WHAT I KNOW: [current understanding - so research fills gaps, not basics]
      🔧 HOW I'LL USE: [implementation/debugging/architecture]
      ❓ SPECIFIC QUESTIONS: 1) [q1] 2) [q2] 3) [q3]
      🌐 PRIORITY SOURCES: [optional - docs/sites]
      ⚡ FOCUS: [optional - performance/security]"
      ATTACH FILES for code questions (bugs/perf/refactor/review/architecture) - MANDATORY, or the research comes back generic/unhelpful. The first pass is always incomplete - iterate based on findings.
    schemaDescriptions:
      questions: "1-10 structured questions following the template above (5-10 recommended). Each MUST cover a different angle of the topic. Attach files for ANY code-related question (bugs/errors/perf/refactoring/review/architecture) - without code context, the research is generic/useless."
      file_attachments: "MANDATORY for code questions: bugs→failing code, perf→slow code, refactor→current implementation, review→code to review, architecture→relevant modules. Format: {path: '/absolute/path', description: 'What the file is, why it is relevant, focus areas, known issues', start_line?, end_line?}. A thorough description is critical."
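Two small helpers make the deep_research budget math and the mandatory template concrete. Both are illustrative sketches, not server code; `format_question` only fills the required fields and omits the optional sources/focus lines.

```python
def tokens_per_question(n_questions, budget=32_000):
    """Even split of the 32K research budget, per the spec:
    2 questions -> 16K each, 5 -> 6.4K, 10 -> 3.2K."""
    if not 1 <= n_questions <= 10:
        raise ValueError("deep_research accepts 1-10 questions")
    return budget // n_questions

def format_question(need, why, known, use, questions):
    """Fill the mandatory per-question template (emoji headers from the spec)."""
    numbered = " ".join(f"{i}) {q}" for i, q in enumerate(questions, 1))
    return (
        f"🎯 WHAT I NEED: {need}\n"
        f"🤔 WHY: {why}\n"
        f"📚 WHAT I KNOW: {known}\n"
        f"🔧 HOW I'LL USE: {use}\n"
        f"❓ SPECIFIC QUESTIONS: {numbered}"
    )
```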
  - name: scrape_links
    category: scrape
    capability: scraping
    useZodSchema: true
    zodSchemaRef: "scrapeLinksParamsSchema"
    limits:
      min_urls: 1
      max_urls: 50
      recommended: 5
      min_extract_len: 50
      min_targets: 3
    extraction_prefix: "Extract ONLY from the document - never hallucinate. For structured data (pricing, specs, features) → markdown table. Otherwise → tight bullet points. No intro, no confirmation message, no meta-commentary."
    extraction_suffix: "Output grounded info only. First line = content, not preamble."
    description: |
      REC 3-5 URLs. use_llm=true BY DEFAULT - AI extraction auto-filters nav/ads/footers and returns clean structured content. A compression prefix+suffix wraps your prompt automatically for max info density. 32K tokens split across URLs (3=10.6K each, 5=6.4K, 10=3.2K, 50=640). PROMPT FORMULA - keep it tight, no verbosity: "Extract [t1] | [t2] | [t3] | [t4] | [t5] with focus on [a1], [a2], [a3]". Min 3 targets with the | separator; aim for 5-10. Be specific (pricing tiers, not pricing). BAD: "Please extract all information about pricing and features from this page" → GOOD: "pricing tiers|limits|features|integrations|API auth|rate limits with focus on free tier, enterprise pricing". Templates: Product (pricing|features|reviews|specs|integrations|support), Tech Docs (endpoints|auth|limits|errors|examples|schemas), Competitive (features|pricing|customers|USPs|stack|testimonials). Set use_llm=false ONLY for raw HTML debugging.
    schemaDescriptions:
      urls: "1-50 URLs (3-5 recommended). More URLs = broader coverage, fewer tokens/URL."
      timeout: "Timeout per URL (5-120s, default 30)"
      use_llm: "Defaults to true. AI extraction auto-filters noise, extracts only the specified targets, and returns clean structured content. Compression prefix+suffix auto-applied to maximize info density. Cost: ~$0.001/page. Set false ONLY for raw HTML debugging. Requires OPENROUTER_API_KEY."
      what_to_extract: "Your extraction targets - auto-wrapped with the compression prefix+suffix. Formula: 'pricing tiers|limits|features|integrations|API auth with focus on free tier, enterprise'. Min 3 targets with the | separator. Be terse and specific (pricing tiers, not pricing; API rate limits, not API info). No verbose sentences - just targets+focus. The system adds compression instructions automatically."
      model: "Override the LLM extraction model for this request. Uses OpenRouter model IDs (e.g. 'openai/gpt-4o', 'anthropic/claude-sonnet-4', 'google/gemini-2.5-flash'). Default: openai/gpt-oss-120b:nitro (configurable via the LLM_EXTRACTION_MODEL env var)."
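The prompt formula for what_to_extract can be composed mechanically. This illustrative helper enforces the min_targets: 3 limit client-side; the server still wraps the result with its own compression prefix and suffix.

```python
def build_extraction_prompt(targets, focus=()):
    """Compose the 'Extract [t1] | [t2] | ...' formula from the spec."""
    if len(targets) < 3:
        raise ValueError("scrape_links expects at least 3 extraction targets")
    prompt = "Extract " + " | ".join(targets)
    if focus:
        prompt += " with focus on " + ", ".join(focus)
    return prompt
```

Keeping targets terse and specific (e.g. "pricing tiers" rather than "pricing") matches the GOOD example in the description.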
  - name: web_search
    category: search
    capability: search
    useZodSchema: true
    zodSchemaRef: "webSearchParamsSchema"
    limits: {min_keywords: 3, max_keywords: 100, recommended: 7}
    description: |
      MIN 3, REC 5-7 keywords, each run as a parallel Google search. 10 results/keyword (3=30 results, 7=70, 100=1000). Each keyword is a separate search - angles MUST be diverse! 7 perspectives: 1) broad [topic] 2) specific/technical [topic]+[term] 3) problems [topic] issues/debugging 4) best practices [topic] 2024/2025 5) comparison [A] vs [B] OR 6) tutorial/guide 7) advanced patterns/architecture. Operators: site:domain.com (target GitHub/StackOverflow/docs), "exact phrase", -exclude, filetype:pdf, OR. Examples: "PostgreSQL" site:github.com stars:>1000, "Docker OOM" site:stackoverflow.com. CRITICAL: search only returns URLs - you MUST follow up with scrape_links to get the actual content! Workflow: search → think → scrape_links (MANDATORY) → think → iterate → synthesize. Using 1-2 keywords wastes parallel power.
    schemaDescriptions:
      keywords: "3-100 diverse keywords (5-7 rec). Each is a separate Google search, run in parallel. 7 angles: broad | technical | problems | best practices+year | vs/OR | tutorial | patterns. Operators: site:, \"exact\", -exclude, filetype:, OR."
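The 7-perspective keyword strategy and the results-per-keyword math can be sketched as follows. The keyword templates are hypothetical illustrations; only the 3-100 range and the ~10 results per keyword come from the spec above.

```python
def build_search_keywords(topic, rival=None, year="2025"):
    """One keyword per documented perspective - templates are illustrative."""
    return [
        topic,                                # 1) broad
        f"{topic} internals",                 # 2) specific/technical
        f"{topic} issues debugging",          # 3) problems
        f"{topic} best practices {year}",     # 4) best practices + year
        f"{topic} vs {rival}" if rival else f"{topic} comparison",  # 5) comparison
        f"{topic} tutorial",                  # 6) tutorial/guide
        f"{topic} architecture patterns",     # 7) advanced
    ]

def estimate_results(keywords):
    """Each keyword is a separate Google search returning ~10 results."""
    if not 3 <= len(keywords) <= 100:
        raise ValueError("web_search accepts 3-100 keywords")
    return len(keywords) * 10
```

Remember the CRITICAL rule: these searches only return URLs, so every web_search call should be followed by scrape_links on the promising results.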