| search_reddit | 🔥 REDDIT SEARCH - MINIMUM 10 QUERIES, RECOMMENDED 20+
This tool is designed for consensus analysis through MULTIPLE diverse queries.
Using 1-3 queries = wasting the tool's power. You MUST use 10+ queries minimum. Budget: 10 results per query, all run in parallel.
10-Category Query Formula - each query targets a DIFFERENT angle. NO OVERLAP!
1. Direct topic: "[topic] [platform]" - Example: "YouTube Music Mac app"
2. Recommendations: "best/recommended [topic]" - Example: "best YouTube Music client Mac"
3. Specific tools: project names, GitHub repos - Example: "YTMDesktop", "th-ch youtube-music"
4. Comparisons: "[A] vs [B]" - Example: "YouTube Music vs Spotify Mac desktop"
5. Alternatives: "[topic] alternative/replacement" - Example: "YouTube Music Mac alternative"
6. Subreddits: "r/[subreddit] [topic]" - different communities have different perspectives - Example: "r/macapps YouTube Music", "r/opensource YouTube Music"
7. Problems/Issues: "[topic] issues/crashes/problems" - Example: "YouTube Music Mac crashes", "YTM desktop performance problems"
8. Year-specific: add "2024" or "2025" for recent discussions - Example: "best YouTube Music Mac 2024"
9. Features: "[topic] [specific feature]" - Example: "YouTube Music offline Mac", "YTM lyrics desktop"
10. Developer/GitHub: "[topic] GitHub/open source/electron" - Example: "youtube-music electron GitHub", "YTM desktop open source"
Search Operators:
intitle: - Search in post titles only
"exact phrase" - Match exact phrase
OR - Match either term
-exclude - Exclude term
All queries auto-add site:reddit.com
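Operators can also be combined within a single query. A hypothetical example mixing them (not one of the tool's documented examples):
{"queries": ["intitle:\"YouTube Music\" Mac -Spotify", "\"YTMDesktop\" OR \"th-ch\" crashes"]}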
Example showing all 10 categories:
❌ BAD: {"queries": ["best YouTube Music app"]} → 1 vague query, misses 90% of consensus
✅ GOOD: {"queries": ["YouTube Music Mac app", "best YTM client Mac", "YTMDesktop Mac", "YouTube Music vs Spotify Mac", "YouTube Music Mac alternative", "r/macapps YouTube Music", "YTM Mac crashes", "YouTube Music Mac 2024", "YTM offline Mac", "youtube-music GitHub", ...expand to 20 queries]} → comprehensive multi-angle coverage
Pro Tips:
- Use ALL 10 categories - each reveals different community perspectives
- Target specific subreddits - different communities have different expertise
- Include year numbers - "2024", "2025" filter for recent discussions
- Add comparison keywords - "vs", "versus" find decision threads
- Include problem keywords - "issue", "bug", "crash" find real experiences
- Vary phrasing - "best", "top", "recommended" capture different discussions
- Use technical terms - "electron", "GitHub", "API" find developer perspectives
- NO DUPLICATES - each query must target a unique angle
Workflow:
search_reddit → sequentialthinking (evaluate results) → get_reddit_post OR search again → sequentialthinking → synthesize
REMEMBER: More queries = better consensus detection = higher quality results! |
| get_reddit_post | 🔥 FETCH REDDIT POSTS - 2-50 URLs, RECOMMENDED 10-20+
This tool fetches Reddit posts with smart comment allocation.
Using 2-5 posts = missing community consensus. Use 10-20+ for broad perspective.
Comment Budget: 1,000 total comments distributed automatically across posts.
2 posts: ~500 comments/post (deep dive)
10 posts: ~100 comments/post (balanced)
20 posts: ~50 comments/post (RECOMMENDED: broad)
50 posts: ~20 comments/post (max coverage)
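Counts in between divide the same way: a hypothetical 15-post call would get roughly 66 comments per post (1,000 / 15).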
Comment allocation is AUTOMATIC - you don't need to calculate!
When to use different post counts:
2-5 posts: deep dive on specific discussions
10-15 posts: balanced depth + breadth (GOOD)
20-30 posts: broad community perspective (RECOMMENDED)
40-50 posts: maximum coverage - use when researching a controversial topic and you need all perspectives (trade-off: fewer comments per post but comprehensive coverage)
Example:
❌ BAD: {"urls": ["single_url"]} → 1 perspective, could be biased/outdated
✅ GOOD: {"urls": [20 URLs from diverse subreddits: programming, webdev, node, golang, devops, etc.]} → comprehensive community perspective
Pro Tips:
- Use 10-20+ posts - more posts = broader community perspective
- Mix subreddits - different communities have different expertise and perspectives
- Include various discussion types - best practices, comparisons, problems, solutions
- Let comment allocation auto-adjust - don't override max_comments unless needed
- Use after search_reddit - get URLs from search, then fetch full content here
CRITICAL: Comments often contain the BEST insights, solutions, and real-world experiences.
Always set fetch_comments=true unless you only need post titles.
Workflow: search_reddit (find posts) → get_reddit_post (fetch full content + comments)
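A minimal call sketch (the URLs are hypothetical placeholders; urls and fetch_comments are the parameters described above):
{"urls": ["https://www.reddit.com/r/macapps/comments/abc123/", "https://www.reddit.com/r/opensource/comments/xyz789/"], "fetch_comments": true}
With only two URLs, the automatic budget gives each post ~500 comments, per the table above. |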
| deep_research | 🔥 DEEP RESEARCH - 2-10 QUESTIONS, RECOMMENDED 5+
This tool runs 2-10 questions IN PARALLEL with AI-powered research.
Using 1-2 questions = wasting the parallel research capability!
Token Budget: 32,000 tokens distributed across questions.
2 questions: 16,000 tokens each (deep dive)
5 questions: 6,400 tokens each (RECOMMENDED: balanced)
10 questions: 3,200 tokens each (comprehensive multi-topic)
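Intermediate counts split the same way: a hypothetical 4-question call would get roughly 8,000 tokens per question (32,000 / 4).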
All questions research in PARALLEL - no time penalty for more questions!
When to use this tool:
- Multi-perspective analysis on related topics
- Researching a domain from multiple angles
- Validating understanding across different aspects
- Comparing approaches/technologies side-by-side
- Deep technical questions requiring comprehensive research
Question Template - each question MUST include these sections:
🎯 WHAT I NEED: Clearly state what you're trying to achieve or understand
🤔 WHY I'M RESEARCHING: What decision does this inform? What problem are you solving?
📚 WHAT I ALREADY KNOW: Share current understanding so research fills gaps, not repeats basics
🔧 HOW I'LL USE THIS: Practical application - implementation, debugging, architecture
❓ SPECIFIC QUESTIONS (2-5): Break down into specific, pointed sub-questions
📌 PRIORITY SOURCES: (optional) Preferred docs/sites to prioritize
⚡ FOCUS AREAS: (optional) What matters most - performance, security, etc.
ATTACH FILES when asking about code - THIS IS MANDATORY:
🐛 Bugs/errors → attach the failing code
⚡ Performance issues → attach the slow code paths
♻️ Refactoring → attach the current implementation
🔍 Code review → attach the code to review
🏗️ Architecture → attach the relevant modules
Research without code context for code questions is generic and unhelpful!
Example:
❌ BAD: {"questions": [{"question": "Research React hooks"}]} → 1 vague question, no template, no context, wastes 90% of capacity
✅ GOOD: {"questions": [{
"question": "🎯 WHAT I NEED: Understand when to use useCallback vs useMemo in React 18\n\n🤔 WHY: Optimizing a data-heavy dashboard with 50+ components, seeing performance issues\n\n📚 WHAT I KNOW: Both memoize values, useCallback for functions, useMemo for computed values. Unclear when each actually prevents re-renders.\n\n🔧 HOW I'LL USE THIS: Refactor Dashboard.tsx to eliminate unnecessary re-renders\n\n❓ SPECIFIC QUESTIONS:\n1. When does useCallback actually prevent re-renders vs when it doesn't?\n2. Performance benchmarks: useCallback vs useMemo vs neither in React 18?\n3. Common anti-patterns that negate their benefits?\n4. How to measure if they're actually helping?\n\n📌 PRIORITY: Official React docs, React team blog posts\n⚡ FOCUS: Patterns for frequently updating state"
}, ...add 4 more questions for comprehensive coverage]}
Pro Tips:
- Use 5-10 questions - maximize parallel research capacity
- Follow the template - include all 7 sections for each question
- Be specific - include version numbers, error codes, library names
- Add 2-5 sub-questions - break down what you need to know
- Attach files for code questions - MANDATORY for bugs/performance/refactoring
- Describe files thoroughly - explain what the file is and what to focus on
- Specify focus areas - "Focus on X, Y, Z" for prioritization
- Group related questions - research a domain from multiple angles
Scope Expansion Triggers - iterate when:
- Results mention concepts you didn't research
- Answers raise new questions you should explore
- You realize initial scope was too narrow
- You discover related topics that matter
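For instance (a hypothetical follow-up, not from the examples above): if the useCallback research keeps surfacing "concurrent rendering", a second deep_research call could add a question like "🎯 WHAT I NEED: How React 18 concurrent rendering interacts with memoization..." to close that gap.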
Workflow:
deep_research (3-5 questions) → sequentialthinking (evaluate, identify gaps) →
OPTIONAL: deep_research AGAIN with NEW questions based on learnings →
sequentialthinking (synthesize) → final decision
REMEMBER:
- ALWAYS think after getting results (digest and identify gaps!)
- DON'T assume first research is complete (iterate based on findings!)
- USE learnings to ask better questions (results = feedback!)
- EXPAND scope when results reveal new important areas!
|
| scrape_links | 🔥 WEB SCRAPING - 1-50 URLs, RECOMMENDED 3-5. ALWAYS use_llm=true
This tool has TWO modes:
1. Basic scraping (use_llm=false) - gets raw HTML/text - messy, requires manual parsing
2. AI-powered extraction (use_llm=true) - intelligently extracts what you need ⭐ USE THIS!
⚡ ALWAYS SET use_llm=true FOR INTELLIGENT EXTRACTION ⚡
Why use AI extraction (use_llm=true):
- Filters out navigation, ads, footers automatically
- Extracts ONLY what you specify in what_to_extract
- Handles complex page structures intelligently
- Returns clean, structured content ready to use
- Saves hours of manual HTML parsing
- Cost: pennies (~$0.01 per 10 pages)
Token Budget: 32,000 tokens distributed across URLs.
3 URLs: ~10,666 tokens each (deep extraction)
5 URLs: ~6,400 tokens each (RECOMMENDED: balanced)
10 URLs: ~3,200 tokens each (detailed)
50 URLs: ~640 tokens each (quick scan)
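The same even split applies to counts not listed: a hypothetical 8-URL call would get roughly 4,000 tokens per URL (32,000 / 8).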
Extraction Prompt Formula:
Extract [target1] | [target2] | [target3] | [target4] | [target5]
with focus on [aspect1], [aspect2], [aspect3]
Extraction Rules:
- Use pipe | to separate extraction targets
- Minimum 3 targets required
- Be SPECIFIC about what you want ("pricing tiers" not "pricing")
- Include "with focus on" to prioritize certain aspects
- More targets = more comprehensive extraction
- Aim for 5-10 extraction targets
Extraction Templates by Domain:
Product Research:
Extract pricing details | feature comparisons | user reviews | technical specifications | integration options | support channels | deployment models | security features
with focus on enterprise capabilities, pricing transparency, and integration complexity
Technical Documentation:
Extract API endpoints | authentication methods | rate limits | error codes | request examples | response schemas | SDK availability | webhook support
with focus on authentication flow, rate limiting policies, and error handling patterns
Competitive Analysis:
Extract product features | pricing models | target customers | unique selling points | technology stack | customer testimonials | case studies | market positioning
with focus on differentiators, pricing strategy, and customer satisfaction
Example:
❌ BAD: {"urls": ["url"], "use_llm": false, "what_to_extract": "get pricing"} → raw HTML, vague prompt, 1 target, no focus areas
✅ GOOD: {"urls": [5 URLs], "use_llm": true, "what_to_extract": "Extract pricing tiers | plan features | API rate limits | enterprise options | integration capabilities | user testimonials with focus on enterprise features, API limitations, and real-world performance data"} → clean structured extraction
Pro Tips:
- ALWAYS use use_llm=true - the AI extraction is the tool's superpower
- Use 3-10 URLs - balance between depth and breadth
- Specify 5-10 extraction targets - more targets = more comprehensive
- Use pipe | - clearly separate each target
- Add focus areas - "with focus on X, Y, Z" for prioritization
- Be specific - "pricing tiers" not "pricing", "API rate limits" not "API info"
- Cover multiple aspects - features, pricing, technical, social proof
Automatic Fallback: Basic → JavaScript rendering → JavaScript + US geo-targeting
Batching: max 30 concurrent requests (50 URLs = [30] then [20] batches)
REMEMBER: AI extraction costs pennies but saves hours of manual parsing! |
| web_search | 🔥 WEB SEARCH - MINIMUM 3 KEYWORDS, RECOMMENDED 5-7
This tool searches up to 100 keywords IN PARALLEL via Google.
Using 1-2 keywords = wasting the tool's parallel search power!
Results Budget: 10 results per keyword, all searches run in parallel.
3 keywords = 30 results (minimum)
7 keywords = 70 results (RECOMMENDED)
100 keywords = 1,000 results (comprehensive)
7-Perspective Keyword Formula - each keyword targets a DIFFERENT angle:
1. Direct/Broad: "[topic]" - Example: "React state management"
2. Specific/Technical: "[topic] [technical term]" - Example: "React useReducer vs Redux"
3. Problem-Focused: "[topic] issues/debugging/problems" - Example: "React state management performance issues"
4. Best Practices: "[topic] best practices [year]" - Example: "React state management best practices 2024"
5. Comparison: "[A] vs [B]" - Example: "React state management libraries comparison"
6. Tutorial/Guide: "[topic] tutorial/guide" - Example: "React state management tutorial"
7. Advanced: "[topic] patterns/architecture large applications" - Example: "React state management patterns large applications"
Search Operators with Examples:
site:domain.com - search within a specific site
Example: "React hooks" site:github.com → React hooks repos on GitHub
"exact phrase" - match exact phrase
Example: "Docker OOM" site:stackoverflow.com → exact error discussions
-exclude - exclude term from results
Example: React state management -Redux → find alternatives to Redux
filetype:pdf - find specific file types
Example: React tutorial filetype:pdf → downloadable guides
OR - match either term
Example: React OR Vue state management → compare frameworks
Keyword Patterns by Use Case:
Technology Research: ["PostgreSQL vs MySQL performance 2024", "PostgreSQL best practices production", "\"PostgreSQL\" site:github.com stars:>1000", "PostgreSQL connection pooling", "PostgreSQL vs MongoDB use cases"]
Problem Solving: ["Docker container memory leak debugging", "Docker memory limit not working", "\"Docker OOM\" site:stackoverflow.com", "Docker memory optimization best practices"]
Comparison Research: ["Next.js vs Remix performance", "Next.js 14 vs Remix 2024", "\"Next.js\" OR \"Remix\" benchmarks", "Next.js vs Remix developer experience"]
Example:
❌ BAD: {"keywords": ["React"]} → 1 vague keyword, no operators, no diversity
✅ GOOD: {"keywords": ["React state management best practices", "React useReducer vs Redux 2024", "React Context API performance", "Zustand React state library", "\"React state\" site:github.com", "React state management large applications", "React global state alternatives -Redux"]} → 7 diverse angles with operators
Pro Tips:
- Use 5-7 keywords minimum - each reveals a different perspective
- Add year numbers - "2024", "2025" for recent content
- Use search operators - site:, "exact", -exclude, filetype:
- Vary specificity - mix broad + specific keywords
- Include comparisons - "vs", "versus", "compared to", "OR"
- Target sources - site:github.com, site:stackoverflow.com
- Add context - "best practices", "tutorial", "production", "performance"
- Think parallel - each keyword searches independently
Workflow:
web_search → sequentialthinking (evaluate which URLs look promising) →
scrape_links (MUST scrape promising URLs - that's where the real content is!) →
sequentialthinking (evaluate scraped content) →
OPTIONAL: web_search again if gaps found → synthesize
Why this workflow works:
- Search results reveal new keywords you didn't think of
- Scraped content shows what's actually useful vs what merely looked good
- Thinking between tool calls prevents tunnel vision
- Iterative refinement = comprehensive coverage
CRITICAL:
- ALWAYS scrape after web_search - that's where the real content is!
- ALWAYS think between tool calls - evaluate and refine!
- DON'T stop after one search - iterate based on learnings!
FOLLOW-UP: Use scrape_links to extract full content from promising URLs! |