get_operation_requirements

Retrieve detailed requirements, validation rules, and suggestions for Reddit operations to ensure proper execution and avoid common mistakes. Essential for planning within the Reddit MCP Server.

Instructions

LAYER 2: Get detailed requirements for a Reddit operation.

USE THIS BEFORE EXECUTING to understand an operation's parameters and validation rules, and to get suggestions.

Args:
    operation_id: The operation ID from discover_operations
    context: Optional context about what you're trying to accomplish

Returns:
    Parameter schemas, validation rules, suggestions, and common mistakes to avoid
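Before executing an operation, a client fetches its requirements with a single tool call. Below is a minimal sketch using the official MCP Python SDK; the stdio launch command and server path are assumptions, and the tool is called by its registered function name, get_operation_schema:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Assumed launch command for the Reddit MCP Server; adjust to your setup.
    server = StdioServerParameters(command="python", args=["src/server.py"])

    async def main() -> None:
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Layer 2: ask for the operation's requirements before executing it.
                result = await session.call_tool(
                    "get_operation_schema",
                    arguments={"operation_id": "discover_subreddits"},
                )
                print(result.content)

    asyncio.run(main())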

Input Schema

Name           Required   Description                                                Default
context        No         Optional context about what you're trying to accomplish   -
operation_id   Yes        The operation ID from discover_operations                 -
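The same contract can be written as a JSON Schema. A sketch as a Python dict follows; the descriptions come from the Args above, and the string type for context is an assumption:

    INPUT_SCHEMA = {
        "type": "object",
        "properties": {
            # Required: which operation to describe (IDs come from Layer 1).
            "operation_id": {
                "type": "string",
                "description": "The operation ID from discover_operations",
            },
            # Optional: free-form note about what you're trying to accomplish.
            "context": {
                "type": "string",
                "description": "Optional context about what you're trying to accomplish",
            },
        },
        "required": ["operation_id"],
    }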

Implementation Reference

  • Handler function for the 'get_operation_schema' tool, which provides detailed parameter requirements (schemas, validation rules, examples) for all Reddit operations. Returns static schema definitions based on operation_id. This is the core implementation that fulfills the 'get requirements' functionality.
    @mcp.tool(
        description="Get detailed requirements and parameters for a Reddit operation",
        annotations={"readOnlyHint": True}
    )
    def get_operation_schema(
        operation_id: Annotated[str, "Operation ID from discover_operations"],
        include_examples: Annotated[bool, "Include example parameter values"] = True,
        ctx: Context = None
    ) -> Dict[str, Any]:
        """
        LAYER 2: Get parameter requirements for an operation.
        Use after discover_operations to understand how to call operations.
        """
        # Phase 1: Accept context but don't use it yet
        schemas = {
            "discover_subreddits": {
                "description": "Find communities using semantic vector search with configurable filtering and batch discovery",
                "parameters": {
                    "query": {
                        "type": "string",
                        "required_one_of": ["query", "queries"],
                        "description": "Single topic to find communities for",
                        "validation": "2-100 characters"
                    },
                    "queries": {
                        "type": "array[string] or JSON string",
                        "required_one_of": ["query", "queries"],
                        "description": "Multiple topics for batch discovery (more efficient than individual queries)",
                        "example": '["machine learning", "deep learning", "neural networks"]',
                        "tip": "Batch mode reduces API calls and token usage by ~40%"
                    },
                    "limit": {
                        "type": "integer",
                        "required": False,
                        "default": 10,
                        "range": [1, 50],
                        "description": "Number of communities to return per query"
                    },
                    "include_nsfw": {
                        "type": "boolean",
                        "required": False,
                        "default": False,
                        "description": "Whether to include NSFW communities"
                    },
                    "min_confidence": {
                        "type": "float",
                        "required": False,
                        "default": 0.0,
                        "range": [0.0, 1.0],
                        "description": "Minimum confidence score threshold for results",
                        "guidance": {
                            "0.0-0.3": "Very inclusive, includes tangentially related communities",
                            "0.3-0.6": "Balanced, moderate relevance requirements",
                            "0.6-0.8": "Strict, only highly relevant communities",
                            "0.8-1.0": "Very strict, only exact semantic matches"
                        }
                    }
                },
                "returns": {
                    "subreddits": "Array with confidence scores (0-1) and match tiers",
                    "confidence_stats": "Distribution statistics (mean, median, min, max, std_dev)",
                    "tier_distribution": "Breakdown by match quality (exact, semantic, adjacent, peripheral)",
                    "quality_indicators": {
                        "good": "5+ subreddits with confidence > 0.7",
                        "moderate": "3-5 subreddits with confidence 0.5-0.7",
                        "poor": "All results below 0.5 confidence - refine search terms"
                    }
                },
                "notes": [
                    "Supports real-time progress reporting via context",
                    "Lower distances map to higher confidence scores",
                    "Generic subreddits (funny, pics, memes) are penalized unless directly searched",
                    "Batch mode returns results keyed by query for easy analysis"
                ],
                "examples": [] if not include_examples else [
                    {"query": "machine learning", "limit": 15},
                    {"query": "python web development", "limit": 10, "min_confidence": 0.6},
                    {"queries": ["machine learning", "deep learning", "neural networks"], "limit": 10},
                    {"queries": '["web framework", "api design"]', "include_nsfw": False, "min_confidence": 0.5}
                ]
            },
            "search_subreddit": {
                "description": "Search for posts within a specific subreddit",
                "parameters": {
                    "subreddit_name": {
                        "type": "string",
                        "required": True,
                        "description": "Exact subreddit name (without r/ prefix)",
                        "tip": "Use exact name from discover_subreddits"
                    },
                    "query": {"type": "string", "required": True, "description": "Search terms"},
                    "sort": {
                        "type": "enum",
                        "options": ["relevance", "hot", "top", "new"],
                        "default": "relevance",
                        "description": "How to sort results"
                    },
                    "time_filter": {
                        "type": "enum",
                        "options": ["all", "year", "month", "week", "day"],
                        "default": "all",
                        "description": "Time period for results"
                    },
                    "limit": {
                        "type": "integer",
                        "default": 10,
                        "range": [1, 100],
                        "description": "Maximum number of results"
                    }
                },
                "examples": [] if not include_examples else [
                    {"subreddit_name": "MachineLearning", "query": "transformers", "limit": 20},
                    {"subreddit_name": "Python", "query": "async", "sort": "top", "time_filter": "month"}
                ]
            },
            "fetch_posts": {
                "description": "Get posts from a single subreddit",
                "parameters": {
                    "subreddit_name": {
                        "type": "string",
                        "required": True,
                        "description": "Exact subreddit name (without r/ prefix)"
                    },
                    "listing_type": {
                        "type": "enum",
                        "options": ["hot", "new", "top", "rising"],
                        "default": "hot",
                        "description": "Type of posts to fetch"
                    },
                    "time_filter": {
                        "type": "enum",
                        "options": ["all", "year", "month", "week", "day"],
                        "default": None,
                        "description": "Time period (only for 'top' listing)"
                    },
                    "limit": {
                        "type": "integer",
                        "default": 10,
                        "range": [1, 100],
                        "description": "Number of posts to fetch"
                    }
                },
                "examples": [] if not include_examples else [
                    {"subreddit_name": "technology", "listing_type": "hot", "limit": 15},
                    {"subreddit_name": "science", "listing_type": "top", "time_filter": "week", "limit": 20}
                ]
            },
            "fetch_multiple": {
                "description": "Batch fetch from multiple subreddits efficiently",
                "parameters": {
                    "subreddit_names": {
                        "type": "array[string]",
                        "required": True,
                        "max_items": 10,
                        "description": "List of subreddit names (without r/ prefix)",
                        "tip": "Use names from discover_subreddits"
                    },
                    "listing_type": {
                        "type": "enum",
                        "options": ["hot", "new", "top", "rising"],
                        "default": "hot",
                        "description": "Type of posts to fetch"
                    },
                    "time_filter": {
                        "type": "enum",
                        "options": ["all", "year", "month", "week", "day"],
                        "default": None,
                        "description": "Time period (only for 'top' listing)"
                    },
                    "limit_per_subreddit": {
                        "type": "integer",
                        "default": 5,
                        "range": [1, 25],
                        "description": "Posts per subreddit"
                    }
                },
                "efficiency": {
                    "vs_individual": "70% fewer API calls",
                    "token_usage": "~500-1000 tokens per subreddit"
                },
                "examples": [] if not include_examples else [
                    {"subreddit_names": ["Python", "django", "flask"], "listing_type": "hot", "limit_per_subreddit": 5},
                    {"subreddit_names": ["MachineLearning", "deeplearning"], "listing_type": "top", "time_filter": "week", "limit_per_subreddit": 10}
                ]
            },
            "fetch_comments": {
                "description": "Get complete comment tree for a post",
                "parameters": {
                    "submission_id": {
                        "type": "string",
                        "required_one_of": ["submission_id", "url"],
                        "description": "Reddit post ID (e.g., '1abc234')"
                    },
                    "url": {
                        "type": "string",
                        "required_one_of": ["submission_id", "url"],
                        "description": "Full Reddit URL to the post"
                    },
                    "comment_limit": {
                        "type": "integer",
                        "default": 100,
                        "recommendation": "50-100 for analysis",
                        "description": "Maximum comments to fetch"
                    },
                    "comment_sort": {
                        "type": "enum",
                        "options": ["best", "top", "new"],
                        "default": "best",
                        "description": "How to sort comments"
                    }
                },
                "examples": [] if not include_examples else [
                    {"submission_id": "1abc234", "comment_limit": 100},
                    {"url": "https://reddit.com/r/Python/comments/xyz789/", "comment_limit": 50, "comment_sort": "top"}
                ]
            },
            "create_feed": {
                "description": "Create a new feed with analysis and selected subreddits",
                "parameters": {
                    "name": {"type": "string", "required": True, "description": "Name for the feed (1-255 chars)"},
                    "website_url": {
                        "type": "string",
                        "required": False,
                        "description": "URL of the website being analyzed (optional)"
                    },
                    "analysis": {
                        "type": "object",
                        "required": False,
                        "description": "Feed analysis data (optional)",
                        "properties": {
                            "description": "Description of topic/product/interest (10-1000 chars)",
                            "audience_personas": "Array of persona tags (1-10 items)",
                            "keywords": "Array of relevant keywords (1-50 items)"
                        }
                    },
                    "selected_subreddits": {
                        "type": "array[object]",
                        "required": True,
                        "min_items": 1,
                        "max_items": 50,
                        "description": "List of selected subreddits",
                        "item_properties": {
                            "name": "Subreddit name (1-100 chars)",
                            "description": "Subreddit description (max 1000 chars)",
                            "subscribers": "Number of subscribers (integer >= 0)",
                            "confidence_score": "Relevance score (0.0-1.0)"
                        }
                    }
                },
                "examples": [] if not include_examples else [
                    {
                        "name": "AI Research Feed",
                        "website_url": "https://example.com",
                        "analysis": {
                            "description": "AI-powered data analysis platform for businesses",
                            "audience_personas": ["data scientists", "business analysts", "ML engineers"],
                            "keywords": ["machine learning", "data analysis", "business intelligence"]
                        },
                        "selected_subreddits": [
                            {"name": "MachineLearning", "description": "ML community", "subscribers": 2500000, "confidence_score": 0.85},
                            {"name": "datascience", "description": "Data science discussions", "subscribers": 1200000, "confidence_score": 0.78}
                        ]
                    }
                ]
            },
            "list_feeds": {
                "description": "List all feeds for the authenticated user",
                "parameters": {
                    "limit": {
                        "type": "integer",
                        "required": False,
                        "default": 50,
                        "range": [1, 100],
                        "description": "Maximum number of feeds to return"
                    },
                    "offset": {
                        "type": "integer",
                        "required": False,
                        "default": 0,
                        "description": "Number of feeds to skip (for pagination)"
                    }
                },
                "examples": [] if not include_examples else [
                    {"limit": 10, "offset": 0},
                    {"limit": 25, "offset": 50}
                ]
            },
            "get_feed": {
                "description": "Get a specific feed by ID",
                "parameters": {
                    "feed_id": {"type": "string", "required": True, "description": "UUID of the feed to retrieve"}
                },
                "examples": [] if not include_examples else [
                    {"feed_id": "550e8400-e29b-41d4-a716-446655440000"}
                ]
            },
            "get_feed_config": {
                "description": "Get configuration for a feed (subreddit names, settings)",
                "parameters": {
                    "feed_id": {"type": "string", "required": True, "description": "UUID of the feed to get config for"}
                },
                "returns": {
                    "profile_id": "UUID of the feed",
                    "profile_name": "Name of the feed",
                    "subreddits": "Array of subreddit names (strings)",
                    "show_nsfw": "Whether NSFW content is enabled",
                    "has_subreddits": "Whether feed has any subreddits configured"
                },
                "examples": [] if not include_examples else [
                    {"feed_id": "550e8400-e29b-41d4-a716-446655440000"}
                ]
            },
            "update_feed": {
                "description": "Update an existing feed (partial update - only include fields to change)",
                "parameters": {
                    "feed_id": {"type": "string", "required": True, "description": "UUID of the feed to update"},
                    "name": {"type": "string", "required": False, "description": "New name for the feed (1-255 chars)"},
                    "website_url": {"type": "string", "required": False, "description": "Updated website URL"},
                    "analysis": {"type": "object", "required": False, "description": "Updated feed analysis data"},
                    "selected_subreddits": {
                        "type": "array[object]",
                        "required": False,
                        "description": "Updated list of selected subreddits"
                    }
                },
                "examples": [] if not include_examples else [
                    {"feed_id": "550e8400-e29b-41d4-a716-446655440000", "name": "Updated Feed Name"},
                    {
                        "feed_id": "550e8400-e29b-41d4-a716-446655440000",
                        "selected_subreddits": [
                            {"name": "Python", "description": "Python programming", "subscribers": 1500000, "confidence_score": 0.9}
                        ]
                    }
                ]
            },
            "delete_feed": {
                "description": "Delete a feed",
                "parameters": {
                    "feed_id": {"type": "string", "required": True, "description": "UUID of the feed to delete"}
                },
                "examples": [] if not include_examples else [
                    {"feed_id": "550e8400-e29b-41d4-a716-446655440000"}
                ]
            }
        }

        if operation_id not in schemas:
            return {
                "error": f"Unknown operation: {operation_id}",
                "available": list(schemas.keys()),
                "hint": "Use discover_operations() first"
            }

        return schemas[operation_id]
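Because the handler returns plain dictionaries, both code paths are easy to exercise: a known operation_id returns that operation's schema, while an unknown one returns a self-describing error payload instead of raising. A sketch, assuming the function can be called directly (FastMCP versions differ in whether @mcp.tool returns the original function):

    # Known operation: returns its schema, with examples included by default.
    schema = get_operation_schema("fetch_posts")
    assert schema["parameters"]["listing_type"]["default"] == "hot"

    # include_examples=False trims the payload to save tokens.
    lean = get_operation_schema("fetch_posts", include_examples=False)
    assert lean["examples"] == []

    # Unknown operation: an error dict listing the valid IDs, not an exception.
    err = get_operation_schema("no_such_operation")
    assert err["error"] == "Unknown operation: no_such_operation"
    assert "discover_subreddits" in err["available"]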
  • src/server.py:213-216 (registration)
    MCP tool registration decorator for get_operation_schema, defining its description and read-only hint.
    @mcp.tool(
        description="Get detailed requirements and parameters for a Reddit operation",
        annotations={"readOnlyHint": True}
    )
  • Documentation and examples referencing get_operation_schema in the server-info resource, explaining its role in the three-layer architecture.
    "Layer 2: get_operation_schema(operation_id) - Get requirements", "Layer 3: execute_operation(operation_id, parameters) - Execute" ], "description": "ALWAYS start with Layer 1, then Layer 2, then Layer 3" }, "tools": [ { "name": "discover_operations", "layer": 1, "description": "Discover available Reddit operations", "parameters": "NONE - Call without any parameters: discover_operations() NOT discover_operations({})", "purpose": "Shows all available operations and recommended workflows" }, { "name": "get_operation_schema", "layer": 2, "description": "Get parameter requirements for an operation", "parameters": { "operation_id": "The operation to get schema for (from Layer 1)", "include_examples": "Whether to include examples (optional, default: true)" }, "purpose": "Provides parameter schemas, validation rules, and examples" }, { "name": "execute_operation", "layer": 3, "description": "Execute a Reddit operation", "parameters": { "operation_id": "The operation to execute", "parameters": "Parameters matching the schema from Layer 2" }, "purpose": "Actually performs the Reddit API calls" } ], "prompts": [ { "name": "reddit_research", "description": "Conduct comprehensive Reddit research on any topic or question", "parameters": { "research_request": "Natural language description of what to research (e.g., 'How do people feel about remote work?')" }, "returns": "Structured workflow guiding complete research process", "output": "Comprehensive markdown report with citations and metrics", "usage": "Select prompt, provide research question, receive guided workflow" } ], "available_operations": { "discover_subreddits": "Find communities using semantic vector search (20,000+ indexed)", "search_subreddit": "Search within a specific community", "fetch_posts": "Get posts from one subreddit", "fetch_multiple": "Batch fetch from multiple subreddits (70% more efficient)", "fetch_comments": "Get complete comment tree for deep analysis" }, "advanced_configuration": { "description": "Fine-tune search behavior with SearchConfig for power users", "searchconfig_parameters": { "min_confidence": "Filter results by confidence threshold (0.0-1.0). 
Higher values return only highly relevant communities", "EXACT_DISTANCE_THRESHOLD": "Distance threshold for 'exact' match tier (default: 0.2)", "SEMANTIC_DISTANCE_THRESHOLD": "Distance threshold for 'semantic' match tier (default: 0.35)", "GENERIC_PENALTY_MULTIPLIER": "Penalty applied to generic subreddits like 'funny', 'pics', 'memes' (default: 0.3)", "LARGE_SUB_THRESHOLD": "Subscriber count above which to apply boost (default: 1,000,000)", "LARGE_SUB_BOOST_MULTIPLIER": "Confidence boost for large subreddits (default: 1.1)", "CONFIDENCE_DISTANCE_BREAKPOINTS": "Custom distance-to-confidence mapping for advanced tuning" }, "usage": "Import SearchConfig from src.tools for programmatic customization", "example": "custom_config = SearchConfig(GENERIC_PENALTY_MULTIPLIER=0.1, min_confidence=0.6)", "typical_use_cases": [ "Stricter filtering: Increase min_confidence to 0.7+ for only highly relevant communities", "Broader search: Decrease GENERIC_PENALTY_MULTIPLIER to find more generic community overlaps", "Niche communities: Increase SMALL_SUB_PENALTY_MULTIPLIER to 1.0 to weight niche subs equally", "Semantic tuning: Adjust CONFIDENCE_DISTANCE_BREAKPOINTS for different distance distributions" ] }, "resources": [ { "uri": "reddit://server-info", "description": "Comprehensive server capabilities, version, and usage information", "cacheable": False, "always_current": True } ], "statistics": { "total_tools": 3, "total_prompts": 1, "total_operations": 5, "total_resources": 1, "indexed_subreddits": "20,000+" } }, "usage_examples": { "automated_research": { "description": "Use the reddit_research prompt for complete automated workflow", "steps": [ "1. Select the 'reddit_research' prompt in your MCP client", "2. Provide your research question: 'What are the best practices for React development?'", "3. The prompt guides the LLM through discovery, gathering, analysis, and reporting", "4. Receive comprehensive markdown report with citations" ] }, "manual_workflow": { "description": "Step-by-step manual research using the three-layer architecture", "steps": [ "1. discover_operations() - See what's available", "2. get_operation_schema('discover_subreddits') - Get requirements", "3. execute_operation('discover_subreddits', {'query': 'machine learning', 'limit': 15})", "4. get_operation_schema('fetch_multiple') - Get batch fetch requirements", "5. execute_operation('fetch_multiple', {'subreddit_names': [...], 'limit_per_subreddit': 10})", "6. get_operation_schema('fetch_comments') - Get comment requirements", "7. execute_operation('fetch_comments', {'submission_id': 'abc123', 'comment_limit': 100})" ] }, "targeted_search": { "description": "Find specific content in known communities", "steps": [ "1. discover_operations()", "2. get_operation_schema('search_subreddit')", "3. 
execute_operation('search_subreddit', {'subreddit_name': 'Python', 'query': 'async', 'limit': 20})" ] } }, "performance_tips": [ "Use the reddit_research prompt for automated comprehensive research", "Always follow the three-layer workflow for manual operations", "Use fetch_multiple for 2+ subreddits (70% fewer API calls)", "Single semantic search finds all relevant communities", "Use confidence scores to guide strategy (>0.7 = high confidence)", "Expect ~15-20K tokens for comprehensive research" ], "workflow_guidance": { "confidence_based_strategy": { "high_confidence": "Scores > 0.7: Focus on top 5-8 subreddits", "medium_confidence": "Scores 0.4-0.7: Cast wider net with 10-12 subreddits", "low_confidence": "Scores < 0.4: Refine search terms and retry" }, "research_depth": { "minimum_coverage": "10+ threads, 100+ comments, 3+ subreddits", "quality_thresholds": "Posts: 5+ upvotes, Comments: 2+ upvotes", "author_credibility": "Prioritize 100+ karma for key insights" }, "token_optimization": { "discover_subreddits": "~1-2K tokens for semantic search", "fetch_multiple": "~500-1000 tokens per subreddit", "fetch_comments": "~2-5K tokens per post with comments", "full_research": "~15-20K tokens for comprehensive analysis" } }, "rate_limiting": { "handler": "PRAW automatic rate limit handling", "strategy": "Exponential backoff with retry", "current_status": rate_limit_info }, "authentication": { "type": "Application-only OAuth", "scope": "Read-only access", "capabilities": "Search, browse, and read public content" }, "support": { "repository": "https://github.com/king-of-the-grackles/reddit-research-mcp", "issues": "https://github.com/king-of-the-grackles/reddit-research-mcp/issues", "documentation": "See README.md and specs/ directory for architecture details" } }
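On the client side, the manual workflow above reduces to three call_tool invocations. A sketch reusing a ClientSession like the one in the earlier example; the helper name and parameter values are illustrative:

    async def manual_research(session) -> None:
        # Layer 1: list operations; called with no arguments, per the guidance above.
        ops = await session.call_tool("discover_operations")

        # Layer 2: fetch requirements for the operation you plan to run.
        schema = await session.call_tool(
            "get_operation_schema",
            arguments={"operation_id": "discover_subreddits"},
        )

        # Layer 3: execute with parameters that satisfy that schema.
        found = await session.call_tool(
            "execute_operation",
            arguments={
                "operation_id": "discover_subreddits",
                "parameters": {"query": "machine learning", "limit": 15, "min_confidence": 0.6},
            },
        )
        print(ops.content, schema.content, found.content)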


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/king-of-the-grackles/reddit-mcp-poc'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.