fetch_post_details
Retrieve detailed WallStreetBets post data, including top comments and extracted links, for a given Reddit post ID. Results are cached for five minutes to avoid repeated Reddit API calls during real-time market analysis.
Instructions
Fetch detailed information about a specific WSB post including top comments. Caches results for 5 minutes.
Args:
post_id: Reddit post ID
Returns:
Detailed post data including comments and extracted links
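For reference, the successful response mirrors the `result` dictionary built in the handler shown under Implementation Reference below. The sketch that follows only illustrates the shape of that payload; every value (post ID, title, scores, URLs) is a made-up placeholder, not real Reddit data.

```python
# Illustrative shape of a successful fetch_post_details response.
# All values below are placeholders for documentation purposes.
example_response = {
    "post_id": "1abcde",
    "url": "https://www.reddit.com/r/wallstreetbets/comments/1abcde/example_post/",
    "title": "Example DD post title",
    "selftext": "Body text for self posts; empty string for link posts",
    "upvote_ratio": 0.92,
    "score": 1543,
    "link_flair_text": "DD",
    "top_comments": [
        {"content": "Example comment", "score": 87, "author": "example_user"},
    ],
    "extracted_links": ["https://example.com/earnings-report"],
}
# On failure the tool returns a dict of the form {"error": "<reason>"} instead.
```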
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| post_id | Yes | Reddit post ID of the WSB post to fetch | — |
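As a usage sketch, the tool can be invoked from the official MCP Python SDK client over stdio. The launch command (`python server.py`) and the post ID `"1abcde"` are placeholder assumptions; adjust them to however the WSB server is actually started.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Placeholder launch command; point this at the actual server entry point.
    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # "1abcde" is a made-up post ID for illustration.
            result = await session.call_tool(
                "fetch_post_details", arguments={"post_id": "1abcde"}
            )
            print(result.content)

asyncio.run(main())
```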
Implementation Reference
- server.py:213-308 (handler) — The core handler function for the `fetch_post_details` MCP tool, registered via the `@mcp.tool()` decorator. It retrieves detailed information for a Reddit post by ID, including title, selftext, top comments, and external links extracted from the post and its comments, with caching logic and progress reporting.

```python
async def fetch_post_details(post_id: str, ctx: Context = None) -> dict:
    """
    Fetch detailed information about a specific WSB post including top comments.
    Caches results for 5 minutes.

    Args:
        post_id: Reddit post ID

    Returns:
        Detailed post data including comments and extracted links
    """
    # --- Cache Check ---
    cache_key = f"fetch_post_details:{post_id}"
    current_time = time.time()
    if cache_key in CACHE_DATA and current_time < CACHE_EXPIRY.get(cache_key, 0):
        logger.info(f"Cache hit for {cache_key}")
        return CACHE_DATA[cache_key]
    logger.info(f"Cache miss for {cache_key}")
    # --- End Cache Check ---

    try:
        if ctx:
            await ctx.report_progress(0, 3)

        reddit = await get_reddit_client()
        if not reddit:
            return {"error": "Unable to connect to Reddit API. Check your credentials."}

        try:
            if ctx:
                await ctx.report_progress(1, 3)

            submission = await reddit.submission(id=post_id)

            # Load comments
            if ctx:
                await ctx.report_progress(2, 3)
            await submission.comments.replace_more(limit=0)
            comments = await submission.comments.list()
            top_comments = sorted(comments, key=lambda c: c.score, reverse=True)[:10]

            # Extract links
            content_links = []
            if not submission.is_self and is_valid_external_link(submission.url):
                content_links.append(submission.url)
            elif submission.is_self:
                content_links.extend(extract_valid_links(submission.selftext))

            # Process comments
            comment_links = []
            comment_data = []
            for comment in top_comments:
                try:
                    author_name = comment.author.name if comment.author else "[deleted]"
                    links_in_comment = extract_valid_links(comment.body)
                    if links_in_comment:
                        comment_links.extend(links_in_comment)
                    comment_data.append({
                        "content": comment.body,
                        "score": comment.score,
                        "author": author_name
                    })
                except Exception as e:
                    logger.warning(f"Error processing comment: {str(e)}")

            # Combine all found links
            all_links = list(set(content_links + comment_links))

            result = {
                "post_id": post_id,
                "url": f"https://www.reddit.com{submission.permalink}",
                "title": submission.title,
                "selftext": submission.selftext if submission.is_self else "",
                "upvote_ratio": submission.upvote_ratio,
                "score": submission.score,
                "link_flair_text": submission.link_flair_text or "",
                "top_comments": comment_data,
                "extracted_links": all_links
            }

            # --- Cache Store ---
            CACHE_DATA[cache_key] = result
            CACHE_EXPIRY[cache_key] = current_time + CACHE_TTL
            logger.info(f"Cached result for {cache_key} with TTL {CACHE_TTL}s")
            # --- End Cache Store ---

            if ctx:
                await ctx.report_progress(3, 3)

            return result
        finally:
            await reddit.close()
    except Exception as e:
        logger.error(f"Error in fetch_post_details: {str(e)}")
        return {"error": f"Failed to fetch post details: {str(e)}"}
```
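The handler references several module-level names that are defined elsewhere in server.py and are not shown here: the cache globals (`CACHE_DATA`, `CACHE_EXPIRY`, `CACHE_TTL`) and the link helpers (`is_valid_external_link`, `extract_valid_links`). The bodies below are a minimal illustrative sketch of what such helpers could look like, not the actual implementation.

```python
# Sketch only: these names appear in server.py, but their real bodies are not
# shown in this reference. The definitions below are illustrative assumptions.
import re

CACHE_TTL = 300           # seconds; matches the documented 5-minute cache
CACHE_DATA: dict = {}     # cache_key -> cached result dict
CACHE_EXPIRY: dict = {}   # cache_key -> absolute expiry timestamp

URL_PATTERN = re.compile(r"https?://[^\s)\]]+")

def is_valid_external_link(url: str) -> bool:
    """Treat any non-Reddit URL as an external link worth surfacing."""
    return bool(url) and "reddit.com" not in url and "redd.it" not in url

def extract_valid_links(text: str) -> list[str]:
    """Pull URLs out of free-form text and keep only external ones."""
    return [u for u in URL_PATTERN.findall(text or "") if is_valid_external_link(u)]
```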