WSB Analyst MCP Server

fetch_post_details

Retrieves detailed information about a WallStreetBets post, including top comments and extracted external links, for market analysis. Results are cached for 5 minutes.

Instructions

Fetch detailed information about a specific WSB post including top comments. Caches results for 5 minutes.

Args:
    post_id: Reddit post ID

Returns:
    Detailed post data including comments and extracted links

Input Schema

Name     Required  Description     Default
post_id  Yes       Reddit post ID  (none)
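Over MCP, the tool is invoked with a standard JSON-RPC `tools/call` request. A minimal sketch of the payload, where the post ID `abc123` is a placeholder, not a real submission:

```python
import json

# Hypothetical JSON-RPC 2.0 payload for calling fetch_post_details over MCP.
# "abc123" is a placeholder post ID, not a real Reddit submission.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fetch_post_details",
        "arguments": {"post_id": "abc123"},
    },
}

print(json.dumps(request, indent=2))
```

The server's response carries the dict returned by the handler (or the `{"error": ...}` dict on failure) in the tool-call result.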

Implementation Reference

  • server.py:212 (registration)
    Registers the fetch_post_details tool using the @mcp.tool() decorator, which handles schema inference from type hints and docstring.
    @mcp.tool()
  • The core handler function for the fetch_post_details tool. Fetches a Reddit submission by post_id, loads its top comments, extracts valid external links from the post content and comments, implements caching with a 5-minute TTL, and returns structured post details.
    async def fetch_post_details(post_id: str, ctx: Context = None) -> dict:
        """
        Fetch detailed information about a specific WSB post including top comments. Caches results for 5 minutes.
    
        Args:
            post_id: Reddit post ID
    
        Returns:
            Detailed post data including comments and extracted links
        """
        # --- Cache Check ---
        cache_key = f"fetch_post_details:{post_id}"
        current_time = time.time()
        if cache_key in CACHE_DATA and current_time < CACHE_EXPIRY.get(cache_key, 0):
            logger.info(f"Cache hit for {cache_key}")
            return CACHE_DATA[cache_key]
        logger.info(f"Cache miss for {cache_key}")
        # --- End Cache Check ---
    
        try:
            if ctx:
                await ctx.report_progress(0, 3)
    
            reddit = await get_reddit_client()
            if not reddit:
                return {"error": "Unable to connect to Reddit API. Check your credentials."}
    
            try:
                if ctx:
                    await ctx.report_progress(1, 3)
    
                submission = await reddit.submission(id=post_id)
    
                # Load comments
                if ctx:
                    await ctx.report_progress(2, 3)
    
                await submission.comments.replace_more(limit=0)
                comments = await submission.comments.list()
                top_comments = sorted(comments, key=lambda c: c.score, reverse=True)[:10]
    
                # Extract links
                content_links = []
                if not submission.is_self and is_valid_external_link(submission.url):
                    content_links.append(submission.url)
                elif submission.is_self:
                    content_links.extend(extract_valid_links(submission.selftext))
    
                # Process comments
                comment_links = []
                comment_data = []
                for comment in top_comments:
                    try:
                        author_name = comment.author.name if comment.author else "[deleted]"
                        links_in_comment = extract_valid_links(comment.body)
                        if links_in_comment:
                            comment_links.extend(links_in_comment)
    
                        comment_data.append({
                            "content": comment.body,
                            "score": comment.score,
                            "author": author_name
                        })
                    except Exception as e:
                        logger.warning(f"Error processing comment: {str(e)}")
    
                # Combine all found links
                all_links = list(set(content_links + comment_links))
    
                result = {
                    "post_id": post_id,
                    "url": f"https://www.reddit.com{submission.permalink}",
                    "title": submission.title,
                    "selftext": submission.selftext if submission.is_self else "",
                    "upvote_ratio": submission.upvote_ratio,
                    "score": submission.score,
                    "link_flair_text": submission.link_flair_text or "",
                    "top_comments": comment_data,
                    "extracted_links": all_links
                }
    
                # --- Cache Store ---
                CACHE_DATA[cache_key] = result
                CACHE_EXPIRY[cache_key] = current_time + CACHE_TTL
                logger.info(f"Cached result for {cache_key} with TTL {CACHE_TTL}s")
                # --- End Cache Store ---
    
                if ctx:
                    await ctx.report_progress(3, 3)
    
                return result
            finally:
                await reddit.close()
        except Exception as e:
            logger.error(f"Error in fetch_post_details: {str(e)}")
            return {"error": f"Failed to fetch post details: {str(e)}"}
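The handler relies on several module-level helpers that are not shown on this page: the cache state (`CACHE_DATA`, `CACHE_EXPIRY`, `CACHE_TTL`) and the link filters (`is_valid_external_link`, `extract_valid_links`). A minimal sketch of how they might be defined, assuming a 5-minute TTL, a simple URL regex, and that "external" means "not pointing back at Reddit" — the actual server.py implementation may differ:

```python
import re

# Module-level cache state assumed by the handler (sketch, not the actual code).
CACHE_DATA: dict = {}
CACHE_EXPIRY: dict = {}
CACHE_TTL = 300  # seconds; matches the documented 5-minute TTL

# Hosts treated as Reddit-internal and therefore excluded (assumption).
_INTERNAL_HOSTS = ("reddit.com", "redd.it")

# Matches http(s) URLs up to whitespace or a closing bracket/paren.
_URL_RE = re.compile(r"https?://[^\s)\]]+")


def is_valid_external_link(url: str) -> bool:
    """Return True for http(s) URLs that do not point back at Reddit."""
    if not url.startswith(("http://", "https://")):
        return False
    return not any(host in url for host in _INTERNAL_HOSTS)


def extract_valid_links(text: str) -> list[str]:
    """Pull every external http(s) URL out of a block of text."""
    return [u for u in _URL_RE.findall(text or "") if is_valid_external_link(u)]
```

Keeping the cache as two plain dicts keyed by `"fetch_post_details:{post_id}"` means expired entries are simply overwritten on the next miss, which is adequate for a single-process server.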

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ferdousbhai/wsb-analyst-mcp'
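The same endpoint can be queried from Python. A minimal sketch using only the standard library — the request is built but not sent here, and no authentication is assumed since none is shown above:

```python
from urllib.request import Request

# Build (but do not send) a GET request for this server's directory entry.
SERVER_SLUG = "ferdousbhai/wsb-analyst-mcp"
url = f"https://glama.ai/api/mcp/v1/servers/{SERVER_SLUG}"
req = Request(url, method="GET")

print(req.full_url)
```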
