
WSB Analyst MCP Server

get_external_links

Extract external links from top WallStreetBets posts to identify shared resources for market analysis. Filter by upvote score and comment count to focus on relevant content.

Instructions

Get all external links from top WSB posts.

Args:
    min_score: Minimum score (upvotes) required
    min_comments: Minimum number of comments required
    limit: Maximum number of posts to scan

Returns:
    Dictionary with all unique external links found

Input Schema

Name          Required  Description                          Default
min_score     No        Minimum score (upvotes) required     100
min_comments  No        Minimum number of comments required  10
limit         No        Maximum number of posts to scan      10
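
All three parameters are optional; the defaults (100, 10, and 10) come from the handler signature shown below. A minimal sketch of invoking the tool with the official MCP Python SDK over stdio follows; the launch command python server.py is an assumption about how this server is started:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Assumption: the server is launched locally as `python server.py`.
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Pass only the arguments you want to override.
                result = await session.call_tool(
                    "get_external_links",
                    {"min_score": 200, "min_comments": 25, "limit": 5},
                )
                print(result.content)

    asyncio.run(main())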

Implementation Reference

  • The handler function, decorated with @mcp.tool(), implements the get_external_links tool. It fetches top WSB posts via find_top_posts, retrieves detailed post data (including extracted external links) via fetch_batch_post_details, collects and deduplicates the unique links found across posts and comments, and returns them sorted. A stub-based sketch of this flow appears after this list.
    @mcp.tool()
    async def get_external_links(
        min_score: int = 100,
        min_comments: int = 10,
        limit: int = 10,
        ctx: Context = None,
    ) -> dict:
        """
        Get all external links from top WSB posts.

        Args:
            min_score: Minimum score (upvotes) required
            min_comments: Minimum number of comments required
            limit: Maximum number of posts to scan

        Returns:
            Dictionary with all unique external links found
        """
        if ctx:
            await ctx.report_progress(0, 3)

        # Get filtered posts
        posts_result = await find_top_posts(min_score, min_comments, limit)
        if "error" in posts_result:
            return {"error": posts_result["error"]}
        if len(posts_result["posts"]) == 0:
            return {"count": 0, "links": []}

        # Collect post IDs
        post_ids = [post["id"] for post in posts_result["posts"]]
        if ctx:
            await ctx.report_progress(1, 3)

        # Get details for all posts
        details_result = await fetch_batch_post_details(post_ids)
        if "error" in details_result:
            return {"error": details_result["error"]}

        # Extract all links
        all_links = []
        for post_id, post_detail in details_result["posts"].items():
            if "extracted_links" in post_detail:
                all_links.extend(post_detail["extracted_links"])
        if ctx:
            await ctx.report_progress(2, 3)

        # Remove duplicates and sort
        unique_links = sorted(set(all_links))
        if ctx:
            await ctx.report_progress(3, 3)

        return {
            "count": len(unique_links),
            "links": unique_links,
        }
  • Helper function that extracts candidate URLs from text with a regex whose negative lookahead skips Reddit-hosted domains, then filters the matches with is_valid_external_link. A short demo of both helpers follows this list.
    def extract_valid_links(text: str) -> list[str]:
        if not text:
            return []
        url_pattern = re.compile(
            r'https?://(?!(?:www\.)?reddit\.com|(?:www\.)?i\.redd\.it|(?:www\.)?v\.redd\.it|(?:www\.)?imgur\.com|'
            r'(?:www\.)?preview\.redd\.it|(?:www\.)?sh\.reddit\.com|[^.]*\.reddit\.com)'
            r'[^\s)\]}"\']+',
            re.IGNORECASE
        )
        links = url_pattern.findall(text)
        return [link for link in links if is_valid_external_link(link)]
  • Helper function that decides whether a URL is a valid external link by checking it against a list of excluded domains such as Reddit, Imgur, and Discord.
    def is_valid_external_link(url: str) -> bool:
        excluded_domains = [
            "reddit.com",
            "redd.it",
            "imgur.com",
            "gfycat.com",
            "redgifs.com",
            "giphy.com",
            "imgflip.com",
            "youtu.be",
            "discord.gg",
        ]
        if any(domain in url for domain in excluded_domains):
            return False
        return True
  • server.py:464 (registration)
    The @mcp.tool() decorator registers the get_external_links function as an MCP tool.
    @mcp.tool()
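
The two helpers the handler awaits, find_top_posts and fetch_batch_post_details, live elsewhere in server.py and are not reproduced in full here. As an illustrative sketch only, the control flow can be exercised by stubbing them with the response shapes the handler expects. The post IDs and links below are made up, and the handler is assumed to be pasted into the same module as the stubs (FastMCP's tool() decorator returns the original function, so it stays directly callable):

    import asyncio

    # Hypothetical stubs; the real helpers in server.py query Reddit.
    async def find_top_posts(min_score: int, min_comments: int, limit: int) -> dict:
        return {"posts": [{"id": "p1"}, {"id": "p2"}]}

    async def fetch_batch_post_details(post_ids: list[str]) -> dict:
        return {
            "posts": {
                "p1": {"extracted_links": ["https://www.sec.gov/filing"]},
                "p2": {"extracted_links": [
                    "https://www.sec.gov/filing",
                    "https://finance.example.com/gme",
                ]},
            }
        }

    # The duplicate link appears once, and the result is sorted:
    print(asyncio.run(get_external_links(min_score=50, min_comments=5, limit=2)))
    # {'count': 2, 'links': ['https://finance.example.com/gme',
    #                        'https://www.sec.gov/filing']}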
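
Likewise, assuming import re and the two link helpers above are in scope, a quick check of the two filtering layers: the regex lookahead refuses to match Reddit-hosted URLs at all, while is_valid_external_link rejects the remaining excluded hosts after matching.

    sample = (
        "DD source: https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany "
        "screenshot https://i.redd.it/abc123.png "
        "clip https://youtu.be/dQw4w9WgXcQ"
    )
    print(extract_valid_links(sample))
    # ['https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany']
    # i.redd.it is blocked by the lookahead; youtu.be matches the regex
    # but is then dropped by is_valid_external_link.

    # Note the substring check: a URL that merely contains an excluded
    # domain string anywhere is rejected too.
    print(is_valid_external_link("https://example.com/why-reddit.com-matters"))
    # False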


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ferdousbhai/wsb-analyst-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.