
WSB Analyst MCP Server

get_external_links

Extracts all external links from top r/WallStreetBets posts that meet configurable score and comment thresholds, scanning up to a specified number of posts, for market analysis and research.

Instructions

Get all external links from top WSB posts.

Args:
    min_score: Minimum score (upvotes) required
    min_comments: Minimum number of comments required
    limit: Maximum number of posts to scan

Returns:
    Dictionary with all unique external links found

Input Schema

| Name         | Required | Description                         | Default |
| ------------ | -------- | ----------------------------------- | ------- |
| limit        | No       | Maximum number of posts to scan     | 10      |
| min_comments | No       | Minimum number of comments required | 10      |
| min_score    | No       | Minimum score (upvotes) required    | 100     |
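
All three parameters are optional; when omitted, the defaults above apply. As an illustration only (this snippet is not part of the server), a Python MCP client could call the tool over stdio roughly like this; the launch command is a placeholder for however you run wsb-analyst-mcp locally:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Placeholder launch command: substitute the actual command used
        # to start the wsb-analyst-mcp server on your machine.
        params = StdioServerParameters(command="uvx", args=["wsb-analyst-mcp"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "get_external_links",
                    {"min_score": 200, "min_comments": 25, "limit": 20},
                )
                print(result.content)

    asyncio.run(main())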

Implementation Reference

  • The handler function for the 'get_external_links' tool. It fetches top WSB posts, retrieves their details (including external links extracted from post bodies and comments), aggregates the unique links, and returns them sorted. The @mcp.tool() decorator registers the function as an MCP tool.
    @mcp.tool()
    async def get_external_links(
        min_score: int = 100,
        min_comments: int = 10,
        limit: int = 10,
        ctx: Context = None,
    ) -> dict:
        """
        Get all external links from top WSB posts.

        Args:
            min_score: Minimum score (upvotes) required
            min_comments: Minimum number of comments required
            limit: Maximum number of posts to scan

        Returns:
            Dictionary with all unique external links found
        """
        if ctx:
            await ctx.report_progress(0, 3)

        # Get filtered posts
        posts_result = await find_top_posts(min_score, min_comments, limit)
        if "error" in posts_result:
            return {"error": posts_result["error"]}
        if len(posts_result["posts"]) == 0:
            return {"count": 0, "links": []}

        # Collect post IDs
        post_ids = [post["id"] for post in posts_result["posts"]]
        if ctx:
            await ctx.report_progress(1, 3)

        # Get details for all posts
        details_result = await fetch_batch_post_details(post_ids)
        if "error" in details_result:
            return {"error": details_result["error"]}

        # Extract all links
        all_links = []
        for post_id, post_detail in details_result["posts"].items():
            if "extracted_links" in post_detail:
                all_links.extend(post_detail["extracted_links"])
        if ctx:
            await ctx.report_progress(2, 3)

        # Remove duplicates and sort
        unique_links = sorted(list(set(all_links)))
        if ctx:
            await ctx.report_progress(3, 3)

        return {
            "count": len(unique_links),
            "links": unique_links
        }
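
    find_top_posts and fetch_batch_post_details are separate helpers in the server and are not shown on this page. Going only by how the handler consumes their output, a hypothetical stand-in for fetch_batch_post_details would return a shape like this (values are illustrative, not the real implementation):

    # Hypothetical stand-in that mirrors only the shape the handler reads:
    # a "posts" mapping of post IDs to detail dicts with "extracted_links".
    async def fetch_batch_post_details(post_ids: list[str]) -> dict:
        return {
            "posts": {
                post_id: {"extracted_links": ["https://example.com/dd-article"]}
                for post_id in post_ids
            }
        }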
  • Helper used indirectly (via fetch_post_details) to extract valid external links from post and comment text, filtering out Reddit and other excluded domains via a regex plus is_valid_external_link.
    def extract_valid_links(text: str) -> list[str]:
        if not text:
            return []
        url_pattern = re.compile(
            r'https?://(?!(?:www\.)?reddit\.com|(?:www\.)?i\.redd\.it|(?:www\.)?v\.redd\.it|(?:www\.)?imgur\.com|'
            r'(?:www\.)?preview\.redd\.it|(?:www\.)?sh\.reddit\.com|[^.]*\.reddit\.com)'
            r'[^\s)\]}"\']+',
            re.IGNORECASE
        )
        links = url_pattern.findall(text)
        return [link for link in links if is_valid_external_link(link)]
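
    A quick usage sketch (assuming both helpers on this page are defined in scope, with re imported) shows Reddit-hosted and image-host links being dropped while other external links survive:

    # Assumes extract_valid_links and is_valid_external_link from this page
    # are defined in the current module, with `import re` at the top.
    sample = (
        "DD here: https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany "
        "chart: https://i.redd.it/abc123.png meme: https://imgur.com/xyz"
    )
    print(extract_valid_links(sample))
    # -> ['https://www.sec.gov/cgi-bin/browse-edgar?action=getcompany']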
  • Helper that checks whether a link is external by excluding common internal/media domains. Used by extract_valid_links.
    def is_valid_external_link(url: str) -> bool:
        excluded_domains = [
            "reddit.com",
            "redd.it",
            "imgur.com",
            "gfycat.com",
            "redgifs.com",
            "giphy.com",
            "imgflip.com",
            "youtu.be",
            "discord.gg",
        ]
        if any(domain in url for domain in excluded_domains):
            return False
        return True
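
    Note that this is a substring check over the whole URL, so a legitimate link whose path or query string merely mentions an excluded domain (e.g. an article at https://example.com/imgur.com-history) would also be rejected. A stricter, hostname-based variant (a sketch, not part of the server) could look like:

    from urllib.parse import urlparse

    EXCLUDED_DOMAINS = {
        "reddit.com", "redd.it", "imgur.com", "gfycat.com",
        "redgifs.com", "giphy.com", "imgflip.com", "youtu.be", "discord.gg",
    }

    def is_valid_external_link_by_host(url: str) -> bool:
        # Hypothetical variant: compare the parsed hostname against the
        # exclusion list instead of substring-matching the full URL.
        host = (urlparse(url).hostname or "").lower()
        return not any(host == d or host.endswith("." + d) for d in EXCLUDED_DOMAINS)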


MCP directory API

We provide all the information about MCP servers via our MCP API. For example, this request fetches the directory entry for this server:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ferdousbhai/wsb-analyst-mcp'
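
The same request as a minimal Python sketch (standard library only; assumes the endpoint returns JSON):

    import json
    import urllib.request

    URL = "https://glama.ai/api/mcp/v1/servers/ferdousbhai/wsb-analyst-mcp"

    # Fetch this server's directory entry; assumes a JSON response body.
    with urllib.request.urlopen(URL) as resp:
        print(json.load(resp))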

If you have feedback or need assistance with the MCP directory API, please join our Discord server.