Skip to main content
Glama
Arindam200

Reddit MCP Server

get_submission_by_id

Retrieve detailed information about a specific Reddit post using its ID or URL, including optional comments and metadata for analysis.

Instructions

Get a Reddit submission by its ID.

Args:
    submission_id: The ID of the Reddit submission to retrieve (can be full URL or just ID)
    include_comments: If True, load and return the full comment forest for the post
    comment_replace_more_limit: Limit for replacing "MoreComments" objects (0 for none, None for all)

Returns:
    Dictionary containing structured submission information with the following structure:
    {
        'id': str,  # Submission ID (e.g., 'abc123')
        'title': str,  # Submission title
        'author': str,  # Author's username or '[deleted]' if deleted
        'subreddit': str,  # Subreddit name
        'score': int,  # Post score (upvotes - downvotes)
        'upvote_ratio': float,  # Ratio of upvotes to total votes
        'num_comments': int,  # Number of comments
        'created_utc': float,  # Post creation timestamp (UTC)
        'url': str,  # Full URL to the post
        'permalink': str,  # Relative URL to the post
        'is_self': bool,  # Whether it's a self (text) post
        'selftext': str,  # Content of self post (if any)
        'selftext_html': Optional[str],  # HTML formatted content
        'link_url': str,  # URL for link posts (if any)
        'domain': str,  # Domain of the linked content
        'over_18': bool,  # Whether marked as NSFW
        'spoiler': bool,  # Whether marked as spoiler
        'stickied': bool,  # Whether stickied in the subreddit
        'locked': bool,  # Whether comments are locked
        'archived': bool,  # Whether the post is archived
        'distinguished': Optional[str],  # Distinguishing type (e.g., 'moderator')
        'flair': Optional[Dict],  # Post flair information if any
        'media': Optional[Dict],  # Media information if any
        'preview': Optional[Dict],  # Preview information if available
        'awards': List[Dict],  # List of awards received
        'comments': Optional[List[Dict]],  # present if include_comments is True
        'metadata': {
            'fetched_at': float,  # Timestamp when data was fetched
            'subreddit_id': str,  # Subreddit full ID
            'author_id': str,  # Author's full ID if available
            'is_original_content': bool,  # Whether marked as OC
            'is_meta': bool,  # Whether marked as meta
            'is_crosspostable': bool,  # Whether can be crossposted
            'is_reddit_media_domain': bool,  # Whether media is hosted on Reddit
            'is_robot_indexable': bool,  # Whether search engines should index
            'is_created_from_ads_ui': bool,  # Whether created via ads UI
            'is_video': bool,  # Whether the post is a video
            'pinned': bool,  # Whether the post is pinned in the subreddit
            'gilded': int,  # Number of times gilded
            'total_awards_received': int,  # Total number of awards received
            'view_count': Optional[int],  # View count if available
            'visited': bool,  # Whether the current user has visited
        }
    }

Raises:
    ValueError: If submission_id is invalid or submission not found
    RuntimeError: For other errors during the operation

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
submission_idYes
include_commentsNo
comment_replace_more_limitNo

Implementation Reference

  • Handler function for the 'get_submission_by_id' tool. Retrieves a Reddit submission (post) by its ID, extracts and formats detailed information including metadata, optional comments tree, awards, and flair. Uses PRAW client to fetch the submission object and helper functions like _extract_reddit_id and _serialize_comment_tree.
    @mcp.tool()
    def get_submission_by_id(submission_id: str, include_comments: bool = False, comment_replace_more_limit: int = 0) -> Dict[str, Any]:
        """Get a Reddit submission by its ID.
    
        Args:
            submission_id: The ID of the Reddit submission to retrieve (can be full URL or just ID)
            include_comments: If True, load and return the full comment forest for the post
            comment_replace_more_limit: Limit for replacing "MoreComments" objects (0 for none, None for all)
    
        Returns:
            Dictionary containing structured submission information with the following structure:
            {
                'id': str,  # Submission ID (e.g., 'abc123')
                'title': str,  # Submission title
                'author': str,  # Author's username or '[deleted]' if deleted
                'subreddit': str,  # Subreddit name
                'score': int,  # Post score (upvotes - downvotes)
                'upvote_ratio': float,  # Ratio of upvotes to total votes
                'num_comments': int,  # Number of comments
                'created_utc': float,  # Post creation timestamp (UTC)
                'url': str,  # Full URL to the post
                'permalink': str,  # Relative URL to the post
                'is_self': bool,  # Whether it's a self (text) post
                'selftext': str,  # Content of self post (if any)
                'selftext_html': Optional[str],  # HTML formatted content
                'link_url': str,  # URL for link posts (if any)
                'domain': str,  # Domain of the linked content
                'over_18': bool,  # Whether marked as NSFW
                'spoiler': bool,  # Whether marked as spoiler
                'stickied': bool,  # Whether stickied in the subreddit
                'locked': bool,  # Whether comments are locked
                'archived': bool,  # Whether the post is archived
                'distinguished': Optional[str],  # Distinguishing type (e.g., 'moderator')
                'flair': Optional[Dict],  # Post flair information if any
                'media': Optional[Dict],  # Media information if any
                'preview': Optional[Dict],  # Preview information if available
                'awards': List[Dict],  # List of awards received
                'comments': Optional[List[Dict]],  # present if include_comments is True
                'metadata': {
                    'fetched_at': float,  # Timestamp when data was fetched
                    'subreddit_id': str,  # Subreddit full ID
                    'author_id': str,  # Author's full ID if available
                    'is_original_content': bool,  # Whether marked as OC
                    'is_meta': bool,  # Whether marked as meta
                    'is_crosspostable': bool,  # Whether can be crossposted
                    'is_reddit_media_domain': bool,  # Whether media is hosted on Reddit
                    'is_robot_indexable': bool,  # Whether search engines should index
                    'is_created_from_ads_ui': bool,  # Whether created via ads UI
                    'is_video': bool,  # Whether the post is a video
                    'pinned': bool,  # Whether the post is pinned in the subreddit
                    'gilded': int,  # Number of times gilded
                    'total_awards_received': int,  # Total number of awards received
                    'view_count': Optional[int],  # View count if available
                    'visited': bool,  # Whether the current user has visited
                }
            }
    
        Raises:
            ValueError: If submission_id is invalid or submission not found
            RuntimeError: For other errors during the operation
        """
        manager = RedditClientManager()
        if not manager.client:
            raise RuntimeError("Reddit client not initialized")
    
        if not submission_id or not isinstance(submission_id, str):
            raise ValueError("Submission ID is required")
    
        try:
            # Clean up the submission_id if it's a full URL or permalink
            clean_submission_id = _extract_reddit_id(submission_id)
            logger.info(f"Getting submission with ID: {clean_submission_id} (include_comments={include_comments})")
    
            # Create submission from ID
            submission = manager.client.submission(id=clean_submission_id)
    
            # Force fetch submission data to verify it exists and get all attributes
            submission.title  # This will raise if submission doesn't exist
    
            # Get basic submission data with error handling
            submission_data = {
                "id": submission.id,
                "title": submission.title,
                "author": str(submission.author)
                if hasattr(submission, "author") and submission.author
                else "[deleted]",
                "subreddit": submission.subreddit.display_name
                if hasattr(submission, "subreddit")
                else "unknown",
                "score": getattr(submission, "score", 0),
                "upvote_ratio": getattr(submission, "upvote_ratio", 0.0),
                "num_comments": getattr(submission, "num_comments", 0),
                "created_utc": submission.created_utc,
                "url": f"https://www.reddit.com{submission.permalink}"
                if hasattr(submission, "permalink")
                else f"t3_{clean_submission_id}",
                "permalink": getattr(
                    submission, "permalink", f"/comments/{clean_submission_id}"
                ),
                "is_self": getattr(submission, "is_self", False),
                "selftext": getattr(submission, "selftext", ""),
                "selftext_html": getattr(submission, "selftext_html", None),
                "link_url": getattr(submission, "url", ""),
                "domain": getattr(submission, "domain", ""),
                "over_18": getattr(submission, "over_18", False),
                "spoiler": getattr(submission, "spoiler", False),
                "stickied": getattr(submission, "stickied", False),
                "locked": getattr(submission, "locked", False),
                "archived": getattr(submission, "archived", False),
                "distinguished": getattr(submission, "distinguished", None),
                "flair": None,
                "media": getattr(submission, "media", None),
                "preview": getattr(submission, "preview", None),
                "awards": [],
            }
    
            # Add flair information if available
            if hasattr(submission, "link_flair_text") and submission.link_flair_text:
                submission_data["flair"] = {
                    "text": submission.link_flair_text,
                    "css_class": getattr(submission, "link_flair_css_class", ""),
                    "template_id": getattr(submission, "link_flair_template_id", None),
                    "text_color": getattr(submission, "link_flair_text_color", None),
                    "background_color": getattr(
                        submission, "link_flair_background_color", None
                    ),
                }
    
            # Add awards information if available
            if hasattr(submission, "all_awardings"):
                submission_data["awards"] = [
                    {
                        "id": award.get("id"),
                        "name": award.get("name"),
                        "description": award.get("description"),
                        "coin_price": award.get("coin_price", 0),
                        "coin_reward": award.get("coin_reward", 0),
                        "icon_url": award.get("icon_url"),
                        "count": award.get("count", 1),
                    }
                    for award in submission.all_awardings
                ]
    
            # Add comments if requested
            if include_comments:
                try:
                    # Resolve all MoreComments to get the complete tree
                    submission.comments.replace_more(limit=comment_replace_more_limit)
    
                    top_level_comments = [
                        c
                        for c in submission.comments
                        if isinstance(c, praw.models.Comment)
                    ]
    
                    submission_data["comments"] = [
                        _serialize_comment_tree(c) for c in top_level_comments
                    ]
                except Exception as comments_error:
                    logger.exception(f"Error loading comments for submission {submission.id}")
                    submission_data["comments"] = []
    
            # Add metadata
            submission_data["metadata"] = {
                "fetched_at": time.time(),
                "subreddit_id": getattr(submission.subreddit, "id", "")
                if hasattr(submission, "subreddit")
                else "",
                "author_id": f"t2_{submission.author.id}"
                if hasattr(submission, "author")
                and submission.author
                and hasattr(submission.author, "id")
                else None,
                "is_original_content": getattr(submission, "is_original_content", False),
                "is_meta": getattr(submission, "is_meta", False),
                "is_crosspostable": getattr(submission, "is_crosspostable", False),
                "is_reddit_media_domain": getattr(
                    submission, "is_reddit_media_domain", False
                ),
                "is_robot_indexable": getattr(submission, "is_robot_indexable", True),
                "is_created_from_ads_ui": getattr(
                    submission, "is_created_from_ads_ui", False
                ),
                "is_video": getattr(submission, "is_video", False),
                "pinned": getattr(submission, "pinned", False),
                "gilded": getattr(submission, "gilded", 0),
                "total_awards_received": getattr(submission, "total_awards_received", 0),
                "view_count": getattr(submission, "view_count", None),
                "visited": getattr(submission, "visited", False),
            }
    
            return submission_data
    
        except Exception as e:
            logger.error(f"Error in get_submission_by_id: {e}")
            if "404" in str(e) or "not found" in str(e).lower():
                raise ValueError(
                    f"Submission with ID {clean_submission_id} not found"
                ) from e
            if "403" in str(e) or "forbidden" in str(e).lower():
                raise ValueError(
                    f"Not authorized to access submission with ID {clean_submission_id}"
                ) from e
            if isinstance(e, (ValueError, RuntimeError)):
                raise
            raise RuntimeError(f"Failed to get submission by ID: {e}") from e
  • Helper function used by get_submission_by_id to extract the base submission ID from a URL or raw ID.
    def _extract_reddit_id(reddit_id: str) -> str:
        """Extract the base ID from a Reddit URL or ID.
    
        Args:
            reddit_id: Either a Reddit ID or a URL containing the ID
    
        Returns:
            The extracted Reddit ID
        """
        if not reddit_id:
            raise ValueError("Empty ID provided")
    
        # If it's a URL, extract the ID part
        if "/" in reddit_id:
            # Handle both standard URLs and permalinks
            parts = [p for p in reddit_id.split("/") if p]
            # The ID is typically the last non-empty part
            reddit_id = parts[-1]
            logger.debug(f"Extracted ID {reddit_id} from URL")
    
        return reddit_id
  • Helper function used when include_comments=True in get_submission_by_id to serialize the comment tree into JSON-compatible dicts.
    def _serialize_comment_tree(comment: praw.models.Comment) -> Dict[str, Any]:
        """Serialize a PRAW comment into a JSON-serializable tree structure."""
        try:
            replies = []
            if getattr(comment, "replies", None):
                replies = [
                    _serialize_comment_tree(reply)
                    for reply in comment.replies
                    if isinstance(reply, praw.models.Comment)
                ]
        except Exception as e:
            logger.error(f"Error while serializing replies for comment {getattr(comment, 'id', 'unknown')}: {e}")
            replies = []
    
        return {
            "id": comment.id,
            "author": str(comment.author) if comment.author else "[deleted]",
            "body": getattr(comment, "body", ""),
            "score": getattr(comment, "score", 0),
            "created_utc": getattr(comment, "created_utc", 0.0),
            "permalink": getattr(comment, "permalink", ""),
            "is_submitter": getattr(comment, "is_submitter", False),
            "distinguished": getattr(comment, "distinguished", None),
            "stickied": getattr(comment, "stickied", False),
            "locked": getattr(comment, "locked", False),
            "replies": replies,
        }

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arindam200/reddit-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server