Reddit Buddy MCP

by karanb192

get_post_details

Fetch Reddit posts with comments using URL or post ID, including options for comment limits, sorting, and link extraction to analyze discussions.

Instructions

Fetch a Reddit post with its comments. Requires EITHER url OR post_id. IMPORTANT: When using post_id alone, an extra API call is made to fetch the subreddit first (2 calls total). For better efficiency, always provide the subreddit parameter when known (1 call total).

Input Schema

  • comment_depth (optional; default: 3): Levels of nested replies to include (1-10)
  • comment_limit (optional; default: 20): Maximum comments to fetch (1-500)
  • comment_sort (optional; default: "best"): Comment ordering: "best" (algorithm-ranked), "top" (highest score), "new", "controversial", "qa" (Q&A style)
  • extract_links (optional; default: false): Extract all URLs mentioned in the post and comments
  • max_top_comments (optional; default: 5): Number of top comments to return (1-20)
  • post_id (optional): Reddit post ID (e.g., "abc123"). Can be used alone, or with subreddit for better performance
  • subreddit (optional): Subreddit name to pair with post_id; providing it avoids an extra API call (e.g., "science")
  • url (optional): Full Reddit post URL (e.g., "https://reddit.com/r/science/comments/abc123/...")
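
To make the two call patterns concrete, here is a hypothetical pair of argument objects an agent might send. The values are illustrative, not taken from the server:

```typescript
// Identify the post by its full URL...
const byUrl = {
  url: "https://reddit.com/r/science/comments/abc123/some_title",
};

// ...or by post_id, ideally together with subreddit so the server
// can skip the extra lookup call described in the instructions above.
const byId = {
  post_id: "abc123",
  subreddit: "science",
  comment_limit: 50,   // override the default of 20
  comment_sort: "top", // highest-score comments first
  extract_links: true, // also collect URLs from comment bodies
};

console.log(byUrl.url, byId.post_id);
```

Unlisted parameters fall back to the schema defaults, so the first object alone is a complete, valid call.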

Implementation Reference

  • The main handler function in RedditTools class that implements get_post_details tool logic. Extracts post identifier from URL or parameters, fetches post and comments via RedditAPI, cleans and structures the data, processes top comments, optionally extracts links from comments, and returns formatted result.
    async getPostDetails(params: z.infer<typeof getPostDetailsSchema>) {
      let postIdentifier: string;
    
      if (params.url) {
        // Extract from URL - returns format "subreddit_postid"
        postIdentifier = this.extractPostIdFromUrl(params.url);
      } else if (params.post_id) {
        // If subreddit provided, use it; otherwise getPost will fetch it
        postIdentifier = params.subreddit
          ? `${params.subreddit}_${params.post_id}`
          : params.post_id;
      } else {
        throw new Error('Provide either url OR post_id');
      }
    
      const [postListing, commentsListing] = await this.api.getPost(postIdentifier, {
        limit: params.comment_limit,
        sort: params.comment_sort,
        depth: params.comment_depth,
      });
    
      const post = postListing.data.children[0].data;
    
      // Extract essential post fields
      const cleanPost = {
        id: post.id,
        title: post.title,
        author: post.author,
        score: post.score,
        upvote_ratio: post.upvote_ratio,
        num_comments: post.num_comments,
        created_utc: post.created_utc,
        url: post.url,
        permalink: `https://reddit.com${post.permalink}`,
        subreddit: post.subreddit,
        is_video: post.is_video,
        is_text_post: post.is_self,
        content: post.selftext?.substring(0, 1000), // More text for post details
        nsfw: post.over_18,
        link_flair_text: post.link_flair_text,
        stickied: post.stickied,
        locked: post.locked,
      };
    
      // Process comments
      const comments = commentsListing.data.children
        .filter(child => child.kind === 't1') // Only comments
        .map(child => child.data);
    
      let result: any = {
        post: cleanPost,
        total_comments: comments.length,
        top_comments: comments.slice(0, params.max_top_comments || 5).map(c => ({
          id: c.id,
          author: c.author,
          score: c.score,
          body: (c.body || '').substring(0, 500),
          created_utc: c.created_utc,
          depth: c.depth,
          is_op: c.is_submitter,
          permalink: `https://reddit.com${c.permalink}`,
        })),
      };
    
      // Extract links if requested
      if (params.extract_links) {
        const links = new Set<string>();
        comments.forEach(c => {
          const urls = (c.body || '').match(/https?:\/\/[^\s]+/g) || [];
          urls.forEach(url => links.add(url));
        });
        result.extracted_links = Array.from(links);
      }
    
      return result;
    }
  • Zod schema defining input validation for get_post_details tool, specifying optional post_id, subreddit, url, and various comment fetching parameters with defaults and descriptions.
    export const getPostDetailsSchema = z.object({
      post_id: z.string().optional().describe('Reddit post ID (e.g., "1abc2d3")'),
      subreddit: z.string().optional().describe('Subreddit name (optional with post_id, but more efficient if provided)'),
      url: z.string().optional().describe('Full Reddit URL (alternative to post_id)'),
      comment_limit: z.number().min(1).max(500).optional().default(20).describe('Default 20, range (1-500). Change ONLY IF user asks.'),
      comment_sort: z.enum(['best', 'top', 'new', 'controversial', 'qa']).optional().default('best'),
      comment_depth: z.number().min(1).max(10).optional().default(3).describe('Default 3, range (1-10). Override ONLY IF user specifies.'),
      extract_links: z.boolean().optional().default(false),
      max_top_comments: z.number().min(1).max(20).optional().default(5).describe('Default 5, range (1-20). Change ONLY IF user requests.'),
    });
  • MCP tool registration definition for get_post_details, including name, detailed description, input schema conversion from Zod, and read-only hint. Added to the toolDefinitions array used by the MCP server.
      name: 'get_post_details',
      description: 'Fetch a Reddit post with its comments. Requires EITHER url OR post_id. IMPORTANT: When using post_id alone, an extra API call is made to fetch the subreddit first (2 calls total). For better efficiency, always provide the subreddit parameter when known (1 call total).',
      inputSchema: zodToJsonSchema(getPostDetailsSchema) as any,
      readOnlyHint: true
    },
  • Dispatch handler in the tools/call request handler that routes calls to get_post_details by parsing arguments with the schema and invoking the tools.getPostDetails method.
    case 'get_post_details':
      result = await tools.getPostDetails(
        getPostDetailsSchema.parse(args)
      );
      break;
  • Private helper method used by getPostDetails to parse a full Reddit post URL and extract the subreddit_postid identifier format required by the RedditAPI.
    private extractPostIdFromUrl(url: string): string {
      const match = url.match(/\/r\/(\w+)\/comments\/(\w+)/);
      if (match) {
        return `${match[1]}_${match[2]}`;
      }
      throw new Error('Invalid Reddit post URL');
    }
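  • The URL parsing and link extraction above can be exercised standalone. The following sketch copies the helper as shown in the source and runs it on a sample permalink, then applies the same URL regex used by the extract_links pass (sample inputs are hypothetical):

```typescript
// Standalone copy of the private helper shown above; it returns the
// "subreddit_postid" identifier format expected by the RedditAPI.
function extractPostIdFromUrl(url: string): string {
  const match = url.match(/\/r\/(\w+)\/comments\/(\w+)/);
  if (match) {
    return `${match[1]}_${match[2]}`;
  }
  throw new Error('Invalid Reddit post URL');
}

console.log(extractPostIdFromUrl(
  'https://reddit.com/r/science/comments/abc123/some_title_here'
)); // → "science_abc123"

// The extract_links pass works the same way as in getPostDetails:
// a global regex over each comment body, deduplicated with a Set.
const bodies = [
  'Source: https://example.com/paper and https://example.com/data',
  'Already posted: https://example.com/paper',
];
const links = new Set<string>();
for (const body of bodies) {
  for (const url of body.match(/https?:\/\/[^\s]+/g) ?? []) {
    links.add(url);
  }
}
console.log(links.size); // → 2 unique links
```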
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it fetches a post with comments, requires either url or post_id, and explains the performance impact of using post_id alone (an extra API call) versus with subreddit (1 call total). This covers key operational traits like API call efficiency, though it doesn't mention rate limits, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then immediately provides critical usage guidelines and efficiency tips in three clear sentences. Every sentence earns its place by adding essential information without redundancy, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (8 parameters, no output schema, no annotations), the description is largely complete. It covers the purpose, usage, and key behavioral aspects like API call efficiency. However, it lacks details on output format (e.g., structure of returned post/comments) and error scenarios, which would be helpful since there's no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds minimal semantic value beyond the schema: it clarifies the relationship between post_id and subreddit for performance, but doesn't provide additional context about parameters like comment_depth or extract_links. This meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb 'fetch' and the resource 'Reddit post with its comments', making the purpose clear. It distinguishes from siblings like 'browse_subreddit' (which likely lists posts) and 'search_reddit' (which searches across posts) by focusing on retrieving a specific post and its comments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: to fetch a post with comments. It specifies alternatives for input parameters (url OR post_id) and advises on efficiency: 'For better efficiency, always provide the subreddit parameter when known'. This directly addresses when to use certain parameters over others, though it doesn't explicitly compare to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/karanb192/reddit-mcp-buddy'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.