
Decodo MCP Server

reddit_user

Read-only

Scrape a Reddit user's profile, posts, and comments by providing their user profile URL.

Instructions

Scrape a Reddit user profile and their posts/comments

Input Schema

Name | Required | Description | Default
---- | -------- | ----------- | -------
url  | Yes      | Reddit user profile URL (e.g. https://www.reddit.com/user/IWasRightOnce/) | none

Implementation Reference

  • The async handler function executed when the 'reddit_user' tool is called. It receives scraping params, calls the ScraperApiClient with the REDDIT_USER target, transforms the response by removing high-character-count fields, and returns the data as text content.
    async (scrapingParams: ScrapingMCPParams, extra: ProgressExtra) => {
      const params = {
        ...scrapingParams,
        target: SCRAPER_API_TARGETS.REDDIT_USER,
      } satisfies ScraperAPIParams;
    
      const { data } = await sapiClient.scrape<object>({ auth, scrapingParams: params, extra });
    
      const { data: text } = this.transformResponse({ data });
    
      return {
        content: [
          {
            type: 'text',
            text,
          },
        ],
      };
    }
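The handler's return value follows the MCP text-content convention: the scraped object is serialized and wrapped in a `content` array. A minimal sketch of that wrapping step in plain TypeScript, with `JSON.stringify` standing in for the tool's own transform (an assumption; the real handler also strips heavy fields first):

```typescript
// Hedged sketch: wraps arbitrary scraped data into the MCP text-content
// result shape the handler above returns. JSON.stringify stands in for
// the tool's transformResponse step.
function toTextContent(data: object): { content: { type: 'text'; text: string }[] } {
  return {
    content: [
      {
        type: 'text',
        text: JSON.stringify(data),
      },
    ],
  };
}

const result = toTextContent({ user: 'IWasRightOnce', posts: [] });
console.log(result.content[0].type); // 'text'
```

Returning text rather than structured content keeps the payload uniform for any MCP client, at the cost of the agent having to re-parse the JSON string.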
  • Input schema defined via Zod: a 'url' string (the Reddit user profile URL) is required; the tool registration additionally carries annotations (readOnlyHint, openWorldHint).
    inputSchema: {
      url: z
        .string()
        .describe('Reddit user profile URL (eg. https://www.reddit.com/user/IWasRightOnce/)'),
    },
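The schema itself only requires a string; the expected shape is implied by the example in the description. A hedged illustration (not part of the server's code, using only the built-in `URL` class) of the kind of URL the tool expects:

```typescript
// Illustrative check only: verifies a URL looks like
// https://www.reddit.com/user/<name>/ as in the schema's example.
function looksLikeRedditUserUrl(input: string): boolean {
  try {
    const url = new URL(input);
    return (
      url.hostname.endsWith('reddit.com') &&
      /^\/user\/[^/]+\/?$/.test(url.pathname)
    );
  } catch {
    return false; // not a parseable URL at all
  }
}

console.log(looksLikeRedditUserUrl('https://www.reddit.com/user/IWasRightOnce/')); // true
console.log(looksLikeRedditUserUrl('https://www.reddit.com/r/typescript/')); // false
```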
  • The 'register' method that calls server.registerTool with the name 'reddit_user', description, input schema, and the handler function.
    register = ({ server, sapiClient, auth }: ToolRegistrationArgs) => {
      server.registerTool(
        'reddit_user',
        {
          description: 'Scrape a Reddit user profile and their posts/comments',
          inputSchema: {
            url: z
              .string()
              .describe('Reddit user profile URL (eg. https://www.reddit.com/user/IWasRightOnce/)'),
          },
          annotations: {
            readOnlyHint: true,
            openWorldHint: true,
          },
        },
        async (scrapingParams: ScrapingMCPParams, extra: ProgressExtra) => {
          const params = {
            ...scrapingParams,
            target: SCRAPER_API_TARGETS.REDDIT_USER,
          } satisfies ScraperAPIParams;
    
          const { data } = await sapiClient.scrape<object>({ auth, scrapingParams: params, extra });
    
          const { data: text } = this.transformResponse({ data });
    
          return {
            content: [
              {
                type: 'text',
                text,
              },
            ],
          };
        }
      );
    };
  • Instantiation of RedditUserTool in the allTools array, which is later iterated to register all tools on the server.
      new RedditUserTool(),
      new BingSearchTool(),
      new ChatGPTTool(),
      new PerplexityTool(),
    ];
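The `allTools` array is then iterated to register each tool on the server. A minimal sketch of that loop, with the registration types stubbed out (the interfaces and tool names here are hypothetical stand-ins, not the server's real types):

```typescript
// Hypothetical minimal shapes; the real types live in the Decodo server code.
interface ToolRegistrationArgs {
  server: { registerTool: (name: string) => void };
}
interface Tool {
  register: (args: ToolRegistrationArgs) => void;
}

// A recording stub in place of a real MCP server.
const registered: string[] = [];
const server = { registerTool: (name: string) => { registered.push(name); } };

const allTools: Tool[] = [
  { register: ({ server }) => server.registerTool('reddit_user') },
  { register: ({ server }) => server.registerTool('bing_search') },
];

// Each tool registers itself against the shared server instance.
for (const tool of allTools) {
  tool.register({ server });
}

console.log(registered.length); // 2
```

Keeping registration behind a per-tool `register` method means adding a tool is a one-line change to the array.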
  • The transformResponse method that removes high-character-count fields (author_flair_richtext, preview, media_metadata) from the scraped data to reduce payload size.
    transformResponse = ({ data }: { data: object }) => {
      for (const fieldToRemove of RedditUserTool.FIELDS_WITH_HIGH_CHAR_COUNT) {
        data = removeKeyFromNestedObject({ obj: data, keyToRemove: fieldToRemove });
      }
    
      return { data: JSON.stringify(data) };
    };
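`removeKeyFromNestedObject` itself is not shown in the reference. A plausible recursive implementation (an assumption for illustration; the server's actual helper may differ) that strips every occurrence of a key from a nested structure:

```typescript
// Assumed implementation sketch: returns a copy of `obj` with every
// occurrence of `keyToRemove` deleted, recursing through nested objects
// and arrays. The real helper in the Decodo server may differ.
function removeKeyFromNestedObject({
  obj,
  keyToRemove,
}: {
  obj: unknown;
  keyToRemove: string;
}): unknown {
  if (Array.isArray(obj)) {
    return obj.map((item) => removeKeyFromNestedObject({ obj: item, keyToRemove }));
  }
  if (obj !== null && typeof obj === 'object') {
    const result: Record<string, unknown> = {};
    for (const [key, value] of Object.entries(obj)) {
      if (key === keyToRemove) continue; // drop the heavy field
      result[key] = removeKeyFromNestedObject({ obj: value, keyToRemove });
    }
    return result;
  }
  return obj; // primitives pass through unchanged
}

const cleaned = removeKeyFromNestedObject({
  obj: { title: 'post', preview: { big: '...' }, replies: [{ preview: {}, body: 'hi' }] },
  keyToRemove: 'preview',
});
console.log(JSON.stringify(cleaned)); // {"title":"post","replies":[{"body":"hi"}]}
```

Fields like `preview` and `media_metadata` in Reddit's API responses can carry large base64 or resolution data, so stripping them before serialization meaningfully shrinks the text payload handed back to the agent.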
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true and openWorldHint=true, so the description does not need to restate that this is a safe read operation. The description adds that it scrapes posts and comments, which is useful but does not disclose additional behavioral traits like pagination behavior, depth of scraping, or rate limits. It does not contradict the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence that immediately conveys the tool's purpose. There is no redundant or extraneous information, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (one parameter with full schema coverage) and no output schema, the description provides a basic understanding of the tool's function. However, it fails to mention expected output format, pagination limits, or potential content depth, which would be valuable for a scraping tool. The description is adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single parameter 'url', which already provides an example format. The description does not add any further semantic meaning or constraints beyond what the schema lists, meeting the baseline but not exceeding it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'scrape' and the resource 'Reddit user profile and their posts/comments', which distinctly identifies the tool's purpose. It also differentiates from sibling tools like reddit_post and reddit_subreddit, which focus on specific posts or subreddits rather than entire user profiles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies that this tool is for scraping a user's profile and content, which aligns with its name. However, it provides no explicit guidance on when to use this tool versus alternatives (e.g., reddit_post for a single post) or any prerequisites or limitations. The context is adequate but lacks explicit when-to-use or when-not-to-use instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
