
Server Details

Reddit MCP — public Reddit data via JSON endpoints (no auth required)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-reddit
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade B)

Average 3.7/5 across 8 of 8 tools scored. Lowest: 2.6/5.

Server Coherence (Grade A)
Disambiguation: 3/5

Tools like get_post, get_subreddit, and search_posts are distinct for Reddit operations, but ask_pipeworx and discover_tools introduce ambiguity as they seem to be general-purpose helpers for tool selection rather than Reddit-specific actions. The memory tools (forget, recall, remember) are orthogonal but could cause confusion about when to use them vs. Reddit tools.

Naming Consistency: 4/5

Most tool names follow a verb_noun pattern (e.g., get_post, get_subreddit, search_posts, remember, forget, recall). However, ask_pipeworx and discover_tools break this pattern with more generic names that don't clearly indicate their action-resource structure.

Tool Count: 4/5

8 tools is a reasonable count for a Reddit-focused server, covering basic retrieval and memory. However, five tools (ask_pipeworx, discover_tools, and the three memory utilities) are not strictly Reddit-specific, expanding the scope beyond Reddit. Still, the count feels appropriate.

Completeness: 3/5

The tool set covers fetching posts, subreddits, and searching, but lacks essential Reddit operations like creating posts, commenting, voting, or managing user accounts. The inclusion of generic memory tools and a meta-tool (discover_tools) suggests the server's purpose is broader than Reddit, leaving gaps for full Reddit interaction.

Available Tools

8 tools
ask_pipeworx (Grade A)

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
- question (required): Your question or request in natural language

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool selects the best data source, fills arguments, and returns results, which goes beyond the input schema. Since no annotations are provided, the description carries the full burden, and it adequately describes the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, using only a few sentences to convey purpose, usage, and examples. Every sentence adds value, and it is front-loaded with the core function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has a single parameter, no output schema, and no annotations, the description is fairly complete. It explains the orchestration behavior and provides examples, but lacks details on error handling or return format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds context about the parameter being a natural language question, which is already in the schema's description. The baseline is 3, and the description does not add much beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool takes a plain English question and returns an answer from the best available data source. It distinguishes itself from siblings by acting as an orchestrator that picks the right tool, but it doesn't explicitly name a sibling or differentiate from them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit examples of when to use this tool (e.g., asking about trade deficits, adverse events, SEC filings) and implies it's for natural language queries. However, it does not explicitly state when not to use it or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
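As an MCP tool, ask_pipeworx would be invoked through the standard JSON-RPC tools/call method; a minimal sketch of building such a request payload (the helper function is ours, not part of the server):

```python
import json

def build_tool_call(name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request as sent by MCP clients."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# A single free-text question is all ask_pipeworx requires:
payload = build_tool_call(
    "ask_pipeworx",
    {"question": "What is the US trade deficit with China?"},
)
print(json.dumps(payload, indent=2))
```

The single required parameter keeps the call trivial; the orchestration complexity lives entirely server-side.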

discover_tools (Grade A)

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
- limit (optional): Maximum number of tools to return (default 20, max 50)
- query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses that the tool returns 'the most relevant tools with names and descriptions', which is sufficient behavioral transparency. However, it does not mention any side effects or performance characteristics, but given the read-only nature implied by 'Search', this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is only three sentences, each essential. It states the core function, what it returns, and when to call it. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 params, no output schema, no nested objects), the description is adequately complete. It covers purpose, usage guidance, and parameter semantics. It could optionally mention that results are ranked by relevance, but this is not a critical gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description adds value by stating that the query should be a 'natural language description' and provides examples (e.g., 'analyze housing market trends'), which enriches the parameter's meaning beyond the schema. This warrants a 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: to search the Pipeworx tool catalog by describing what you need. It specifies the action ('Search'), the resource ('tool catalog'), and the mechanism ('by describing what you need'), which is unique among siblings and provides clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly advises to 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This gives a clear when-to-use instruction, and since no sibling tool performs a similar catalog search, it effectively implies when not to use others.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
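How discover_tools actually ranks the catalog is not documented; a naive keyword-overlap ranking, purely illustrative, shows the general shape of such a search, including the schema's limit clamp (default 20, max 50):

```python
def rank_tools(query: str, catalog: dict[str, str], limit: int = 20) -> list[str]:
    """Rank catalog entries by naive keyword overlap with the query (illustrative only)."""
    limit = max(1, min(limit, 50))  # schema: default 20, max 50
    q = set(query.lower().split())
    scored = [
        (len(q & set(desc.lower().split())), name)
        for name, desc in catalog.items()
    ]
    # Highest overlap first; drop tools with no matching keywords at all.
    return [name for score, name in sorted(scored, reverse=True) if score > 0][:limit]

catalog = {
    "get_post": "Get a Reddit post and its top-level comments by post ID",
    "search_posts": "Search Reddit posts by keyword across all subreddits",
    "remember": "Store a key-value pair in session memory",
}
print(rank_tools("search reddit posts by keyword", catalog))
# → ['search_posts', 'get_post']
```

A production catalog search would more likely use embeddings than word overlap, but the query-in, ranked-names-out contract is the same.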

forget (Grade A)

Delete a stored memory by key.

Parameters (JSON Schema)
- key (required): Memory key to delete

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It only states the basic action without disclosing behavioral traits such as whether deletion is permanent, whether it requires confirmation, or what happens on success/failure. The description lacks detail on side effects (e.g., no undo) or permissions needed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, short sentence that is front-loaded with the verb 'Delete' and the resource. Every word is necessary and there is no filler. It is highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema, no annotations), the description is minimally adequate. However, it lacks details about error cases (e.g., key not found), return value (void or confirmation), and whether the operation is idempotent. A more complete description would mention these aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'key', which has a description in the schema. The tool description adds no additional meaning beyond what the schema already provides. Baseline 3 is appropriate since the schema already documents the parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (delete), resource (stored memory), and identifier (key). It distinguishes the tool from its siblings: 'remember' creates memories, 'recall' retrieves them, and 'forget' deletes them. The purpose is unambiguous and specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when you need to delete a memory), but provides no guidance on when not to use it or alternatives. For example, there's no mention that the memory must exist or what happens if the key is not found. No explicit exclusions or references to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_post (Grade A)

Get a Reddit post's full content and top-level comments by post ID. Returns text, score, author, subreddit, and comment threads.

Parameters (JSON Schema)
- post_id (required): Reddit post ID (the alphanumeric ID, e.g. "abc123")

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It states that the tool gets a post and top-level comments, which is transparent. However, it does not disclose any limitations (e.g., rate limits, pagination, or that only top-level comments are returned without nested replies). Given no annotations, a score of 3 is appropriate as it covers basic behavior but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, front-loaded with the verb and resource, and includes the key input (post ID). Every word earns its place, no fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no nested objects, no output schema), the description is complete enough to convey what the tool does and what it returns (post and top-level comments). It could mention that comments are top-level only, but that is implied. A score of 4 reflects that it is sufficient for the complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one required parameter 'post_id' already described as 'Reddit post ID (the alphanumeric ID, e.g. "abc123")'. The description adds that the post ID is needed but does not add new meaning beyond the schema. With high schema coverage, baseline is 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'a Reddit post and its top-level comments', and specifies the input is by 'post ID'. It distinguishes from siblings like 'get_subreddit' which fetches a subreddit, and 'search_posts' which searches, making the purpose distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool: when you need a post and its top-level comments given a post ID. It does not explicitly exclude alternatives or mention when not to use it, but the context is clear. Sibling tools like 'search_posts' are for searching, so the guideline is adequately clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
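The server advertises public Reddit data via JSON endpoints with no auth. Its internals aren't shown here, but fetching Reddit's public comments endpoint directly is a plausible sketch of what get_post does under the hood (the helper names and User-Agent string are illustrative):

```python
import json
import urllib.request

def post_url(post_id: str) -> str:
    """Reddit's public JSON endpoint for a post plus its comment tree (no auth needed)."""
    return f"https://www.reddit.com/comments/{post_id}.json"

def fetch_post(post_id: str) -> tuple[dict, list]:
    """Return (post data, top-level comment children) from the public endpoint."""
    req = urllib.request.Request(
        post_url(post_id),
        headers={"User-Agent": "mcp-reddit-example/0.1"},  # Reddit rejects blank agents
    )
    with urllib.request.urlopen(req) as resp:
        # The endpoint returns a two-element listing: [post, comments].
        post_listing, comment_listing = json.load(resp)
    post = post_listing["data"]["children"][0]["data"]
    comments = comment_listing["data"]["children"]
    return post, comments
```

Nested replies live inside each comment's "replies" field, which matches the evaluator's note that only top-level comments are surfaced.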

get_subreddit (Grade C)

Get trending posts from a subreddit (e.g., 'python', 'news'). Returns titles, scores, authors, URLs, and comment counts.

Parameters (JSON Schema)
- limit (optional): Number of posts to return (default: 10, max: 100)
- subreddit (required): Subreddit name without the r/ prefix (e.g. "programming")

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description only says 'trending posts', but does not explain other possible behaviors (e.g., whether trending is the only sort order, what format is returned, or any rate limits). Since there are no annotations, the description carries full burden but fails to disclose behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise (two short sentences) and to the point. It is front-loaded with the main action. However, it could be slightly expanded without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity and the existence of sibling tools for posts, the description is insufficient. It does not explain what 'trending posts' means, how the limit parameter works beyond what's in the schema, or what the response format is. No output schema exists to clarify.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with clear descriptions for both parameters (limit and subreddit). The description adds no additional meaning beyond what the schema already provides, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states it gets 'trending posts from a subreddit', which is a specific verb and resource. However, it does not distinguish from sibling tools like 'get_post' or 'search_posts', leaving ambiguity about the scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention that other tools like 'search_posts' might be better for searching across subreddits or that 'get_post' is for a single post.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
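A trending listing like this plausibly maps onto Reddit's public hot.json endpoint; a sketch that builds the URL while applying the schema's limit clamp (default 10, max 100) and stripping a stray r/ prefix (the helper name is ours):

```python
def subreddit_hot_url(subreddit: str, limit: int = 10) -> str:
    """Build Reddit's public 'hot' listing URL for a subreddit (illustrative)."""
    limit = max(1, min(limit, 100))        # schema: default 10, max 100
    sub = subreddit.removeprefix("r/")     # schema asks for the bare name
    return f"https://www.reddit.com/r/{sub}/hot.json?limit={limit}"

print(subreddit_hot_url("python"))
# → https://www.reddit.com/r/python/hot.json?limit=10
print(subreddit_hot_url("r/news", limit=500))  # out-of-range limit clamped to 100
```

Whether the server sorts by "hot" specifically, or tolerates an r/ prefix at all, is exactly the kind of behavior the 2/5 score says the description should state.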

recall (Grade A)

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
- key (optional): Memory key to retrieve (omit to list all keys)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It states behavior: retrieve by key or list all. But lacks details on what happens if key doesn't exist (error vs empty?), or any side effects. Adequate but not rich.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no wasted words. Front-loaded with action and resource. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple tool (1 optional param, no output schema), description covers core functionality. But lacks info on error cases, performance (e.g., large memory lists), or persistence. Adequate for basic use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers key parameter with description 'Memory key to retrieve (omit to list all keys)'. Description adds that omitting key lists all, which aligns with schema. With 100% schema coverage, description adds minimal extra meaning, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states verb 'Retrieve' and resource 'memory by key', and distinguishes between retrieving a specific memory and listing all. Sibling 'remember' and 'forget' suggest memory operations, and this tool's purpose is distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description explicitly says 'omit key to list all keys' and mentions retrieving context from earlier sessions. However, no explicit when-not-to-use or alternatives to other tools, though context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember (Grade A)

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
- key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
- value (required): Value to store (any text — findings, addresses, preferences, notes)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses that memory persists for authenticated users and 24 hours for anonymous sessions, which is helpful. However, it does not mention if values can be overwritten, size limits, or whether storing duplicate keys updates or errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first states purpose, second gives usage examples, third adds behavioral context. No wasted words, all sentences earn their place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only 2 parameters, no output schema, and no nested objects, the description is fairly complete. It covers purpose, usage, and persistence behavior. Could mention if values can be overwritten or updated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter having a description. The tool description adds examples of key usage (subject_property, target_ticker) and notes value can store any text, but these add moderate value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, specifying the verb 'store' and resource 'key-value pair'. It distinguishes from siblings like 'recall' and 'forget' by focusing on saving data, not retrieving or deleting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use (save intermediate findings, user preferences, context across tool calls) and mentions persistence differences based on authentication. However, it does not explicitly state when not to use or name alternatives like 'recall' for retrieval.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_posts (Grade C)

Search Reddit posts by keyword or phrase across all subreddits. Returns matching posts with titles, scores, subreddits, authors, and URLs.

Parameters (JSON Schema)
- sort (optional): Sort order: relevance, hot, top, new, comments (default: relevance)
- limit (optional): Number of results to return (default: 10, max: 100)
- query (required): Search query

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must convey behavior. It states 'Search Reddit posts' implying a read-only operation, which is consistent. However, it does not disclose any side effects, rate limits, or whether results are paginated. The schema details are minimal but sufficient for basic use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two short sentences, which is concise. However, it is too minimal to be highly useful. It earns its place but could be slightly expanded to include key details like default sort or limit without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With only 3 parameters and no output schema, the description is incomplete. It does not mention the return format, error conditions, or any constraints like time range. For a search tool, this is insufficient for an agent to fully understand its behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds no additional meaning beyond the schema. The description does not elaborate on parameter semantics; it only repeats the tool's purpose. Baseline 3 is appropriate given full schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching Reddit posts by query string. The verb 'Search' and resource 'Reddit posts' are specific, and the description distinguishes it from siblings like get_post (single post) and get_subreddit (subreddit info). However, it could be more precise by mentioning it searches by query, not by other criteria.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like get_post or get_subreddit. There is no mention of prerequisites or when not to use it. The description is minimal and does not help the agent decide between this and other search/retrieval tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
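A site-wide search like this plausibly maps onto Reddit's public search.json endpoint; a sketch that validates the schema's sort values and clamps limit (the URL mapping and helper name are illustrative, not confirmed server behavior):

```python
from urllib.parse import urlencode

# The five sort orders listed in the tool's schema.
VALID_SORTS = {"relevance", "hot", "top", "new", "comments"}

def search_url(query: str, sort: str = "relevance", limit: int = 10) -> str:
    """Build Reddit's public search URL, enforcing the schema's constraints (illustrative)."""
    if sort not in VALID_SORTS:
        raise ValueError(f"sort must be one of {sorted(VALID_SORTS)}")
    limit = max(1, min(limit, 100))  # schema: default 10, max 100
    params = urlencode({"q": query, "sort": sort, "limit": limit})
    return f"https://www.reddit.com/search.json?{params}"

print(search_url("rust async runtime", sort="top"))
# → https://www.reddit.com/search.json?q=rust+async+runtime&sort=top&limit=10
```

Documenting this kind of validation behavior (what happens on an invalid sort, whether results paginate) is what the 2/5 Completeness score asks for.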

