mcp-reddit
Server Quality Checklist
Latest release: v1.0.0
- Disambiguation 5/5: The two tools have clearly distinct purposes: one fetches a list of hot threads from a subreddit, while the other retrieves detailed content for a specific post. There is no overlap in functionality, making it easy for an agent to select the appropriate tool based on whether it needs overview information or detailed content.
- Naming Consistency 5/5: Both tools follow a consistent verb_noun pattern with a 'fetch_reddit_' prefix, using snake_case throughout. The naming is predictable and readable, with no deviations in style or convention across the tool set.
- Tool Count 2/5: With only 2 tools, the server feels thin for a Reddit integration. While the tools cover basic fetching of posts and content, there are likely gaps in functionality such as posting, voting, or searching, which limits the server's utility for comprehensive Reddit interactions.
- Completeness 2/5: The tool surface is severely incomplete for the Reddit domain. It only supports fetching operations (read-only), with no ability to create, update, or delete content (e.g., posting, commenting, or voting). This will cause agent failures when trying to perform common Reddit actions beyond simple retrieval.
Average 3.2/5 across 2 of 2 tools scored.
See the Tool Scores section below for per-tool breakdowns.
- 0 of 1 issues responded to in the last 6 months
- 0 commits in the last 12 weeks
- No stable releases found
- No critical vulnerability alerts
- No high-severity vulnerability alerts
- No code scanning findings
- CI status not available
This repository is licensed under MIT License.
This repository includes a README.md file.
Tools from this server were used 5 times in the last 30 days.
Add a glama.json file to provide metadata about your server.
If you are the author, or if the server belongs to an organization you maintain, add a glama.json file to the root of your repository:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}
```

Browse examples.
Add related servers to improve discoverability.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
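The weighting scheme above can be sketched in code. This is an illustrative reimplementation of the formula as described, not Glama's actual scorer; the dimension keys are shorthand names chosen here.

```python
# Sketch of the quality-score formula described above. Dimension keys are
# shorthand chosen for this example; the weights and tier cutoffs come
# from the description in the text.

# Tool Definition Quality dimension weights (sum to 1.0).
TDQ_WEIGHTS = {
    "purpose": 0.25,       # Purpose Clarity
    "usage": 0.20,         # Usage Guidelines
    "behavior": 0.20,      # Behavioral Transparency
    "parameters": 0.15,    # Parameter Semantics
    "conciseness": 0.10,   # Conciseness & Structure
    "completeness": 0.10,  # Contextual Completeness
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 definition-quality score for a single tool."""
    return sum(TDQ_WEIGHTS[dim] * value for dim, value in scores.items())

def server_definition_quality(per_tool: list) -> float:
    """60% mean TDQS + 40% minimum TDQS across all tools."""
    tdqs = [tool_tdqs(scores) for scores in per_tool]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def server_coherence(disambiguation, naming, tool_count, completeness) -> float:
    """Four coherence dimensions, weighted equally."""
    return (disambiguation + naming + tool_count + completeness) / 4

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """A >= 3.5, B >= 3.0, C >= 2.0, D >= 1.0, F below that."""
    for letter, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return letter
    return "F"
```

For example, the coherence scores in the checklist above (5, 5, 2, 2) average to 3.5, and because a single weak tool sets the minimum TDQS, one poorly described tool drags the whole definition-quality score down.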
Tool Scores
fetch_reddit_hot_threads
- Behavior 2/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions the return format ('human readable string'), it doesn't address critical behavioral aspects like rate limits, authentication requirements, error conditions, or whether this is a read-only operation. The description is insufficient for a tool that interacts with an external API.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and well-structured with clear sections for Args and Returns. Each sentence serves a purpose, though the 'Returns' section could be slightly more informative about the content format beyond 'human readable string'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 3/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has an output schema (though not provided in the prompt), the description doesn't need to detail return values extensively. However, for an API interaction tool with no annotations and sibling tools, the description should provide more behavioral context about limitations, errors, and differentiation from alternatives to be truly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides essential semantic context for both parameters: 'subreddit' is explained as 'name of the subreddit' and 'limit' as 'number of posts to fetch' with a default value. This compensates well for the lack of schema descriptions, though it doesn't specify format constraints or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('fetch') and resource ('hot threads from a subreddit'), making the tool's purpose immediately understandable. However, it doesn't explicitly differentiate from its sibling 'fetch_reddit_post_content', which appears to fetch content of individual posts rather than lists of hot threads.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, including its sibling 'fetch_reddit_post_content'. There's no mention of prerequisites, limitations, or contextual factors that would help an agent decide between this tool and other options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_reddit_post_content
- Behavior 2/5: Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it states the tool fetches content and returns a human-readable string, it doesn't cover important behavioral aspects like rate limits, authentication requirements, error conditions, pagination, or whether this is a read-only operation. The description is minimal and leaves significant behavioral questions unanswered.
- Conciseness 5/5: Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely well-structured and concise. It begins with a clear purpose statement, then provides a clean 'Args:' section with parameter explanations, followed by a 'Returns:' section. Every sentence earns its place, and the information is front-loaded with the most important details first. No wasted words or redundancy.
- Completeness 3/5: Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that there's an output schema (though not shown), the description doesn't need to explain return values in detail. However, for a tool with 3 parameters, 0% schema description coverage, and no annotations, the description provides adequate but minimal coverage. It explains what the tool does and what parameters mean, but lacks behavioral context and usage guidance that would make it more complete.
- Parameters 4/5: Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds significant value beyond the input schema, which has 0% description coverage. It clearly explains what each parameter means: 'post_id: Reddit post ID', 'comment_limit: Number of top level comments to fetch', and 'comment_depth: Maximum depth of comment tree to traverse'. This provides essential semantic context that the bare schema lacks, though it doesn't specify format details or constraints.
- Purpose 4/5: Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch detailed content of a specific post' - a specific verb ('fetch') and resource ('detailed content of a specific post'). It distinguishes from the sibling tool 'fetch_reddit_hot_threads' by focusing on individual posts rather than hot threads. However, it doesn't explicitly contrast with the sibling beyond the inherent difference in scope.
- Usage Guidelines 2/5: Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention the sibling tool 'fetch_reddit_hot_threads' or explain when to fetch a specific post versus browsing hot threads. There are no usage prerequisites, exclusions, or contextual recommendations provided.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/adhikasp/mcp-reddit'
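The same endpoint can be queried from Python's standard library. This is a minimal sketch: the URL path mirrors the curl example above, but the shape of the JSON response is not documented here, so treat the returned dict's fields as unknown until inspected.

```python
# Minimal sketch of calling the MCP directory API. The endpoint comes
# from the curl example above; the response structure is an assumption
# and should be inspected rather than relied upon.
import json
import urllib.request

def server_endpoint(owner: str, repo: str) -> str:
    """Build the directory API URL for a given server repository."""
    return f"https://glama.ai/api/mcp/v1/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """Fetch and decode the server profile as JSON."""
    with urllib.request.urlopen(server_endpoint(owner, repo)) as resp:
        return json.load(resp)

# fetch_server("adhikasp", "mcp-reddit") returns the server profile as a dict.
```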
If you have feedback or need assistance with the MCP directory API, please join our Discord server.