
Superpowers.social

Server Details

Hosted MCP for X/Twitter and Reddit. 12 read-only tools, no API keys, free during beta.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pkobielak/social-superpowers-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
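Since the server is reached over Streamable HTTP, any MCP client drives it with plain JSON-RPC 2.0 messages. A minimal sketch of the first two requests in a session, assuming a hypothetical client name (the protocol version shown is the MCP revision that introduced Streamable HTTP):

```python
import json

def jsonrpc(method: str, params: dict, req_id: int) -> dict:
    """Wrap a method call in a JSON-RPC 2.0 envelope."""
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# 1. Open the session and advertise client capabilities.
init = jsonrpc("initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "example-client", "version": "0.1.0"},  # hypothetical
}, req_id=1)

# 2. Ask the server to enumerate its tools (this server exposes 12).
list_tools = jsonrpc("tools/list", {}, req_id=2)

print(json.dumps(init, indent=2))
print(json.dumps(list_tools, indent=2))
```

A gateway such as Glama sits between the client and the server and sees both payloads, which is what makes the call logging and analytics described above possible.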

100% free. Your data is private.
Tool Descriptions: B

Average 3.6/5 across 12 of 12 tools scored. Lowest: 2.9/5.

Server Coherence: A
Disambiguation: 5/5

Each tool clearly targets either Reddit or Twitter with specific actions (get post, search, user info, etc.). No two tools overlap in purpose; the platform prefixes eliminate any ambiguity.

Naming Consistency: 5/5

All tools follow a consistent pattern: a platform prefix (reddit- or twitter-) followed by a hyphenated verb-noun phrase (e.g., reddit-get-post, twitter-search). This makes naming predictable and easy to understand.

Tool Count: 5/5

With 12 tools covering two major platforms, the count is well within the ideal 3-15 range. Each tool serves a distinct purpose without being excessive or insufficient for the server's stated scope.

Completeness: 4/5

The tool set covers core read operations for both platforms (posts, user info, search, etc.). Minor gaps exist, such as missing Reddit trending or Twitter user info, but the surface is largely complete for a read-only social media aggregator.

Available Tools

12 tools
reddit-get-post (Get Reddit Post): B
Read-only, Idempotent

Get detailed content of a specific Reddit post with comments.

Parameters (JSON Schema)

Name | Required | Description | Default
post_id | Yes | Reddit post ID
subreddit | No | Subreddit name for faster lookup
text_only | No | If true, return only id and title per post to save tokens.
comment_limit | No | Number of comments to return (default 10)
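Concretely, a tools/call request for this tool might look like the following sketch; the post ID and subreddit are made-up values, and the argument names mirror the parameter table above:

```python
import json

# Hypothetical tools/call payload for reddit-get-post. The post_id and
# subreddit values are illustrative, not real content.
call = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "reddit-get-post",
        "arguments": {
            "post_id": "1abcde2",        # required
            "subreddit": "programming",  # optional: enables a faster lookup
            "comment_limit": 5,          # optional: server default is 10
            "text_only": False,          # optional: True trims output to save tokens
        },
    },
}
print(json.dumps(call, indent=2))
```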
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, openWorldHint, idempotentHint, destructiveHint; description adds 'with comments' but lacks details on error handling or comment limits beyond schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no unnecessary words, front-loaded with core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a tool with full schema coverage and safe annotations, but missing return format details (e.g., includes post body, comments) that could help an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers 100% of parameters, so baseline is 3; description adds no additional meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool retrieves detailed content of a specific Reddit post with comments, distinguishing it from sibling tools that handle multiple posts (reddit-get-posts).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives like reddit-get-posts or when to avoid it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reddit-get-posts (Get Subreddit Posts): B
Read-only, Idempotent

Get posts from a subreddit.

Parameters (JSON Schema)

Name | Required | Description | Default
sort | No | Sort: hot, new, top, rising | hot
count | No | Number of results to return (default 10)
subreddit | Yes | Subreddit name without r/ prefix
text_only | No | If true, return only id and title per post to save tokens.
time_filter | No | Time filter for top sort: hour, day, week, month, year, all | week
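Because the schema enumerates the valid sort and time_filter values, a client can validate arguments before making the round trip. A sketch with a hypothetical helper; per the schema note, time_filter only applies to the top sort:

```python
# Client-side validation sketch for reddit-get-posts arguments, based on
# the enumerations in the parameter table above (hypothetical helper).

VALID_SORTS = {"hot", "new", "top", "rising"}
VALID_TIME_FILTERS = {"hour", "day", "week", "month", "year", "all"}

def build_args(subreddit, sort="hot", count=10, time_filter="week", text_only=False):
    if sort not in VALID_SORTS:
        raise ValueError(f"sort must be one of {sorted(VALID_SORTS)}")
    args = {"subreddit": subreddit, "sort": sort, "count": count, "text_only": text_only}
    # time_filter only matters when sort == "top", so omit it otherwise.
    if sort == "top":
        if time_filter not in VALID_TIME_FILTERS:
            raise ValueError(f"time_filter must be one of {sorted(VALID_TIME_FILTERS)}")
        args["time_filter"] = time_filter
    return args

print(build_args("python", sort="top", time_filter="week"))
```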
Behavior: 3/5

Annotations already declare readOnly, openWorld, and idempotent; the description adds no behavioral context beyond the schema. No contradictions.

Conciseness: 4/5

Single sentence, no fluff. Could be slightly more informative, but appropriately concise given rich schema and annotations.

Completeness: 2/5

No output schema and minimal description; missing details on return format or any constraints beyond parameter descriptions.

Parameters: 3/5

Schema coverage is 100%, so each parameter has a description. The tool description does not add further meaning beyond what the schema provides.

Purpose: 5/5

The description clearly states the action ('Get posts') and the resource ('from a subreddit'), differentiating it from sibling tools like 'reddit-get-post' and 'reddit-search'.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives such as 'reddit-get-post' for a single post or 'reddit-search' for searching across subreddits.

reddit-get-subreddit-info (Subreddit Info): A
Read-only, Idempotent

Get information about a subreddit.

Parameters (JSON Schema)

Name | Required | Description | Default
subreddit | Yes | Subreddit name without r/ prefix
Behavior: 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, providing a safe behavioral profile. The description adds no further behavioral context (e.g., data freshness, rate limits, or what 'information' includes), but does not contradict annotations.

Conciseness: 5/5

The description is a single concise sentence with no extraneous words, effectively communicating the tool's purpose without redundancy.

Completeness: 3/5

Given the tool's simplicity (one parameter, no output schema, rich annotations), the description is minimally adequate. However, it does not specify what information is returned (e.g., subscribers, description), leaving some ambiguity for an agent. Still, it meets basic completeness for a trivial tool.

Parameters: 3/5

The input schema covers 100% of the single parameter with a clear description ('Subreddit name without r/ prefix'). The tool description does not add any additional meaning beyond what the schema already provides, resulting in a baseline score.

Purpose: 5/5

The description 'Get information about a subreddit' clearly states the action (get) and resource (subreddit info), effectively distinguishing it from sibling tools like reddit-get-post or reddit-get-user-info which target different entities.

Usage Guidelines: 2/5

The description offers no guidance on when to use this tool versus alternatives (e.g., reddit-get-posts for retrieving posts, or reddit-search for finding subreddits). An agent lacks context to decide which tool is appropriate for its query.

reddit-get-user-comments (Reddit User Comments): A
Read-only, Idempotent

Get comments submitted by a specific Reddit user.

Parameters (JSON Schema)

Name | Required | Description | Default
sort | No | Sort: hot, new, top, controversial | new
count | No | Number of results to return (default 10)
username | Yes | Reddit username without u/ prefix
time_filter | No | Time filter: hour, day, week, month, year, all | all
Behavior: 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, covering safety aspects. The description adds no additional behavioral context beyond the basics. Since annotations already handle the key traits, the description's contribution is minimal but consistent.

Conciseness: 5/5

A single, front-loaded sentence that conveys the essential purpose without any fluff. Every word earns its place. Highly concise.

Completeness: 4/5

Given the simplicity of the tool (get user comments), the lack of an output schema is acceptable. The combination of annotations and schema covers safety and parameters. The description is adequate for understanding the tool's role, though it could mention pagination or the format of returned comments.

Parameters: 3/5

Schema description coverage is 100%, with each parameter (username, sort, count, time_filter) described in the input schema. The description adds no additional meaning; it is already covered by the schema. The baseline of 3 is appropriate.

Purpose: 5/5

The description is clear: 'Get comments submitted by a specific Reddit user.' It specifies the action (get), resource (comments), and scope (specific user). This distinguishes it clearly from sibling tools like reddit-get-user-posts (posts) and reddit-search (general search).

Usage Guidelines: 3/5

Usage guidelines are implied but not explicit. The description states what the tool does but does not say when to use it or when not to, compared to siblings. No exclusions or alternatives are mentioned. Given the presence of sibling tools like reddit-get-user-posts, more guidance would help.

reddit-get-user-info (Reddit User Info): B
Read-only, Idempotent

Get information about a Reddit user.

Parameters (JSON Schema)

Name | Required | Description | Default
username | Yes | Reddit username without u/ prefix
Behavior: 3/5

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, making the tool's safety profile clear. The description adds no behavioral context beyond that, which is acceptable but not enriching. It does not mention error handling, rate limits, or output specifics.

Conciseness: 5/5

The description is a single, direct sentence with no unnecessary words. It is front-loaded and immediately clear, making it highly efficient for an agent to parse.

Completeness: 3/5

The tool has no output schema and only one parameter, so the description should compensate by hinting at what 'information' means (e.g., karma, creation date). It fails to do so, leaving the agent uncertain about what to expect. However, annotations provide basic safety context.

Parameters: 4/5

The input schema has 100% coverage, and the description for the only parameter ('username') adds value by specifying 'without u/ prefix,' guiding correct usage. This goes beyond the schema's basic type and required flag.

Purpose: 4/5

The description clearly states the purpose: getting information about a Reddit user. It specifies the resource (Reddit user) and action (get info), but 'information' is vague and does not distinguish what specific data is retrieved (e.g., profile, karma). Sibling tools like reddit-get-user-comments are clearly different, but more detail would improve clarity.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like reddit-get-user-posts or reddit-get-user-comments. There is no mention of prerequisites, contexts, or when not to use it.

reddit-get-user-posts (Reddit User Posts): A
Read-only, Idempotent

Get posts submitted by a specific Reddit user.

Parameters (JSON Schema)

Name | Required | Description | Default
sort | No | Sort: hot, new, top, controversial | new
count | No | Number of results to return (default 10)
username | Yes | Reddit username without u/ prefix
text_only | No | If true, return only id and title per post to save tokens.
time_filter | No | Time filter: hour, day, week, month, year, all | all
Behavior: 3/5

Annotations already declare readOnlyHint=true and destructiveHint=false, so the description's lack of additional behavioral context is acceptable. However, it adds no extra details such as rate limits or auth requirements beyond the annotations.

Conciseness: 5/5

The description is a single, front-loaded sentence with no extraneous words. Every part is meaningful and earns its place.

Completeness: 2/5

Despite having 5 parameters and no output schema, the description does not hint at the return format (e.g., list of posts with fields) or any pagination behavior. It is incomplete for a moderately complex tool.

Parameters: 3/5

The input schema has 100% description coverage for all 5 parameters, including details such as giving username without the 'u/' prefix. The tool description adds minimal value beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5

The description 'Get posts submitted by a specific Reddit user' uses a specific verb and resource. It clearly distinguishes the tool from siblings like 'reddit-get-user-comments', which retrieves comments.

Usage Guidelines: 3/5

The tool's name implies its usage, but the description provides no explicit guidance on when to use it versus alternatives like reddit-search or the other user tools. No when-not-to-use advice or comparison with siblings is given.

twitter-news (Twitter News): B
Read-only, Idempotent

Get trending news from X/Twitter.

Parameters (JSON Schema)

Name | Required | Description | Default
count | No | Number of news items to return (default 10)
category | No | Optional tab filter, one of: for-you, news, sports, entertainment, trending
Behavior: 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, providing a solid behavioral baseline. The description adds no extra context (e.g., pagination, rate limits, or result structure), but it does not contradict annotations. A score of 3 is appropriate, as the description neither harms nor significantly adds value.

Conciseness: 4/5

The description is a single, front-loaded sentence that conveys the core function without unnecessary words. It is concise, though it could slightly expand on the scope (e.g., 'trending' vs. 'latest') without losing conciseness.

Completeness: 3/5

With two optional parameters, no output schema, and detailed annotations, the description is minimally adequate. However, it lacks information about the return format or nuances like rate limiting. The presence of annotations partially compensates, but the description could be more complete.

Parameters: 3/5

The input schema has 100% coverage, with both 'count' and 'category' described adequately in the schema itself. The description adds no parameter-specific information. Baseline 3 is correct, since the schema already does the heavy lifting.

Purpose: 5/5

The description clearly states the verb 'Get' and the resource 'trending news from X/Twitter.' This distinctly separates it from sibling tools like twitter-search (general search) and twitter-user-tweets (user-specific), making the purpose unambiguous.

Usage Guidelines: 2/5

No usage guidelines are provided. The description does not mention when to use this tool versus alternatives like twitter-search or twitter-thread, nor does it specify any prerequisites or limitations. The agent must infer context from the name alone.

twitter-read (Read Tweet): A
Read-only, Idempotent

Read a single tweet by URL or ID.

Parameters (JSON Schema)

Name | Required | Description | Default
tweet | Yes | Tweet URL or tweet ID
text_only | No | If true, return only id and text to save tokens.
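Because the tweet parameter accepts either a full URL or a bare ID, a caller may want to normalize its input before the call. A hypothetical normalizer covering the common /status/<id> URL shape:

```python
import re

# The 'tweet' parameter accepts a URL or a bare numeric ID. This helper
# (an assumption, not part of the server) extracts the status ID from
# common tweet URL shapes, or passes a bare ID through unchanged.
def normalize_tweet(tweet: str) -> str:
    if tweet.isdigit():
        return tweet
    m = re.search(r"/status(?:es)?/(\d+)", tweet)
    if not m:
        raise ValueError(f"cannot extract tweet ID from {tweet!r}")
    return m.group(1)

assert normalize_tweet("1234567890") == "1234567890"
assert normalize_tweet("https://x.com/jack/status/20") == "20"
```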
Behavior: 3/5

Annotations already provide readOnlyHint, idempotentHint, and destructiveHint, covering safety. The description adds minimal behavioral context beyond reading by URL/ID. No contradiction with annotations.

Conciseness: 5/5

Single sentence, front-loaded, no redundant information. Efficient and clear.

Completeness: 5/5

The tool is simple, with 2 parameters, no output schema, and comprehensive annotations. The description fully covers purpose and input, sufficient for the tool's complexity.

Parameters: 3/5

Schema coverage is 100%, with clear descriptions for both parameters. The description does not add meaning beyond the schema, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states 'Read a single tweet by URL or ID', specifying a specific verb and resource. It distinguishes the tool from siblings like twitter-thread (multiple tweets) and twitter-user-tweets (a user's tweets).

Usage Guidelines: 4/5

The description implies the usage context for reading a single tweet, but lacks explicit when-to-use guidance versus alternatives or exclusions. Sibling tool names provide indirect guidance.

twitter-thread (Read Thread): A
Read-only, Idempotent

Read an entire tweet thread/conversation.

Parameters (JSON Schema)

Name | Required | Description | Default
tweet | Yes | Tweet URL or ID of any tweet in the thread
cursor | No | Pagination cursor from a previous response
text_only | No | If true, return only id and text per tweet to save tokens.
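The cursor parameter suggests the standard drain-the-pages loop. In this sketch, call_tool is a stand-in for a real MCP tools/call round trip, faked with two pages so the pagination pattern itself runs:

```python
# Sketch of draining a long thread via the cursor parameter. call_tool is
# a hypothetical stand-in for an MCP tools/call round trip; here it fakes
# two pages of results so the loop can be exercised locally.
def call_tool(name, arguments):
    if arguments.get("cursor") is None:
        return {"tweets": [{"id": "1"}, {"id": "2"}], "cursor": "page2"}
    return {"tweets": [{"id": "3"}], "cursor": None}

def read_thread(tweet, text_only=True):
    tweets, cursor = [], None
    while True:
        page = call_tool("twitter-thread",
                         {"tweet": tweet, "cursor": cursor, "text_only": text_only})
        tweets.extend(page["tweets"])
        cursor = page.get("cursor")
        if not cursor:  # no more pages to fetch
            return tweets

assert [t["id"] for t in read_thread("20")] == ["1", "2", "3"]
```

The same pattern applies to twitter-user-tweets, which exposes an identically described cursor parameter.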
Behavior: 3/5

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true. The description adds that it retrieves the 'entire thread,' which is useful but minimal. No additional behavioral context (e.g., pagination, rate limits) is given beyond schema hints.

Conciseness: 5/5

The description is a single, front-loaded sentence that conveys the tool's purpose efficiently. No wasted words, and the structure is clear and direct.

Completeness: 4/5

Given the presence of annotations and full schema coverage, the description covers the core functionality. However, it omits mention of pagination (the cursor parameter) and the return structure, which are not documented elsewhere (no output schema). Still, it is reasonably complete for a simple read tool.

Parameters: 3/5

The input schema covers all 3 parameters (tweet, cursor, text_only) with 100% description coverage. The description does not add further meaning beyond what the schema provides, so the baseline score of 3 is appropriate.

Purpose: 5/5

The description clearly states 'Read an entire tweet thread/conversation,' specifying the verb 'Read' and the resource 'entire tweet thread/conversation.' This distinguishes it from siblings like 'twitter-read' (single tweet) and 'twitter-search' (search), providing high purpose clarity.

Usage Guidelines: 3/5

The description implies usage for reading threads, but lacks explicit guidance on when not to use it or what alternatives exist. While the name and sibling context help, no direct 'when-to-use' or 'when-not-to-use' statements are present, leaving some ambiguity.

twitter-user-tweets (Get User Tweets): A
Read-only, Idempotent

Get recent tweets from a specific X/Twitter user.

Parameters (JSON Schema)

Name | Required | Description | Default
count | No | Number of tweets to return (default 20)
cursor | No | Pagination cursor from a previous response
handle | Yes | Twitter handle (without @)
text_only | No | If true, return only id and text per tweet to save tokens.
Behavior: 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, destructiveHint=false, indicating a safe, read-only operation. The description adds no additional behavioral context (e.g., rate limits, token usage). No contradiction.

Conciseness: 5/5

The description is a single, clear sentence with no unnecessary information. It is front-loaded and concise.

Completeness: 3/5

Given the absence of an output schema, the description does not specify the returned fields (e.g., tweet objects with id, text, etc.). While annotations provide some context, a more complete description would mention the default returned data structure.

Parameters: 3/5

The input schema has 100% coverage for parameter descriptions (handle, count, cursor, text_only). The description itself does not add extra meaning beyond what the schema already provides.

Purpose: 5/5

The description clearly states 'Get recent tweets from a specific X/Twitter user', specifying the action (get), resource (tweets), and scope (specific user). This distinguishes it from sibling tools like twitter-search (searching) and twitter-thread (reading a thread).

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives (e.g., twitter-search, twitter-thread). It does not mention prerequisites or context for usage, leaving the agent to infer.
