fetch_posts

Retrieve Twitter posts matching a search query, using AI to evaluate content quality and engagement metrics until sufficient roast material is found.

Instructions

Fetch posts from Twitter for a given query, looping until quality threshold is met. Uses AI to evaluate batch quality and stops early when sufficient roast content is found. Returns posts with engagement metrics.
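The loop described here can be sketched as follows. This is a minimal illustration of the assumed behavior, not the server's actual implementation; `fetch_batch` and `evaluate_quality` are hypothetical stand-ins for the Twitter fetch and the AI quality check:

```python
def fetch_batch(query, count):
    # Hypothetical stand-in for the real Twitter fetch;
    # returns fake posts carrying engagement metrics.
    return [{"text": f"{query} post {i}", "likes": i} for i in range(count)]

def evaluate_quality(posts, target=None):
    # Hypothetical stand-in for the AI batch-quality check.
    return len(posts) >= 20

def fetch_posts(query, loop_limit=5, count=10, target=None):
    # Fetch in batches, stopping early once the quality check passes.
    posts = []
    for _ in range(loop_limit):
        posts.extend(fetch_batch(query, count))
        if evaluate_quality(posts, target):  # early exit on sufficient quality
            break
    return posts
```

With these stubs, `fetch_posts("roast")` stops after two of its five allowed iterations.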

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Single search query to find posts | |
| loop_limit | No | Max fetch iterations (max: 10) | 5 |
| count | No | Posts per fetch (max: 100) | 10 |
| target | No | Target name for quality evaluation; optional, improves quality checking | |
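A caller could validate arguments against the bounds in this schema before invoking the tool. The sketch below assumes the standard MCP `tools/call` request shape; `build_call` is a hypothetical helper, not part of the server:

```python
def build_call(query, loop_limit=5, count=10, target=None):
    # Enforce the documented bounds (loop_limit <= 10, count <= 100)
    # before constructing the request.
    if not 1 <= loop_limit <= 10:
        raise ValueError("loop_limit must be between 1 and 10")
    if not 1 <= count <= 100:
        raise ValueError("count must be between 1 and 100")
    arguments = {"query": query, "loop_limit": loop_limit, "count": count}
    if target is not None:
        arguments["target"] = target  # optional; improves quality checking
    return {"method": "tools/call",
            "params": {"name": "fetch_posts", "arguments": arguments}}
```

For example, `build_call("bad takes", count=50)` produces a request whose arguments omit `target` entirely.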
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: the tool loops until a quality threshold is met, uses AI for batch quality evaluation, stops early when sufficient content is found, and returns posts with engagement metrics. This covers iterative fetching, quality assessment, and output format, though it lacks details on rate limits and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose and efficiently details the iterative process, AI evaluation, and return values in three concise sentences. Every sentence earns its place by adding critical behavioral context without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (iterative fetching with AI evaluation), no annotations, and no output schema, the description is largely complete: it explains the purpose, behavior, and output. However, it could improve by detailing error cases or the exact format of 'engagement metrics', leaving minor gaps for a tool with such dynamic behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so every parameter is already documented thoroughly in the schema. The tool description adds nothing beyond that: for example, it does not explain how 'target' feeds into quality evaluation, or how 'loop_limit' and 'count' interact. A baseline score of 3 is appropriate, since the schema does the heavy lifting and the description contributes no extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('fetch posts from Twitter'), identifies the resource ('posts'), and distinguishes from siblings by mentioning AI evaluation for quality threshold and roast content, which neither 'generate_search_query' nor 'rank_posts' imply. It goes beyond a basic fetch operation with its iterative quality-checking behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: fetching Twitter posts with iterative quality evaluation until a threshold is met, specifically for finding 'roast content.' However, it does not state when not to use it, nor does it name alternatives such as 'rank_posts' for post-processing or 'generate_search_query' for query creation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Ebop14/slander_mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.