
X (Twitter) MCP server

by rafaljanicki

get_highlights_tweets

Retrieve highlighted tweets from a specific user's timeline using their user ID. Customize results by setting the count and cursor for targeted data extraction.

Instructions

Retrieves highlighted tweets from a user's timeline (simulated)

Input Schema

Name      Required   Description   Default
count     No
cursor    No
user_id   Yes

Output Schema

Name      Required   Description   Default
result    Yes

Implementation Reference

  • Registers the 'get_highlights_tweets' tool with FastMCP server using the @server.tool decorator.
    @server.tool(name="get_highlights_tweets", description="Retrieves highlighted tweets from a user's timeline (simulated)")
  • The handler function that implements the tool logic by initializing the Twitter client and fetching the user's recent tweets using `get_users_tweets` as a proxy for 'highlights', since no direct endpoint exists.
    from typing import Dict, List, Optional

    async def get_highlights_tweets(user_id: str, count: Optional[int] = 100, cursor: Optional[str] = None) -> List[Dict]:
        """Fetches highlighted tweets from a user's timeline. (Simulated using the user's timeline, as Twitter API v2 doesn't have a direct 'highlights' endpoint.)

        Args:
            user_id (str): The ID of the user whose highlights are to be fetched.
            count (Optional[int]): Number of tweets to retrieve. Default 100. Min 5, max 100 for get_users_tweets.
            cursor (Optional[str]): Pagination token for fetching the next set of results.
        """
        client, _ = initialize_twitter_clients()
        # Twitter API v2 doesn't have a highlights endpoint; use the user timeline instead
        tweets = client.get_users_tweets(id=user_id, max_results=count, pagination_token=cursor, tweet_fields=["id", "text", "created_at"])
        # tweets.data is None when the user has no tweets; guard before iterating
        return [tweet.data for tweet in (tweets.data or [])]
  • Function signature defining input parameters (user_id: str, count: Optional[int], cursor: Optional[str]) and output type (List[Dict]) for schema inference.
    async def get_highlights_tweets(user_id: str, count: Optional[int] = 100, cursor: Optional[str] = None) -> List[Dict]:
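Since the handler exposes a `cursor` parameter but the description never explains pagination, callers must infer the paging loop themselves. Below is a minimal sketch of cursor-driven paging; the stub `fake_get_users_tweets` and its `next_token` field are illustrative assumptions standing in for the live Tweepy call, not part of the server:

```python
from typing import Dict, List, Optional

# Stub standing in for a get_users_tweets-style call: each page carries
# a "next_token" cursor, mimicking Twitter API v2 pagination. The data
# and token names here are illustrative, not real API responses.
PAGES = {
    None: {"data": [{"id": "1", "text": "first"}], "next_token": "p2"},
    "p2": {"data": [{"id": "2", "text": "second"}], "next_token": None},
}

def fake_get_users_tweets(user_id: str, pagination_token: Optional[str] = None) -> Dict:
    return PAGES[pagination_token]

def fetch_all(user_id: str) -> List[Dict]:
    """Follow the pagination cursor until the API stops returning one."""
    tweets: List[Dict] = []
    cursor: Optional[str] = None
    while True:
        page = fake_get_users_tweets(user_id, pagination_token=cursor)
        tweets.extend(page["data"])
        cursor = page.get("next_token")
        if cursor is None:
            return tweets
```

An agent calling the real tool would do the same: pass the cursor returned by one call as the `cursor` argument of the next until no token comes back.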
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It only states the retrieval action and simulation aspect, lacking critical information about permissions, rate limits, pagination behavior (implied by cursor parameter), or what 'highlighted' means operationally. This is inadequate for a tool with multiple parameters and no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
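One low-cost remedy for the missing annotation coverage noted above: the MCP specification defines behavioral hint fields (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that could disclose this tool's nature without lengthening the description. A sketch of what those hints might look like for this tool, written as a plain dict; the values are assumptions about the tool's behavior, not declarations the server actually makes:

```python
# Hypothetical annotations for get_highlights_tweets, using the hint
# fields defined by the MCP specification. Values are assumptions:
# the tool only reads data, and it talks to an external API (X/Twitter).
annotations = {
    "readOnlyHint": True,      # fetches tweets; modifies nothing
    "destructiveHint": False,  # no deletions or writes
    "idempotentHint": True,    # repeating the call has no extra effect server-side
    "openWorldHint": True,     # interacts with an external, open-world API
}
```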

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point without unnecessary words. However, it's arguably too concise given the lack of parameter and behavioral information needed for this tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 3 parameters (one required), 0% schema coverage, no annotations, but with an output schema, the description is insufficient. It doesn't explain the simulation aspect, parameter meanings, or behavioral constraints. The output schema helps with return values, but the description should provide more operational context given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate, yet it adds no parameter information. It doesn't explain what user_id refers to, what count controls, or how cursor drives pagination. The description provides no semantic context beyond what can be inferred from the parameter names alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
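The handler's docstring already states the constraint the description omits (min 5, max 100 for get_users_tweets). If the server wanted to enforce that range rather than let the API reject out-of-range values, a small guard would do; `clamp_count` below is a hypothetical helper, not part of the server:

```python
def clamp_count(count: int, low: int = 5, high: int = 100) -> int:
    """Clamp a requested tweet count into the range get_users_tweets
    accepts (5..100, per the handler's docstring)."""
    return max(low, min(high, count))
```

Documenting the same bounds in the description would let agents pick valid values on the first attempt instead of discovering them via API errors.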

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Retrieves') and resource ('highlighted tweets from a user's timeline'), distinguishing it from siblings like get_timeline or get_user_mentions. However, it doesn't specify what makes tweets 'highlighted' or how this differs from other tweet-fetching tools beyond the simulated aspect.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like get_timeline or get_user_mentions. The description mentions 'simulated' but doesn't explain what that means for usage decisions, leaving the agent with no context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
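As a concrete illustration of the guidance above, a revised description could name the simulation, the read-only behavior, the count bounds, and the sibling tools in one short paragraph. The wording below is a suggestion only, not the server's actual text:

```python
# Suggested replacement description (illustrative wording only).
DESCRIPTION = (
    "Retrieves recent tweets from a user's timeline as a stand-in for "
    "'highlights' (Twitter API v2 has no highlights endpoint). Read-only; "
    "requires API credentials and counts against rate limits. count must "
    "be 5-100; pass the returned cursor to fetch the next page. Prefer "
    "get_timeline for the full timeline or get_user_mentions for mentions."
)
```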
