Glama

get_hot_feeds

Retrieve trending content from a Weibo user's feed by providing their unique identifier. Specify the number of posts to return for monitoring popular discussions.

Instructions

Get a Weibo user's hot feeds.

Returns:
    list[dict]: List of dictionaries containing hot feeds

Input Schema

| Name | Required | Description | Default |
| ----- | -------- | ----------------------------------------- | ------- |
| uid   | Yes      | The unique identifier of the Weibo user   | (none)  |
| limit | No       | Maximum number of feeds to return         | 15      |
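Per the schema above, an MCP client invokes this tool with a JSON-RPC `tools/call` request. A sketch of the request body a client would send (the `uid` value here is a placeholder, not a real account):

```python
# Sketch of an MCP tools/call request body for get_hot_feeds.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_hot_feeds",
        "arguments": {
            "uid": 1234567890,  # required: the Weibo user's unique identifier (placeholder)
            "limit": 10,        # optional: defaults to 15 when omitted
        },
    },
}
```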

Implementation Reference

  • MCP tool handler for `get_hot_feeds` that defines the tool schema via `Annotated` fields and delegates to the `WeiboCrawler` instance:

    ```python
    @mcp.tool()
    async def get_hot_feeds(
        ctx: Context,
        uid: Annotated[int, Field(description="The unique identifier of the Weibo user")],
        limit: Annotated[int, Field(description="Maximum number of feeds to return, defaults to 15", default=15)] = 15,
    ) -> list[dict]:
        """
        Get a Weibo user's hot feeds

        Returns:
            list[dict]: List of dictionaries containing hot feeds
        """
        return await crawler.get_hot_feeds(uid, limit)
    ```
  • Core logic implementing hot-feed retrieval from the Weibo API, using the dedicated `containerid` for hot mblogs:

    ```python
    async def get_hot_feeds(self, uid: int, limit: int = 15) -> list[FeedItem]:
        """
        Extract hot feeds (posts) from a specific user's Weibo profile.

        Args:
            uid (int): The unique identifier of the Weibo user
            limit (int): Maximum number of hot feeds to extract, defaults to 15

        Returns:
            list[FeedItem]: List of hot feeds from the user's profile
        """
        async with httpx.AsyncClient() as client:
            try:
                params = {
                    'containerid': f'231002{str(uid)}_-_HOTMBLOG',
                    'type': 'uid',
                    'value': uid,
                }
                encoded_params = urlencode(params)
                response = await client.get(f'{SEARCH_URL}?{encoded_params}', headers=DEFAULT_HEADERS)
                result = response.json()
                cards = list(filter(lambda x: x['card_type'] == 9, result["data"]["cards"]))
                feeds = [self._to_feed_item(item['mblog']) for item in cards]
                return feeds[:limit]
            except httpx.HTTPError:
                self.logger.error(f"Unable to extract hot feeds for uid '{str(uid)}'", exc_info=True)
                return []
    ```
  • Pydantic `BaseModel` defining the structure of each `FeedItem` returned by `get_hot_feeds`:

    ```python
    class FeedItem(BaseModel):
        """
        Data model for a single Weibo feed item.

        Attributes:
            id (int): Unique identifier for the feed item
            text (str): Content of the feed item
            source (str): Source of the feed (e.g., app or web)
            created_at (str): Timestamp when the feed was created
            user (Union[dict, UserProfile]): User information associated with the feed
            comments_count (int): Number of comments on the feed
            attitudes_count (int): Number of likes on the feed
            reposts_count (int): Number of reposts of the feed
            raw_text (str): Raw text content of the feed
            region_name (str): Region information
            pics (list[dict]): List of pictures in the feed
            videos (dict): Video information in the feed
        """
        id: int = Field()
        text: str = Field()
        source: str = Field()
        created_at: str = Field()
        user: Union[dict, UserProfile] = Field()
        comments_count: int = Field()
        attitudes_count: int = Field()
        reposts_count: int = Field()
        raw_text: str = Field()
        region_name: str = Field()
        pics: list[dict] = Field()
        videos: dict = Field()
    ```
  • The `@mcp.tool()` decorator on the handler above registers `get_hot_feeds` as an MCP tool.
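The crawler above keys its request on a special `containerid` of the form `231002<uid>_-_HOTMBLOG`, then keeps only cards with `card_type == 9`. A minimal sketch of those two steps using only the standard library (the `uid` and card data are placeholders):

```python
from urllib.parse import urlencode

uid = 1234567890  # placeholder uid
params = {
    'containerid': f'231002{uid}_-_HOTMBLOG',  # hot-mblog container for this user
    'type': 'uid',
    'value': uid,
}
query = urlencode(params)
# query -> 'containerid=2310021234567890_-_HOTMBLOG&type=uid&value=1234567890'

# Only card_type 9 entries carry an 'mblog' feed payload; others are dropped.
cards = [
    {'card_type': 9, 'mblog': {'id': 1}},  # a feed card
    {'card_type': 11},                     # non-feed card, filtered out
]
feed_cards = [c for c in cards if c['card_type'] == 9]
```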

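The tool handler serializes each `FeedItem` to a dict before returning it. As a rough illustration of the shape a caller receives, here is a trimmed stand-in built with a stdlib dataclass (the real model is the pydantic `FeedItem` above; all field values below are invented):

```python
from dataclasses import dataclass, asdict

@dataclass
class HotFeed:
    """Trimmed stand-in for FeedItem, for illustration only."""
    id: int
    text: str
    created_at: str
    comments_count: int
    attitudes_count: int
    reposts_count: int

feed = HotFeed(
    id=1,
    text="example post",
    created_at="Sat Jan 01 00:00:00 +0800 2022",
    comments_count=3,
    attitudes_count=42,
    reposts_count=7,
)
payload = asdict(feed)  # dict form, as a list[dict] entry returned by the tool
```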
MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/qinyuanpei/mcp-server-weibo'
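The same request can be prepared from Python with only the standard library; a sketch that builds (but does not send) the equivalent GET request, assuming no authentication headers are needed:

```python
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/qinyuanpei/mcp-server-weibo"
req = urllib.request.Request(url, method="GET")
# urllib.request.urlopen(req) would perform the call and return a JSON body.
```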

If you have feedback or need assistance with the MCP directory API, please join our Discord server.