# get_hot_feeds
Retrieve trending posts from a Weibo user’s feed by specifying their unique ID. Customize results by setting the maximum number of posts to fetch. Ideal for monitoring popular user content on Weibo.
## Instructions

Get a Weibo user's hot feeds

Returns:
- `list[dict]`: List of dictionaries containing hot feeds
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of feeds to return | 15 |
| uid | Yes | The unique identifier of the Weibo user | |
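For illustration, the arguments payload a client would send when invoking this tool could look like the following (the `uid` value is a made-up placeholder, not a real account):

```python
# Example arguments for the get_hot_feeds tool; the uid is a placeholder.
arguments = {
    "uid": 1234567890,  # required: the Weibo user's unique identifier
    "limit": 10,        # optional: defaults to 15 when omitted
}
```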
## Implementation Reference
- **src/mcp_server_weibo/server.py:56-68 (handler)** — MCP tool handler implementation for `get_hot_feeds`. This is the entry point for the tool: it is decorated with `@mcp.tool()`, defines the input schema via `Annotated` `Field`s, and delegates to the `WeiboCrawler` instance.

  ```python
  @mcp.tool()
  async def get_hot_feeds(
      ctx: Context,
      uid: Annotated[int, Field(description="The unique identifier of the Weibo user")],
      limit: Annotated[int, Field(description="Maximum number of feeds to return, defaults to 15", default=15)] = 15,
  ) -> list[dict]:
      """
      Get a Weibo user's hot feeds

      Returns:
          list[dict]: List of dictionaries containing hot feeds
      """
      return await crawler.get_hot_feeds(uid, limit)
  ```
- **src/mcp_server_weibo/weibo.py:66-93 (helper)** — Core logic for fetching hot feeds from the Weibo API. Performs an HTTP request to Weibo's search endpoint with the hot-mblog `containerid`, parses the response, filters cards by type, and converts them to `FeedItem` objects via a helper method.

  ```python
  async def get_hot_feeds(self, uid: int, limit: int = 15) -> list[FeedItem]:
      """
      Extract hot feeds (posts) from a specific user's Weibo profile.

      Args:
          uid (int): The unique identifier of the Weibo user
          limit (int): Maximum number of hot feeds to extract, defaults to 15

      Returns:
          list[FeedItem]: List of hot feeds from the user's profile
      """
      async with httpx.AsyncClient() as client:
          try:
              params = {
                  'containerid': f'231002{str(uid)}_-_HOTMBLOG',
                  'type': 'uid',
                  'value': uid,
              }
              encoded_params = urlencode(params)
              response = await client.get(f'{SEARCH_URL}?{encoded_params}', headers=DEFAULT_HEADERS)
              result = response.json()
              cards = list(filter(lambda x: x['card_type'] == 9, result["data"]["cards"]))
              feeds = [self._to_feed_item(item['mblog']) for item in cards]
              return feeds[:limit]
          except httpx.HTTPError:
              self.logger.error(f"Unable to extract hot feeds for uid '{str(uid)}'", exc_info=True)
              return []
  ```
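As a standalone illustration of the request URL the helper builds, the query-string construction can be reproduced with the standard library alone. The `build_hot_feeds_url` function and the `example.invalid` base URL below are hypothetical; in the real code the base comes from the `SEARCH_URL` constant defined elsewhere in `weibo.py`:

```python
from urllib.parse import urlencode

def build_hot_feeds_url(uid: int, search_url: str) -> str:
    """Sketch of the query-string construction used by get_hot_feeds."""
    params = {
        # 231002<uid>_-_HOTMBLOG is the containerid Weibo uses for a user's hot mblogs
        'containerid': f'231002{uid}_-_HOTMBLOG',
        'type': 'uid',
        'value': uid,
    }
    return f'{search_url}?{urlencode(params)}'

url = build_hot_feeds_url(123456, 'https://example.invalid/search')
print(url)
# → https://example.invalid/search?containerid=231002123456_-_HOTMBLOG&type=uid&value=123456
```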
- **FeedItem model** — Pydantic model used in the return type of `get_hot_feeds`. Defines the structure and validation for individual feed items returned by the tool.

  ```python
  class FeedItem(BaseModel):
      """
      Data model for a single Weibo feed item.

      Attributes:
          id (int): Unique identifier for the feed item
          text (str): Content of the feed item
          source (str): Source of the feed (e.g., app or web)
          created_at (str): Timestamp when the feed was created
          user (Union[dict, UserProfile]): User information associated with the feed
          comments_count (int): Number of comments on the feed
          attitudes_count (int): Number of likes on the feed
          reposts_count (int): Number of reposts of the feed
          raw_text (str): Raw text content of the feed
          region_name (str): Region information
          pics (list[dict]): List of pictures in the feed
          videos (dict): Video information in the feed
      """
      id: int = Field()
      text: str = Field()
      source: str = Field()
      created_at: str = Field()
      user: Union[dict, UserProfile] = Field()
      comments_count: int = Field()
      attitudes_count: int = Field()
      reposts_count: int = Field()
      raw_text: str = Field()
      region_name: str = Field()
      pics: list[dict] = Field()
      videos: dict = Field()
  ```
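The `_to_feed_item` helper itself is not shown in this reference. The sketch below is hypothetical: it only illustrates how fields of a raw `mblog` payload line up with the `FeedItem` attributes above, using a plain dict as a stand-in for the Pydantic model, and the sample payload is invented:

```python
def mblog_to_feed_dict(mblog: dict) -> dict:
    """Hypothetical mapping of a raw mblog payload onto the FeedItem field names."""
    return {
        'id': int(mblog['id']),
        'text': mblog.get('text', ''),
        'source': mblog.get('source', ''),
        'created_at': mblog.get('created_at', ''),
        'user': mblog.get('user', {}),
        'comments_count': mblog.get('comments_count', 0),
        'attitudes_count': mblog.get('attitudes_count', 0),
        'reposts_count': mblog.get('reposts_count', 0),
        'raw_text': mblog.get('raw_text', ''),
        'region_name': mblog.get('region_name', ''),
        'pics': mblog.get('pics', []),
        'videos': mblog.get('videos', {}),
    }

# Invented sample payload: Weibo returns ids as strings, counts may be absent.
sample = {'id': '4901234567890123', 'text': 'hello', 'attitudes_count': 42}
feed = mblog_to_feed_dict(sample)
```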
- **src/mcp_server_weibo/server.py:9-12 (registration)** — Initialization of the FastMCP server instance and the `WeiboCrawler`, which is used by all tools including `get_hot_feeds`.

  ```python
  mcp = FastMCP("Weibo")

  # Create an instance of WeiboCrawler for handling Weibo API operations
  crawler = WeiboCrawler()
  ```