qinyuanpei

Weibo MCP Server

get_feeds

Retrieve a Weibo user's timeline by their unique identifier (uid), returning a list of feeds up to a configurable limit.

Instructions

Get a Weibo user's feeds
    
Returns:
    list[dict]: List of dictionaries containing feeds

Input Schema

| Name  | Required | Description                               | Default |
|-------|----------|-------------------------------------------|---------|
| uid   | Yes      | The unique identifier of the Weibo user   | —       |
| limit | No       | Maximum number of feeds to return         | 15      |

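A hypothetical `tools/call` argument payload matching the schema above (the uid value is a made-up placeholder, not a real account):

```python
# Hypothetical MCP "tools/call" arguments for get_feeds;
# the uid below is an invented placeholder.
call = {
    "name": "get_feeds",
    "arguments": {
        "uid": 1234567890,   # required
        "limit": 10,         # optional, defaults to 15
    },
}
print(call["arguments"]["limit"])  # → 10
```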
Implementation Reference

  • MCP tool handler and registration for the get_feeds tool; delegates to the shared WeiboCrawler instance.
    from typing import Annotated

    from mcp.server.fastmcp import Context
    from pydantic import Field

    @mcp.tool()
    async def get_feeds(
        ctx: Context,
        uid: Annotated[int, Field(description="The unique identifier of the Weibo user")],
        limit: Annotated[int, Field(description="Maximum number of feeds to return, defaults to 15", default=15)] = 15,
        ) -> list[dict]:
        """
        Get a Weibo user's feeds

        Returns:
            list[dict]: List of dictionaries containing feeds
        """
        # The crawler expects a string uid; the FeedItem models it returns
        # are serialized to dicts in the tool response.
        return await crawler.get_feeds(str(uid), limit)
  • Core implementation of get_feeds in WeiboCrawler: fetches paginated user feeds over HTTP and parses the results.
    import httpx

    async def get_feeds(self, uid: str, limit: int = 15) -> list[FeedItem]:
        """
        Extract a user's Weibo feeds (posts) with pagination support.

        Args:
            uid (str): The unique identifier of the Weibo user
            limit (int): Maximum number of feeds to extract, defaults to 15

        Returns:
            list[FeedItem]: List of the user's Weibo feeds
        """
        feeds = []
        sinceId = ''
        async with httpx.AsyncClient() as client:
            containerId = await self._get_container_id(client, uid)

            while len(feeds) < limit:
                pagedFeeds = await self._extract_feeds(client, uid, containerId, sinceId)
                if not pagedFeeds.Feeds:
                    break

                feeds.extend(pagedFeeds.Feeds)
                sinceId = pagedFeeds.SinceId
                # Stop when the API returns no further cursor.
                if not sinceId:
                    break

        # The last page may overshoot the limit, so trim the result.
        return feeds[:limit]
  • Pydantic model defining the structure of individual feed items returned by get_feeds.
    from typing import Union

    from pydantic import BaseModel

    class FeedItem(BaseModel):
        """
        Data model for a single Weibo feed item.

        Attributes:
            id (int): Unique identifier for the feed item
            text (str): Content of the feed item
            source (str): Source of the feed (e.g., app or web)
            created_at (str): Timestamp when the feed was created
            user (Union[dict, UserProfile]): User information associated with the feed
            comments_count (int): Number of comments on the feed
            attitudes_count (int): Number of likes on the feed
            reposts_count (int): Number of reposts of the feed
            raw_text (str): Raw text content of the feed
            region_name (str): Region information
            pics (list[dict]): List of pictures in the feed
            videos (dict): Video information in the feed
        """
        id: int
        text: str
        source: str
        created_at: str
        user: Union[dict, UserProfile]
        comments_count: int
        attitudes_count: int
        reposts_count: int
        raw_text: str
        region_name: str
        pics: list[dict]
        videos: dict
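The since_id cursor loop in WeiboCrawler.get_feeds can be reduced to a minimal, synchronous sketch with a stubbed page fetcher standing in for the HTTP call (the pages and cursor values here are invented):

```python
def fetch_page(since_id):
    # Hypothetical stub: three pages of two feeds each, linked by cursors.
    pages = {
        "": (["f1", "f2"], "p2"),
        "p2": (["f3", "f4"], "p3"),
        "p3": (["f5", "f6"], ""),
    }
    return pages[since_id]

def get_feeds(limit=15):
    feeds, since_id = [], ""
    while len(feeds) < limit:
        items, since_id = fetch_page(since_id)
        if not items:
            break
        feeds.extend(items)
        if not since_id:   # no further cursor: last page reached
            break
    return feeds[:limit]   # trim any overshoot from the final page

print(get_feeds(limit=3))  # → ['f1', 'f2', 'f3']
```

Each response carries the cursor for the next page, so the loop needs no page counter; it ends when either the limit is reached or the cursor runs out.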

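A self-contained sketch of validating a raw API payload against the FeedItem model; UserProfile is reduced to a hypothetical two-field stand-in and the sample values are invented:

```python
from typing import Union

from pydantic import BaseModel

class UserProfile(BaseModel):
    # Hypothetical minimal stand-in for the real UserProfile model.
    id: int
    screen_name: str

class FeedItem(BaseModel):
    id: int
    text: str
    source: str
    created_at: str
    user: Union[dict, UserProfile]
    comments_count: int
    attitudes_count: int
    reposts_count: int
    raw_text: str
    region_name: str
    pics: list[dict]
    videos: dict

# Invented sample payload shaped like a parsed Weibo API response.
sample = {
    "id": 1,
    "text": "Hello Weibo",
    "source": "web",
    "created_at": "Mon Jan 01 00:00:00 +0800 2024",
    "user": {"id": 42, "screen_name": "demo"},
    "comments_count": 0,
    "attitudes_count": 0,
    "reposts_count": 0,
    "raw_text": "Hello Weibo",
    "region_name": "",
    "pics": [],
    "videos": {},
}
feed = FeedItem.model_validate(sample)
print(feed.text)  # → Hello Weibo
```

Validating at the model boundary means malformed API responses fail loudly with a ValidationError instead of propagating bad dicts to the MCP client.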
MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/qinyuanpei/mcp-server-weibo'
