
get_comments

Retrieve comments from a specific Weibo post using its unique identifier. Supports pagination to access all available comments.

Instructions

Get comments for a specific Weibo post. Returns a list of dictionaries (list[dict]), one per comment.

Input Schema

| Name    | Required | Description                              | Default |
| ------- | -------- | ---------------------------------------- | ------- |
| feed_id | Yes      | The unique identifier of the Weibo post  | —       |
| page    | No       | Page number for pagination               | 1       |
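
For example, an MCP client could call this tool as sketched below. This is a minimal sketch, assuming the server can be launched over stdio with `uvx mcp-server-weibo`; the launch command and the feed_id value are illustrative assumptions, not confirmed by this page.

    # Minimal sketch of invoking get_comments from a Python MCP client.
    # Assumption: the server starts via "uvx mcp-server-weibo"; feed_id is illustrative.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        params = StdioServerParameters(command="uvx", args=["mcp-server-weibo"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "get_comments",
                    {"feed_id": 5123456789012345, "page": 1},  # illustrative values
                )
                print(result.content)

    asyncio.run(main())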

Implementation Reference

  • MCP tool handler and registration for 'get_comments'. Defines input schema via Annotated Fields and delegates execution to WeiboCrawler.get_comments.
    @mcp.tool()
    async def get_comments(
        ctx: Context,
        feed_id: Annotated[int, Field(description="The unique identifier of the Weibo post")],
        page: Annotated[int, Field(description="Page number for pagination, defaults to 1", default=1)] = 1,
    ) -> list[dict]:
        """
        Get comments for a specific Weibo post.

        Returns:
            list[dict]: List of dictionaries containing comments
        """
        return await crawler.get_comments(feed_id, page)
  • Core implementation of the get_comments functionality in WeiboCrawler. Performs an HTTP GET against the Weibo comments API, parses the JSON response, and converts each comment using a helper (a usage sketch follows this list).
    async def get_comments(self, feed_id: str, page: int = 1) -> list[CommentItem]:
        """
        Get comments for a specific Weibo post.

        Args:
            feed_id (str): The ID of the Weibo post
            page (int): The page number for pagination, defaults to 1

        Returns:
            list[CommentItem]: List of comments for the specified Weibo post
        """
        try:
            async with httpx.AsyncClient() as client:
                url = COMMENTS_URL.format(feed_id=feed_id, page=page)
                response = await client.get(url, headers=DEFAULT_HEADERS)
                data = response.json()
                comments = data.get('data', {}).get('data', [])
                return [self._to_comment_item(comment) for comment in comments]
        except httpx.HTTPError:
            self.logger.error(f"Unable to fetch comments for feed_id '{feed_id}'", exc_info=True)
            return []
  • Pydantic BaseModel defining the structure of individual comment items returned by the tool.
    class CommentItem(BaseModel):
        """
        Data model for a single comment on a Weibo post.

        Attributes:
            id (int): Unique identifier for the comment
            text (str): Content of the comment
            created_at (str): Timestamp when the comment was created
            source (str): Source client of the comment
            user (UserProfile): User information associated with the comment
            reply_id (int | None): ID of the comment this comment replies to, if any
            reply_text (str): Text of the comment being replied to, if any
        """
        id: int = Field()
        text: str = Field()
        created_at: str = Field()
        source: str = Field()
        user: UserProfile = Field()
        reply_id: Union[int, None] = Field(default=None)
        reply_text: str = Field(default="")
  • Supporting utility function that transforms a raw dictionary from the Weibo API into a structured CommentItem model.
    def _to_comment_item(self, item: dict) -> CommentItem:
        """
        Convert raw comment data to CommentItem object.

        Args:
            item (dict): Raw comment data from Weibo API

        Returns:
            CommentItem: Formatted comment information
        """
        return CommentItem(
            id=item.get('id'),
            text=item.get('text'),
            created_at=item.get('created_at'),
            user=self._to_user_profile(item.get('user', {})),
            source=item.get('source', ''),
            reply_id=item.get('reply_id', None),
            reply_text=item.get('reply_text', ''),
        )
  • Constant URL template used to construct the API endpoint for retrieving post comments.
    # URL template for fetching comments of a specific Weibo post
    # {feed_id}: The ID of the Weibo post
    # {page}: The page number for pagination
    COMMENTS_URL = 'https://m.weibo.cn/api/comments/show?id={feed_id}&page={page}'
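
To make the data flow above concrete, the sketch below formats the URL template and walks a mocked payload through the same nested data.data extraction that get_comments performs. The payload is invented for illustration and only contains the fields _to_comment_item reads; it is not a real Weibo response, and the feed_id is illustrative.

    # The constant as defined above, repeated here so the sketch runs standalone.
    COMMENTS_URL = 'https://m.weibo.cn/api/comments/show?id={feed_id}&page={page}'

    # The same URL construction used by get_comments (feed_id is illustrative).
    url = COMMENTS_URL.format(feed_id=1234567890123456, page=1)
    # -> https://m.weibo.cn/api/comments/show?id=1234567890123456&page=1

    # A mocked payload shaped like the JSON get_comments expects
    # (invented for illustration; not a real Weibo API response).
    mock_response = {
        "data": {
            "data": [
                {
                    "id": 1234567890,
                    "text": "示例评论",
                    "created_at": "Sat Apr 05 12:00:00 +0800 2025",
                    "source": "微博网页版",
                    "user": {},
                    "reply_id": None,
                    "reply_text": "",
                },
            ]
        }
    }

    # The same nested extraction get_comments applies to the parsed JSON;
    # each item in this list would then be passed to _to_comment_item.
    comments = mock_response.get("data", {}).get("data", [])
    print(len(comments))  # 1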

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/qinyuanpei/mcp-server-weibo'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.