search_content

Search for Weibo content using keywords to find relevant posts and information. Returns results as a list of dictionaries for easy data processing.

Instructions

Search for content on Weibo based on a keyword. Returns a list of dictionaries (list[dict]) containing the search results.

Input Schema

Name     Required  Description                                           Default
keyword  Yes       Search term to find content                           —
limit    No        Maximum number of results to return, defaults to 15   15
page     No        Page number for pagination, defaults to 1             1
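
For example, the tool can be called from any MCP client. The sketch below uses the MCP Python SDK over a stdio transport; the launch command ("uvx mcp-server-weibo") and the search keyword are assumptions, so substitute whatever you use to start this server locally.

    # Minimal sketch: calling search_content via the MCP Python SDK over stdio.
    # The launch command and the keyword are illustrative assumptions.
    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    server_params = StdioServerParameters(command="uvx", args=["mcp-server-weibo"])

    async def main() -> None:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "search_content",
                    {"keyword": "人工智能", "limit": 5, "page": 1},
                )
                # result.content holds the serialized list of search results.
                for item in result.content:
                    print(item)

    asyncio.run(main())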

Implementation Reference

  • FastMCP tool handler function for 'search_content'. Defines the input schema via Annotated Fields and delegates execution to WeiboCrawler.search_content.

    @mcp.tool()
    async def search_content(
        ctx: Context,
        keyword: Annotated[str, Field(description="Search term to find content")],
        limit: Annotated[int, Field(description="Maximum number of results to return, defaults to 15", default=15)] = 15,
        page: Annotated[int, Field(description="Page number for pagination, defaults to 1", default=1)] = 1
    ) -> list[dict]:
        """
        Search for content on Weibo based on a keyword.

        Returns:
            list[dict]: List of dictionaries containing search results
        """
        return await crawler.search_content(keyword, limit, page)
  • Pydantic model FeedItem defining the structure of each item in the list returned by the search_content tool. A validation sketch follows this list.

    class FeedItem(BaseModel):
        """
        Data model for a single Weibo feed item.

        Attributes:
            id (int): Unique identifier for the feed item
            text (str): Content of the feed item
            source (str): Source of the feed (e.g., app or web)
            created_at (str): Timestamp when the feed was created
            user (Union[dict, UserProfile]): User information associated with the feed
            comments_count (int): Number of comments on the feed
            attitudes_count (int): Number of likes on the feed
            reposts_count (int): Number of reposts of the feed
            raw_text (str): Raw text content of the feed
            region_name (str): Region information
            pics (list[dict]): List of pictures in the feed
            videos (dict): Video information in the feed
        """
        id: int = Field()
        text: str = Field()
        source: str = Field()
        created_at: str = Field()
        user: Union[dict, UserProfile] = Field()
        comments_count: int = Field()
        attitudes_count: int = Field()
        reposts_count: int = Field()
        raw_text: str = Field()
        region_name: str = Field()
        pics: list[dict] = Field()
        videos: dict = Field()
  • Core helper method in WeiboCrawler that performs the actual Weibo content search via HTTP requests and parses results into FeedItem objects. A sketch of the _to_feed_item mapping it relies on follows this list.

    async def search_content(self, keyword: str, limit: int = 15, page: int = 1) -> list[FeedItem]:
        """
        Search Weibo content (posts) by keyword.

        Args:
            keyword (str): The search keyword
            limit (int): Maximum number of content results to return, defaults to 15
            page (int, optional): The starting page number, defaults to 1

        Returns:
            list[FeedItem]: List of FeedItem objects containing content search results
        """
        results = []
        current_page = page
        try:
            while len(results) < limit:
                params = {
                    'containerid': f'100103type=1&q={keyword}',
                    'page_type': 'searchall',
                    'page': current_page,  # advance through result pages
                }
                encoded_params = urlencode(params)
                async with httpx.AsyncClient() as client:
                    response = await client.get(f'{SEARCH_URL}?{encoded_params}', headers=DEFAULT_HEADERS)
                    data = response.json()

                cards = data.get('data', {}).get('cards', [])
                content_cards = []
                for card in cards:
                    if card.get('card_type') == 9:
                        content_cards.append(card)
                    elif 'card_group' in card and isinstance(card['card_group'], list):
                        content_group = [
                            item for item in card['card_group'] if item.get('card_type') == 9]
                        content_cards.extend(content_group)

                if not content_cards:
                    break

                for card in content_cards:
                    if len(results) >= limit:
                        break
                    mblog = card.get('mblog')
                    if not mblog:
                        continue
                    content_result = self._to_feed_item(mblog)
                    results.append(content_result)

                current_page += 1
                cardlist_info = data.get('data', {}).get('cardlistInfo', {})
                if not cardlist_info.get('page') or str(cardlist_info.get('page')) == '1':
                    break

            return results[:limit]
        except httpx.HTTPError:
            self.logger.error(
                f"Unable to search Weibo content for keyword '{keyword}'", exc_info=True)
            return []
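
The _to_feed_item helper used above is not reproduced on this page. Purely as an illustration, a mapper of that shape might look like the sketch below; the exact field handling in WeiboCrawler._to_feed_item may differ.

    # Hypothetical sketch of an mblog -> FeedItem mapper. This is NOT the actual
    # WeiboCrawler._to_feed_item; field choices beyond the FeedItem attributes are guesses.
    def _to_feed_item(self, mblog: dict) -> FeedItem:
        return FeedItem(
            id=int(mblog.get("id", 0)),
            text=mblog.get("text", ""),
            source=mblog.get("source", ""),
            created_at=mblog.get("created_at", ""),
            user=mblog.get("user", {}) or {},
            comments_count=mblog.get("comments_count", 0),
            attitudes_count=mblog.get("attitudes_count", 0),
            reposts_count=mblog.get("reposts_count", 0),
            raw_text=mblog.get("raw_text", "") or mblog.get("text", ""),
            region_name=mblog.get("region_name", ""),
            pics=mblog.get("pics", []) or [],
            videos=mblog.get("page_info", {}) or {},  # assumption: video info under page_info
        )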
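
Similarly, here is a minimal sketch of validating one result dictionary against the FeedItem model above, assuming Pydantic v2 (model_validate) and using placeholder values rather than real Weibo data:

    # Minimal validation sketch (assumes Pydantic v2); every value is a placeholder.
    sample = {
        "id": 1234567890,
        "text": "Example post text",
        "source": "Weibo Web",
        "created_at": "Mon May 06 10:00:00 +0800 2024",
        "user": {"id": 42, "screen_name": "example_user"},
        "comments_count": 3,
        "attitudes_count": 12,
        "reposts_count": 1,
        "raw_text": "Example post text",
        "region_name": "发布于 北京",
        "pics": [],
        "videos": {},
    }

    item = FeedItem.model_validate(sample)
    print(item.text, item.attitudes_count)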

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/qinyuanpei/mcp-server-weibo'
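
The same lookup can be made from Python; a small sketch with httpx (the shape of the JSON response is whatever the directory API returns and is not documented on this page):

    # Sketch: fetching this server's MCP directory entry with httpx instead of curl.
    import httpx

    resp = httpx.get("https://glama.ai/api/mcp/v1/servers/qinyuanpei/mcp-server-weibo")
    resp.raise_for_status()
    print(resp.json())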

If you have feedback or need assistance with the MCP directory API, please join our Discord server.