Selenium39

Weibo MCP Server

search_content

Search Weibo posts by keyword to find relevant content and discussions on the platform.

Instructions

Search Weibo content by keyword and return relevant Weibo posts.

Input Schema

• keyword (required): Keyword to search Weibo content for
• limit (required): Maximum number of Weibo posts to return
• page (optional): Starting page number; defaults to 1

Implementation Reference

  • The core handler method in WeiboCrawler: it builds the API URL from the keyword and page, fetches the response, scans the returned cards (including nested card groups) for type-9 mblog entries, extracts text, user info, pictures, and video URLs, and paginates until the requested limit is reached.
    async searchWeiboContent(keyword: string, limit: number, page: number = 1): Promise<ContentSearchResult[]> {
      try {
        const results: ContentSearchResult[] = [];
        let currentPage = page;
        
        while (results.length < limit) {
          const url = SEARCH_CONTENT_URL
            .replace('{keyword}', encodeURIComponent(keyword))
            .replace('{page}', currentPage.toString());
            
          const response = await axios.get(url, {
            headers: DEFAULT_HEADERS
          });
          
          const data = response.data;
          const cards = data?.data?.cards || [];
          
          // Weibo usually returns several cards; look for the ones carrying post content
          let contentCards: any[] = [];
          for (const card of cards) {
            // Post content cards have card_type === 9
            if (card.card_type === 9) {
              contentCards.push(card);
            } 
            // Card groups can also nest content cards
            else if (card.card_group && Array.isArray(card.card_group)) {
              const contentGroup = card.card_group.filter((item: any) => item.card_type === 9);
              contentCards = contentCards.concat(contentGroup);
            }
          }
          
          if (contentCards.length === 0) {
            break; // No more content; stop paginating
          }
          
          // Process each content card
          for (const card of contentCards) {
            if (results.length >= limit) {
              break;
            }
            
            const mblog = card.mblog;
            if (!mblog) continue;
            
            // Collect picture URLs
            const pics: string[] = [];
            if (mblog.pics && Array.isArray(mblog.pics)) {
              for (const pic of mblog.pics) {
                if (pic.url) {
                  pics.push(pic.url);
                }
              }
            }
            
            // Pick a video URL, preferring the stream URL, then descending MP4 quality
            let videoUrl: string | undefined;
            if (mblog.page_info && mblog.page_info.type === 'video') {
              videoUrl = mblog.page_info.media_info?.stream_url || 
                         mblog.page_info.urls?.mp4_720p_mp4 ||
                         mblog.page_info.urls?.mp4_hd_mp4 ||
                         mblog.page_info.urls?.mp4_ld_mp4;
            }
            
            // Build the search-result object
            const contentResult: ContentSearchResult = {
              id: mblog.id,
              text: mblog.text,
              created_at: mblog.created_at,
              reposts_count: mblog.reposts_count,
              comments_count: mblog.comments_count,
              attitudes_count: mblog.attitudes_count,
              user: {
                id: mblog.user.id,
                screen_name: mblog.user.screen_name,
                profile_image_url: mblog.user.profile_image_url,
                verified: mblog.user.verified
              },
              pics: pics.length > 0 ? pics : undefined,
              video_url: videoUrl
            };
            
            results.push(contentResult);
          }
          
          currentPage++;
          
          // Stop if the API reports no further page (cardlistInfo.page absent or reset to "1")
          if (!data?.data?.cardlistInfo?.page || data.data.cardlistInfo.page === "1") {
            break;
          }
        }
        
        return results.slice(0, limit);
      } catch (error) {
        console.error(`Failed to search Weibo content for keyword '${keyword}'`, error);
        return [];
      }
    }
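The card-filtering step can be sketched in isolation. The `Card` interface below is a simplified stand-in for the real Weibo API payload, and `collectContentCards` is our name for the extracted logic, not a function in the project:

```typescript
// Simplified card shape for illustration; the real Weibo payload has many more fields.
interface Card {
  card_type: number;
  card_group?: Card[];
  mblog?: { id: string };
}

// Collect content cards (card_type 9) both at the top level and nested inside
// card groups, mirroring the loop inside searchWeiboContent.
function collectContentCards(cards: Card[]): Card[] {
  let contentCards: Card[] = [];
  for (const card of cards) {
    if (card.card_type === 9) {
      contentCards.push(card);
    } else if (Array.isArray(card.card_group)) {
      contentCards = contentCards.concat(
        card.card_group.filter((item) => item.card_type === 9)
      );
    }
  }
  return contentCards;
}
```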
  • src/server.ts:84-97 (registration)
    Registers the MCP tool 'search_content' with its name, description, and Zod input schema (keyword, limit, optional page); the async handler delegates to WeiboCrawler.searchWeiboContent and returns the results as a JSON string.
    server.tool("search_content",
      "Search Weibo content by keyword and return relevant Weibo posts",
      { 
        keyword: z.string().describe("Keyword to search Weibo content for"),
        limit: z.number().describe("Maximum number of Weibo posts to return"),
        page: z.number().optional().describe("Starting page number; defaults to 1")
      },
      async ({ keyword, limit, page }) => {
        const contents = await crawler.searchWeiboContent(keyword, limit, page || 1);
        return {
          content: [{ type: "text", text: JSON.stringify(contents) }]
        };
      }
    );
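Once registered, a client reaches this handler through the standard MCP tools/call request. A minimal sketch of that payload, with illustrative argument values and transport framing omitted:

```typescript
// Example tools/call payload an MCP client would send to invoke search_content.
// The keyword, limit, and page values here are illustrative.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'search_content',
    arguments: { keyword: '人工智能', limit: 5, page: 1 },
  },
};
```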
  • TypeScript interface defining the ContentSearchResult output structure: post id, text, creation time, engagement counts, user details, and optional pics and video_url.
    export interface ContentSearchResult {
      /**
       * Weibo post ID
       */
      id: string;
      
      /**
       * Weibo post text
       */
      text: string;
      
      /**
       * Creation time
       */
      created_at: string;
      
      /**
       * Repost count
       */
      reposts_count: number;
      
      /**
       * Comment count
       */
      comments_count: number;
      
      /**
       * Like count
       */
      attitudes_count: number;
      
      /**
       * Information about the user who published the post
       */
      user: {
        id: number;
        screen_name: string;
        profile_image_url: string;
        verified: boolean;
      };
      
      /**
       * Picture URLs (if any)
       */
      pics?: string[];
      
      /**
       * Video URL (if any)
       */
      video_url?: string;
    }
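To illustrate the optional fields, a text-only post simply leaves pics and video_url undefined. The interface is re-declared here in abbreviated form only to keep the sketch self-contained, and the field values are made up:

```typescript
// Abbreviated copy of ContentSearchResult for this sketch; see the full definition above.
interface ContentSearchResult {
  id: string;
  text: string;
  created_at: string;
  reposts_count: number;
  comments_count: number;
  attitudes_count: number;
  user: { id: number; screen_name: string; profile_image_url: string; verified: boolean };
  pics?: string[];
  video_url?: string;
}

// A text-only post: the optional pics and video_url fields are omitted entirely.
const sampleResult: ContentSearchResult = {
  id: '5012345678901234',
  text: 'Hello Weibo',
  created_at: 'Mon Apr 01 12:00:00 +0800 2024',
  reposts_count: 3,
  comments_count: 1,
  attitudes_count: 12,
  user: { id: 123456, screen_name: 'demo_user', profile_image_url: '', verified: false },
};
```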
  • Constant URL template for the Weibo content-search API endpoint, with {keyword} and {page} placeholders, used by the searchWeiboContent handler.
    export const SEARCH_CONTENT_URL = 'https://m.weibo.cn/api/container/getIndex?containerid=100103type%3D1%26q%3D{keyword}&page_type=searchall&page={page}'; 
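For concreteness, the two replace() calls in searchWeiboContent expand this template as sketched below; the helper name buildSearchUrl is ours, not the project's:

```typescript
const SEARCH_CONTENT_URL =
  'https://m.weibo.cn/api/container/getIndex?containerid=100103type%3D1%26q%3D{keyword}&page_type=searchall&page={page}';

// Mirrors the URL construction in searchWeiboContent: the keyword is
// percent-encoded before substitution, and the page number is stringified.
function buildSearchUrl(keyword: string, page: number): string {
  return SEARCH_CONTENT_URL
    .replace('{keyword}', encodeURIComponent(keyword))
    .replace('{page}', page.toString());
}
```

Note that the keyword lands inside an already percent-encoded query (`q%3D…`), so non-ASCII keywords are double-encoded by design, matching what the mobile Weibo frontend sends.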
