
Weibo MCP Server

get_feeds

Retrieve recent posts and updates from a specific Weibo user by providing their unique identifier and desired post limit.

Instructions

Retrieves the latest posts and updates from a specified Weibo user.

Input Schema

Name    Required  Description                           Default
uid     Yes       Unique identifier of the Weibo user   —
limit   Yes       Maximum number of posts to return     —

Implementation Reference

  • src/server.ts:51-63 (registration)
    Registers the MCP 'get_feeds' tool with input schema for uid and limit, description, and a handler that calls WeiboCrawler.extractWeiboFeeds and returns JSON stringified feeds.
    server.tool("get_feeds",
      "获取指定微博用户的最新动态和帖子",
      { 
        uid: z.number().describe("微博用户的唯一标识符"),
        limit: z.number().describe("返回的最大动态数量")
      },
      async ({ uid, limit }) => {
        const feeds = await crawler.extractWeiboFeeds(uid, limit);
        return {
          content: [{ type: "text", text: JSON.stringify(feeds) }]
        };
      }
    );
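The handler's return contract (a `content` array holding one `text` item whose payload is JSON) can be exercised without the MCP runtime. Below is a minimal sketch with a hypothetical `FakeCrawler` standing in for `WeiboCrawler`; only the handler shape is taken from the source:

```typescript
// Sketch of the get_feeds handler contract, with a stubbed crawler.
// FakeCrawler is hypothetical; the real server calls WeiboCrawler.extractWeiboFeeds.
type Feed = Record<string, any>;

interface FeedCrawler {
  extractWeiboFeeds(uid: number, limit: number): Promise<Feed[]>;
}

class FakeCrawler implements FeedCrawler {
  async extractWeiboFeeds(uid: number, limit: number): Promise<Feed[]> {
    // Return `limit` synthetic feeds tagged with the uid.
    return Array.from({ length: limit }, (_, i) => ({ uid, id: i + 1 }));
  }
}

// Mirrors the shape of the MCP tool handler registered in src/server.ts.
async function getFeedsHandler(
  crawler: FeedCrawler,
  { uid, limit }: { uid: number; limit: number }
) {
  const feeds = await crawler.extractWeiboFeeds(uid, limit);
  return { content: [{ type: "text", text: JSON.stringify(feeds) }] };
}
```

Injecting the crawler this way keeps the handler testable independently of the network.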
  • Core handler implementing the tool logic: fetches the user's Weibo feeds by obtaining a containerId from the profile, then iteratively fetches pages of feeds until the limit is reached or no more are available.
    async extractWeiboFeeds(uid: number, limit: number): Promise<Record<string, any>[]> {
      const feeds: Record<string, any>[] = [];
      let sinceId = '';
      
      try {
        const containerId = await this.getContainerId(uid);
        if (!containerId) return feeds;
    
        while (feeds.length < limit) {
          const pagedFeeds = await this.extractFeeds(uid, containerId, sinceId);
          if (!pagedFeeds.Feeds || pagedFeeds.Feeds.length === 0) {
            break;
          }
    
          feeds.push(...pagedFeeds.Feeds);
          sinceId = pagedFeeds.SinceId as string;
          if (!sinceId) {
            break;
          }
        }
      } catch (error) {
        console.error(`无法获取UID为'${uid}'的动态`, error);
      }
      
      return feeds.slice(0, limit);
    }
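The loop above terminates on three conditions: the limit is reached, a page comes back empty, or the API stops returning a `since_id`. That control flow can be sketched as a pure function, with a hypothetical `fetchPage` callback standing in for `extractFeeds`:

```typescript
// Sketch of the pagination loop in extractWeiboFeeds, with an injected pager.
type Page = { SinceId: string | null; Feeds: Record<string, any>[] };

async function collectFeeds(
  fetchPage: (sinceId: string) => Promise<Page>,
  limit: number
): Promise<Record<string, any>[]> {
  const feeds: Record<string, any>[] = [];
  let sinceId = "";
  while (feeds.length < limit) {
    const page = await fetchPage(sinceId);
    if (!page.Feeds || page.Feeds.length === 0) break; // no more results
    feeds.push(...page.Feeds);
    sinceId = (page.SinceId as string) || "";
    if (!sinceId) break; // an empty since_id marks the last page
  }
  return feeds.slice(0, limit); // trim any overshoot from the final page
}
```

The final `slice` matters because a page boundary rarely lines up with `limit` exactly.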
  • Helper method to extract the containerId for a user's feeds from their profile API response.
    private async getContainerId(uid: number): Promise<string | null> {
      try {
        const response = await axios.get(PROFILE_URL.replace('{userId}', uid.toString()), {
          headers: DEFAULT_HEADERS
        });
        
        const data = response.data;
        const tabsInfo = data?.data?.tabsInfo?.tabs || [];
        
        for (const tab of tabsInfo) {
          if (tab.tabKey === 'weibo') {
            return tab.containerid;
          }
        }
        return null;
      } catch (error) {
        console.error(`无法获取UID为'${uid}'的containerId`, error);
        return null;
      }
    }
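The tab scan reduces to a pure lookup over the profile payload, which makes it easy to check in isolation. The sample payload in the sketch below is illustrative, not an exact capture of the Weibo profile API:

```typescript
// Extracts the containerid of the 'weibo' tab from a profile API payload.
// Reads the same path as getContainerId: data.tabsInfo.tabs[].
function findWeiboContainerId(data: any): string | null {
  const tabs = data?.data?.tabsInfo?.tabs ?? [];
  for (const tab of tabs) {
    if (tab.tabKey === "weibo") return tab.containerid;
  }
  return null;
}
```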
  • Helper method to fetch a single page of user feeds using the feeds API URL constructed from the containerId and sinceId, returning a PagedFeeds structure.
    private async extractFeeds(uid: number, containerId: string, sinceId: string): Promise<PagedFeeds> {
      try {
        const url = FEEDS_URL
          .replace('{userId}', uid.toString())
          .replace('{containerId}', containerId)
          .replace('{sinceId}', sinceId);
        
        const response = await axios.get(url, { headers: DEFAULT_HEADERS });
        const data = response.data;
        
        const newSinceId = data?.data?.cardlistInfo?.since_id || '';
        const cards = data?.data?.cards || [];
        
        if (cards.length > 0) {
          return { SinceId: newSinceId, Feeds: cards };
        } else {
          return { SinceId: newSinceId, Feeds: [] };
        }
      } catch (error) {
        console.error(`无法获取UID为'${uid}'的动态`, error);
        return { SinceId: null, Feeds: [] };
      }
    }
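The request URL is built by plain placeholder substitution. The `FEEDS_URL` template below is an assumption modeled on the m.weibo.cn container API; the real constant is defined elsewhere in the repository:

```typescript
// Hypothetical template; the actual FEEDS_URL constant lives in the repo's source.
const FEEDS_URL =
  "https://m.weibo.cn/api/container/getIndex?type=uid&value={userId}&containerid={containerId}&since_id={sinceId}";

// Mirrors the substitution chain used by extractFeeds.
function buildFeedsUrl(uid: number, containerId: string, sinceId: string): string {
  return FEEDS_URL
    .replace("{userId}", uid.toString())
    .replace("{containerId}", containerId)
    .replace("{sinceId}", sinceId);
}
```

On the first request `sinceId` is the empty string, so `since_id=` is sent blank and the API returns the newest page.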
  • Zod input schema defining parameters for the get_feeds tool: uid (number) and limit (number).
    { 
      uid: z.number().describe("微博用户的唯一标识符"),
      limit: z.number().describe("返回的最大动态数量")
    },


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Selenium39/mcp-server-weibo'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.