Glama
MissionSquad

@missionsquad/mcp-rss

Official

fetch_multiple_feeds

Batch-fetch multiple RSS feeds, in parallel or sequentially, reporting success/error status for each URL — useful for efficient monitoring and management of feed data.

Instructions

Batch fetch multiple RSS feeds with success/error status for each

Input Schema

| Name     | Required | Description                                                       | Default |
| -------- | -------- | ----------------------------------------------------------------- | ------- |
| parallel | No       | If 'true', fetch feeds in parallel; otherwise, fetch sequentially | true    |
| urls     | Yes      | Array of RSS feed URLs to fetch                                   | (none)  |
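
A minimal example of the arguments an agent might pass. The feed URLs below are placeholders, and note that, per the schema, parallel is a string flag ("true"/"false") rather than a boolean:

```typescript
// Example arguments for fetch_multiple_feeds. The feed URLs are
// placeholders; any reachable RSS/Atom feed URLs would work.
const args = {
  urls: [
    "https://example.com/feed.xml",
    "https://example.org/rss",
  ],
  // Per the schema, parallel is a string flag, not a boolean.
  parallel: "true",
};
```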

Implementation Reference

  • The execute handler for 'fetch_multiple_feeds': fetches multiple RSS feeds, supports parallel and sequential modes with a concurrency limit, caches results via feedCache, and returns aggregated results with per-feed status.
    execute: async (args, context) => {
      logger.info(
        `Fetching ${args.urls.length} feeds (parallel: ${args.parallel})`
      );
    
      const fetchFeed = async (url: string): Promise<MultiFeedResult> => {
        try {
          // Check cache first
          const cached = feedCache.get(url);
          if (cached) {
            return { url, success: true, data: cached };
          }
    
          const result = await rssReader.fetchFeed(url);
          feedCache.set(url, result);
    
          return { url, success: true, data: result };
        } catch (error: any) {
          logger.error(`Failed to fetch ${url}: ${error.message}`);
          return {
            url,
            success: false,
            error: {
              url,
              error: error.message,
              code: error.code,
              timestamp: Date.now(),
            },
          };
        }
      };
    
      let results: MultiFeedResult[];
    
      if (args.parallel === 'true') {
        // Parallel fetching with concurrency limit
        const chunks: string[][] = [];
        for (
          let i = 0;
          i < args.urls.length;
          i += config.rssMaxConcurrentFetches
        ) {
          chunks.push(args.urls.slice(i, i + config.rssMaxConcurrentFetches));
        }
    
        results = [];
        for (const chunk of chunks) {
          const chunkResults = await Promise.all(chunk.map(fetchFeed));
          results.push(...chunkResults);
        }
      } else {
        // Sequential fetching
        results = [];
        for (const url of args.urls) {
          results.push(await fetchFeed(url));
        }
      }
    
      const successCount = results.filter((r) => r.success).length;
      logger.info(
        `Fetched ${successCount}/${args.urls.length} feeds successfully`
      );
    
      return JSON.stringify(
        {
          total: args.urls.length,
          successful: successCount,
          failed: args.urls.length - successCount,
          results,
        },
        null,
        2
      );
    },
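  • The MultiFeedResult type is referenced above but not defined in this excerpt; a plausible shape, inferred from how the handler builds its return values (the real definition may differ), is:

```typescript
// Inferred from the handler above; the real definition may differ.
interface FeedError {
  url: string;
  error: string;
  code?: string;
  timestamp: number;
}

interface MultiFeedResult {
  url: string;
  success: boolean;
  data?: unknown;     // parsed feed data on success
  error?: FeedError;  // populated on failure
}

// A failure result shaped the way the catch branch builds it:
const failedResult: MultiFeedResult = {
  url: "https://example.com/feed.xml",
  success: false,
  error: {
    url: "https://example.com/feed.xml",
    error: "request timed out",
    code: "ETIMEDOUT",
    timestamp: Date.now(),
  },
};
```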
  • Zod schema defining input parameters for the fetch_multiple_feeds tool: urls array and optional parallel flag.
    const FetchMultipleFeedsSchema = z.object({
      urls: z.array(z.string()).describe("Array of RSS feed URLs to fetch"),
      parallel: z
        .string()
        .optional()
        .default("true")
        .describe(
          "If 'true', fetch feeds in parallel; otherwise, fetch sequentially"
        ),
    });
  • src/index.ts:88-167 (registration)
    Registration of the 'fetch_multiple_feeds' tool with FastMCP server using addTool, including name, description, parameters schema, and execute handler.
    server.addTool({
      name: "fetch_multiple_feeds",
      description:
        "Batch fetch multiple RSS feeds with success/error status for each",
      parameters: FetchMultipleFeedsSchema,
      execute: async (args, context) => {
        // ...handler body identical to the execute handler shown above...
      },
    });
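  • The tool returns a JSON string; a sketch of the summary it serializes, mirroring the JSON.stringify call in the handler with illustrative counts (per-feed data elided):

```typescript
// Mirrors the JSON.stringify call in the handler; counts are illustrative.
const results = [
  { url: "https://example.com/a.xml", success: true },
  { url: "https://example.com/b.xml", success: false },
];

const successCount = results.filter((r) => r.success).length;

const payload = JSON.stringify(
  {
    total: results.length,
    successful: successCount,
    failed: results.length - successCount,
    results,
  },
  null,
  2
);
```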
  • TypeScript interface defining parameters for fetch_multiple_feeds, matching the Zod schema.
    export interface FetchMultipleFeedsParams {
      urls: string[];
      parallel?: 'true' | 'false';
    }
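
The parallel branch's concurrency control reduces to a small chunking helper; a sketch assuming a limit of 2 (the actual limit comes from config.rssMaxConcurrentFetches):

```typescript
// Split an array into chunks of at most `size` items, mirroring the
// chunking loop in the parallel branch; each chunk would then be awaited
// with Promise.all before the next begins, bounding in-flight fetches.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const chunks = chunk(["a", "b", "c", "d", "e"], 2);
// chunks is [["a", "b"], ["c", "d"], ["e"]]
```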
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the key behavioral traits: batch fetching and per-feed status reporting. However, it doesn't mention error-handling details, rate limits, authentication requirements, or what distinguishes 'success' from 'error' status. The description adds value but leaves important behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality. Every word earns its place: 'batch' establishes scope, 'fetch multiple RSS feeds' states the action, and 'with success/error status for each' adds crucial behavioral context. No wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a batch operation tool with no annotations and no output schema, the description provides adequate functional context but lacks important details. It doesn't explain the return format (what does 'success/error status' look like?), error handling behavior, or performance implications of the parallel parameter. Given the complexity of batch operations, more completeness would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema. It mentions 'batch fetch' which aligns with the urls array parameter, but provides no additional syntax, format, or usage details for parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('batch fetch') and resource ('multiple RSS feeds'), and distinguishes it from siblings by emphasizing batch processing and per-feed status reporting. It explicitly mentions 'success/error status for each' which differentiates it from single-feed tools like fetch_rss_feed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (batch processing multiple feeds with status tracking) but doesn't explicitly state when to use this versus alternatives like fetch_rss_feed (single feed) or get_feed_headlines (headlines only). It provides clear functional context but lacks explicit comparison guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
