get_multiple_series

Retrieve U.S. labor statistics time series data for up to 50 economic indicators from the Bureau of Labor Statistics API. Specify date ranges and optional data enhancements for comprehensive economic analysis.

Instructions

Retrieve data for one or more BLS time series. Registered users can include up to 50 series IDs. Optionally specify start/end years (up to 20-year range), and enable catalog, calculations, annual averages, or aspects.
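As a hedged sketch (not the server's actual code), the arguments above map onto the public BLS API v2 request body, whose field names are lowercase and run together. `CUUR0000SA0` (CPI for All Urban Consumers, all items) and `LNS14000000` (unemployment rate) are real BLS series IDs used here for illustration:

```typescript
// Sketch of how the tool arguments could map onto a BLS API v2
// request body. The lowercase field names (seriesid, startyear, ...)
// follow the public BLS API; the server's real mapping may differ.
interface GetMultipleSeriesArgs {
  series_ids: string[];
  start_year?: string;
  end_year?: string;
  catalog?: boolean;
  calculations?: boolean;
  annual_average?: boolean;
  aspects?: boolean;
}

function toBlsRequestBody(args: GetMultipleSeriesArgs) {
  return {
    seriesid: args.series_ids,
    startyear: args.start_year,
    endyear: args.end_year,
    catalog: args.catalog,
    calculations: args.calculations,
    annualaverage: args.annual_average,
    aspects: args.aspects,
  };
}

// Two well-known series: CPI-U all items and the unemployment rate.
const body = toBlsRequestBody({
  series_ids: ["CUUR0000SA0", "LNS14000000"],
  start_year: "2015",
  end_year: "2024",
  calculations: true,
});
```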

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| series_ids | Yes | Array of BLS series IDs | |
| start_year | No | Start year in YYYY format | |
| end_year | No | End year in YYYY format | |
| catalog | No | Include catalog data (requires registration key) | |
| calculations | No | Include calculations such as net and percent changes (requires registration key) | |
| annual_average | No | Include annual average data (requires registration key) | |
| aspects | No | Include aspect data (requires registration key) | |
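The schema validates each field's format, but the 20-year range limit mentioned in the instructions is enforced by the BLS API rather than by the schema. A hedged sketch of a client-side pre-check (an illustration, not part of this server):

```typescript
// Hypothetical pre-check: the BLS API rejects ranges longer than
// 20 years; validating locally would give callers a clearer error.
function checkYearRange(
  startYear?: string,
  endYear?: string
): string | null {
  if (startYear === undefined || endYear === undefined) return null;
  const span = Number(endYear) - Number(startYear) + 1;
  if (span < 1) return "end_year must not precede start_year";
  if (span > 20) {
    return `range covers ${span} years; the BLS API allows at most 20`;
  }
  return null;
}
```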

Implementation Reference

  • The handler implementation for get_multiple_series, which calls client.getSeriesData and wraps the response or errors.
      async ({ series_ids, start_year, end_year, catalog, calculations, annual_average, aspects }) => {
        try {
          const data = await client.getSeriesData({
            seriesid: series_ids,
            startyear: start_year,
            endyear: end_year,
            catalog,
            calculations,
            annualaverage: annual_average,
            aspects,
          });
          return { content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }] };
        } catch (error) {
          return wrapError(error);
        }
      }
    );
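`wrapError` is referenced above but not shown. A minimal sketch consistent with how the success path builds its `{ content: [...] }` response (the repository's actual helper may differ) could look like:

```typescript
// Hypothetical sketch of the wrapError helper referenced above.
// It mirrors the success path's { content: [...] } shape and sets
// isError so MCP clients can distinguish failures from results.
function wrapError(error: unknown): {
  content: { type: "text"; text: string }[];
  isError: true;
} {
  const message = error instanceof Error ? error.message : String(error);
  return {
    content: [{ type: "text" as const, text: `Error: ${message}` }],
    isError: true,
  };
}

const result = wrapError(new Error("BLS API returned status REQUEST_FAILED"));
```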
  • Zod schema validation for input parameters of get_multiple_series.
    {
      series_ids: z
        .array(
          z.string().regex(SERIES_ID_PATTERN, "Each series ID must be uppercase with no special characters except _, -, #")
        )
        .min(1)
        .max(50)
        .describe("Array of BLS series IDs"),
      start_year: z
        .string()
        .regex(/^\d{4}$/, "Must be a 4-digit year")
        .optional()
        .describe("Start year in YYYY format"),
      end_year: z
        .string()
        .regex(/^\d{4}$/, "Must be a 4-digit year")
        .optional()
        .describe("End year in YYYY format"),
      catalog: z
        .boolean()
        .optional()
        .describe("Include catalog data (requires registration key)"),
      calculations: z
        .boolean()
        .optional()
        .describe("Include calculations such as net and percent changes (requires registration key)"),
      annual_average: z
        .boolean()
        .optional()
        .describe("Include annual average data (requires registration key)"),
      aspects: z
        .boolean()
        .optional()
        .describe("Include aspect data (requires registration key)"),
    },
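`SERIES_ID_PATTERN` is defined elsewhere in the repository; judging from the error message above, it plausibly resembles the following (an assumption, not the actual constant):

```typescript
// Assumed pattern reconstructed from the validation error message:
// uppercase letters and digits, plus _, -, and # only.
const SERIES_ID_PATTERN = /^[A-Z0-9_\-#]+$/;

// Real BLS series IDs pass; lowercase or punctuated strings do not.
const allValid = ["CUUR0000SA0", "LNS14000000", "SMU19197802023800001"].every(
  (id) => SERIES_ID_PATTERN.test(id)
);
```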
  • MCP tool registration for get_multiple_series.
    server.tool(
      "get_multiple_series",
      "Retrieve data for one or more BLS time series. " +
        "Registered users can include up to 50 series IDs. " +
        "Optionally specify start/end years (up to 20-year range), " +
        "and enable catalog, calculations, annual averages, or aspects.",
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses key behavioral traits: user registration requirements for certain features (catalog, calculations, annual average, aspects) and limits (up to 50 series IDs, up to a 20-year range). However, it lacks details on error handling, rate limits, and output format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences, front-loaded with the core purpose and followed by optional features. Every sentence adds necessary information without redundancy, making it highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides basic operational context but lacks details on response format, error conditions, and advanced usage scenarios. It's adequate for a read operation but could be more complete for a tool with seven parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema, mentioning optional features but not elaborating on their semantics. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Retrieve data') and resource ('BLS time series'), specifying it handles 'one or more' series. It distinguishes from siblings like get_single_series by mentioning multiple series support and from get_latest_series by allowing date ranges.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear context for usage: retrieving data for multiple series with optional enhancements. However, it doesn't explicitly state when to use this versus alternatives like get_single_series or get_latest_series, missing explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
