US Government Open Data MCP

bls_series_data

Read-only

Fetch Bureau of Labor Statistics time series data for employment, wages, prices, and economic indicators using BLS series IDs.

Instructions

Fetch time series data from the Bureau of Labor Statistics. Returns monthly/quarterly/annual observations for employment, wages, prices, and more.

Popular series IDs:

  • CES0000000001: Total nonfarm employment (thousands)

  • LNS14000000: Unemployment rate

  • CUUR0000SA0: CPI-U All Items

  • CES0500000003: Average hourly earnings, total private

  • JTS000000000000000JOR: Job openings rate (JOLTS)

  • PRS85006092: Nonfarm business labor productivity

Series ID prefixes: CES (jobs by industry), LNS (unemployment), CU (CPI), WP (PPI), OE (wages), JT (JOLTS)
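
For orientation, the sketch below shows how series IDs like those above are typically fetched from the underlying BLS Public Data API, which this server presumably wraps. The endpoint URL, payload fields, and response keys come from the public BLS API rather than from this tool's documentation, and the registration key is a placeholder.

import requests

# Illustrative only: this server presumably wraps the BLS Public Data API v2.
# The endpoint, payload fields, and response keys below come from the public
# BLS API documentation, not from this tool's description.
payload = {
    "seriesid": ["CES0000000001", "LNS14000000"],  # total nonfarm employment, unemployment rate
    "startyear": "2021",
    "endyear": "2024",
    "registrationkey": "YOUR_BLS_API_KEY",  # v2 expects a key; v1 allows keyless requests with tighter limits
}

resp = requests.post("https://api.bls.gov/publicAPI/v2/timeseries/data/", json=payload, timeout=30)
resp.raise_for_status()

for series in resp.json().get("Results", {}).get("series", []):
    sid = series["seriesID"]
    for obs in series["data"][:3]:  # BLS returns the most recent observations first
        print(sid, obs["year"], obs["periodName"], obs["value"])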

Input Schema

  • series_ids (required): Comma-separated BLS series IDs (max 50). Example: 'CES0000000001,LNS14000000,CUUR0000SA0'

  • start_year (optional): Start year (default: 3 years ago). Maximum 20-year range with an API key, 10 without.

  • end_year (optional): End year (default: current year)
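
As a rough illustration of how an agent or client might invoke this tool with the parameters above, here is a minimal sketch using the MCP Python SDK. The launch command is a hypothetical placeholder, and the integer types for start_year and end_year are assumed; only series_ids is required by the schema.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; replace with however you actually run this server.
server = StdioServerParameters(command="uvx", args=["us-government-open-data-mcp"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Arguments mirror the input schema above; only series_ids is required.
            result = await session.call_tool(
                "bls_series_data",
                {
                    "series_ids": "CES0000000001,LNS14000000,CUUR0000SA0",
                    "start_year": 2020,
                    "end_year": 2024,
                },
            )
            for block in result.content:
                print(getattr(block, "text", block))

asyncio.run(main())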
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about return formats ('monthly/quarterly/annual observations') and hints at data categories, but doesn't disclose rate limits, authentication requirements, or pagination behavior. With annotations covering safety, a 3 is appropriate—the description adds some value but not rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence states the core purpose, followed by a bulleted list of examples and prefixes. Every sentence earns its place by clarifying usage, though it could be slightly more front-loaded by moving the series ID examples to a separate section for better scannability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (fetching economic data with multiple parameters) and the absence of an output schema, the description does a good job by explaining what data is returned and providing examples. However, it lacks details on error handling, data freshness, or response structure, which would be helpful for an agent invoking this tool without output schema guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by listing example series IDs and prefixes, which helps interpret the 'series_ids' parameter, but it doesn't provide syntax or format details beyond what the schema provides. A baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Fetch time series data'), the source ('from the Bureau of Labor Statistics'), and the scope ('monthly/quarterly/annual observations for employment, wages, prices, and more'). It distinguishes itself from sibling tools like 'bls_cpi_breakdown' or 'bls_employment_by_industry' by being a general-purpose data fetcher rather than a specialized breakdown tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit context through the list of popular series IDs and prefixes, suggesting when to use this tool (for BLS economic indicators). However, it doesn't explicitly state when to choose this over alternatives like 'bls_search_series' (for discovering series) or 'bls_cpi_breakdown' (for detailed CPI data), nor does it mention prerequisites like API key limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lzinga/us-government-open-data-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.