Glama
lzinga

US Government Open Data MCP

bea_gdp_national

Read-only

Retrieve U.S. national GDP data including growth rates, components, and deflators from NIPA tables to analyze economic performance.

Instructions

Get U.S. national GDP data from the NIPA tables. Shows GDP, GDP growth, components (consumption, investment, government, net exports), and deflators.

Common table names:

  • T10101: GDP and major components (real)

  • T10106: GDP and major components (nominal)

  • T10111: GDP percent change

  • T20100: Personal income and its disposition

  • T30100: Government receipts and expenditures
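Under the hood, a wrapper like this one presumably issues a GetData request against the public BEA API for these NIPA tables. The sketch below is an assumption about the implementation, not the server's actual code; the endpoint and parameter names follow BEA's published API, and the key is a placeholder:

```python
from urllib.parse import urlencode

def build_nipa_url(table_name="T10101", frequency="Q", year="LAST5",
                   api_key="YOUR_BEA_KEY"):
    """Build a BEA GetData request URL for a NIPA table.

    Parameter names (UserID, method, DataSetName, TableName, Frequency,
    Year, ResultFormat) follow the public BEA API; the key is a placeholder.
    """
    params = {
        "UserID": api_key,
        "method": "GetData",
        "DataSetName": "NIPA",
        "TableName": table_name,
        "Frequency": frequency,
        "Year": year,
        "ResultFormat": "JSON",
    }
    return "https://apps.bea.gov/api/data/?" + urlencode(params)

# Defaults mirror the tool's documented defaults: real GDP, quarterly, last 5 years.
print(build_nipa_url())
```

Swapping `table_name` for any of the tables listed above (e.g. `T10111` for percent change) changes only the `TableName` query parameter.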

Input Schema

Name       | Required | Description
table_name | No       | NIPA table name (default: T10101, Real GDP). Others: T10106 (nominal GDP), T10111 (% change), T20100 (personal income)
frequency  | No       | Frequency: Q = quarterly (default), A = annual, M = monthly
year       | No       | Year(s) to fetch. Use 'X' for all years, 'LAST5' for the last 5, or a specific year. Default: LAST5
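Before invoking the tool, an agent or test harness can sanity-check arguments against the documented value syntax. This is a minimal sketch assuming only the rules stated in the schema above; `validate_args` is a hypothetical helper, not part of the server:

```python
import re

VALID_FREQUENCIES = {"Q", "A", "M"}  # quarterly, annual, monthly

def validate_args(table_name="T10101", frequency="Q", year="LAST5"):
    """Check tool arguments against the documented input schema."""
    if frequency not in VALID_FREQUENCIES:
        raise ValueError(f"frequency must be one of {sorted(VALID_FREQUENCIES)}")
    # year accepts 'X' (all years), 'LAST5', or a specific 4-digit year
    if year not in {"X", "LAST5"} and not re.fullmatch(r"\d{4}", str(year)):
        raise ValueError("year must be 'X', 'LAST5', or a specific year like '2023'")
    return {"table_name": table_name, "frequency": frequency, "year": year}
```

Calling `validate_args()` with no arguments returns the schema's defaults; an invalid frequency such as `"W"` raises `ValueError` before any network round trip is wasted.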
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true, and the description aligns with this by using 'Get' (a read operation). The description adds value beyond the annotations by specifying the data source (NIPA tables) and listing common table names, which helps the agent understand what data is available. However, it lacks behavioral details such as rate limits, error handling, and response format, which would be especially useful given the absence of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence clearly states the purpose, and the subsequent list of common table names provides essential context without redundancy. Every sentence earns its place, and it is appropriately sized for a data retrieval tool with well-documented parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema), the description is reasonably complete. It covers the purpose, data source, and key components, and the annotations provide safety context (read-only). However, without an output schema, the description could better explain the return format (e.g., structured data with columns) to fully guide the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the input schema already fully documents the three parameters (table_name, frequency, year) with descriptions and enums. The tool description adds minimal parameter semantics by listing common table names, which largely duplicates the schema's description for table_name. This reinforces the schema but adds little meaning beyond it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Get U.S. national GDP data from the NIPA tables.' It specifies the exact resource (national GDP data), source (NIPA tables), and key data components (GDP, GDP growth, components, deflators). This clearly distinguishes it from sibling tools like bea_gdp_by_state or bea_personal_income, which focus on different geographic or economic dimensions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context for when to use this tool by listing common table names (e.g., T10101 for real GDP, T10111 for percent change) and implying use for national-level GDP analysis. However, it does not explicitly state when not to use it or name alternative sibling tools (e.g., bea_gdp_by_industry for industry breakdowns), though this is reasonably inferable from the tool's focus on national data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
