
BEA Economic Accounts

Server Details

County GDP, personal income, and employment from the Bureau of Economic Analysis

Status: Healthy
Transport: Streamable HTTP
Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
get_county_employment

Get total full-time and part-time employment by county from BEA (CAEMP25N table).

Returns employment counts for a county or state total. When year is omitted,
returns the last 5 available years.

GeoFips format: 5-digit county FIPS (e.g. '53067') or state total (e.g. '53000').

Args:
    county_fips: 5-digit county FIPS code (e.g. '53067') or state total (e.g. '53000').
    year: Specific year to retrieve. Omit for last 5 years.
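
The GeoFips convention above (five digits for a county, a trailing '000' for a state total) is easy to get wrong. As a minimal sketch of how a caller might validate it before invoking the tool (the helper names are illustrative, not part of the server):

```python
import re

def is_valid_geofips(code: str) -> bool:
    """True when the code matches the documented GeoFips shape: exactly five digits."""
    return bool(re.fullmatch(r"\d{5}", code))

def is_state_total(code: str) -> bool:
    """State totals are a 2-digit state FIPS followed by '000' (e.g. '53000')."""
    return is_valid_geofips(code) and code.endswith("000")
```

For example, '53067' passes as a county code, while '53000' is additionally recognized as the Washington state total.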
Parameters (JSON Schema)
- year (optional)
- county_fips (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden and discloses key behavioral traits: it specifies the return type (employment counts), temporal scope (last 5 years when omitted), geographic aggregation (county or state total), and data source (BEA). However, it omits details on rate limits, caching, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the purpose and includes essential behavioral details, but suffers from redundancy between the 'GeoFips format' line and the subsequent Args section description of county_fips. The Args section format is slightly verbose but functional.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only 2 simple parameters, no nested objects, and an output schema exists (per context signals), the description provides sufficient context by identifying the specific data source (BEA/CAEMP25N), explaining parameter formats, and documenting default temporal behavior without needing to describe return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the Args section compensates effectively by documenting both parameters: county_fips includes format constraints (5-digit) and examples for both county and state totals, while year explains the omission behavior. This adds critical semantic information absent from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific resource (total full-time and part-time employment by county), the data source (BEA CAEMP25N table), and implies the distinguishing metric (employment counts vs. GDP or income), effectively differentiating it from sibling tools like get_county_gdp through the specific table reference.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implicit usage guidance by specifying the CAEMP25N dataset and explaining parameter behavior ('When year is omitted, returns the last 5 available years'), but lacks explicit guidance on when to choose this tool over siblings like get_county_personal_income or get_metro_gdp.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_county_gdp

Get GDP by county from the BEA Regional accounts (CAGDP1 table).

Returns all-industry GDP for a county or state total. When year is omitted,
returns the last 5 available years.

GeoFips format: 5-digit county FIPS (e.g. '53067' for Thurston County, WA).
For state totals use 2-digit state FIPS + '000' (e.g. '53000' for Washington).

Args:
    county_fips: 5-digit county FIPS code (e.g. '53067') or state total (e.g. '53000').
    year: Specific year to retrieve. Omit for last 5 years.
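
The state-total rule above (2-digit state FIPS + '000') can be expressed as a small helper. A sketch under that documented format, with an illustrative function name not taken from the server:

```python
def state_total_geofips(state_fips: str) -> str:
    """Build a state-total GeoFips from a 2-digit state FIPS, per the documented format."""
    if len(state_fips) != 2 or not state_fips.isdigit():
        raise ValueError("state FIPS must be exactly two digits")
    return state_fips + "000"

# '53' (Washington) becomes '53000', the state-total GeoFips.
```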
Parameters (JSON Schema)
- year (optional)
- county_fips (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and successfully discloses key behavioral traits: it specifies the default behavior when year is omitted (returns last 5 years), identifies the exact data table (CAGDP1), and clarifies that results include all-industry GDP. It could be improved by mentioning error behaviors for invalid FIPS codes or data availability limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose in the first sentence. The Args section provides clear parameter documentation. Minor deduction because the FIPS code examples and format specifications are slightly redundant between the main text ('GeoFips format...') and the Args section, though this redundancy helps ensure the agent understands the complex formatting requirement.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter data retrieval tool with an output schema present, the description is complete. It identifies the data source precisely (BEA CAGDP1), explains geographic scope ambiguity (county vs state), provides valid parameter formats with examples, and describes the default temporal behavior. No additional description of return values is necessary given the output schema existence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage (only titles provided), the description fully compensates by providing rich semantic context for both parameters. For county_fips, it explains the 5-digit format, provides the Thurston County example, and documents the state total pattern (53000). For year, it explains the specific purpose and default behavior when omitted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Get), resource (GDP by county), and data source (BEA Regional accounts CAGDP1). It specifies the scope covers all-industry GDP for counties or state totals. However, it does not explicitly distinguish when to use this versus sibling get_metro_gdp or the other county economic indicators (employment/personal income), which would be helpful given the similar tool names.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear parameter usage guidance explaining that omitting the year parameter returns the last 5 available years. However, it lacks explicit guidance on when to select this tool versus alternatives like get_metro_gdp (geographic granularity) or get_county_employment (different economic metric), leaving some ambiguity for the agent in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_county_personal_income

Get personal income by county from the BEA Regional accounts (CAINC1 table).

Returns total personal income for a county or state total. When year is omitted,
returns the last 5 available years.

GeoFips format: 5-digit county FIPS (e.g. '53067') or state total (e.g. '53000').

Args:
    county_fips: 5-digit county FIPS code (e.g. '53067') or state total (e.g. '53000').
    year: Specific year to retrieve. Omit for last 5 years.
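
Under the hood, tools like this presumably assemble a request to the public BEA Regional API. The page does not show the server's implementation, so the parameter names below follow the public BEA API's documented conventions and should be read as an assumption, not the server's actual code; whether the server passes a 'LAST5' year keyword or explicit years is likewise assumed:

```python
def build_bea_params(api_key: str, table: str, geofips: str, year: str = "LAST5") -> dict:
    """Assemble query parameters for a BEA Regional data request (illustrative only)."""
    return {
        "UserID": api_key,        # BEA requires a registered API key
        "method": "GetData",
        "datasetname": "Regional",
        "TableName": table,       # e.g. 'CAINC1' for county personal income
        "GeoFips": geofips,       # e.g. '53000' for the Washington state total
        "Year": year,             # assumed default mirroring the tool's last-5-years behavior
        "ResultFormat": "JSON",
    }
```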
Parameters (JSON Schema)
- year (optional)
- county_fips (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the data source, table identifier, and the 'last 5 years' default behavior when year is omitted, but omits safety information (read-only status), error handling, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While structured with clear sections (purpose, behavior, format, args), there is wasteful repetition: the FIPS code examples and state total format are stated twice (once in 'GeoFips format' line and again in Args), violating the 'every sentence earns its place' standard.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple parameter structure (2 scalar fields) and presence of an output schema, the description is sufficiently complete. It covers the data domain, parameter formats, and default temporal behavior without needing to detail return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. The description compensates by documenting both parameters: county_fips includes format specification and concrete examples ('53067', '53000'), and year explains the omission behavior. Minor redundancy between the 'GeoFips format' line and Args section prevents a 5.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (Get), resource (personal income), scope (by county), and data source (BEA Regional accounts, CAINC1 table), distinguishing it from siblings like get_county_employment and get_county_gdp.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through specificity of data type (personal income vs. employment/GDP) and mentions the CAINC1 table, but lacks explicit guidance on when to choose this over sibling tools or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_metro_gdp

Get GDP by metropolitan statistical area from BEA (MAGDP1 table).

Returns all-industry GDP for an MSA. When year is omitted,
returns the last 5 available years.

Args:
    msa_code: MSA/CBSA code (e.g. '42660' for Seattle-Tacoma-Bellevue).
    year: Specific year to retrieve. Omit for last 5 years.
Parameters (JSON Schema)
- year (optional)
- msa_code (required)

Output Schema (JSON Schema)
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses data source (BEA), specific table (MAGDP1), scope (all-industry GDP), and default temporal behavior. Missing potential constraints like rate limits or data publication lags, but covers core behavioral traits adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient structure: source/scope declaration first, then return behavior, then Args section. No filler content. Example value is high-signal and immediately clarifies the parameter format.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given output schema exists (per context signals), description appropriately avoids repeating return value specifications. Parameter documentation is complete despite poor schema coverage. Could optionally reference sibling tools for geographic hierarchy context, but fully adequate as-is.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the Args section provides critical semantic context: msa_code includes definition (MSA/CBSA code) and concrete example ('42660' for Seattle-Tacoma-Bellevue), while year explains the conditional behavior (omit for last 5 years). Fully compensates for empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Get), resource (GDP by metropolitan statistical area), and data source (BEA MAGDP1 table). Explicitly mentions 'metropolitan statistical area' which clearly distinguishes from county-level siblings (get_county_gdp, get_county_employment).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Documents parameter behavior (omitting year returns last 5 available years) and provides an example MSA code. However, lacks explicit guidance on when to use this versus county-level alternatives or whether to use list_available_tables first for discovery.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_available_tables

List all available Regional dataset table names from the BEA API.

Returns table name codes and descriptions for use with other BEA tools. Useful for discovering which datasets are available.
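
An agent would typically call this discovery tool first, then filter the returned codes to pick a table for the data tools above. A sketch of that workflow; the sample dict and helper name are hypothetical stand-ins shaped like the documented return (codes plus descriptions):

```python
def find_tables(tables: dict[str, str], keyword: str) -> list[str]:
    """Return table codes whose description mentions the keyword, case-insensitively."""
    kw = keyword.lower()
    return [code for code, desc in tables.items() if kw in desc.lower()]

# Hypothetical sample mimicking the tool's output shape.
sample = {
    "CAGDP1": "County GDP summary",
    "CAEMP25N": "Total full-time and part-time employment by county",
    "CAINC1": "County personal income summary",
}
```

For instance, searching the sample for "income" yields the CAINC1 code, which could then be passed to a data-retrieval tool.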

Parameters (JSON Schema)
- (no parameters)

Output Schema (JSON Schema)
- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses return values ('table name codes and descriptions') beyond the tool name. However, with no annotations provided, it omits key behavioral details like authentication requirements (BEA typically requires API keys), rate limits, or error handling that the description should carry.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with purpose front-loaded. Sentence 3 ('Useful for discovering...') is slightly redundant with sentence 2's implication of discovery, but overall minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a zero-parameter discovery tool. Mentions the API source, return format, and relationship to other tools. Since output schema exists, the description appropriately avoids detailing return structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema declares no parameters, which sets the baseline score of 4; there are no parameters requiring semantic explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'List' and resource 'Regional dataset table names from the BEA API'. Distinguishes from siblings implicitly by being a discovery tool for 'available' tables vs. the 'get_' tools that retrieve specific data, though it doesn't explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context with 'Useful for discovering which datasets are available' and 'for use with other BEA tools', indicating it should be called before data retrieval operations. Lacks explicit 'when not to use' but the implied workflow is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
