Glama

econdata

Server Details

Econdata MCP — wraps BLS (Bureau of Labor Statistics) public API v2

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-econdata
GitHub Stars: 0

Tool Descriptions (Grade A)

Average 3.6/5 across 4 of 4 tools scored.

Server Coherence (Grade A)

Disambiguation: 3/5

There is significant overlap between tools, particularly get_cpi and get_unemployment, which are special cases of the more general get_series tool. An agent might be confused about when to use the specialized tools versus the general one, though the descriptions help clarify the specific series IDs involved.
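The overlap can be made concrete with a minimal sketch. `call_tool` below is a hypothetical stand-in for whatever MCP client the agent uses, not part of the econdata API; the series ID comes from the tool descriptions themselves.

```python
# Hypothetical sketch of the overlap between the specialized and general
# tools. `call_tool` is a stub that records the request instead of
# sending it; a real MCP client would perform the actual invocation.
def call_tool(name, arguments):
    """Stub MCP client: return the would-be request for inspection."""
    return {"tool": name, "arguments": arguments}

# The specialized CPI tool...
specialized = call_tool("get_cpi", {"start_year": "2020", "end_year": "2024"})

# ...retrieves the same series as the general tool called with the CPI
# series ID (CUUR0000SA0, per the tool descriptions):
general = call_tool(
    "get_series",
    {"series_id": "CUUR0000SA0", "start_year": "2020", "end_year": "2024"},
)
```

This is the ambiguity the reviewer describes: nothing in the descriptions tells an agent which of the two call shapes to prefer.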

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with get_ prefix and snake_case formatting. The naming is predictable and readable throughout the set, with no deviations in style or convention.

Tool Count: 4/5

Four tools is a reasonable count for an economic data server, but it feels slightly thin given the domain. The tools cover key economic indicators, but the set could benefit from additional tools for other data types or more granular operations.

Completeness: 3/5

The server provides access to core US economic indicators (CPI, employment, unemployment) and a general series fetcher, but there are notable gaps. For example, it lacks tools for GDP, interest rates, or international data, and there's no update or delete functionality (though this might be appropriate for read-only data). The general get_series tool helps mitigate some gaps.

Available Tools (4 tools)
get_cpi (Grade B)

Get the US Consumer Price Index for All Urban Consumers (BLS series CUUR0000SA0). Returns year, month, and index value for each period.

Parameters (JSON Schema):
- end_year (optional): End year as 4-digit string (e.g. "2024").
- start_year (optional): Start year as 4-digit string (e.g. "2020").

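For context, here is a sketch of the kind of request this tool presumably issues against the BLS public API v2, which the server wraps. The `build_payload` helper is illustrative, not the server's actual implementation:

```python
import json

# Endpoint of the BLS public API v2 that this server wraps.
BLS_V2_URL = "https://api.bls.gov/publicAPI/v2/timeseries/data/"
CPI_SERIES = "CUUR0000SA0"  # CPI-U, All Urban Consumers (from the tool description)

def build_payload(series_id, start_year=None, end_year=None):
    """Build the JSON body for a BLS v2 timeseries request."""
    payload = {"seriesid": [series_id]}  # v2 accepts a list of series IDs
    if start_year is not None:
        payload["startyear"] = start_year  # 4-digit string, e.g. "2020"
    if end_year is not None:
        payload["endyear"] = end_year  # 4-digit string, e.g. "2024"
    return payload

body = json.dumps(build_payload(CPI_SERIES, "2020", "2024"))
# POST `body` to BLS_V2_URL with Content-Type: application/json.
```

Omitting both years falls back to the API's default recent window, which matches the tool's optional parameters.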
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool returns data but doesn't mention whether it's a read-only operation, potential rate limits, data freshness, error conditions, or authentication requirements. For a data retrieval tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise: two sentences that efficiently convey the tool's purpose and return format without any wasted words. It's front-loaded with the core functionality and follows with return details, making every sentence earn its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (economic data retrieval with optional date filtering), no annotations, no output schema, and 100% schema coverage, the description is minimally adequate. It explains what data is returned but doesn't cover behavioral aspects like data sources, update frequency, or error handling. The description meets basic requirements but leaves important contextual gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description doesn't mention any parameters, but schema description coverage is 100%, with both parameters well-documented in the schema (start_year and end_year as optional 4-digit strings). Since the schema does the heavy lifting, the baseline score of 3 is appropriate: the description adds no parameter information beyond what's already in the structured schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get'), identifies the exact resource ('US Consumer Price Index for All Urban Consumers (BLS series CUUR0000SA0)'), and distinguishes it from siblings by specifying the particular economic indicator. It also details what data is returned ('year, month, and index value for each period'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus the sibling tools (get_employment_by_industry, get_series, get_unemployment). It doesn't mention alternatives, prerequisites, or any context for choosing this specific CPI data tool over others. The agent must infer usage from the tool name alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_employment_by_industry (Grade A)

Get US non-farm payroll employment figures by industry. Industry options: "total_nonfarm" (default), "manufacturing", "construction", "retail", "financial", "government". Returns employment in thousands.

Parameters (JSON Schema):
- end_year (optional): End year as 4-digit string (e.g. "2024").
- industry (optional): Industry to retrieve. One of: "total_nonfarm", "manufacturing", "construction", "retail", "financial", "government". Defaults to "total_nonfarm".
- start_year (optional): Start year as 4-digit string (e.g. "2020").

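A minimal sketch of how the documented default and option validation might work. `resolve_industry` is a hypothetical helper; only the option list and the "total_nonfarm" default come from the tool's own docs:

```python
# Industry options documented by get_employment_by_industry.
INDUSTRY_OPTIONS = (
    "total_nonfarm", "manufacturing", "construction",
    "retail", "financial", "government",
)

def resolve_industry(industry=None):
    """Apply the documented default and reject unknown options."""
    if industry is None:
        industry = "total_nonfarm"  # documented default
    if industry not in INDUSTRY_OPTIONS:
        raise ValueError(f"unknown industry: {industry!r}")
    return industry
```

Each option would map to a CES series ID on the server side; only total_nonfarm's ID (CES0000000001) is confirmed by the server's own documentation.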
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the return format ('employment in thousands') but lacks details on data freshness, source, rate limits, error handling, or whether it's a read-only operation. For a data retrieval tool with zero annotation coverage, this leaves significant gaps.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the purpose and includes essential details like industry options and return format. Every word earns its place without redundancy or unnecessary elaboration.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema, no annotations), the description is adequate but incomplete. It covers the core purpose and return units, but lacks behavioral context (e.g., data source, update frequency) and does not fully compensate for the absence of annotations or output schema.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema fully documents all parameters. The description adds value by listing industry options and specifying the default, but does not provide additional context beyond what the schema already covers, such as date range implications or data availability constraints.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get'), resource ('US non-farm payroll employment figures'), and scope ('by industry'), with specific industry options listed. It distinguishes from sibling tools like 'get_cpi' or 'get_unemployment' by focusing on employment data rather than inflation or unemployment rates.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving employment data by industry, but does not explicitly state when to use this tool versus alternatives like 'get_series' or 'get_unemployment'. No guidance on prerequisites, exclusions, or comparative contexts is provided.


get_series (Grade A)

Fetch a BLS time series by series ID. Returns data points with year, period, and value. Example series IDs: "CUUR0000SA0" (CPI), "LNS14000000" (unemployment rate), "CES0000000001" (total nonfarm employment).

Parameters (JSON Schema):
- series_id (required): BLS series ID (e.g. "CUUR0000SA0" for CPI).
- end_year (optional): End year as 4-digit string (e.g. "2024").
- start_year (optional): Start year as 4-digit string (e.g. "2020").

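A sketch of how the promised (year, period, value) data points map onto the BLS v2 response envelope. The sample payload below mirrors the v2 shape (Results → series → data); the numbers in it are illustrative, not real observations:

```python
# Minimal sample mimicking the BLS v2 response envelope; values are
# illustrative only, not actual BLS observations.
sample_response = {
    "status": "REQUEST_SUCCEEDED",
    "Results": {
        "series": [{
            "seriesID": "LNS14000000",
            "data": [
                {"year": "2024", "period": "M02", "value": "3.9"},
                {"year": "2024", "period": "M01", "value": "3.7"},
            ],
        }],
    },
}

def extract_points(response):
    """Flatten the v2 envelope into (year, period, value) tuples."""
    points = []
    for series in response["Results"]["series"]:
        for obs in series["data"]:
            points.append((obs["year"], obs["period"], obs["value"]))
    return points
```

Note the period codes ("M01".."M12" for monthly data) and string-typed values, which a caller would still need to parse into numbers.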
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the return format ('data points with year, period, and value') and provides example series IDs, which adds useful context. However, it lacks details on behavioral traits like rate limits, error handling, or data freshness, which are important for a data-fetching tool.


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by return details and examples, all in two efficient sentences with zero waste. Every sentence earns its place by clarifying functionality and usage.


Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is moderately complete for a simple data-fetching tool. It covers the purpose, return format, and examples, but lacks details on error cases, rate limits, or output structure beyond basic fields, which could be important for agent invocation.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (series_id, start_year, end_year) with descriptions and optionality. The description adds value by providing example series IDs, but does not explain parameter semantics beyond what the schema provides, such as date format constraints or interactions between start_year and end_year.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Fetch a BLS time series by series ID') and resource ('BLS time series'), distinguishing it from siblings like get_cpi, get_employment_by_industry, and get_unemployment by being a general-purpose series fetcher rather than a specific metric tool. It provides concrete examples of series IDs to illustrate its scope.


Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by providing example series IDs for common metrics (CPI, unemployment rate, employment), suggesting this tool is for general time series retrieval rather than specialized sibling tools. However, it does not explicitly state when to use this tool versus the alternatives or any exclusions, leaving some ambiguity.


get_unemployment (Grade A)

Get the US civilian unemployment rate over time (BLS series LNS14000000). Returns year, month, and rate for each period.

Parameters (JSON Schema):
- end_year (optional): End year as 4-digit string (e.g. "2024").
- start_year (optional): Start year as 4-digit string (e.g. "2020").

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the return format (year, month, rate for each period) which is valuable behavioral information, but doesn't mention data freshness, source reliability, rate limits, error conditions, or whether this is a read-only operation (though 'Get' implies reading).


Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise, with two sentences that each earn their place: the first establishes purpose and data source, the second specifies return format. No wasted words, well structured, and front-loaded with essential information.


Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only data retrieval tool with 100% schema coverage and no output schema, the description provides good context: purpose, specific data series, and return format. However, it lacks information about data range defaults (what happens when no parameters are provided), temporal granularity, or potential limitations that would be helpful for complete understanding.


Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents both optional parameters (start_year and end_year). The description adds no additional parameter information beyond what's in the schema, maintaining the baseline score for high schema coverage.


Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('Get') and resource ('US civilian unemployment rate over time'), identifies the exact data series (BLS series LNS14000000), and distinguishes from siblings by specifying the unemployment rate data rather than CPI, employment by industry, or generic series data.


Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving unemployment rate time series data, but provides no explicit guidance on when to use this tool versus alternatives like get_cpi or get_employment_by_industry. There's no mention of prerequisites, limitations, or comparative use cases.

