FRED Economic Data
Server Details
GDP, unemployment, CPI, interest rates, and 800K+ economic time series from the Federal Reserve
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
5 tools

get_category_series
Get all FRED series within a specific category.
FRED organizes data into a hierarchy of categories. Use this to browse
available series within a topic area. Common category IDs:
- 32991: Money, Banking, & Finance
- 10: Population, Employment, & Labor Markets
- 32992: National Accounts (GDP)
- 1: Production & Business Activity
- 32455: Prices (CPI, PPI)
- 97: Housing
Args:
category_id: FRED category identifier (integer).

| Name | Required | Description | Default |
|---|---|---|---|
| category_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
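As a concrete illustration, a minimal `tools/call` request for this tool might look like the sketch below, assuming the standard MCP JSON-RPC envelope; the category ID 32991 (Money, Banking, & Finance) is one of the examples listed in the description above.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_category_series",
    "arguments": { "category_id": 32991 }
  }
}
```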
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full disclosure burden. It adds useful domain context (FRED category hierarchy) and concrete example IDs, but lacks technical behavioral details such as pagination behavior for large categories, rate limits, or explicit confirmation that this is a safe read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. The 'Args:' section clearly documents the parameter, and the bulleted examples are concise and high-value. The text efficiently combines context, usage, and parameter documentation without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has only one parameter, an output schema covering the return structure, and no annotations, the description is adequately complete. The category examples are particularly valuable for a browsing tool. It could improve by mentioning pagination or result limits if applicable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (only 'title': 'Category Id' in schema), the description compensates effectively. It explains the parameter is a 'FRED category identifier' (adding domain semantics) and provides six concrete integer examples mapping to specific economic topics, which is critical for an opaque integer parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves FRED series within a category using the specific verb 'Get'. It identifies the resource (FRED series) and scope (within a specific category). However, it does not explicitly distinguish from siblings like search_series (keyword search vs category browsing) or get_series_info (specific series vs category listing), though 'browse' implies a filtering pattern.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool ('Use this to browse available series within a topic area'), establishing the browsing use case. However, it lacks explicit guidance on when not to use it (e.g., 'if you don't know the category ID, use search_series instead') or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_release_dates
Get release dates for a specific FRED data release.
Returns the dates when a particular data release was published. Useful
for tracking when economic indicators are updated. Common release IDs:
- 10: Consumer Price Index
- 46: Producer Price Index
- 50: Employment Situation (jobs report)
- 53: Gross Domestic Product
- 17: Federal Reserve H.15 (interest rates)
- 21: Federal Reserve H.6 (money stock)
Args:
release_id: FRED release identifier (integer).

| Name | Required | Description | Default |
|---|---|---|---|
| release_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
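Under the same assumed MCP request envelope as the earlier sketch, only the `arguments` object changes. For the Employment Situation release (ID 50, taken from the list above) it might be:

```json
{ "release_id": 50 }
```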
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully discloses the read-only nature ('Get', 'Returns') and—crucially—provides enumerated examples of valid release IDs mapping to economic indicators (CPI, PPI, etc.), which is essential behavioral context for correct invocation. Lacks explicit safety confirmation or error handling details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with front-loaded purpose, followed by use case, then practical enum examples, and finally Args documentation. The 'Args:' line is slightly redundant given the schema but acceptable. No extraneous information; the enumerated IDs earn their place by preventing hallucination.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (per context signals), the description appropriately does not detail return values. The single parameter is fully documented with examples, and the economic domain context (common release IDs) provides sufficient completeness for an agent to invoke this correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage (only 'Release Id' as title). The description compensates fully by providing the semantic meaning ('FRED release identifier'), data type (integer), and—critically—concrete examples of valid values (10, 46, 50, etc.) with human-readable mappings. This adds substantial value beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states a clear verb ('Get') and resource ('release dates for a specific FRED data release'), and distinguishes itself from series-focused siblings by specifying 'data release' rather than series data. The first sentence establishes exact scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context on utility ('Useful for tracking when economic indicators are updated'), but lacks explicit guidance on when to use this versus siblings like get_series_observations (data values) or get_series_info (metadata). No exclusion criteria provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_series_info
Get metadata and details for a specific FRED series.
Returns comprehensive information about a series including its title,
frequency, units, seasonal adjustment, source, and date range. Use this
to understand what a series measures before pulling observations.
Args:
series_id: FRED series identifier (e.g. 'UNRATE', 'GDP', 'CPIAUCSL').

| Name | Required | Description | Default |
|---|---|---|---|
| series_id | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
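A sketch of the `arguments` payload under the same assumed envelope, using the documented UNRATE example:

```json
{ "series_id": "UNRATE" }
```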
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses what information is returned (title, frequency, units, etc.), which helps the agent understand the output nature. However, it lacks explicit statements about idempotency, safety, or error conditions despite being a read-only metadata operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently delivers purpose, behavioral context, and usage guidance in two sentences before documenting the parameter. The 'Args' section is slightly unusual for MCP format but is necessary given the schema's lack of descriptions. No extraneous content is present.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that an output schema exists (covering return structure), the description appropriately focuses on usage workflow and input semantics rather than output details. For a single-parameter metadata lookup tool, the description provides sufficient context for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage for the series_id parameter. The description fully compensates by defining it as a 'FRED series identifier' and providing concrete examples ('UNRATE', 'GDP', 'CPIAUCSL'), giving the agent necessary semantic context that the schema lacks.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Get' with the resource 'metadata and details' and scope 'specific FRED series'. It clearly distinguishes from data-fetching siblings by contrasting 'metadata' versus the implied 'observations' from get_series_observations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'Use this to understand what a series measures before pulling observations' establishes a clear workflow sequence, implicitly guiding the agent to use this before get_series_observations. However, it does not explicitly mention search_series as the alternative when the series_id is unknown.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_series_observations
Get time series observations (data points) for a FRED series.
Returns the actual data values for an economic indicator over time.
Use search_series first to find the series_id, or use well-known IDs
like UNRATE, GDP, CPIAUCSL, FEDFUNDS, MORTGAGE30US.
For state unemployment, use state abbreviation + 'UR' (e.g. WAUR for
Washington, CAUR for California).
Args:
series_id: FRED series identifier (e.g. 'UNRATE', 'GDP', 'CPIAUCSL').
start_date: Optional start date in YYYY-MM-DD format (e.g. '2020-01-01').
end_date: Optional end date in YYYY-MM-DD format (e.g. '2024-12-31').

| Name | Required | Description | Default |
|---|---|---|---|
| end_date | No | | |
| series_id | Yes | | |
| start_date | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
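A hedged example of the `arguments` payload for a bounded date range, using the state-unemployment convention (CAUR for California) and the YYYY-MM-DD date format documented above:

```json
{
  "series_id": "CAUR",
  "start_date": "2020-01-01",
  "end_date": "2024-12-31"
}
```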
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns 'actual data values' and implies read-only behavior through verbs like 'Get' and 'Returns,' but lacks operational details such as rate limits, pagination behavior, date range restrictions, or error handling when invalid series IDs are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with zero waste: opening sentence defines purpose, second clarifies return value, third provides usage prerequisite, fourth gives domain-specific pattern, followed by Args documentation. Every sentence earns its place with distinct, non-redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the presence of an output schema (not shown but indicated), the description appropriately focuses on input requirements and usage workflow rather than output details. It addresses the sibling tool relationship (search_series), explains FRED-specific conventions (state unemployment codes), and fully documents all parameters, making it complete for a data retrieval tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (only titles present), requiring the description to compensate. The Args section comprehensively documents all three parameters with semantic meaning: series_id is a 'FRED series identifier,' and both date parameters specify the YYYY-MM-DD format with concrete examples, adding critical type/format information absent from the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] time series observations (data points) for a FRED series' and clarifies it 'Returns the actual data values for an economic indicator over time.' This specifically distinguishes it from sibling get_series_info (metadata) and search_series (discovery) by emphasizing 'actual data values' vs. searching or information retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states workflow prerequisite: 'Use search_series first to find the series_id.' Also provides alternatives for common cases by listing well-known IDs (UNRATE, GDP, etc.) and documents domain-specific patterns for state unemployment data (WAUR, CAUR), giving clear guidance on when and how to use the tool vs. alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_series
Search for FRED economic data series by keyword.
Use this to find series IDs for economic indicators. For example,
search 'unemployment rate' to find UNRATE, or 'gross domestic product'
to find GDP. Returns series metadata including ID, title, frequency,
units, and date range.
Common series: UNRATE (unemployment), GDP (gross domestic product),
CPIAUCSL (consumer price index), FEDFUNDS (federal funds rate),
MORTGAGE30US (30-year mortgage rate), MEHOINUSA672N (median household income).
Args:
search_text: Keywords to search for (e.g. 'unemployment rate', 'GDP', 'inflation').
limit: Maximum number of results to return (default 10, max 1000).

| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | 10 |
| search_text | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
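A sketch of the `arguments` payload for a keyword search under the same assumed envelope; `limit` is optional, defaults to 10, and is capped at 1000 per the description above:

```json
{
  "search_text": "unemployment rate",
  "limit": 5
}
```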
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses return values ('Returns series metadata including ID, title, frequency, units, and date range') which compensates partially. Could be improved by explicitly stating this is read-only/safe, though this is implied by 'search' semantics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured and front-loaded: purpose first, followed by usage context, return values, helpful common series examples, and parameter documentation. The common series list is slightly lengthy but provides valuable domain context. No wasted words in the opening sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for complexity: 2 simple parameters fully documented despite 0% schema coverage, and return values sufficiently summarized given that an output schema exists (per context signals). Common series examples add practical completeness for the economic data domain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% (properties lack descriptions), but the Args section fully compensates: 'search_text' includes purpose + examples, and 'limit' includes constraints (default 10, max 1000) not visible in the raw schema. Adds complete semantic meaning beyond the bare titles.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear, specific verb ('Search') and resource ('FRED economic data series'). The description effectively distinguishes this discovery tool from siblings like 'get_series_info' and 'get_series_observations' by emphasizing it finds series IDs by keyword, while siblings likely require existing IDs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this to find series IDs for economic indicators' with concrete examples (unemployment rate → UNRATE). While it doesn't explicitly name sibling alternatives, the workflow implication (search first to get IDs, then use other tools) is clear from the context and return value description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management — store and rotate API keys and OAuth tokens in one place
- Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.