FXMacroData
Server Details
Macroeconomic and FX time-series data for AI agents: indicators, calendars, COT, forex, commodities.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: fxmacrodata/fxmacrodata
- GitHub Stars: 3
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
9 tools
commodities (Commodity Indicators), Grade A, Read-only
Get commodity and energy indicator time series. Supported indicators: gold, platinum, silver.
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | No | | |
| indicator | Yes | | |
| start_date | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
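For orientation, here is a minimal sketch of calling this tool with the official MCP Python SDK over Streamable HTTP. The server URL is a placeholder and the ISO-8601 date format is an assumption, since neither is documented in the tool definition above.

```python
# Minimal sketch: call the commodities tool via the MCP Python SDK's Streamable HTTP client.
# The URL is a placeholder and the date format is assumed (ISO 8601), not documented above.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; substitute the real endpoint


async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.call_tool(
                "commodities",
                arguments={
                    "indicator": "gold",         # required; gold, platinum, or silver
                    "start_date": "2024-01-01",  # optional; format assumed, not documented
                    "end_date": "2024-12-31",    # optional
                },
            )
            print(result.content)


asyncio.run(main())
```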
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds minimal behavioral context beyond this, only listing supported indicators. It doesn't disclose rate limits, authentication needs, or data freshness. With the annotations doing the heavy lifting, the description adds some value but lacks richer behavioral detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences that directly state the tool's function and supported indicators. It is front-loaded with the main purpose and wastes no words. Every sentence earns its place by providing essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, 1 required), the annotations covering safety and scope, and the presence of an output schema (so return values needn't be explained), the description is reasonably complete. It clarifies the indicator parameter but misses date parameter details. For a read-only data retrieval tool with good annotations, it's mostly adequate but could be more thorough on parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'Supported indicators: gold, platinum, silver', which clarifies the 'indicator' parameter semantics. However, it doesn't explain 'start_date' or 'end_date' parameters, their formats, or default behaviors. It partially compensates for the coverage gap but leaves key parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get commodity and energy indicator time series' specifies the verb (get) and resource (time series data). It is distinguished from some siblings by its focus on commodities and energy (vs. forex, market_sessions), though the comparison is never made explicit. It also doesn't fully differentiate itself from indicator_query, which might overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by listing supported indicators (gold, platinum, silver), suggesting when to use this tool for those specific commodities. However, it provides no explicit guidance on when to choose this over alternatives like indicator_query or forex, nor any prerequisites or exclusions. Usage context is implied but not clearly articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cot_data (COT Report), Grade A, Read-only
Get CFTC Commitment of Traders (COT) weekly positioning data for a currency's FX futures contract. Supported currencies: AUD, CAD, CHF, EUR, GBP, JPY, NZD, USD.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | Yes | | |
| end_date | No | | |
| start_date | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
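Reusing the session pattern sketched above for the commodities tool, a cot_data call differs only in its arguments; the values below are illustrative and the date format is an assumption.

```python
# Illustrative arguments for session.call_tool("cot_data", arguments=cot_args).
cot_args = {
    "currency": "EUR",           # required; AUD, CAD, CHF, EUR, GBP, JPY, NZD, or USD
    "start_date": "2024-01-05",  # optional; ISO-8601 format assumed, not documented
    "end_date": "2024-06-28",    # optional
}
```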
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds useful context about weekly data frequency and currency support, but does not disclose additional behavioral traits like rate limits, authentication needs, or data format. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by essential details in the second. Every sentence earns its place by specifying data type, scope, and constraints without redundancy. It is appropriately sized and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, 1 required), the annotations covering safety and scope, and the presence of an output schema, the description is mostly complete. It explains the tool's purpose and currency constraints well, but could improve by addressing date parameters or data format. The output schema reduces the need for return value details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It explains the 'currency' parameter by listing supported values (AUD, CAD, etc.), but does not address 'start_date' or 'end_date' parameters. This partial coverage meets the baseline for low schema coverage, as it adds some meaning but leaves gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get CFTC Commitment of Traders (COT) weekly positioning data'), identifies the resource ('currency's FX futures contract'), and distinguishes from siblings by specifying the exact data type (COT reports) and supported currencies. It avoids tautology by providing concrete details beyond the name/title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by listing supported currencies, but does not explicitly state when to use this tool versus alternatives (e.g., other data tools like 'forex' or 'commodities'). It provides some guidance on scope but lacks explicit when/when-not instructions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
data_catalogue (Indicator Catalogue), Grade A, Read-only
Get available macroeconomic indicators for a currency. Supported currencies: AUD, BRL, CAD, CHF, CNY, DKK, EUR, GBP, JPY, NZD, PLN, SEK, SGD, USD.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
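An illustrative call shape, again reusing the session sketch above; only the currency code is needed.

```python
# Illustrative arguments for session.call_tool("data_catalogue", arguments=catalogue_args).
catalogue_args = {
    "currency": "USD",  # required; one of the 14 documented currency codes
}
```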
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, covering safety and scope. The description adds value by specifying the supported currencies, which provides operational context beyond annotations. However, it doesn't disclose additional behavioral traits like rate limits, error handling, or response format, keeping the score at baseline with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and follows with essential details (supported currencies). Every word earns its place with no redundancy or fluff, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter), rich annotations (covering safety and scope), and the presence of an output schema (which handles return values), the description is reasonably complete. It specifies the tool's purpose and supported currencies, addressing key contextual gaps. However, it could improve by clarifying differentiation from siblings or adding minor usage nuances.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the schema provides minimal semantic information. The description compensates by listing supported currency values (e.g., AUD, USD), which adds meaning to the 'currency' parameter. However, it doesn't fully detail parameter usage or constraints, resulting in an adequate but not comprehensive score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get available macroeconomic indicators for a currency.' It specifies the verb ('Get') and resource ('macroeconomic indicators') with a clear scope ('for a currency'). However, it doesn't explicitly differentiate from sibling tools like 'indicator_query' or 'forex', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by listing supported currencies, which implies when to use it (for those currencies). However, it doesn't explicitly state when to use this tool versus alternatives like 'indicator_query' or 'forex', nor does it mention prerequisites or exclusions. The guidance is implied rather than explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forex (FX Spot Rates), Grade A, Read-only
Get FX spot rates for a currency pair with optional technical indicators. Supported currencies: AUD, BRL, CAD, CHF, CNY, DKK, EUR, GBP, JPY, NZD, PLN, SEK, SGD, USD. Optional indicators parameter accepts a comma-separated list of technical indicator slugs to compute from the spot-rate series. Supported indicator values: bollinger_bands, ema_12, ema_26, macd, rsi_14, sma_20, sma_200, sma_50, all.
| Name | Required | Description | Default |
|---|---|---|---|
| base | Yes | | |
| quote | Yes | | |
| end_date | No | | |
| indicators | No | | |
| start_date | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
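The indicators parameter is the only one whose format the description spells out (a comma-separated list of slugs); the sketch below is illustrative, and the date format remains an assumption.

```python
# Illustrative arguments for session.call_tool("forex", arguments=forex_args).
forex_args = {
    "base": "EUR",                  # required; base currency of the pair
    "quote": "USD",                 # required; quote currency of the pair
    "indicators": "rsi_14,sma_50",  # optional; comma-separated indicator slugs, or "all"
    "start_date": "2024-01-01",     # optional; ISO-8601 format assumed, not documented
    "end_date": "2024-12-31",       # optional
}
```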
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and closed-world behavior. The description adds value by specifying supported currencies and indicators, which provides context on data availability and optional features, though it doesn't detail rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and efficiently lists supported currencies and indicators. It could be slightly more structured, but every sentence adds value without redundancy, making it appropriately concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, annotations covering safety, and an output schema, the description is mostly complete. It covers key inputs and optional features, though it could benefit from more usage context or error handling details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description compensates by explaining the 'indicators' parameter with supported values and format. However, it doesn't clarify 'base' and 'quote' beyond listing currencies, or explain date parameters, leaving some semantic gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get FX spot rates') and resources ('currency pair'), and distinguishes it from siblings by mentioning technical indicators. It explicitly lists supported currencies and indicators, making the scope unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying supported currencies and indicators, but does not explicitly state when to use this tool versus alternatives like 'commodities' or 'indicator_query'. No exclusions or clear alternatives are provided, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
indicator_query (Indicator Time Series), Grade B, Read-only
Get macroeconomic indicator time series for a currency. Supported currencies: AUD, BRL, CAD, CHF, CNY, DKK, EUR, GBP, JPY, NZD, PLN, SEK, SGD, USD. Supported indicators: average_hourly_earnings, balance_on_goods, balance_on_services, boc_business_outlook, breakeven_inflation_rate, broad_money, building_approvals, building_permits, business_confidence, business_sentiment, cb_assets, commodity_price_energy, commodity_price_ex_energy, commodity_price_index, commodity_prices, consumer_confidence, consumer_expectations, consumer_sentiment, core_inflation, core_inflation_median, core_inflation_mom, core_inflation_trim, credit_growth, current_account_balance, deposit_rates, domestic_credit, durable_goods_orders, employment, exports, foreign_reserves, full_time_employment, fx_reserves, gdp, gdp_quarterly, gold_reserves, gov_bond_10y, gov_bond_1y, gov_bond_20y, gov_bond_2y, gov_bond_30y, gov_bond_3y, gov_bond_40y, gov_bond_4y, gov_bond_5y, gov_bond_7y, government_debt, house_price_index, household_credit, housing_starts, imports, industrial_production, inflation, inflation_expectations, inflation_linked_bond, inflation_mom, initial_jobless_claims, job_openings, kof_barometer, m1, m2, m3, money_supply_currency, money_supply_savings_deposits, money_supply_term_deposits, money_supply_transaction_deposits, monthly_cpi, mortgage_rate, nairu, nmi, non_farm_payrolls, part_time_employment, participation_rate, pce, pce_mom, pmi, policy_rate, ppi, ppi_mom, private_sector_credit, real_exchange_rate, retail_sales, risk_free_rate, sight_deposits, snb_balance_sheet, tankan_capex, terms_of_trade, trade_balance, trade_weighted_index, trimmed_mean_inflation, unemployment, wage_price_index, wages.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | Yes | | |
| end_date | No | | |
| indicator | Yes | | |
| start_date | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
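An illustrative call shape, reusing the session sketch above; the indicator slug must come from the documented list, and the date format is an assumption.

```python
# Illustrative arguments for session.call_tool("indicator_query", arguments=query_args).
query_args = {
    "currency": "USD",                 # required; one of the documented currency codes
    "indicator": "non_farm_payrolls",  # required; any slug from the supported-indicators list
    "start_date": "2023-01-01",        # optional; ISO-8601 format assumed, not documented
    "end_date": "2024-12-31",          # optional
}
```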
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, so the agent knows this is a safe, read-only operation with limited scope. The description adds context by listing supported currencies and indicators, which helps set expectations about available data. However, it doesn't disclose rate limits, authentication needs, or data freshness, which would be valuable behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and excessively long due to the massive list of indicators. The core purpose is buried in a single sentence followed by overwhelming detail. While the lists are necessary for parameter guidance, they make the description difficult to parse and not front-loaded effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, comprehensive indicator list) and the presence of an output schema (which handles return values), the description is reasonably complete. It covers the core functionality and parameter constraints well. The main gap is lack of usage guidance relative to sibling tools, but overall it provides sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates well by providing comprehensive lists of supported values for 'currency' and 'indicator' parameters. This adds significant meaning beyond the bare schema. However, it doesn't explain the optional 'start_date' and 'end_date' parameters or their format, leaving some parameter semantics undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get macroeconomic indicator time series for a currency.' It specifies the verb ('Get') and resource ('macroeconomic indicator time series'), and distinguishes it from siblings like 'commodities' or 'forex' by focusing on economic indicators. However, it doesn't explicitly differentiate from 'indicator_visual_artifact' which might provide similar data in visual form.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It lists supported currencies and indicators but doesn't mention when to choose this over 'data_catalogue' for metadata, 'release_calendar' for upcoming data, or 'indicator_visual_artifact' for visualizations. Usage is implied through the parameter lists but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
indicator_visual_artifact (Indicator Visual Artifact), Grade A, Read-only
Get indicator time series data with MCP Apps metadata so compatible clients can render an interactive chart artifact. Supported currencies: AUD, BRL, CAD, CHF, CNY, DKK, EUR, GBP, JPY, NZD, PLN, SEK, SGD, USD. Supported indicators: average_hourly_earnings, balance_on_goods, balance_on_services, boc_business_outlook, breakeven_inflation_rate, broad_money, building_approvals, building_permits, business_confidence, business_sentiment, cb_assets, commodity_price_energy, commodity_price_ex_energy, commodity_price_index, commodity_prices, consumer_confidence, consumer_expectations, consumer_sentiment, core_inflation, core_inflation_median, core_inflation_mom, core_inflation_trim, credit_growth, current_account_balance, deposit_rates, domestic_credit, durable_goods_orders, employment, exports, foreign_reserves, full_time_employment, fx_reserves, gdp, gdp_quarterly, gold_reserves, gov_bond_10y, gov_bond_1y, gov_bond_20y, gov_bond_2y, gov_bond_30y, gov_bond_3y, gov_bond_40y, gov_bond_4y, gov_bond_5y, gov_bond_7y, government_debt, house_price_index, household_credit, housing_starts, imports, industrial_production, inflation, inflation_expectations, inflation_linked_bond, inflation_mom, initial_jobless_claims, job_openings, kof_barometer, m1, m2, m3, money_supply_currency, money_supply_savings_deposits, money_supply_term_deposits, money_supply_transaction_deposits, monthly_cpi, mortgage_rate, nairu, nmi, non_farm_payrolls, part_time_employment, participation_rate, pce, pce_mom, pmi, policy_rate, ppi, ppi_mom, private_sector_credit, real_exchange_rate, retail_sales, risk_free_rate, sight_deposits, snb_balance_sheet, tankan_capex, terms_of_trade, trade_balance, trade_weighted_index, trimmed_mean_inflation, unemployment, wage_price_index, wages.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | Yes | | |
| end_date | No | | |
| indicator | Yes | | |
| start_date | No | | |
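The call shape matches indicator_query; the difference, per the description, is that the response carries MCP Apps metadata so compatible clients can render a chart. Values below are illustrative and the date format is an assumption.

```python
# Illustrative arguments for session.call_tool("indicator_visual_artifact", arguments=artifact_args).
artifact_args = {
    "currency": "JPY",           # required
    "indicator": "policy_rate",  # required; any slug from the supported-indicators list
    "start_date": "2020-01-01",  # optional; ISO-8601 format assumed, not documented
    "end_date": "2024-12-31",    # optional
}
```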
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, openWorldHint=false, and destructiveHint=false, so the agent knows this is a safe, non-destructive read operation with limited scope. The description adds value by specifying the intended output format ('interactive chart artifact') and listing supported currencies and indicators, which provides useful context beyond what annotations convey about safety and scope.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with a massive, unformatted list of indicators that makes it difficult to parse. While the first sentence is clear and front-loaded, the lengthy indicator list (over 100 items) is excessive and doesn't earn its place in the description when a reference to available indicators would be more appropriate. This violates the principle that every sentence should earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, no output schema, 0% schema coverage), the description is incomplete. While it covers the required parameters well with extensive lists, it completely ignores the optional date parameters. The description doesn't explain what the tool returns (beyond mentioning 'interactive chart artifact'), nor does it address potential limitations, error conditions, or data freshness considerations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries the full burden of parameter documentation. It provides extensive lists of supported currencies and indicators, which directly correspond to the 'currency' and 'indicator' required parameters. However, it doesn't mention the optional 'start_date' and 'end_date' parameters at all, leaving them completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get indicator time series data with MCP Apps metadata so compatible clients can render an interactive chart artifact.' This specifies the verb ('Get'), resource ('indicator time series data'), and intended outcome ('render an interactive chart artifact'). It distinguishes this tool from sibling 'indicator_query' by emphasizing the visual artifact generation aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by listing supported currencies and indicators, suggesting this tool should be used when those specific data types are needed. However, it doesn't explicitly state when to use this tool versus the sibling 'indicator_query' tool, nor does it provide clear exclusion criteria or alternative recommendations for different scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
market_sessions (FX Market Sessions), Grade A, Read-only
Get the current FX market-session timetable and overlap windows, or request a snapshot for a specific UTC timestamp.
| Name | Required | Description | Default |
|---|---|---|---|
| at | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
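Omitting the optional at parameter returns the current timetable; supplying it requests a snapshot. The exact timestamp format below is an assumption, as the description only says "UTC timestamp".

```python
# Illustrative arguments for session.call_tool("market_sessions", arguments=sessions_args).
sessions_args = {
    "at": "2024-06-03T14:30:00Z",  # optional; UTC timestamp for a snapshot (format assumed)
}
```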
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already declare this is a read-only, non-destructive, closed-world operation. The description adds useful context about what information is returned (timetable, overlap windows) and the optional timestamp parameter functionality. However, it doesn't provide additional behavioral details like rate limits, authentication requirements, or specific format of returned data beyond what annotations cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise - a single sentence that front-loads the primary functionality and includes the parameter option. Every word earns its place with no redundancy or unnecessary elaboration. The structure efficiently communicates both the default behavior and optional parameter usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, comprehensive annotations (readOnlyHint, openWorldHint, destructiveHint), and the existence of an output schema, the description provides sufficient context. It explains what the tool returns and how to use the optional parameter. The output schema will handle return value details, so the description appropriately focuses on purpose and usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the single parameter 'at', the description compensates well by explaining its purpose: 'request a snapshot for a specific UTC timestamp.' This clarifies that the parameter is optional (implied by 'or') and specifies the expected format (UTC timestamp). The description effectively adds meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get', 'request') and resources ('FX market-session timetable', 'overlap windows', 'snapshot for a specific UTC timestamp'). It distinguishes itself from sibling tools like 'forex' or 'commodities' by focusing specifically on market session timing rather than price data or other financial information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool - to obtain market session timetables and overlap windows. It implicitly suggests using the 'at' parameter for historical snapshots versus current data. However, it doesn't explicitly state when NOT to use this tool or name specific alternatives among the sibling tools for related needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ping (Ping), Grade A, Read-only
Verify that the FXMacroData API and MCP server are reachable.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
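Since the tool takes no parameters, a call within the session sketched earlier is simply an empty arguments object.

```python
# Illustrative health check: session.call_tool("ping", arguments=ping_args).
ping_args: dict = {}  # no parameters
```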
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, and closed-world, covering key behavioral traits. The description adds valuable context about what's being verified (both API and server reachability), which isn't captured in annotations. No contradiction exists between the description and annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that immediately communicates the tool's purpose without any redundant information. Every word contributes to understanding, and it's perfectly front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, read-only operation), the presence of comprehensive annotations, and an output schema, the description provides exactly what's needed. It explains the purpose clearly without needing to cover behavioral details already in structured fields or return values handled by the output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters and 100% schema description coverage, the schema fully documents the input requirements. The description appropriately doesn't discuss parameters, maintaining focus on the tool's purpose. A baseline of 4 is appropriate for parameterless tools when the schema is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('verify') and resource ('FXMacroData API and MCP server'), distinguishing it from sibling tools like data retrieval or analysis functions. It precisely communicates the tool's role as a connectivity check rather than data processing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly provides usage context by stating it verifies reachability, suggesting it should be used to test connectivity before attempting data operations. However, it doesn't explicitly mention when NOT to use it or name alternative diagnostic tools, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
release_calendar (Release Calendar), Grade C, Read-only
Get upcoming release dates for a currency and optional indicator. Supported currencies: AUD, BRL, CAD, CHF, CNY, COMM, DKK, EUR, GBP, HKD, JPY, KRW, MXN, NOK, NZD, PLN, SEK, SGD, USD. Supported indicators: average_hourly_earnings, balance_on_goods, balance_on_services, boc_business_outlook, breakeven_inflation_rate, broad_money, building_approvals, building_permits, business_confidence, business_sentiment, cb_assets, commodity_price_energy, commodity_price_ex_energy, commodity_price_index, commodity_prices, consumer_confidence, consumer_expectations, consumer_sentiment, core_inflation, core_inflation_median, core_inflation_mom, core_inflation_trim, credit_growth, current_account_balance, deposit_rates, domestic_credit, durable_goods_orders, employment, exports, foreign_reserves, full_time_employment, fx_reserves, gdp, gdp_quarterly, gold_reserves, gov_bond_10y, gov_bond_1y, gov_bond_20y, gov_bond_2y, gov_bond_30y, gov_bond_3y, gov_bond_40y, gov_bond_4y, gov_bond_5y, gov_bond_7y, government_debt, house_price_index, household_credit, housing_starts, imports, industrial_production, inflation, inflation_expectations, inflation_linked_bond, inflation_mom, initial_jobless_claims, job_openings, kof_barometer, m1, m2, m3, money_supply_currency, money_supply_savings_deposits, money_supply_term_deposits, money_supply_transaction_deposits, monthly_cpi, mortgage_rate, nairu, nmi, non_farm_payrolls, part_time_employment, participation_rate, pce, pce_mom, pmi, policy_rate, ppi, ppi_mom, private_sector_credit, real_exchange_rate, retail_sales, risk_free_rate, sight_deposits, snb_balance_sheet, tankan_capex, terms_of_trade, trade_balance, trade_weighted_index, trimmed_mean_inflation, unemployment, wage_price_index, wages.
| Name | Required | Description | Default |
|---|---|---|---|
| currency | Yes | | |
| indicator | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
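An illustrative call shape, reusing the session sketch above; note that this tool's currency list includes extra codes (e.g. COMM, HKD, KRW) not accepted by the other indicator tools.

```python
# Illustrative arguments for session.call_tool("release_calendar", arguments=calendar_args).
calendar_args = {
    "currency": "USD",         # required; one of the 19 documented currency codes
    "indicator": "inflation",  # optional; narrows the calendar to one indicator slug
}
```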
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds no behavioral context beyond the supported values list—no mention of rate limits, authentication needs, or data freshness. With annotations providing basic safety info, this earns a baseline score for minimal added value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured: it starts with a clear purpose but devolves into a massive, unformatted list of currencies and indicators. This overwhelms the agent with raw data that could be better handled via enums or external documentation, reducing readability and efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, one optional), the annotations covering safety, and the presence of an output schema, the description is minimally complete. It defines the purpose and valid inputs but lacks usage context, error handling, or output examples, leaving gaps that could hinder effective tool selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter details. The description compensates by listing all supported currencies and indicators, which clarifies valid inputs for both parameters. However, it doesn't explain parameter interactions or formatting, keeping it at a baseline level of adequacy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get upcoming release dates for a currency and optional indicator.' It specifies the verb ('Get') and resource ('upcoming release dates'), though it doesn't explicitly differentiate from sibling tools like 'indicator_query' or 'forex', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It lists supported currencies and indicators but doesn't explain context, prerequisites, or how it differs from sibling tools like 'indicator_query' or 'data_catalogue', leaving the agent to guess based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.