
Reserve Bank of Australia

get_data

Retrieve time series observations from RBA F-tables by table ID and series key. Supports multiple series, date filtering, and CSV output for charting.

Instructions

Query an RBA F-table and return observations.

Curated tables (F1.1, F4, F6, F11, F11.1) accept plain-English series
keys that map to canonical RBA series IDs server-side. Pass a list of
keys for a multi-series query, or omit `series` to get every curated
series in the table.

Examples:
    # Cash rate target since 2020
    resp = await get_data("F1.1", series="cash_rate_target", start_date="2020")
    # → resp.records[0]: period='2020-01-01', value=0.25, series='cash_rate_target'

    # All FX rates against AUD, last year
    resp = await get_data("F11.1", start_date="2024-01-01", end_date="2024-12-31")
    # → resp.records covers aud_usd, aud_eur, aud_jpy, aud_cny, ... daily

    # Mortgage rates as CSV
    resp = await get_data("F6", format="csv", start_date="2023")
    # → resp.csv = "date,series,value\n2023-01-01,housing_standard_variable,..."

    # Raw (non-curated) F-table — pass raw RBA series IDs
    resp = await get_data("F1", series=["FIRMMCRTD", "FIRMMBAB30"])

When to use:
    - You want a time series of an RBA indicator (use latest() for current-only)
    - You want a multi-series comparison (e.g. all FX rates)
    - You want CSV for downstream charting

Returns:
    DataResponse with records, unit, period bounds, RBA source URL,
    and CC-BY 4.0 attribution.
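The `format="csv"` shape documented above can be consumed directly with the standard library. A minimal sketch — the sample rows here are illustrative placeholders, not real RBA observations:

```python
import csv
import io

# Sample payload in the shape resp.csv is documented to carry:
# a "date,series,value" header row followed by observations.
# Values below are made up for illustration.
sample_csv = (
    "date,series,value\n"
    "2023-01-01,housing_standard_variable,7.05\n"
    "2023-02-01,housing_standard_variable,7.30\n"
)

# DictReader keys each row by the header names.
rows = list(csv.DictReader(io.StringIO(sample_csv)))
values = [float(r["value"]) for r in rows]
print(rows[0]["series"])  # housing_standard_variable
print(max(values))        # 7.3
```

The same parsing works unchanged on a real `resp.csv` string, since the header is fixed at `date,series,value`.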

Input Schema

table_id (required)
    RBA F-table ID like 'F1.1' or 'F11'. Use search_tables() to discover tables.

series (optional)
    Which series to return. For curated tables: plain-English keys (e.g. 'aud_usd', 'cash_rate_target'), or a list for a multi-series query. For raw F-tables: raw RBA series IDs (e.g. 'FXRUSD'). Pass None (default) to return all curated series in the table.

start_date (optional)
    Inclusive start date. Accepts 'YYYY', 'YYYY-MM', or 'YYYY-MM-DD'. An int year (e.g. 2024) is also accepted and treated as 'YYYY'. Semantically checked: '2024-13' or '----' is rejected at the boundary.

end_date (optional)
    Inclusive end date. Same format as start_date.

format (optional, default 'records')
    Response shape. 'records' (default): flat list of observations. 'series': observations grouped by series_id. 'csv': returns the table as a CSV string in the `csv` field.
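The semantic date check described for `start_date`/`end_date` could look like the sketch below. The helper name and logic are hypothetical — this is not the server's actual implementation, just one way to accept 'YYYY', 'YYYY-MM', 'YYYY-MM-DD', or an int year while rejecting inputs like '2024-13' or '----':

```python
from datetime import date

def parse_partial_date(value):
    """Accept 'YYYY', 'YYYY-MM', 'YYYY-MM-DD', or an int year;
    raise ValueError on semantically invalid input like '2024-13'.
    Hypothetical helper, not the server's actual code."""
    parts = str(value).split("-")
    # Reject wrong shapes ('----' splits into five empty parts)
    # and non-numeric components before building a date.
    if not 1 <= len(parts) <= 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"bad date: {value!r}")
    year = int(parts[0])
    month = int(parts[1]) if len(parts) > 1 else 1
    day = int(parts[2]) if len(parts) > 2 else 1
    # date() itself enforces month 1-12 and valid day-of-month,
    # so '2024-13' raises ValueError here.
    return date(year, month, day)

print(parse_partial_date(2024))       # 2024-01-01
print(parse_partial_date("2024-06"))  # 2024-06-01
```

Note the documented semantics: a bare year or year-month is widened to the first day of that period on the start side.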

Output Schema

table_id (required)
table_name (required)
query (optional)
period (optional)
unit (optional)
records (optional)
csv (optional)
source (optional; default: Reserve Bank of Australia)
attribution (optional; default: Data sourced from the Reserve Bank of Australia and licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). https://www.rba.gov.au/copyright/)
retrieved_at (required)
rba_url (required)
server_version (optional)
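A client-side model of the response might mirror the output schema above. The field names come from the schema; the types, defaults, and sample values are assumptions for illustration, not the server's actual model:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataResponse:
    # Required fields first (Python dataclasses need non-default
    # fields before defaulted ones, so schema order is rearranged).
    table_id: str
    table_name: str
    retrieved_at: str
    rba_url: str
    # Optional fields; types are assumptions for illustration.
    query: Optional[dict] = None
    period: Optional[dict] = None
    unit: Optional[str] = None
    records: list = field(default_factory=list)
    csv: Optional[str] = None
    source: str = "Reserve Bank of Australia"
    attribution: Optional[str] = None
    server_version: Optional[str] = None

# Illustrative values only.
resp = DataResponse(
    table_id="F1.1",
    table_name="Interest Rates and Yields",
    retrieved_at="2024-01-01T00:00:00Z",
    rba_url="https://www.rba.gov.au/statistics/tables/",
)
print(resp.source)  # Reserve Bank of Australia
```

Defaulting `source` to the documented value keeps attribution intact even when a caller only fills the required fields.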
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full responsibility. It explains query behavior, curated vs. raw series handling, and response formats. The read-only nature is only implied; an explicit statement would improve transparency, but the description is otherwise sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured: core action upfront, bullet points for key distinctions, clear examples, and a 'When to use' section. It is concise without superfluous text, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, curated vs raw, multiple formats) and the presence of an output schema, the description covers all necessary aspects: parameter semantics, usage guidelines, return structure, and examples. It is fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline score is 3. The description adds value by clarifying the series parameter's semantics (plain-English keys vs. raw IDs) and providing concrete examples, going beyond the schema's own descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Query an RBA F-table and return observations,' specifying the resource and action. It distinguishes between curated and raw tables, and contrasts with the sibling tool 'latest' for current-only queries, ensuring no ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'When to use' section explicitly recommends this tool for time series, multi-series comparisons, and CSV output, while implying alternatives like 'latest'. It provides clear context for when to choose this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
