
Reserve Bank of Australia

describe_table

Retrieve series keys, units, and frequency from RBA F-tables. Curated tables yield plain-English keys; others return raw series IDs.

Instructions

Describe an RBA F-table's series, units, and frequency.

For curated F-tables (F1.1, F4, F6, F11, F11.1), returns plain-English series keys (like 'cash_rate_target', 'aud_usd') with descriptions and units. For other F-tables, fetches the CSV and returns the raw RBA series IDs from the header along with start dates.
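The curated/raw split described above can be sketched as a simple membership check. The set of curated table IDs comes from this page; the function name and the normalization step are illustrative assumptions, not the server's actual implementation.

```python
# Curated F-tables listed above; membership decides which path
# describe_table takes (plain-English keys vs. raw CSV header IDs).
CURATED_TABLES = {"F1.1", "F4", "F6", "F11", "F11.1"}

def is_curated(table_id: str) -> bool:
    """Illustrative check: curated tables get plain-English series keys."""
    return table_id.strip().upper() in CURATED_TABLES
```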

Examples:

# Curated table — plain-English keys
detail = await describe_table("F1.1")
# detail.series[0]: key='cash_rate_target', series_id='FIRMMCRT',
# unit='Per cent per annum', frequency='Daily'

# Curated FX table
detail = await describe_table("F11.1")
# detail.series has 'aud_usd', 'aud_eur', 'aud_jpy', 'aud_cny', etc.

When to use:
- Before calling get_data on a new table — to discover valid series keys
- To get the canonical RBA source URL for citation
- To distinguish curated (plain-English) tables from raw F-tables

Returns: TableDetail with id, name, description, is_curated flag, frequency, list of SeriesDetail (key, series_id, description, unit), and rba_url.
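The return structure described above can be modelled roughly as two dataclasses. The field names follow the prose and the output schema on this page; the Python types and the optional default for frequency are assumptions, not the server's actual definitions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SeriesDetail:
    key: str          # plain-English key for curated tables, else raw RBA ID
    series_id: str    # canonical RBA series ID, e.g. 'FIRMMCRT'
    description: str
    unit: str         # e.g. 'Per cent per annum'

@dataclass
class TableDetail:
    id: str
    name: str
    description: str
    is_curated: bool
    series: List[SeriesDetail]
    source_url: str            # listed in the output schema
    rba_url: str               # canonical RBA page, for citation
    frequency: Optional[str] = None  # optional per the output schema
```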

Input Schema

Name: table_id (required)
Description: RBA F-table ID like 'F1.1', 'F11', 'F6'. Use search_tables() to discover or list_curated() to enumerate the 5 plain-English tables. Case-insensitive ('f11' resolves to 'F11').
Default: none
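A client could pre-validate the table_id parameter with a sketch like the one below. The F-table ID pattern and the case-insensitive resolution ('f11' to 'F11') follow the schema description above; the helper name and the regex itself are hypothetical, not part of the server.

```python
import re

# Assumed shape of an RBA F-table ID: 'F' + number, optional '.sub' part.
_TABLE_ID_RE = re.compile(r"^F\d+(\.\d+)?$")

def resolve_table_id(raw: str) -> str:
    """Hypothetical client-side normalization: 'f11' -> 'F11'."""
    tid = raw.strip().upper()
    if not _TABLE_ID_RE.match(tid):
        raise ValueError(f"Not an RBA F-table ID: {raw!r}")
    return tid
```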

Output Schema

id (required)
name (required)
description (required)
is_curated (required)
frequency (optional)
series (required)
source_url (required)
rba_url (required)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosure. It discloses two behavioral modes (curated vs. raw tables), details the return structure (TableDetail with its fields), and gives examples showing output. It lacks any explicit mention of side effects or external calls beyond the CSV fetch for raw tables, but is overall transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, with sections for examples, when to use, and returns. Every sentence adds value; there is no redundancy. The length is appropriate given the complexity of a two-behavior tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

It covers all relevant aspects: purpose, parameter guidance, usage context, examples, and return structure. With good schema coverage and the output schema described in prose, the description is complete for a single-parameter tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with a detailed description for table_id covering case-insensitivity, examples, and references to sibling tools for discovering valid IDs. The description also adds context on how the parameter affects behavior (curated vs. raw), exceeding what the schema alone provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool describes an RBA F-table's series, units, and frequency, distinguishing between curated and non-curated tables. The specific verb 'describe' and resource 'F-table' differentiate it from sibling tools like get_data (fetches data) and list_curated (enumerates curated tables).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It includes an explicit 'When to use' section listing three contexts: before get_data to discover valid series keys, to get the canonical URL, and to distinguish curated vs. raw tables. It omits explicit when-not-to-use guidance, but implies appropriate usage through examples and the references to sibling tools in the input schema.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

