Glama
Bigred97

Australian Institute of Health and Welfare

list_curated

Retrieve a sorted list of all curated dataset IDs available for plain-English queries using get_data.

Instructions

List every curated dataset ID in this version of aihw-mcp.

These are the datasets where get_data accepts plain-English filter keys and returns aliased, well-typed measure columns. Each ID is documented via describe_dataset.

Returns: Sorted list of dataset IDs.
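As a minimal sketch of what such a zero-argument listing tool does (the dataset IDs and the `CURATED_DATASETS` registry below are hypothetical placeholders, not the real aihw-mcp catalogue), the handler only has to return the registry keys in sorted order:

```python
# Hypothetical registry standing in for the curated aihw-mcp catalogue.
# Real IDs are documented via describe_dataset; these are invented.
CURATED_DATASETS = {
    "hospital_admissions": {"description": "placeholder"},
    "burden_of_disease": {"description": "placeholder"},
    "alcohol_tobacco": {"description": "placeholder"},
}

def list_curated() -> list[str]:
    """Return every curated dataset ID, sorted."""
    return sorted(CURATED_DATASETS)

print(list_curated())
# → ['alcohol_tobacco', 'burden_of_disease', 'hospital_admissions']
```

Because the tool takes no arguments and only reads the registry, it is safe for an agent to call eagerly before deciding which dataset to query with get_data.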

Input Schema

No arguments.

Output Schema

Name: result (required)
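For illustration, a response envelope conforming to this schema has a single required `result` field holding the sorted ID list (the IDs below are invented, not taken from the real catalogue):

```python
import json

# Hypothetical payload matching the output schema: one required
# "result" field containing the sorted list of dataset IDs.
response = {"result": ["alcohol_tobacco", "burden_of_disease", "hospital_admissions"]}

assert "result" in response                              # required field present
assert response["result"] == sorted(response["result"])  # sorted, as documented
print(json.dumps(response))
```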
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description fully covers behavioral aspects: it returns a sorted list of dataset IDs and explains the nature of curated datasets. It also notes the relationship to get_data and describe_dataset, providing complete transparency for a read-only listing tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured: a clear action statement, explanatory context, and explicit return value. Each sentence adds value without unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple zero-parameter tool with an output schema, the description is complete. It defines what curated means, explains integration with sibling tools, and describes the output format (sorted list of IDs).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has no parameters and schema coverage is 100%, so the description does not need to add parameter semantics. A baseline score of 4 is appropriate as the description adds no redundant info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool lists curated dataset IDs and distinguishes it from siblings by explaining that these datasets allow plain-English filter keys and aliased columns in get_data, which is unique among sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates the tool should be used to get the full set of curated IDs and mentions that each ID is documented via describe_dataset, providing clear usage context. However, it does not explicitly state when not to use it or compare to siblings like search_datasets or top_n.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

