FEMA Disaster Declarations

fema.disaster.declarations
Read-only · Idempotent

Retrieve US federal disaster declarations from 1953 to present. Filter by state, incident type (Fire, Flood, Hurricane, etc.), and year to access disaster number, title, dates, and programs (IA, PA, HM). Source: OpenFEMA.

Instructions

Search US federal disaster declarations from 1953 to present. Filter by state, incident type (Fire, Flood, Hurricane, Tornado, Earthquake), and year. Returns disaster number, title, dates, designated programs (IA, PA, HM). Source: OpenFEMA (US Gov open data).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| state | No | US state code (e.g. CA, TX, FL). Omit for all states. | |
| incident_type | No | Disaster type to filter (e.g. Fire, Flood, Hurricane) | |
| year | No | Filter by declaration year (1953-2026) | |
| limit | No | Number of results (1-50) | 10 |
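
For illustration, a hypothetical call with every filter set, assuming the standard MCP tools/call envelope; the argument values (CA, Fire, 2020, 5) are invented examples, not output from a real request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "fema.disaster.declarations",
    "arguments": {
      "state": "CA",
      "incident_type": "Fire",
      "year": 2020,
      "limit": 5
    }
  }
}
```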

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | No | Tool response payload. Shape varies per tool; consult the tool description and inputSchema. May be an object, array, string, or number depending on the upstream provider response. | |
| error | No | Present only when the call failed. Includes error code, message, request_id, and any provider-specific extras. | |
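
As a sketch of the two shapes above: a success payload whose result is a list of declaration records, and a failure payload carrying the documented error fields. All field names and values in the record are illustrative guesses at the OpenFEMA-backed shape, not actual output:

```json
{
  "result": [
    {
      "disasterNumber": 1234,
      "declarationTitle": "EXAMPLE WILDFIRE",
      "incidentType": "Fire",
      "state": "CA",
      "declarationDate": "2020-08-22",
      "programs": ["IA", "PA", "HM"]
    }
  ]
}
```

And on failure (the code, message, and request_id keys follow the schema description; the values here are invented):

```json
{
  "error": {
    "code": "upstream_unavailable",
    "message": "OpenFEMA request timed out",
    "request_id": "req_0000"
  }
}
```
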
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and idempotent behavior. The description adds value by specifying the temporal range (1953-present), return fields (disaster number, title, dates, programs), and data source (OpenFEMA). This provides meaningful context beyond what annotations convey, though it could mention pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At three sentences, the description is concise, with no extraneous information. It front-loads the main purpose and follows with filter details and return data. Every sentence serves a purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given four parameters, an output schema, and annotations, the description covers the essential use case: what the tool does, which filters it accepts, and which fields it returns. It does not mention the limit parameter, but the input schema covers that. The description is complete enough for an agent to understand the tool's capability without additional guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline score is 3. The description adds semantic context by listing the filters (state, incident type, year), but it enumerates only five incident types (Fire, Flood, Hurricane, Tornado, Earthquake) while the schema's enum defines eleven. This incomplete listing could mislead agents into treating the unlisted types as unsupported, so the score stays at baseline.
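
To make the gap concrete, a sketch of the incident_type fragment of the input schema; only the five enum values the description names are listed here, and the schema's remaining six values are omitted rather than guessed:

```json
{
  "incident_type": {
    "type": "string",
    "description": "Disaster type to filter (e.g. Fire, Flood, Hurricane)",
    "enum": ["Fire", "Flood", "Hurricane", "Tornado", "Earthquake"]
  }
}
```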

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The tool name 'fema.disaster.declarations' is descriptive, and the description clearly states that it searches US federal disaster declarations from 1953 to present, listing the key filters and return fields. It effectively distinguishes the tool from siblings like 'fema.disaster.assistance' and 'fema.disaster.flood_claims' by specifying its unique focus on declarations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly mentions filtering by state, incident type, and year, giving clear context for use. However, it does not say when not to use this tool, for example by pointing agents to 'fema.disaster.assistance' for assistance data or 'fema.disaster.flood_claims' for flood claims. Given that the siblings are clearly distinct, the omission is minor.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
