Glama
lzinga

US Government Open Data MCP

fda_drug_events

Search FDA adverse drug event reports to identify side effects, hospitalizations, and deaths. Analyze over 20 million reports by drug name, reaction type, or seriousness level.

Instructions

Search FDA adverse drug event reports (FAERS) — side effects, hospitalizations, deaths. Over 20 million reports. Search by drug name, reaction, seriousness.

Example searches:

  • 'patient.drug.openfda.brand_name:aspirin' — events involving aspirin

  • 'patient.drug.openfda.generic_name:ibuprofen+AND+serious:1' — serious ibuprofen events

  • 'patient.reaction.reactionmeddrapt:nausea' — events where nausea was reported
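These example queries map onto openFDA's public drug-event endpoint (`https://api.fda.gov/drug/event.json`). As a minimal sketch of how such a request URL could be assembled (the helper below is illustrative, not part of this MCP server):

```python
from urllib.parse import quote

# Base endpoint for FDA adverse drug event reports (openFDA public API).
BASE_URL = "https://api.fda.gov/drug/event.json"

def build_query_url(search: str, limit: int = 10) -> str:
    """Assemble an openFDA query URL.

    openFDA treats '+' as a literal separator inside search expressions
    (e.g. '+AND+'), so it is kept out of percent-encoding here.
    """
    limit = max(1, min(limit, 100))  # the schema caps limit at 100
    encoded = quote(search, safe='+:."[]_')  # preserve openFDA syntax characters
    return f"{BASE_URL}?search={encoded}&limit={limit}"

# Events involving aspirin, capped at 5 results:
url = build_query_url("patient.drug.openfda.brand_name:aspirin", limit=5)
```

The resulting URL carries the search expression verbatim, which matches how the examples above are written.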

Input Schema

  • search (optional) — OpenFDA search query. Examples: 'field:value', 'field:"Exact Phrase"', 'field:[20200101+TO+20231231]', '_exists_:field'. Combine with '+AND+', '+OR+', '+NOT+'.

  • limit (optional) — Maximum number of results (default 10, max 100).
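Since the search parameter combines clauses with '+AND+', '+OR+', and '+NOT+', a small helper (hypothetical, not provided by the server) can keep compound queries readable:

```python
def and_clauses(*clauses: str) -> str:
    """Join openFDA search clauses with the '+AND+' operator."""
    return "+AND+".join(clauses)

# Serious ibuprofen events, mirroring the second example search above:
query = and_clauses(
    "patient.drug.openfda.generic_name:ibuprofen",
    "serious:1",
)
# → 'patient.drug.openfda.generic_name:ibuprofen+AND+serious:1'
```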
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the dataset size and searchable fields, which adds useful context. However, it does not mention behavioral aspects like rate limits, authentication requirements, pagination (beyond the limit parameter), or error handling. For a search tool with no annotations, this leaves gaps in understanding operational constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by key details (dataset size, search fields) and practical examples. Every sentence earns its place by providing essential information or illustrative guidance, with no wasted words or redundant content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with two parameters), 100% schema coverage, and no output schema, the description is largely complete. It covers purpose, usage context, and parameter semantics well. However, without annotations or output details, it lacks information on behavioral traits (e.g., rate limits) and return format, which could be important for an agent invoking this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds value by providing concrete example queries that show how to use the search parameter with specific fields (e.g., patient.drug.openfda.brand_name, serious) and operators (e.g., +AND+), going beyond the schema's generic examples. The limit parameter, by contrast, is documented only in the schema and never appears in the examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches FDA adverse drug event reports (FAERS) for side effects, hospitalizations, and deaths, specifying the dataset size (over 20 million reports) and searchable fields (drug name, reaction, seriousness). It distinguishes itself from sibling tools like fda_approved_drugs or fda_drug_recalls by focusing on adverse event reports rather than approvals, labels, or recalls.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (searching adverse drug events) and includes example queries that illustrate practical applications. However, it does not explicitly state when not to use it or name specific alternatives among the many FDA-related sibling tools, such as fda_drug_labels for drug information or fda_drug_recalls for recall data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
