Server Details

FDA MCP — US Food and Drug Administration public API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-fda
GitHub Stars: 0

Tool Descriptions: C

Average 2.9/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose targeting different FDA data domains: drug events, drug labels, and food recalls. The descriptions specify unique resources (FAERS reports, package inserts, recall records) with no overlap in functionality, making tool selection unambiguous for an agent.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with 'search_' prefix and descriptive suffixes (_drug_events, _drug_labels, _food_recalls). This predictable naming convention enhances readability and agent comprehension across the tool set.

Tool Count: 4/5

Three tools is slightly lean but reasonable for an FDA data server, covering key public datasets. While more tools could expand the scope (e.g., device recalls, inspections), the current count is well-suited to focused queries without being overwhelming or insufficient.

Completeness: 3/5

The tools provide search-only functionality for three FDA domains, lacking CRUD operations (which may be intentional given public data). However, there are notable gaps: no tools for other FDA areas like devices, inspections, or approvals, and no get/update/delete operations, limiting workflow coverage.

Available Tools

3 tools
search_drug_events: C

Search FDA adverse drug event (FAERS) reports. Returns reports matching the query, including patient reactions, drug details, and outcomes.

Parameters (JSON Schema)

- limit (optional): Number of results to return (default 5, max 100)
- query (required): Search query using openFDA syntax (e.g., "patient.drug.medicinalproduct:aspirin" or just a drug name)
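The server's implementation is not shown on this page, but a tool like this presumably wraps the public openFDA adverse-event endpoint (api.fda.gov/drug/event.json), passing query and limit through as openFDA's search and limit parameters. A minimal sketch under that assumption; the function name and the pass-through mapping are illustrative, not taken from the repository:

```python
from urllib.parse import urlencode

# Hypothetical sketch: build an openFDA FAERS query URL from the tool's
# two parameters. The real search_drug_events internals are not shown here.
FAERS_ENDPOINT = "https://api.fda.gov/drug/event.json"

def build_drug_event_url(query: str, limit: int = 5) -> str:
    """Construct a FAERS search URL from the tool's parameters."""
    if not 1 <= limit <= 100:  # mirrors the documented max of 100
        raise ValueError("limit must be between 1 and 100")
    return f"{FAERS_ENDPOINT}?{urlencode({'search': query, 'limit': limit})}"

print(build_drug_event_url("patient.drug.medicinalproduct:aspirin"))
# → https://api.fda.gov/drug/event.json?search=patient.drug.medicinalproduct%3Aaspirin&limit=5
```

Fetching that URL (no API key required for light use) returns a JSON body whose results array holds the individual FAERS reports.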
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure, but it lacks those details. It mentions the return content but doesn't cover rate limits, authentication needs, pagination, or error handling. The description doesn't contradict any annotations, but it's insufficient for a tool with no annotation support.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the purpose and return content in two clear sentences without unnecessary details. It could be slightly improved by integrating usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It lacks details on behavioral traits, output format, and usage guidelines relative to siblings, making it inadequate for a search tool with two parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters (query and limit). The description adds no parameter-specific semantics beyond what's in the schema, maintaining the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches FDA adverse drug event reports and specifies what information is returned (patient reactions, drug details, outcomes). It distinguishes from sibling tools by focusing on drug events rather than labels or food recalls, though it doesn't explicitly contrast them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus the sibling tools (search_drug_labels, search_food_recalls). The description implies usage for adverse event searches but doesn't specify scenarios or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_drug_labels: C

Search FDA drug labeling (package inserts). Returns label sections such as indications, warnings, dosage, and adverse reactions.

Parameters (JSON Schema)

- limit (optional): Number of results to return (default 5, max 100)
- query (required): Search query (e.g., a drug brand name, generic name, or active ingredient)
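Unlike search_drug_events, this tool's query is a plain drug name rather than openFDA field syntax, so the server presumably translates it into a field search against the openFDA label endpoint (api.fda.gov/drug/label.json). A hedged sketch of one plausible mapping; the openfda.brand_name field is a real openFDA label field, but its use here is an assumption, not documented server behavior:

```python
from urllib.parse import urlencode

# Hypothetical sketch of how a plain-name query might map onto openFDA's
# label dataset. The server's actual field choice is not shown on this page.
LABEL_ENDPOINT = "https://api.fda.gov/drug/label.json"

def build_label_url(query: str, limit: int = 5) -> str:
    """Search package-insert records by brand name (assumed mapping)."""
    search = f'openfda.brand_name:"{query}"'
    return f"{LABEL_ENDPOINT}?{urlencode({'search': search, 'limit': limit})}"

print(build_label_url("Tylenol"))
# → https://api.fda.gov/drug/label.json?search=openfda.brand_name%3A%22Tylenol%22&limit=5
```

Each returned record contains the label sections the description names (indications_and_usage, warnings, dosage_and_administration, adverse_reactions) as text arrays.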
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return content (label sections) but lacks critical details: whether this is a read-only operation, potential rate limits, authentication needs, error conditions, or pagination behavior. For a search tool with zero annotation coverage, this leaves significant gaps in understanding its operational traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two clear sentences. The first states the action and resource, and the second specifies return content. There's no wasted verbiage, and information is front-loaded, though it could be slightly more structured by explicitly separating purpose from output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with two parameters), lack of annotations, and no output schema, the description is minimally adequate. It covers the core function and return sections but omits behavioral context, error handling, and output format details. It meets basic needs but leaves the agent under-informed about operational aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds minimal parameter semantics beyond the schema. It implies the 'query' parameter can include drug names or ingredients, but the schema already describes this with 100% coverage. No additional details about parameter interactions, defaults beyond 'limit', or search logic are provided, meeting the baseline for high schema coverage without adding significant value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching FDA drug labeling (package inserts) and returning specific label sections. It specifies the verb 'search' and resource 'FDA drug labeling', making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'search_drug_events' or 'search_food_recalls' beyond the resource focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings. It doesn't mention alternatives, prerequisites, or exclusions. While the resource focus (drug labeling) implies some context, there's no explicit comparison to 'search_drug_events' (likely adverse events) or 'search_food_recalls', leaving the agent to infer usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_food_recalls: C

Search FDA food enforcement / recall records. Returns product recalls, reasons for recall, distribution patterns, and recall status.

Parameters (JSON Schema)

- limit (optional): Number of results to return (default 10, max 100)
- query (optional): Search query (e.g., a product name, company, or reason for recall). Omit to get recent recalls.
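This is the only tool of the three with an optional query: omitting it returns recent recalls. Against the public openFDA food-enforcement endpoint (api.fda.gov/food/enforcement.json), one plausible way to implement that branch is to sort by report date when no query is given. A sketch under that assumption; report_date:desc is valid openFDA sort syntax, but the fallback behavior is a guess at the server's internals:

```python
from typing import Optional
from urllib.parse import urlencode

# Hypothetical sketch: the query parameter is optional, so the "recent
# recalls" fallback is modeled here as a newest-first sort.
RECALL_ENDPOINT = "https://api.fda.gov/food/enforcement.json"

def build_recall_url(query: Optional[str] = None, limit: int = 10) -> str:
    """Build a food-enforcement search URL; no query means recent recalls."""
    params = {"limit": limit}
    if query:
        params["search"] = query
    else:
        # Assumed fallback: recent records first when no query is given.
        params["sort"] = "report_date:desc"
    return f"{RECALL_ENDPOINT}?{urlencode(params)}"

print(build_recall_url("salmonella"))
# → https://api.fda.gov/food/enforcement.json?limit=10&search=salmonella
```

The default limit of 10 (versus 5 for the two drug tools) is carried over from the parameter table above.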
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what data is returned but lacks critical behavioral details such as whether this is a read-only operation, potential rate limits, authentication requirements, error handling, or pagination behavior. For a search tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, consisting of two clear sentences that state the tool's purpose and what it returns. There is no wasted text or redundancy, making it efficient. However, it could be slightly more structured by explicitly separating purpose from output details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (search with two parameters), no annotations, and no output schema, the description is minimally adequate. It covers the basic purpose and return data but misses behavioral context and usage guidelines. For a search tool without annotations or an output schema, it should do more to explain how results are structured and what the limitations are.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already fully documents the two parameters (limit and query). The description adds no meaning beyond what the schema provides: it doesn't explain parameter interactions, default behaviors beyond the schema's "default 10", or search syntax nuances. The baseline score of 3 is appropriate when the schema does all the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: searching FDA food enforcement/recall records and returning specific information like product recalls, reasons, distribution patterns, and status. It uses specific verbs ('search', 'returns') and identifies the resource (FDA food enforcement/recall records). However, it doesn't explicitly differentiate from sibling tools like search_drug_events or search_drug_labels, which appear to search different FDA data domains.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or other search options, nor does it specify prerequisites, exclusions, or optimal use cases. The only implied usage is for searching FDA food recall data, but this is redundant with the purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
