Glama

Server Details

Clean FDA regulatory data: company resolution, facilities, recalls, inspections, approvals.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

[Diagram: MCP client → Glama MCP Gateway → MCP server]

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

48 tools
fda_bpdr_summary: Search BPDR Annual Summary (A)
Read-only · Idempotent

Search FDA's Biological Product Deviation Report annual summary counts. This is summary-level biotech and blood/HCT/P manufacturing signal from official FDA annual reports, not per-event case detail.

Parameters (JSON Schema)
- limit (optional): Max results to return (1-500)
- offset (optional): Result offset for pagination
- keyword (optional): Keyword to search in group_name or establishment_type
- row_type (optional): Whether the row is a normal establishment line, subtotal, or total
- group_name (optional): Group name, for example Licensed Non-Blood Manufacturers
- fiscal_year (optional): Metric fiscal year, for example 2024
- establishment_type (optional): Establishment type, for example Vaccine or 351 HCT/P
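Since all parameters are optional filters, a call can combine any subset of them. A sketch of the JSON-RPC 2.0 payload an MCP client would send for a filtered query (the request shape follows the MCP tools/call convention; the argument values are illustrative, not taken from the live server):

```python
import json

# Hypothetical MCP tools/call request for fda_bpdr_summary.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "fda_bpdr_summary",
        "arguments": {
            "fiscal_year": 2024,  # metric fiscal year
            "group_name": "Licensed Non-Blood Manufacturers",
            "limit": 50,          # max results (1-500)
        },
    },
}

wire = json.dumps(request)  # serialized form sent over the transport
print(wire)
```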
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover readOnly/idempotent hints, so the description focuses on adding data-granularity context: 'summary-level' and 'annual reports' clarify the temporal aggregation and scope. This behavioral trait (aggregate vs. granular) is critical for tool selection and not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
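The annotations these scores keep referring to are the MCP tool annotation hints. A minimal sketch of how a server might declare them (the readOnlyHint/idempotentHint field names follow the MCP specification; the schema fragment is abridged and the description text is quoted from above):

```python
# Abridged MCP tool definition carrying behavior hints.
# No hint encodes data granularity, so the description must still say
# "summary-level ... not per-event case detail" itself.
tool = {
    "name": "fda_bpdr_summary",
    "description": (
        "Search FDA's Biological Product Deviation Report annual summary "
        "counts. Summary-level signal, not per-event case detail."
    ),
    "inputSchema": {"type": "object", "properties": {}},  # abridged
    "annotations": {
        "readOnlyHint": True,    # does not modify its environment
        "idempotentHint": True,  # repeated calls have no additional effect
    },
}
```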

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence front-loads the action and resource; second sentence immediately establishes scope limitations. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, description adequately explains return values ('summary counts') and data granularity. Given the complexity of FDA regulatory data and the presence of 7 query parameters, the description successfully orients the agent to the tool's specific domain (biotech/blood manufacturing signals).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with clear parameter definitions (e.g., 'fiscal_year', 'establishment_type'). Description mentions 'annual summary counts' which loosely contextualizes the fiscal_year parameter, but with full schema coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with clear resource 'FDA's Biological Product Deviation Report annual summary counts'. Crucially distinguishes from siblings by clarifying this is 'summary-level... not per-event case detail', differentiating it from event-level tools like fda_consumer_events or fda_inspections in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit usage guidance by stating 'not per-event case detail', suggesting when NOT to use the tool. However, lacks explicit alternatives (e.g., 'use X for individual case details') or explicit prerequisites for the search parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_citations: Search Inspection Citations (A)
Read-only · Idempotent

Search specific CFR violation citations from FDA inspections (Compliance Dashboard data, not available in openFDA API). Filter by company name, FEI number, CFR number (e.g., '21 CFR 211.68' for a specific section, or '21 CFR 211' for all cGMP violations), or keyword in citation descriptions. Returns the cited regulation, short and long descriptions of the finding, and inspection dates. Related: fda_inspections (inspection classification and dates by FEI), fda_compliance_actions (warning letters that may reference these citations).

Parameters (JSON Schema)
- limit (optional): Max results to return (1-500)
- keyword (optional): Keyword to search in citation descriptions
- fei_number (optional): FDA Establishment Identifier (FEI number)
- company_name (optional): Company name (fuzzy match)
- act_cfr_number (optional): CFR number (exact match)
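The act_cfr_number examples in the description translate directly into arguments. A sketch of the broad versus narrow query from the description (the CFR numbers are the documented examples; the company name is made up):

```python
# Broad query: all cGMP (21 CFR Part 211) citations for a company.
broad = {"company_name": "Example Pharma", "act_cfr_number": "21 CFR 211"}

# Narrow query: only citations of the specific section 21 CFR 211.68.
narrow = {"company_name": "Example Pharma", "act_cfr_number": "21 CFR 211.68"}

# The section-level number extends the part-level number, which is how
# one exact-match parameter supports both broad and specific searches.
assert narrow["act_cfr_number"].startswith(broad["act_cfr_number"])
```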
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent safety. The description adds valuable context about the data provenance ('Compliance Dashboard data, not available in openFDA API') and explicitly documents the return payload ('Returns the cited regulation, short and long descriptions...') despite the absence of a formal output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with zero waste: it opens with the core function, follows with filter capabilities and examples, documents return values, and closes with sibling relationships. Every sentence conveys distinct information necessary for tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 5-parameter search interface with good schema coverage and safety annotations, the description provides sufficient context for invocation. It compensates for the missing output schema by describing return fields, though a formal output schema would provide complete structural clarity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds significant semantic value by providing concrete syntax examples for the act_cfr_number parameter ('e.g., '21 CFR 211.68'... or '21 CFR 211'...'), clarifying how to perform broad vs. specific searches beyond the schema's 'exact match' notation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a specific verb ('Search'), resource ('CFR violation citations'), and data source ('Compliance Dashboard data'). It explicitly distinguishes this from the openFDA API and implies distinction from siblings by focusing specifically on citation content rather than inspection classifications.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists related tools with parenthetical explanations of their specific functions ('fda_inspections (inspection classification and dates by FEI)', 'fda_compliance_actions (warning letters...)'), providing implicit guidance on tool selection. However, it lacks explicit 'when to use this vs. when to use that' directives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_clinical_result_letters: Search Complete Response Letters (A)
Read-only · Idempotent

Search FDA Complete Response Letters (CRLs) — formal refusal-to-approve decisions on drug and biologics applications. Filter by company name (fuzzy match), application number (e.g., 'NDA 204017'), or letter type. CRLs are significant regulatory events indicating application deficiencies. Related: fda_search_drugs (drug application data including approval status).

Parameters (JSON Schema)
- limit (optional): Max results to return (1-500)
- offset (optional): Result offset for pagination
- letter_type (optional): Letter type filter (e.g. 'COMPLETE RESPONSE')
- company_name (optional): Company name (fuzzy match)
- application_number (optional): Application number (searches array, e.g. 'NDA 204017')
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint and idempotentHint, the description adds valuable domain context that CRLs are 'significant regulatory events indicating application deficiencies'—behavioral semantics crucial for agent reasoning. It could improve by mentioning data scope (historical range) or rate limiting.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently cover: (1) the tool's core function and definition, (2) available filters with specific examples, and (3) domain significance plus related tool reference. Every clause earns its place with no redundancy or verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 5-parameter schema with complete coverage and read-only annotations, the description adequately covers the tool's purpose and filtering capabilities. It lacks details about the return structure (list format, pagination behavior beyond parameter mention), which would be helpful since no output schema is provided, but remains sufficient for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description consolidates filter parameters (company, application number, letter type) but largely repeats information already in the schema (fuzzy matching, example 'NDA 204017') without adding new constraints, format details, or validation rules beyond the structured definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool 'Search FDA Complete Response Letters (CRLs)' with a clear verb and resource. It defines CRLs as 'formal refusal-to-approve decisions on drug and biologics applications,' distinguishing them from general drug data and clarifying that 'clinical result letters' in the tool name specifically refers to CRLs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description references the sibling tool 'fda_search_drugs (drug application data including approval status),' implying when to use each (CRLs for refusal decisions vs. general approval data). However, it lacks explicit 'when not to use' guidance or direct functional comparisons with other search tools in the FDA suite.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_company_compliance_timeline: Company Compliance Timeline (A)
Read-only · Idempotent

Build a reverse-chronological compliance timeline for one company and any linked subsidiaries. Combines inspections, warning letters, import alerts, import refusals, debarments, and recall/enforcement events into one dated feed.

Parameters (JSON Schema)
- limit (optional): Maximum number of combined timeline events to return
- company (required): Company name to build the timeline for
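The "one dated feed" behavior can be pictured as a merge of per-source event lists sorted newest-first. A sketch of the aggregation the description implies (the event field names and sample data are invented for illustration):

```python
# Hypothetical per-source events; only the 'date' key matters for ordering.
inspections = [{"date": "2023-05-01", "type": "inspection"}]
warning_letters = [{"date": "2024-02-10", "type": "warning_letter"}]
recalls = [{"date": "2022-11-20", "type": "recall"}]

def timeline(*sources, limit=None):
    """Merge event lists into one reverse-chronological feed."""
    events = sorted((e for src in sources for e in src),
                    key=lambda e: e["date"], reverse=True)
    return events[:limit] if limit else events

# ISO dates sort lexicographically, so string comparison orders correctly.
feed = timeline(inspections, warning_letters, recalls, limit=2)
```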
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds substantial behavioral context beyond annotations: specifies reverse-chronological ordering, automatic subsidiary linkage ('any linked subsidiaries'), and enumerates the six specific event types aggregated (inspections, warning letters, import alerts, etc.). Does not contradict annotations (readOnlyHint=true, idempotentHint=true). Could improve by mentioning pagination behavior or data freshness.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero redundancy. First sentence establishes core function and scope (reverse-chronological, subsidiaries); second sentence enumerates constituent data sources. Information is front-loaded and every clause earns its place. No filler language or tautology.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of annotations covering safety profile (readOnly, idempotent) and 100% schema coverage, the description adequately explains what data is returned by listing the six event types included. Lacks explicit description of the output structure (event object fields) since no output schema is provided, but 'dated feed' implies chronological entries. Sufficient for tool selection, though return value documentation would strengthen completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents both parameters (company name and limit). The description references 'one company' and implies the event aggregation affected by the limit, but does not add syntax details, format constraints, or semantic nuances beyond what the schema already provides. Baseline 3 is appropriate given schema completeness.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Build a reverse-chronological compliance timeline' provides clear verb (build), resource (compliance timeline), and scope (one company and any linked subsidiaries). The aggregation of multiple event types (inspections, warning letters, etc.) effectively distinguishes it from sibling tools like fda_inspections or fda_search_warning_letters that focus on single event types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage guidance by stating it 'Combines' multiple event types into 'one dated feed,' suggesting use when a comprehensive timeline view is needed versus individual event queries. However, lacks explicit when-to-use/when-not-to-use statements or named sibling alternatives (e.g., does not mention when to use fda_company_full or specific search tools instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_company_full: Full Company Profile (A)
Read-only · Idempotent

Comprehensive company profile: facilities (with addresses and operations), enforcement actions (recalls), 510(k) clearances, PMA approvals, and drug applications for a single company and its known aliases. Costs 5 credits. Excludes: inspection history, citations, compliance actions (warning letters), facility-level product lists, import refusals, and family rollups across separate child company records. For family rollups: call fda_suggest_subsidiaries first, then use fda_save_aliases for true same-company names or fda_link_subsidiaries for distinct child companies. Related: fda_suggest_subsidiaries (discover subsidiaries), fda_link_subsidiaries (create explicit family links), fda_get_facility (per-facility products, operations type, risk summary by FEI), fda_inspections (inspection history by FEI or company), fda_citations (CFR violations by FEI), fda_compliance_actions (warning letters/seizures by FEI or company), fda_search_aphis (animal health facilities for vet companies), fda_drug_shortages (active drug shortages).

Parameters (JSON Schema)
- company (required): Company name to look up
- drugs_limit (optional): Drug applications result limit
- drugs_offset (optional): Drug applications result offset
- approvals_limit (optional): PMA approvals result limit
- approvals_offset (optional): PMA approvals result offset
- clearances_limit (optional): 510(k) clearances result limit
- facilities_limit (optional): Facilities result limit
- clearances_offset (optional): 510(k) clearances result offset
- enforcement_limit (optional): Enforcement result limit
- facilities_offset (optional): Facilities result offset
- enforcement_offset (optional): Enforcement result offset
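The family-rollup workflow in the description ("call fda_suggest_subsidiaries first, then use fda_save_aliases ... or fda_link_subsidiaries ...") can be sketched as an ordered call plan. The tool names come from the description, but the argument key names and the sample company names are assumptions made for illustration:

```python
def rollup_plan(company, same_company_names, child_companies):
    """Order the calls the description prescribes for family rollups.

    Argument keys ('company', 'aliases', 'children') are hypothetical;
    consult each tool's actual schema before calling.
    """
    plan = [{"tool": "fda_suggest_subsidiaries",
             "arguments": {"company": company}}]
    if same_company_names:  # true same-company names -> save as aliases
        plan.append({"tool": "fda_save_aliases",
                     "arguments": {"company": company,
                                   "aliases": same_company_names}})
    if child_companies:     # distinct child company records -> link
        plan.append({"tool": "fda_link_subsidiaries",
                     "arguments": {"company": company,
                                   "children": child_companies}})
    plan.append({"tool": "fda_company_full",
                 "arguments": {"company": company}})
    return plan

# Made-up example company and relatives.
plan = rollup_plan("Acme Health", ["Acme Healthcare"], ["Acme Devices"])
```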
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety. The description adds critical cost information ('Costs 5 credits') and clarifies aggregation behavior (includes 'known aliases' but excludes 'family rollups across separate child company records'). Does not contradict annotations. Minor gap: no mention of rate limits or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense and well-structured with clear sections: purpose, cost, exclusions, workflow guidance, and related tools. Every clause serves a purpose. Minor deduction for density—the related tools list is long (though necessary for sibling differentiation). Front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent coverage for a complex tool with many siblings. Explains return data categories (since no output schema exists), cost model, scope boundaries (includes/excludes), and hierarchical relationships (aliases vs subsidiaries vs child companies). Sufficient for an agent to invoke confidently despite 11 parameters and no output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for all 11 parameters (limits/offsets/company). The description implies alias resolution behavior for the 'company' parameter ('known aliases'), adding slight semantic value, but does not elaborate on pagination patterns or parameter interactions beyond the schema definitions. Baseline 3 appropriate given exhaustive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states it retrieves a 'Comprehensive company profile' including specific resource types (facilities, enforcement actions, 510(k) clearances, PMA approvals, drug applications) and defines the scope (single company and known aliases). It clearly distinguishes from siblings by listing specific exclusions (inspection history, citations, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-not guidance via the 'Excludes:' section listing unavailable data types. Offers precise workflow for family rollups ('call fda_suggest_subsidiaries first, then use...'). Names and contextualizes 9 related sibling tools with specific use cases (e.g., 'fda_inspections (inspection history by FEI or company)'), enabling correct tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_compliance_actions: Search Compliance Actions (A)
Read-only · Idempotent

Search FDA compliance enforcement actions (Compliance Dashboard data, not available in openFDA API): Warning Letters, Seizures, and Injunctions. These are the most serious regulatory outcomes, typically following OAI inspections. Filter by company name, FEI number, action type (Warning Letter/Seizure/Injunction), or date range. Related: fda_inspections (underlying inspection data by FEI), fda_citations (CFR violations cited in these actions).

Parameters (JSON Schema)
- limit (optional): Max results to return (1-500)
- date_to (optional): End date for action_taken_date range (YYYY-MM-DD)
- date_from (optional): Start date for action_taken_date range (YYYY-MM-DD)
- fei_number (optional): FDA Establishment Identifier (FEI number)
- action_type (optional): Compliance action type
- company_name (optional): Company name (fuzzy match)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations confirm read-only/idempotent behavior, the description adds valuable context about data provenance (Compliance Dashboard vs openFDA) and severity classification ('most serious regulatory outcomes') that helps the agent understand the result set characteristics beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficiently structured sentences cover: (1) the core function and data source, (2) regulatory significance/context, and (3) filter options with tool relationships. Every clause conveys essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the moderate complexity (6 optional parameters, flat schema) and absence of an output schema, the description adequately covers the tool's scope, data limitations, and ecosystem relationships. A perfect score would require describing the return structure or result interpretation guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline score applies. The description mentions available filters (company name, FEI number, action type, date range) but adds no semantic depth, syntax examples, or parameter relationships beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool searches 'FDA compliance enforcement actions' specifying the three types (Warning Letters, Seizures, Injunctions) and clarifies the unique data source (Compliance Dashboard, not openFDA API), effectively distinguishing it from sibling tools like fda_search_enforcement.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance by naming related siblings (fda_inspections for underlying data, fda_citations for violations) and explains the regulatory context ('most serious regulatory outcomes, typically following OAI inspections'), though it could more explicitly state when to prefer this over fda_search_warning_letters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_consumer_events: Search Consumer Adverse Events (A)
Read-only · Idempotent

Search consumer adverse events for food and cosmetic products by product area, reaction keyword, or date range (YYYYMMDD format). Returns reports including outcomes, reactions, and product details.

Parameters (JSON Schema)
- limit (optional): Max results to return (1-500)
- offset (optional): Result offset for pagination
- date_to (optional): End date for date_created (YYYYMMDD)
- reaction (optional): Reaction keyword (searches reactions array)
- date_from (optional): Start date for date_created (YYYYMMDD)
- product_area (optional): Filter by product area: food or cosmetic
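Note the date format here is YYYYMMDD, while fda_compliance_actions documents YYYY-MM-DD for its date range. A small guard an agent could apply before calling (the two format strings mirror the parameter tables; the helper itself is illustrative):

```python
from datetime import datetime

# Per-tool date formats as documented in the parameter tables above.
DATE_FORMATS = {
    "fda_consumer_events": "%Y%m%d",       # YYYYMMDD
    "fda_compliance_actions": "%Y-%m-%d",  # YYYY-MM-DD
}

def valid_date(tool, value):
    """Check that a date argument matches the format the tool expects."""
    try:
        datetime.strptime(value, DATE_FORMATS[tool])
        return True
    except ValueError:
        return False
```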
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent status, while description adds valuable return value context ('reports including outcomes, reactions, and product details') since no output schema exists. Also clarifies date format constraint (YYYYMMDD) upfront.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. First sentence establishes capability and filters; second sentence describes return payload. Information is front-loaded and density is optimal.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates adequately for missing output schema by describing return contents (outcomes, reactions, details). Given 100% schema coverage and strong annotations, description provides sufficient context for a 6-parameter search tool, though noting all parameters are optional would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, baseline is 3. Description mentions the three main search dimensions and date format, but this largely restates schema information rather than adding significant semantic context about parameter interactions or defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with clear resource 'consumer adverse events' and scope 'food and cosmetic products'. This effectively distinguishes it from sibling tools like fda_vet_events (veterinary) and fda_search_drugs (pharmaceuticals).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage by specifying searchable domains (food/cosmetic) and available filters (product area, reaction, date), but lacks explicit guidance on when to prefer this over related FDA event tools or prerequisites like date format requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_device_class: Device Classification Lookup (A)
Read-only · Idempotent

Lookup FDA device classification details by product code. Returns device name, device class (I/II/III), medical specialty, regulation number, review panel, submission type, and definition. Requires: product code (3-letter code from 510(k), PMA, or device product listings). Related: fda_product_code_lookup (cross-reference across 510(k) and PMA), fda_search_510k (clearances for this product code), fda_search_pma (PMA approvals for this product code).

Parameters (JSON Schema)
- product_code (required): Device product code
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety properties, so description appropriately focuses on adding return value specifics (lists 7 distinct fields returned) and input provenance context (codes come from 510(k), PMA, or listings). Does not mention rate limits or auth requirements, but adds valuable behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, each earning its place: 1) Core function, 2) Return values, 3) Input requirements, 4) Related tools. No redundancy, efficiently front-loaded with the essential lookup function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter lookup tool with 100% schema coverage and no output schema, the description comprehensively compensates by enumerating all seven return fields. Combined with strong annotations and clear sibling differentiation, this is complete for the tool's complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% coverage with 'Device product code', description adds crucial semantic context: the code is a '3-letter code' (pattern is actually 2-3 letters, but this captures the common case) and specifies authoritative sources (510(k), PMA, listings). This provenance information aids correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Lookup') + resource ('FDA device classification details') + method ('by product code'). Explicitly distinguishes from siblings by focusing on classification data (class I/II/III, regulation number) rather than clearances or cross-referencing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names three related tools with contextual guidance: fda_product_code_lookup for cross-referencing, fda_search_510k for clearances, and fda_search_pma for approvals. This establishes clear boundaries for when to use this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_device_recalls: Search Device Recalls
Read-only, Idempotent

Search FDA device recalls by recalling firm (fuzzy match), product code, recall status, or date range. Returns device-specific recall details including root cause, event type, and product codes. Complements fda_search_enforcement which covers all product types. Related: fda_search_enforcement (all recalls including drugs), fda_recall_facility_trace (trace to manufacturing facility), fda_device_class (product code details).

Parameters (JSON Schema)
limit (optional): Max results to return (1-500)
offset (optional): Result offset for pagination
date_to (optional): End date for event_date_initiated (YYYY-MM-DD)
date_from (optional): Start date for event_date_initiated (YYYY-MM-DD)
product_code (optional): Product code
recall_status (optional): Recall status
recalling_firm (optional): Recalling firm name (fuzzy match)
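A minimal sketch of assembling a query payload, assuming nothing beyond the parameter names and formats listed above (the builder itself is illustrative):

```python
from datetime import date

# Illustrative builder for fda_device_recalls arguments. Dates must be
# YYYY-MM-DD; limit is documented as 1-500.
def build_recall_query(recalling_firm=None, product_code=None,
                       date_from=None, date_to=None, limit=100, offset=0):
    if not 1 <= limit <= 500:
        raise ValueError("limit must be between 1 and 500")
    for d in (date_from, date_to):
        if d is not None:
            date.fromisoformat(d)  # raises ValueError on a bad format
    args = {"limit": limit, "offset": offset}
    for key, value in (("recalling_firm", recalling_firm),
                       ("product_code", product_code),
                       ("date_from", date_from), ("date_to", date_to)):
        if value is not None:
            args[key] = value
    return args
```

Omitted filters stay out of the payload, matching the all-optional schema.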
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/closedWorld properties. Description adds valuable behavioral context: 'fuzzy match' qualifier for recalling firm searches, and return value disclosure ('device-specific recall details including root cause, event type, and product codes') compensating for missing output schema. Does not mention pagination behavior or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences: functionality/params, return values, and two sentences of sibling context. No filler words. Every clause provides distinct information (capabilities, return payload, ecosystem context).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters with 100% schema coverage, no output schema, and comprehensive annotations, the description is appropriately complete. It compensates for missing output schema by describing return contents, differentiates from 40+ siblings, and requires no additional prerequisites or warnings beyond what's provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description adds value by emphasizing 'fuzzy match' behavior for recalling_firm and logically grouping parameters by function. Could have added date format examples or typical limit guidance, but adequately reinforces schema semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with specific verb 'Search' and resource 'FDA device recalls', then lists exact filter dimensions (recalling firm, product code, recall status, date range). It explicitly distinguishes from sibling 'fda_search_enforcement which covers all product types', clearly delineating this tool's device-specific scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Complements fda_search_enforcement which covers all product types', providing clear when-to-use guidance. Lists related tools with parenthetical explanations of their distinct purposes (fda_recall_facility_trace for manufacturing traces, fda_device_class for product code details), enabling informed tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_device_supply_status: Search Device Shortages And Discontinuances
Read-only, Idempotent

Search FDA's current medical device shortage list and discontinuance list. This is an official FDA supply-chain signal for medtech selling, covering shortage categories and permanent discontinuances that may affect customer operations or product availability.

Parameters (JSON Schema)
limit (optional): Max results to return (1-500)
offset (optional): Result offset for pagination
date_to (optional): End date for latest_date range (YYYY-MM-DD)
keyword (optional): Keyword to search descriptions, device names, notes, or reasons
category (optional): Device category
date_from (optional): Start date for latest_date range (YYYY-MM-DD)
list_type (optional): Whether to search the shortage list or discontinuance list
product_code (optional): FDA product code, for example DSQ or MKJ
manufacturer_name (optional): Manufacturer name (fuzzy match, discontinuance list only)
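One parameter interaction worth encoding client-side is the manufacturer_name restriction. A sketch, with a loud caveat: the exact list_type values used below ("shortage"/"discontinuance") are an assumption inferred from the schema wording, not documented values:

```python
# Illustrative guard for fda_device_supply_status arguments. The list_type
# values are assumed; the manufacturer_name check mirrors the schema's
# "(fuzzy match, discontinuance list only)" note.
ASSUMED_LIST_TYPES = {"shortage", "discontinuance"}

def build_supply_status_query(list_type=None, keyword=None,
                              manufacturer_name=None):
    args = {}
    if list_type is not None:
        if list_type not in ASSUMED_LIST_TYPES:
            raise ValueError("unknown list_type: %r" % list_type)
        args["list_type"] = list_type
    if manufacturer_name is not None:
        if list_type != "discontinuance":
            raise ValueError("manufacturer_name filters the discontinuance "
                             "list only")
        args["manufacturer_name"] = manufacturer_name
    if keyword is not None:
        args["keyword"] = keyword
    return args
```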
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable business context ('official FDA supply-chain signal', 'current', 'permanent discontinuances') describing data authority and freshness, but does not disclose rate limits, pagination behavior beyond schema, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences with zero waste. First sentence defines the core function; second sentence provides business value context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 9 optional parameters with complete schema documentation and no output schema, the description adequately covers the tool's purpose and data domain. It could be improved by describing the return structure or noting that all filters are optional, but it is sufficient for selection and basic usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the structured schema. The description mentions 'shortage categories' aligning with the category parameter but does not add syntax details, format examples, or parameter interdependencies beyond the schema baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Search') and resources ('FDA's current medical device shortage list and discontinuance list'). It clearly distinguishes from sibling tool fda_drug_shortages by explicitly specifying 'medical device' and covering both shortages and discontinuances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear usage context ('official FDA supply-chain signal for medtech selling', 'affect customer operations or product availability') indicating when to query this data. However, it lacks explicit comparison to siblings like fda_drug_shortages or guidance on when NOT to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_device_udi: Search Device UDI
Read-only, Idempotent

Search the FDA Unique Device Identification (UDI) database by brand name, company/manufacturer name, product code, or DI number. Returns device identification data including brand name, company, device description, product codes, GMDN terms, sterilization info, and premarket submissions. Related: fda_device_class (classification details by product code), fda_search_510k (clearances by product code).

Parameters (JSON Schema)
limit (optional): Result limit
offset (optional): Result offset
di_number (optional): Device Identifier (DI) number
brand_name (optional): Device brand name (fuzzy match)
company_name (optional): Company/manufacturer name (fuzzy match)
product_code (optional): FDA product code (3-letter code, e.g. OVE)
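Because the DI number is an exact identifier while the name fields are fuzzy, a client can prefer the most selective field available. This precedence policy and the helper name are illustrative, not part of the tool:

```python
# Sketch of a selection strategy for fda_device_udi: prefer the exact DI
# lookup when available, fall back to fuzzy name fields.
def build_udi_query(di_number=None, brand_name=None, company_name=None,
                    limit=25):
    if di_number:
        return {"di_number": di_number, "limit": 1}  # exact identifier
    args = {"limit": limit}
    if brand_name:
        args["brand_name"] = brand_name        # fuzzy match
    if company_name:
        args["company_name"] = company_name    # fuzzy match
    if len(args) == 1:
        raise ValueError("provide di_number, brand_name, or company_name")
    return args
```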
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since annotations already declare readOnlyHint=true and idempotentHint=true, the description appropriately focuses on return value disclosure, listing specific output fields (GMDN terms, sterilization info, premarket submissions) that compensate for the missing output schema. Does not mention pagination behavior or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: (1) search scope, (2) return payload, (3) related tools. Front-loaded with action verb and resource. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong completeness for a search tool: all 6 parameters documented in schema, return values described to compensate for lack of output schema, and sibling relationships clarified. Minor gap: no explicit mention of pagination behavior or fuzzy matching logic, though offset/limit params and schema descriptions imply this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage ('Device brand name (fuzzy match)', etc.), the schema carries the semantic load. The description acknowledges the search parameters but doesn't add syntax details, validation rules, or query combinations beyond what's in the schema. Baseline 3 is appropriate when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states the exact verb (Search), resource (FDA UDI database), and searchable fields (brand name, company, product code, DI number). Explicitly distinguishes from siblings fda_device_class and fda_search_510k by mapping them to their distinct functions (classification details vs clearances).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear mapping of related tools to their specific purposes ('fda_device_class (classification details...)', 'fda_search_510k (clearances...)'), helping agents choose the right tool. Would be 5 with explicit 'when not to use' guidance (e.g., 'use fda_device_class instead for classification hierarchy').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_device_udi_lookup: Device UDI Lookup
Read-only, Idempotent

Search the FDA's Global Unique Device Identification Database (GUDID) by device identifier (DI/barcode), device name, company name, or brand name. Returns device details including UDI, descriptions, and company information. Costs 1 credit.

Parameters (JSON Schema)
limit (optional): Max results to return (1-500)
offset (optional): Result offset for pagination
device_id (optional): Device Identifier (DI) — exact barcode/GTIN lookup
brand_name (optional): Brand name (fuzzy search)
device_name (optional): Device/brand name (fuzzy search)
company_name (optional): Company/manufacturer name (fuzzy search)
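Since each call costs 1 credit, a cautious client might budget calls up front. This wrapper is entirely hypothetical; only the per-call cost comes from the description:

```python
# Hypothetical credit budget: fda_device_udi_lookup costs 1 credit per
# call, so cap how many lookups an agent may issue in one session.
class CreditBudget:
    def __init__(self, credits: int):
        self.credits = credits

    def charge(self, cost: int = 1) -> int:
        if cost > self.credits:
            raise RuntimeError("credit budget exhausted")
        self.credits -= cost
        return self.credits
```

Calling `charge()` before each lookup fails fast once the budget is spent instead of silently burning credits.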
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context: discloses cost ('Costs 1 credit'), and describes return values ('Returns device details including UDI, descriptions, and company information') despite the absence of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient, compact sentences. Front-loaded with the core action, followed by the return value description, and closed with the cost disclosure. No redundant or wasted language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a search tool with no output schema: it describes what data is returned (UDI, descriptions, company info) and covers the four primary search dimensions. Could improve by noting that all parameters are optional (0 required) or explaining pagination strategy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, documenting exact vs fuzzy search behavior and pagination controls. The description provides a narrative grouping of the four main search parameters but doesn't add semantic meaning (e.g., query syntax, wildcard behavior) beyond what the schema already specifies.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Search') and resource ('FDA's GUDID database') with specific searchable fields listed. However, it doesn't explicitly distinguish from the sibling tool 'fda_device_udi', which likely has overlapping functionality and could confuse tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists searchable fields (DI/barcode, device name, company name, brand name) implying when to use each, and notes the cost constraint ('Costs 1 credit'). Lacks explicit guidance on when to use this multi-field search versus the direct 'fda_device_udi' sibling or other device lookup tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_drug_labels: Search Drug Labels
Read-only, Idempotent

Search FDA Structured Product Labeling (SPL) data — full drug package inserts. Filter by drug name, manufacturer, application number, or specific label section (e.g., indications_and_usage, warnings, adverse_reactions, boxed_warning). Returns complete label text for matching sections. Related: fda_search_drugs (application-level data), fda_search_ndc (NDC product details).

Parameters (JSON Schema)
limit (optional): Result limit
offset (optional): Result offset
section (optional): Specific label section to return (e.g. indications_and_usage, warnings, adverse_reactions)
drug_name (optional): Brand or generic drug name (fuzzy match)
manufacturer (optional): Manufacturer name (fuzzy match)
application_number (optional): NDA or ANDA application number
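A sketch of section-scoped queries, which keep full-label responses small. The section names below are only the examples given in the description and schema; the real set is larger, so the whitelist here is an assumption:

```python
# Illustrative: request one label section at a time. EXAMPLE_SECTIONS is a
# non-exhaustive set taken from the examples above.
EXAMPLE_SECTIONS = {"indications_and_usage", "warnings",
                    "adverse_reactions", "boxed_warning"}

def build_label_query(drug_name, section=None):
    args = {"drug_name": drug_name}  # fuzzy match per the schema
    if section is not None:
        if section not in EXAMPLE_SECTIONS:
            raise ValueError("unrecognized section: %r" % section)
        args["section"] = section
    return args
```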
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety profile. Description adds valuable behavioral context: specifies return format ('complete label text for matching sections') and enumerates specific searchable label sections including 'boxed_warning' (beyond schema examples). Does not mention pagination behavior or null-result handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences total, front-loaded with purpose and resource. Second sentence maps capabilities to parameters. Third sentence clarifies return payload. Fourth sentence handles sibling differentiation. Zero redundancy; every sentence delivers unique information not fully captured in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately compensates for missing output schema by describing return content ('complete label text'). Covers domain complexity (SPL data, specific FDA sections) and relates to 43 siblings. Minor gap: does not note that all 6 parameters are optional (0 required), which would help invocation strategy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds value by grouping parameters conceptually ('Filter by drug name, manufacturer...') and including 'boxed_warning' as an additional section example not present in the schema's parameter description, reinforcing valid values for the section parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with clear resource 'FDA Structured Product Labeling (SPL) data — full drug package inserts'. Explicitly distinguishes from siblings fda_search_drugs and fda_search_ndc by noting they provide 'application-level data' and 'NDC product details' respectively, while this tool returns 'complete label text'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists related tools (fda_search_drugs, fda_search_ndc) with parenthetical explanations of their distinct scopes, implicitly guiding when to use those alternatives. Lacks explicit 'when to use this' statement, though the distinction between 'full label text' vs 'application-level data' provides clear context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_drug_shortages: Search Drug Shortages
Read-only, Idempotent

Search FDA drug shortages by generic name, company, status, or availability. Drug shortages signal manufacturing capacity strain, quality issues, or supply chain disruption. Useful for identifying companies with operational challenges. Related: fda_search_drugs (drug application data by company), fda_search_ndc (NDC-level product details).

Parameters (JSON Schema)
limit (optional): Max results to return (1-500)
offset (optional): Result offset for pagination
status (optional): Shortage status filter
company_name (optional): Company name (fuzzy match)
generic_name (optional): Generic drug name (fuzzy match)
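The limit/offset pair implies standard offset pagination. A sketch with a stubbed client: `call_tool` stands in for a real MCP client, and only the limit/offset contract is taken from the schema above:

```python
# Sketch of offset pagination against fda_drug_shortages. A short page
# (fewer rows than page_size) signals the end of the result set.
def fetch_all_shortages(call_tool, company_name, page_size=100):
    results, offset = [], 0
    while True:
        page = call_tool("fda_drug_shortages", {
            "company_name": company_name,
            "limit": page_size,
            "offset": offset,
        })
        results.extend(page)
        if len(page) < page_size:
            return results
        offset += page_size
```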
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent safety. Description adds valuable domain semantics beyond annotations: explains that shortages signal 'manufacturing capacity strain, quality issues, or supply chain disruption,' providing context for interpreting results. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: purpose declaration, domain context, use case, and sibling differentiation. Front-loaded with core action. Every sentence earns its place with high information density.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong given complexity: explains domain concept (what shortages represent), identifies relevant siblings from the extensive FDA toolset, and leverages 100% input schema coverage. Minor gap: no output schema provided and description doesn't enumerate returned fields, though it explains the conceptual data content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (limit, offset, status, company_name, generic_name all documented), establishing baseline 3. Description mentions searchable fields (generic name, company, status) reinforcing the schema, though it references 'availability' which doesn't map to a specific parameter. No additional syntax or format details needed given comprehensive schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Search) and resource (FDA drug shortages) with searchable dimensions (generic name, company, status). Clearly distinguishes from siblings fda_search_drugs and fda_search_ndc by contrasting their data types (application data vs NDC-level details vs shortages).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names two related sibling tools with parenthetical descriptions of their distinct purposes, guiding selection. Provides clear use case ('Useful for identifying companies with operational challenges') that signals when to invoke this specific tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_facility_dossier: Facility Compliance Dossier
Read-only, Idempotent

Compliance-first facility dossier by FEI number. Returns the facility profile plus recent inspections, citations, warning letters, import refusal history, import-alert mentions, recall context, freshness, and recommended next tools. Use this when you want the fastest FEI-level manufacturing risk view instead of the broader product-focused facility profile.

Parameters (JSON Schema)
fei (required): FDA Establishment Identifier (FEI number)
evidence_limit (optional): Number of recent evidence rows to return per evidence section
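A minimal argument-building sketch. Treating an FEI as an all-digit identifier is an assumption for illustration; only the parameter names come from the schema above:

```python
# Hypothetical helper: basic FEI sanity check before an
# fda_facility_dossier call, plus optional evidence truncation.
def build_dossier_args(fei, evidence_limit=None):
    fei = str(fei).strip()
    if not fei.isdigit():
        raise ValueError("FEI should be a numeric identifier")
    args = {"fei": fei}
    if evidence_limit is not None:
        args["evidence_limit"] = int(evidence_limit)
    return args
```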
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint/idempotentHint), so the description adds valuable behavioral context by listing specific data sections returned (inspections, citations, import refusals, recall context) and noting the 'recommended next tools' feature, which guides workflow beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first establishes purpose/identifier, second details return payload comprehensively, third provides usage guidance. Perfectly front-loaded with no redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description comprehensively enumerates return components (profile, inspections, citations, warning letters, import history, recall context, freshness, next tools), fully characterizing the tool's scope for an FEI-level compliance investigation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description adds semantic value by listing the specific evidence sections (inspections, citations, etc.) that the 'evidence_limit' parameter controls, helping users understand what data gets truncated.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies the exact function ('Compliance-first facility dossier'), the key resource (facility by FEI number), and explicitly distinguishes from the sibling 'broader product-focused facility profile' (likely fda_get_facility).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('when you want the fastest FEI-level manufacturing risk view') and clearly names the alternative approach to avoid ('instead of the broader product-focused facility profile').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_facility_products: Facility Products
Read-only, Idempotent

List device products registered at a facility by FEI number with pagination. Returns product code, proprietary name, listing number, and classification details (device name, class, medical specialty). Note: fda_get_facility already includes products — use this only when paginating through large product lists. Drug products are not linked by FEI; use fda_search_ndc with company name instead. Requires: FEI number.

Parameters (JSON Schema)
fei (required): FDA Establishment Identifier (FEI number)
limit (optional): Max results to return (1-500)
offset (optional): Result offset for pagination
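Since the description says to use this tool specifically for paginating large product lists, a generator-style sketch fits. `call_tool` stands in for a real MCP client; only the fei/limit/offset names come from the schema above:

```python
# Sketch: stream a facility's device products page by page. Stops on an
# empty page or a short page (fewer rows than page_size).
def iter_facility_products(call_tool, fei, page_size=200):
    offset = 0
    while True:
        page = call_tool("fda_facility_products",
                         {"fei": fei, "limit": page_size, "offset": offset})
        if not page:
            return
        yield from page
        if len(page) < page_size:
            return
        offset += page_size
```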
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent, so description adds value by detailing return fields (product code, proprietary name, classification details) since no output schema exists. Also clarifies pagination behavior and device/drug domain boundaries.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences efficiently structured: purpose, return fields, sibling comparison, domain constraint, and prerequisites. Zero waste, front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a pagination tool without output schema, description adequately explains return values and pagination mechanics. References to sibling tools provide necessary navigation context in the 50+ tool ecosystem.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds contextual meaning by linking limit/offset to 'pagination' use case and emphasizing FEI as required input, helping the agent understand parameter relationships.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'List' with clear resource 'device products registered at a facility' and scope 'by FEI number with pagination'. Explicitly distinguishes from sibling fda_get_facility by noting when to use each.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'use this only when paginating through large product lists' vs fda_get_facility which 'already includes products'. Also provides alternative for drug products (fda_search_ndc) and prerequisites ('Requires: FEI number').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_get_facility: Facility Detail (A)
Read-only · Idempotent

Detailed facility profile by FEI number. Returns: facility name, full address, operations type (Manufacture, API, Repack, Contract Manufacture, etc.), establishment types (Manufacturer, Distributor, Specification Developer, etc.), registrant and owner/operator info, DUNS number, registration expiry, enforcement history (recalls), device products with classification, and a facility_risk_summary (inspection counts by classification, warning letters, last inspection date). Requires: FEI number — get it from fda_search_facilities or fda_company_full. Excludes: full inspection details and citations. Related: fda_inspections (inspection history by FEI), fda_citations (CFR violations by FEI), fda_compliance_actions (warning letters/seizures by FEI), fda_import_refusals (import refusal history by FEI), fda_facility_products (paginate large product lists).

Parameters (JSON Schema)

Name | Required | Description | Default
fei | Yes | FDA Establishment Identifier (FEI number) | -
products_limit | No | Products result limit | -
products_offset | No | Products result offset | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only, idempotent safety (readOnlyHint=true, idempotentHint=true). The description adds valuable behavioral context: it discloses the comprehensive data scope (enforcement history, device classifications, risk summaries) and explicitly states what data is omitted, preventing false expectations. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Uses a highly structured, scannable format with clear labels (Returns:, Requires:, Excludes:, Related:). Every clause delivers specific value—no filler text. Information is front-loaded with the return payload, exactly what an agent needs to evaluate relevance.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having no output schema, the description comprehensively documents the return payload across ~12 distinct data categories (address, DUNS, expiry, enforcement history, etc.). It adequately compensates for the missing structured output definition and provides sufficient context for a complex facility lookup tool.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions. The description adds workflow context beyond the schema: it specifies that the FEI parameter must be sourced from specific sibling tools ('get it from fda_search_facilities or fda_company_full') and implies the pagination parameters serve product lists (referencing fda_facility_products for pagination).

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states it retrieves a 'Detailed facility profile by FEI number' and enumerates specific return data (facility name, address, operations type, risk summary, etc.). It clearly distinguishes scope from siblings via 'Excludes: full inspection details' and references specific related tools for those missing pieces.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisites ('Requires: FEI number — get it from fda_search_facilities or fda_company_full'), clear exclusions ('Excludes: full inspection details'), and maps specific use cases to alternatives ('Related: fda_inspections...fda_citations...fda_facility_products'). This creates a complete decision tree for the agent.

fda_import_refusals: Search Import Refusals (A)
Read-only · Idempotent

Search FDA import refusals (Compliance Dashboard data, not available in openFDA API). Import refusals indicate products detained at the US border. Filter by company name, FEI number, country code (e.g., CN, IN for major API source countries), or date range. Critical for evaluating international manufacturing sites and supply chain risk. Related: fda_get_facility (facility details by FEI), fda_inspections (inspection history by FEI).
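Because the date parameters use ISO YYYY-MM-DD strings, a date-range filter reduces to plain string comparison, since ISO dates sort lexicographically. A minimal sketch over invented refusal records (every row below is made up for illustration):

```python
refusals = [  # invented rows in the shape the parameters imply
    {"company": "Acme API Ltd", "country": "IN", "refusal_date": "2023-11-02"},
    {"company": "Beta Pharma", "country": "CN", "refusal_date": "2024-03-15"},
    {"company": "Acme API Ltd", "country": "IN", "refusal_date": "2024-07-09"},
]

def filter_refusals(rows, country_code=None, date_from=None, date_to=None):
    # ISO YYYY-MM-DD strings compare lexicographically in date order,
    # so plain string comparison implements the range filter correctly.
    out = []
    for r in rows:
        if country_code and r["country"] != country_code:
            continue
        if date_from and r["refusal_date"] < date_from:
            continue
        if date_to and r["refusal_date"] > date_to:
            continue
        out.append(r)
    return out

hits = filter_refusals(refusals, country_code="IN", date_from="2024-01-01")
```

Omitted filters are simply skipped, matching the all-optional parameter design of the tool itself.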

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
date_to | No | End date for refusal_date range (YYYY-MM-DD) | -
date_from | No | Start date for refusal_date range (YYYY-MM-DD) | -
fei_number | No | FDA Establishment Identifier (FEI number) | -
company_name | No | Company name (fuzzy match) | -
country_code | No | ISO country code (e.g. CN, IN) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only/idempotent safety profile. Description adds critical behavioral context beyond annotations: data provenance ('Compliance Dashboard data'), business impact (supply chain risk evaluation), and domain semantics (border detention). Does not contradict annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences efficiently structured: (1) tool identity + data source, (2) concept definition, (3) filter capabilities, (4) use case + sibling relations. Every sentence provides distinct value with no redundancy or waste. Front-loaded with critical distinction about Compliance Dashboard data.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given complex FDA regulatory domain with 40+ siblings, description adequately orients the user by naming specific related tools (fda_get_facility, fda_inspections) and explaining business context. No output schema exists, but description appropriately does not speculate on return values per guidelines.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. Description adds value by mapping parameters to domain concepts ('major API source countries' for country_code, 'international manufacturing sites' for company_name/FEI) that help users understand pharmaceutical supply chain filtering, elevating above baseline.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with resource 'FDA import refusals' and immediately distinguishes from siblings by noting 'Compliance Dashboard data, not available in openFDA API'. Also clarifies that refusals indicate 'products detained at the US border', providing clear functional scope.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Critical for evaluating international manufacturing sites and supply chain risk'. Names specific alternatives/complementary tools: 'Related: fda_get_facility... fda_inspections'. Provides domain-specific filter examples ('CN, IN for major API source countries') guiding appropriate usage.

fda_inspection_observation_summary: Search Inspection Observation Summary (A)
Read-only · Idempotent

Search FDA's official annual inspection-observation summary spreadsheets. This is aggregate Form 483 trend data by product area and citation frequency, not a full company-level 483 corpus. Use it to see which observation areas appear most often in Drugs, Devices, Foods, and other FDA program areas.
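The citation-frequency view this tool serves can be reproduced locally once rows are in hand. A sketch over invented rows (the reference numbers follow real CFR citation formats, but the rows and counts are made up, and the row shape is our assumption about what the spreadsheets contain):

```python
from collections import Counter

# Invented summary rows: one row per citation per product area,
# with an aggregate observation count for the fiscal year.
rows = [
    {"product_area": "Drugs", "reference_number": "21 CFR 211.192", "count": 340},
    {"product_area": "Drugs", "reference_number": "21 CFR 211.22(d)", "count": 290},
    {"product_area": "Devices", "reference_number": "21 CFR 820.100", "count": 410},
]

# Tally observation counts per citation within one product area.
drug_freq = Counter()
for row in rows:
    if row["product_area"] == "Drugs":
        drug_freq[row["reference_number"]] += row["count"]

top_citation, top_count = drug_freq.most_common(1)[0]
```

`Counter.most_common(n)` returns the n highest-count entries, which is exactly the "which observation areas appear most often" question the description targets.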

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
offset | No | Result offset for pagination | -
cite_id | No | FDA citation identifier from the spreadsheet | -
keyword | No | Full-text query across short and long citation descriptions | -
fiscal_year | No | Fiscal year of the spreadsheet | -
product_area | No | Product or program area (e.g. Drugs, Devices, Foods) | -
reference_number | No | Regulatory citation reference number (e.g. 21 CFR 211.192) | -
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds valuable context about data granularity ('aggregate' vs 'corpus') and source material ('annual spreadsheets'), but does not disclose operational details like pagination behavior, rate limits, or data freshness.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: first defines action and resource, second critically distinguishes data scope from siblings, third provides use case. Front-loaded with the most important distinction (aggregate vs company-level).

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 7-parameter search tool with full schema coverage. Mentions data source and aggregation level. Lacks only description of return structure (no output schema exists), though 'spreadsheets' and 'summary' imply tabular aggregate results.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description mentions 'product area' and 'citation frequency' aligning with parameters, but does not add semantic meaning beyond what schema already provides (e.g., no clarification on keyword syntax or citation ID formats).

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with resource 'annual inspection-observation summary spreadsheets' and explicitly distinguishes from siblings by clarifying this is 'aggregate Form 483 trend data... not a full company-level 483 corpus', clearly differentiating from individual inspection records.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Use it to see which observation areas appear most often') and clear exclusion ('not a full company-level 483 corpus'), but does not explicitly name sibling alternatives like fda_inspections for company-level data.

fda_inspections: Search Inspections (A)
Read-only · Idempotent

Search FDA inspection history from the Compliance Dashboard (not available in openFDA API). Filter by company name (fuzzy match), FEI number, classification (NAI=No Action Indicated, VAI=Voluntary Action Indicated, OAI=Official Action Indicated — most serious), state, country, city, or date range. Date filters apply to inspection_end_date. OAI inspections typically lead to warning letters. Related: fda_citations (specific CFR violations from inspections by FEI), fda_compliance_actions (warning letters following OAI inspections by FEI).
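The NAI < VAI < OAI severity ordering described above can be encoded directly when post-processing results. A sketch over invented inspection rows (the severity ranking comes from the description; the rows and the follow-up heuristic are our assumptions):

```python
# Severity ranking from the description: OAI is the most serious outcome.
SEVERITY = {"NAI": 0, "VAI": 1, "OAI": 2}

inspections = [  # invented rows for illustration
    {"fei_number": "1000001", "classification_code": "VAI"},
    {"fei_number": "1000001", "classification_code": "OAI"},
    {"fei_number": "1000002", "classification_code": "NAI"},
]

def worst_classification(rows):
    """Return the row with the most serious classification seen."""
    return max(rows, key=lambda r: SEVERITY[r["classification_code"]])

worst = worst_classification(inspections)
# Per the description, OAI inspections typically lead to warning letters,
# so an OAI hit is a reasonable trigger to check fda_compliance_actions.
needs_followup = worst["classification_code"] == "OAI"
```

Encoding the ranking as a lookup table keeps the comparison logic independent of the string codes themselves.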

Parameters (JSON Schema)

Name | Required | Description | Default
city | No | City name (exact match) | -
limit | No | Max results to return (1-500) | -
state | No | State code (e.g., CA, NY) | -
country | No | Country code (e.g., US, DE) | -
date_to | No | End date for inspection_end_date range (YYYY-MM-DD) | -
date_from | No | Start date for inspection_end_date range (YYYY-MM-DD) | -
fei_number | No | FDA Establishment Identifier (FEI number) | -
company_name | No | Company name (fuzzy match) | -
classification_code | No | Inspection classification code | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety. Description adds crucial domain context: data source (Compliance Dashboard), classification severity rankings (OAI as most serious), typical downstream effects (OAI leads to warning letters), and specific date field mapping (inspection_end_date).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero waste. Front-loaded with data source, followed by filter capabilities with inline classification definitions, and closed with related tool references. Every clause conveys unique information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 9-parameter tool with no output schema, the description adequately covers data provenance, filter semantics, and ecosystem relationships. Minor gap: does not hint at return structure, though annotations and rich input schema reduce this burden.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (baseline 3). Description adds semantic value by explaining classification codes (NAI/VAI/OAI meanings) and confirming fuzzy vs exact matching behavior, enriching parameter understanding beyond schema text.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description explicitly states the tool searches FDA inspection history from the Compliance Dashboard, distinguishing it from the openFDA API. It identifies the specific resource (inspections) and action (search/filter) with precise scope.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names sibling alternatives (fda_citations, fda_compliance_actions) with their specific use cases ('specific CFR violations', 'warning letters following OAI inspections'), guiding the agent on when to use this tool versus related ones.

fda_ires_enforcement: Search iRES Enforcement (A)
Read-only · Idempotent

Search iRES enforcement recalls with cross-references to openFDA enforcement data. Filter by company name (fuzzy match), recall number, product type (e.g., Drugs, Devices), or date range. Returns detailed recall info including event classification, product codes, and quantities. Related: fda_search_enforcement (openFDA recall data), fda_recall_facility_trace (trace recall to manufacturing facility).

Parameters (JSON Schema)

Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
offset | No | Result offset for pagination | -
date_to | No | End date for enforcement_report_date range (YYYY-MM-DD) | -
date_from | No | Start date for enforcement_report_date range (YYYY-MM-DD) | -
company_name | No | Company name (fuzzy match) | -
product_type | No | Product type (e.g. Drugs, Devices) | -
recall_number | No | Recall number | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint=true and idempotentHint=true, the description adds valuable behavioral context: it discloses 'fuzzy match' search behavior for company names and details the return payload ('event classification, product codes, and quantities') to compensate for the missing output schema. No contradictions with annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) purpose/scope, (2) filter parameters, (3) return values, (4) related tools. Information is front-loaded with the core action, and every sentence earns its place by adding distinct information not redundant with structured metadata.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of an output schema, the description adequately compensates by describing return contents. It addresses the tool's place in the FDA ecosystem (iRES vs openFDA) and covers the 7 optional parameters. Minor gap: it doesn't clarify whether filters are combined with AND/OR logic or mention pagination behavior explicitly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description lists filterable fields but largely repeats information already present in the schema (e.g., 'fuzzy match' and 'e.g., Drugs, Devices' are verbatim from schema descriptions). It adds no additional syntax guidance or parameter interdependencies.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and resource ('iRES enforcement recalls'), immediately clarifying the tool's function. It distinguishes itself from siblings by emphasizing 'cross-references to openFDA enforcement data,' clearly differentiating it from fda_search_enforcement.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'Related' section explicitly names siblings (fda_search_enforcement, fda_recall_facility_trace) and their distinct purposes ('openFDA recall data' vs 'trace recall to manufacturing facility'), providing clear context for tool selection. It could be improved by explicitly stating when to prefer iRES over openFDA data.

fda_lookup_company: Company Lookup (A)
Read-only · Idempotent

Quick company lookup: facilities (with addresses and operations) and enforcement actions (recalls) for a single company and its known aliases. Costs 1 credit. Excludes: 510(k) clearances, PMA approvals, drug applications, inspection history, and subsidiary data. Related: fda_company_full (adds clearances/approvals/drugs for 5 credits), fda_suggest_subsidiaries (discover related entities), fda_get_facility (per-facility products and operations by FEI).
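The stated credit costs support a simple cost-aware routing rule. The tool names and prices below come from the description itself; the routing function is our sketch, not part of the server:

```python
def choose_company_tool(need_clearances_or_drugs: bool) -> tuple[str, int]:
    """Return (tool_name, credit_cost) for a company question.

    Per the descriptions: fda_lookup_company covers facilities and
    recalls for 1 credit; fda_company_full adds 510(k)/PMA/drug data
    for 5 credits.
    """
    if need_clearances_or_drugs:
        return ("fda_company_full", 5)
    return ("fda_lookup_company", 1)

tool, cost = choose_company_tool(need_clearances_or_drugs=False)
```

Starting with the 1-credit lookup and escalating only when clearance or drug data is actually needed keeps credit spend proportional to the question.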

Parameters (JSON Schema)

Name | Required | Description | Default
company | Yes | Company name to look up | -
facilities_limit | No | Facilities result limit | -
enforcement_limit | No | Enforcement result limit | -
facilities_offset | No | Facilities result offset | -
enforcement_offset | No | Enforcement result offset | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish read-only, idempotent safety. The description adds critical behavioral context not in annotations: the credit cost (1 credit), that the search automatically includes 'known aliases,' and explicitly lists exclusions (negative scope). Does not contradict annotations. Could be improved by mentioning rate limits or caching behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely information-dense yet readable. Structure follows: [Operation]: [Data returned] [Scope]. [Cost]. [Exclusions]. [Alternatives]. Every clause earns its place—'Quick' justifies the 1-credit cost vs alternatives, exclusions prevent misuse, and related tools guide escalation paths. Zero redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a lookup tool with no output schema, the description adequately explains return content (facilities with addresses/operations, recalls). It addresses cost model, exclusions, and sibling relationships. Could improve by noting the alias resolution behavior or pagination implications, but given the rich schema and annotations, the description provides sufficient context for correct invocation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema fully documents all 5 parameters (company, facilities_limit, enforcement_limit, offsets). The description mentions 'facilities' and 'enforcement actions' which conceptually map to the limit/offset parameter pairs, but does not add semantic guidance beyond the schema (e.g., pagination strategies). Baseline 3 is appropriate for complete schema coverage.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly defines the operation ('Quick company lookup'), the specific resources returned ('facilities with addresses and operations' and 'enforcement actions/recalls'), and the scope ('single company and its known aliases'). It clearly distinguishes from sibling tools by contrasting with fda_company_full (which adds clearances/approvals for 5 credits) and fda_get_facility (per-facility detail).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use alternatives via the 'Related' section and 'Excludes' list: use fda_company_full when needing 510(k)/PMA/drug data (5 credits), use fda_suggest_subsidiaries for entity discovery, and avoid this tool if needing inspection history or subsidiary data. The credit cost ('Costs 1 credit') enables cost-benefit decision making.

fda_manufacturing_risk_summary: Manufacturing Risk Summary (A)
Read-only · Idempotent

Build a manufacturing and compliance summary for one company using FDA facilities, inspections, warning letters, OII records, import-risk signals, debarments, and recalls. Use this when you want the company-level picture first, then follow the linked granular tools for deeper inspection.
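The summary-first, drill-down-second workflow can be made concrete with a routing table. The granular tool names below are taken from elsewhere in this listing; the mapping from evidence section to tool is our assumption, not part of the tool's contract:

```python
# Routing from evidence sections in the summary to granular siblings
# named in this listing. The mapping is illustrative; a real agent
# would consult each tool's description before calling it.
DRILL_DOWN = {
    "inspections": "fda_inspections",
    "warning_letters": "fda_compliance_actions",
    "recalls": "fda_search_enforcement",
    "import_risk": "fda_import_refusals",
    "debarments": "fda_search_debarments",
}

def next_tool(section: str) -> str:
    """Pick the follow-up tool for a flagged evidence section."""
    # Fall back to the facility profile when no section-specific
    # tool is mapped.
    return DRILL_DOWN.get(section, "fda_get_facility")
```

This mirrors the description's guidance: get the company-level picture first, then follow the linked granular tools for deeper inspection.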

Parameters (JSON Schema)

Name | Required | Description | Default
company | Yes | Company name to summarize | -
evidence_limit | No | Max recent records to return per evidence section | -
facility_limit | No | Max facilities to return | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable context beyond these by specifying exactly which data sources are aggregated and implying this is a composite view. It could be improved by noting whether the summary is real-time or cached, and whether it returns structured sections per evidence type, but the data source transparency is helpful.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence front-loads the action and scope with specific data sources. Second sentence provides workflow guidance. Every word earns its place; no redundancy with title or schema fields.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (aggregating 7 distinct FDA datasets) and absence of an output schema, the description adequately explains the conceptual output ('summary') and constituent data sources. It could improve by describing the return structure (e.g., whether it returns risk scores, counts, or full records), but the coverage of data sources and clear positioning among 40+ siblings makes it sufficiently complete.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description implicitly references the 'company' parameter ('for one company') but does not add semantic detail about parameter syntax or behavior beyond what the schema already provides. No additional parameter guidance is needed given the comprehensive schema descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Build') and resources ('manufacturing and compliance summary') and explicitly lists the seven FDA data sources aggregated (facilities, inspections, warning letters, OII records, import-risk signals, debarments, recalls). It clearly distinguishes from siblings like fda_facility_dossier or fda_search_warning_letters by positioning this as the comprehensive company-level aggregation.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('when you want the company-level picture first') and clear workflow guidance ('then follow the linked granular tools for deeper inspection'). This effectively guides the agent to use this as an entry point before drilling down into specific sibling tools like fda_inspections or fda_search_debarments.

fda_product_code_lookup: Product Code Cross-Reference (A)
Read-only · Idempotent

Cross-reference a device product code across classification details, 510(k) clearances, and PMA approvals. Returns classification info plus paginated lists of all clearances and approvals for that product code. Use to understand the regulatory landscape for a specific device type. Requires: product code.

Parameters (JSON Schema)

Name | Required | Description | Default
product_code | Yes | Device product code | -
approvals_limit | No | PMA approvals result limit | -
approvals_offset | No | PMA approvals result offset | -
clearances_limit | No | 510(k) clearances result limit | -
clearances_offset | No | 510(k) clearances result offset | -
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations establish readOnly/idempotent safety properties, while the description adds crucial behavioral context: it returns 'paginated lists' (explaining the purpose of the four limit/offset parameters) and specifies the three distinct data components returned (classification, clearances, approvals). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences progress logically: action definition, return value specification, use case statement, and prerequisite declaration. No redundant phrases or tautologies; every clause provides unique information not fully captured by the schema or annotations alone.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 100% schema coverage and clear annotations, the description adequately compensates for the missing output schema by detailing the return structure ('classification info plus paginated lists'). It sufficiently equips an agent to understand the tool's behavior without overwhelming detail.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the structured documentation already explains all five parameters including pagination constraints. The description reinforces the required status of 'product_code' but does not add semantic meaning beyond what the schema provides for the optional pagination parameters, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('cross-reference') and concrete resources ('device product code', 'classification details', '510(k) clearances', 'PMA approvals') to clearly define the tool's function. It distinguishes itself from sibling search tools like fda_search_510k by emphasizing aggregation across multiple regulatory datasets for a specific known product code.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a clear use case ('understand the regulatory landscape for a specific device type') and explicitly states the prerequisite ('Requires: product code'). While it implies this is for known product codes versus exploratory search, it does not explicitly name alternative tools like fda_search_by_product for when the product code is unknown.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_recall_facility_trace: Recall-to-Facility Trace (A)
Read-only, Idempotent

Trace a recall to its candidate manufacturing facility with explicit confidence levels. Matches by firm name, NDC lookup, and facility registration data. Returns the recall details, matched facility candidates with FEI numbers and confidence scores, and match methodology. Requires: recall_number from fda_search_enforcement or fda_ires_enforcement. Related: fda_get_facility (full detail for matched FEI), fda_inspections (inspection history for matched FEI), fda_compliance_actions (warning letters for matched FEI).

Parameters (JSON Schema)

Name           Required  Description    Default
recall_number  Yes       Recall number
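The trace-then-drill-down chain the description prescribes can be sketched as payload construction. The trace result shape (candidates carrying fei_number and confidence) is assumed from the description, the 0.8 threshold is arbitrary, and the parameter name passed to the related tools is a guess:

```python
# Hypothetical planner: turn a trace result into follow-up tool calls.
def plan_facility_followups(trace_result, min_confidence=0.8):
    """Build follow-up calls to the 'Related:' tools for every
    facility candidate above a confidence threshold."""
    calls = []
    for cand in trace_result["candidates"]:
        if cand["confidence"] >= min_confidence:
            fei = cand["fei_number"]
            calls.append(("fda_get_facility", {"fei_number": fei}))
            calls.append(("fda_inspections", {"fei_number": fei}))
    return calls

trace = {"candidates": [
    {"fei_number": "3001234567", "confidence": 0.92},
    {"fei_number": "3007654321", "confidence": 0.41},
]}
followups = plan_facility_followups(trace)
```

Low-confidence candidates are dropped rather than blindly chained, which keeps downstream calls grounded in the trace's own confidence scores.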
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent status. Description adds valuable context: matching methodology (firm name, NDC lookup, facility registration data) and return structure (FEI numbers, confidence scores, match methodology). Does not mention rate limits or error conditions, preventing a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient four-sentence structure: (1) purpose, (2) methodology, (3) return values, (4) prerequisites and related tools. No redundancy; every clause adds information not present in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description appropriately documents return values (recall details, facility candidates with FEI/confidence, methodology). Explains matching algorithm. Tool chaining is fully specified. Adequate for a complex trace operation with 1 parameter.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for the single parameter. Description adds crucial sourcing context beyond the schema: specifies that recall_number must come from fda_search_enforcement or fda_ires_enforcement, helping the agent successfully invoke the tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Trace' plus clear resources (recall → manufacturing facility) and distinguishing detail (explicit confidence levels). Distinguishes from siblings like fda_get_facility (which requires FEI directly) and search tools (which find recalls but don't trace to facilities).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite: 'Requires: recall_number from fda_search_enforcement or fda_ires_enforcement'. Provides clear chaining guidance via 'Related:' section naming three specific follow-up tools (fda_get_facility, fda_inspections, fda_compliance_actions) and what each provides.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_resolve_company: Resolve Company Name (A)
Read-only, Idempotent

Resolve a company name to its canonical company_id and list all known aliases. Returns the canonical slug, match confidence, and alias names. Read-only lookup — does not discover new aliases. Related: fda_suggest_subsidiaries (discover potential subsidiaries not yet aliased), fda_company_full (full profile using the resolved name).

Parameters (JSON Schema)

Name     Required  Description              Default
company  Yes       Company name to resolve
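The resolve-then-drill-down pattern reads naturally as a two-step plan. The result fields (canonical, confidence) are assumed from the description's return summary, the 0.7 cutoff is illustrative, and the parameter name for fda_company_full is also an assumption:

```python
# Hypothetical routing step after fda_resolve_company.
def next_call(resolution, min_confidence=0.7):
    """Proceed to the full company profile on a confident match,
    otherwise stop so the caller can refine the query."""
    if resolution["confidence"] >= min_confidence:
        return ("fda_company_full", {"company": resolution["canonical"]})
    return None  # ambiguous match: refine rather than guess

good = {"canonical": "pfizer", "confidence": 0.95, "aliases": ["Pfizer Inc."]}
weak = {"canonical": "pfizer", "confidence": 0.30, "aliases": []}
```

Since the tool is a closed-world lookup that never invents aliases, a low-confidence result is a signal to stop, not to retry with variants.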
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety, so description adds domain-specific behavioral context: 'does not discover new aliases' clarifies the closed-world nature (consistent with openWorldHint=false). Lists return values (slug, confidence, aliases) compensating for missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four efficient sentences: core function, return values, behavioral constraint, and sibling references. Every clause provides unique value; no redundancy with structured metadata.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple lookup tool: describes outputs despite no output schema, notes read-only nature, and maps relationships to relevant siblings. Minor gap: could hint at behavior when company not found (though openWorldHint=false implies closed domain).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('Company name to resolve'), establishing baseline 3. Description implies the input may be a variant/alias requiring resolution, but does not add format constraints, examples, or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb (resolve) + resource (company name) + scope (canonical company_id and aliases). Explicitly distinguishes from siblings fda_suggest_subsidiaries and fda_company_full by contrasting their functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent explicit guidance: states limitation 'does not discover new aliases' and names two specific alternatives with their distinct purposes ('discover potential subsidiaries', 'full profile using the resolved name').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_save_aliases: Save Company Aliases (A)
Idempotent

Save normalized alias names for a parent company, updating confidence and tracking collisions. Use this for true name variants of the same company record. If a collision says the alias already belongs to another company_id, use fda_link_subsidiaries instead of forcing the alias. Typical workflow: call fda_suggest_subsidiaries first, review results, then call this tool with confirmed same-entity alias names.

Parameters (JSON Schema)

Name            Required  Description                    Default
aliases         Yes       Alias entries to save
parent_company  Yes       Canonical parent company name
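The collision rule in the description (reroute to fda_link_subsidiaries rather than force a contested alias) amounts to a simple partition of the save result. The result shape below is invented for illustration; only the routing logic mirrors the description:

```python
# Sketch of collision routing after a hypothetical fda_save_aliases call.
def route_collisions(parent, save_result):
    """Turn each collision into an fda_link_subsidiaries follow-up
    instead of re-saving the contested alias."""
    return [
        ("fda_link_subsidiaries",
         {"parent_company": parent, "subsidiary": col["alias"]})
        for col in save_result.get("collisions", [])
    ]

result = {
    "saved": ["acme pharma"],
    "collisions": [{"alias": "acme devices", "company_id": "other-co"}],
}
followups = route_collisions("acme", result)
```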
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare write/idempotent status, but description adds valuable behavioral context: 'updating confidence', 'tracking collisions', and the specific collision handling logic where aliases may belong to other company_ids. Does not mention return values or rate limits, so not a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste. Front-loaded with purpose, followed by usage constraints and alternatives, ending with workflow. Every sentence conveys distinct, high-value information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a write operation with collision handling, the description adequately covers the business logic (confidence updates, collision tracking) and decision points (when to use siblings). No output schema exists and the description does not document return values; given the complexity, it could usefully mention what confirmation data a save returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (parent_company, aliases array with alias/source/confidence all documented). Description references 'normalized alias names' and 'parent company' but does not add syntax, format details, or semantic constraints beyond what the schema already provides. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb 'Save' and resource 'normalized alias names for a parent company'. It clearly distinguishes from sibling tools by contrasting with fda_link_subsidiaries for collision cases and fda_suggest_subsidiaries for workflow entry points.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use ('true name variants of the same company record'), explicit alternative for collisions ('use fda_link_subsidiaries instead'), and complete workflow guidance ('call fda_suggest_subsidiaries first...then call this tool').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_510k: Search 510(k) Clearances (A)
Read-only, Idempotent

Search FDA 510(k) clearances across all companies. Filter by company name (fuzzy match), product code, decision code (e.g., SESE=substantially equivalent), clearance type (Traditional, Special, Abbreviated), and date range. Returns clearance number (K-number), applicant, device name, decision date, and product code. Related: fda_device_class (product code details and classification), fda_product_code_lookup (cross-reference a product code across 510(k) and PMA), fda_search_pma (PMA approvals for higher-risk devices).

Parameters (JSON Schema)

Name            Required  Description                                      Default
limit           No        Max results to return (1-500)
offset          No        Result offset for pagination
company         No        Company name (fuzzy search)
to_date         No        End date for decision_date range (YYYY-MM-DD)
from_date       No        Start date for decision_date range (YYYY-MM-DD)
product_code    No        Device product code
decision_code   No        Decision code (e.g., SESE, SESD)
clearance_type  No        Clearance type (e.g., Traditional, Special)
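Since both date filters expect YYYY-MM-DD, a caller can validate the range before issuing the search. A sketch assuming only the parameter table above (the company name and dates are illustrative values):

```python
from datetime import date

# Client-side guard for the decision_date range filters.
def build_510k_query(company, from_date, to_date, decision_code=None):
    """Validate the YYYY-MM-DD range, then assemble search arguments."""
    if date.fromisoformat(from_date) > date.fromisoformat(to_date):
        raise ValueError("from_date must not be after to_date")
    args = {"company": company, "from_date": from_date, "to_date": to_date}
    if decision_code:
        args["decision_code"] = decision_code  # e.g. SESE
    return args

q = build_510k_query("Medtronic", "2023-01-01", "2023-12-31", "SESE")
```

date.fromisoformat also rejects malformed strings, so a typo like "2023-13-01" fails locally instead of producing an empty or erroneous server-side search.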
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent safety; description adds critical behavioral context including 'fuzzy match' for company searches and documents return fields (K-number, applicant, device name) since no output schema exists. Minor gap: no mention of pagination behavior or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences: search scope/filters, return values, and related tools. Every sentence earns its place with zero redundancy; information is front-loaded with purpose and scoped appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a search tool with no output schema: describes return structure, enumerates all available filters, provides domain-specific examples (SESE, Traditional/Special/Abbreviated), and contextualizes within the FDA tool ecosystem. No gaps given the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% description coverage, the description adds valuable domain semantics not in schema: explains SESE means 'substantially equivalent' and adds 'Abbreviated' as a clearance type example, enriching parameter understanding beyond the schema's basic field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description opens with specific verb ('Search') and resource ('FDA 510(k) clearances'), explicitly scopes to 'across all companies,' and distinguishes from sibling tools like fda_search_pma by focusing on 510(k) clearances specifically.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names three related tools (fda_device_class, fda_product_code_lookup, fda_search_pma) and clarifies when to use each: fda_search_pma for 'higher-risk devices' versus this tool for 510(k) clearances, providing clear alternative selection guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_aphis: Search APHIS Vet Biologics (A)
Read-only, Idempotent

Search APHIS veterinary biologics establishments (animal health facilities) by company, state, or establishment type. Returns license number, company name, full address, establishment type (Licensee/Permittee), divisions, and subsidiaries. Covers vaccine manufacturers, diagnostic kit producers, and other veterinary biological product facilities. Related: fda_vet_events (veterinary adverse events by species/drug).

Parameters (JSON Schema)

Name     Required  Description                    Default
type     No        Establishment type
limit    No        Max results to return (1-500)
state    No        State code (e.g., CA, NY)
offset   No        Result offset for pagination
company  No        Company name (fuzzy search)
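The limit/offset pair (capped at 500 per the table) supports exhaustive paging, a pattern shared by most search tools on this server. A generic sketch, with a stub standing in for the real MCP invocation:

```python
# Generic limit/offset paging loop for search tools like fda_search_aphis.
def fetch_all(call_tool, limit=500):
    """Collect every page; a short page signals the end.
    `call_tool` stands in for the real tool invocation."""
    results, offset = [], 0
    while True:
        page = call_tool({"limit": limit, "offset": offset})
        results.extend(page)
        if len(page) < limit:
            return results
        offset += limit

# Stub returning 1,200 fake rows in 500-row pages.
data = list(range(1200))
stub = lambda args: data[args["offset"]:args["offset"] + args["limit"]]
rows = fetch_all(stub)
```

When the total is an exact multiple of the limit, the loop makes one extra call that returns an empty page and then terminates, which is the usual cost of not having a total count in the response.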
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent hints, so description appropriately focuses on compensating for missing output schema by detailing return fields (license number, address, divisions, subsidiaries). Adds domain context about facility types covered. Does not mention rate limits or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly constructed sentences: core action, return values, domain scope, and sibling relationship. Every sentence earns its place with zero redundancy. Well front-loaded with primary action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, description effectively compensates by enumerating returned data fields. Annotations cover safety profile. Could strengthen by mentioning pagination behavior or default limits, though these are documented in schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with examples (e.g., 'CA, NY' for state) and constraints documented. Description maps parameters to search dimensions ('by company, state, or establishment type') but adds minimal semantic value beyond what schema already provides. Baseline 3 appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with clear resource 'APHIS veterinary biologics establishments' and scope (animal health facilities). It distinguishes from sibling fda_vet_events by contrasting facility searches versus adverse event tracking, and clarifies domain coverage (vaccine manufacturers, diagnostic kit producers).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context by referencing related tool fda_vet_events to delineate scope (facilities vs. adverse events). However, lacks explicit 'when not to use' guidance or comparison to other facility search tools like fda_search_facilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_by_product: Search by Product Name (A)
Read-only, Idempotent

Search across FDA device and drug datasets by product name (device name, trade name, generic name, or brand name). Searches device classifications, 510(k) clearances, PMA approvals, and NDC records simultaneously. Use when you know a product name but not which dataset it's in. Returns matches from each dataset with product codes and company names.

Parameters (JSON Schema)

Name          Required  Description                            Default
limit         No        Max results to return (1-500)
offset        No        Result offset for pagination
product_name  Yes       Product or brand name (fuzzy search)
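Because matches come back with product codes, the natural follow-up is fda_product_code_lookup. A sketch of that routing; the match shape (a product_code field) is assumed from the description's return summary, and the codes are illustrative:

```python
# Route cross-dataset hits from a hypothetical fda_search_by_product
# result to one focused lookup per distinct product code.
def followup_code_lookups(matches):
    """Build one fda_product_code_lookup call per distinct code,
    preserving the order codes were first seen."""
    seen, calls = set(), []
    for m in matches:
        code = m.get("product_code")
        if code and code not in seen:
            seen.add(code)
            calls.append(("fda_product_code_lookup",
                          {"product_code": code}))
    return calls

matches = [{"product_code": "LZG"},
           {"product_code": "LZG"},
           {"product_code": "DQY"}]
calls = followup_code_lookups(matches)
```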
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent status, so description adds valuable behavioral context: specifies exact datasets searched (classifications, 510(k), PMA, NDC), notes simultaneous searching, and discloses return content ('matches from each dataset with product codes and company names') compensating for missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences: scope definition, specific datasets covered, usage guidance, and return value description. Every sentence earns its place; information is front-loaded with no redundancy or waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description compensates by detailing what returns ('matches from each dataset with product codes and company names'). With 100% input schema coverage and clear annotations, description provides sufficient context for invocation, though error behaviors remain unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description references 'product name' in context but does not add syntax, format details, or semantic meaning beyond what the schema already provides for limit, offset, or product_name parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description provides specific verb ('Search'), resource ('FDA device and drug datasets'), and distinguishes from siblings by emphasizing cross-dataset scope ('simultaneously') and the specific datasets covered (510(k), PMA, NDC, classifications), clearly positioning it against specific single-dataset sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when you know a product name but not which dataset it's in'), implicitly contrasting with specific search siblings (fda_search_510k, fda_search_ndc, etc.). Lacks explicit 'when not to use' or named alternatives, but context is clear enough for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_debarments: Search FDA Debarments (A)
Read-only, Idempotent

Search current FDA debarment lists across drug applications, drug imports, and food imports. These are rare but very high-severity compliance signals for people or firms barred from certain FDA-regulated activities.

Parameters (JSON Schema)

Name          Required  Description                                      Default
name          No        Person or firm name (fuzzy match)
limit         No        Max results to return (1-500)
offset        No        Result offset for pagination
date_to       No        End date for effective_date range (YYYY-MM-DD)
date_from     No        Start date for effective_date range (YYYY-MM-DD)
list_type     No        Debarment list type
subject_type  No        Whether the record is a person or a firm
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm readOnly/idempotent status, but the description adds valuable behavioral context: it notes the data covers 'current' lists and characterizes results as high-severity compliance signals. This helps the agent understand the data quality and urgency level without contradicting the safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with zero waste. First sentence establishes scope and resource; second sentence provides domain context about data severity. Information is front-loaded with the action verb, and every clause earns its place by aiding tool selection or interpretation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complex regulatory domain, 100% schema coverage, and absence of output schema, the description provides sufficient context by explaining what debarments represent (compliance signals) and their severity. It appropriately compensates for missing return value documentation by clarifying the domain concept.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds semantic value by explicitly grouping the list_type enum values ('drug applications, drug imports, and food imports') and subject_type options ('people or firms'), providing categorical context that helps the agent understand parameter relationships beyond individual field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with clear resource 'FDA debarment lists' and explicit scope 'across drug applications, drug imports, and food imports.' The second sentence defines debarments as barring people/firms from activities, clearly distinguishing from sibling tools like fda_search_enforcement or fda_citations that handle different compliance actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context that these are 'rare but very high-severity compliance signals,' which guides the agent to use this for serious compliance investigations rather than routine checks. While it doesn't explicitly name sibling alternatives, the severity qualifier effectively signals when this tool is appropriate versus general enforcement searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_drugs: Search Drug Applications (A)
Read-only, Idempotent

Search Drugs@FDA applications across all companies. Filter by sponsor name (fuzzy match), application number, brand name, or submission status. Returns application details including products (brand names, dosage forms, active ingredients) and submissions (approval dates, status). Related: fda_search_ndc (NDC-level product details including labeler and packaging), fda_drug_labels (structured product labeling/package inserts), fda_clinical_result_letters (Complete Response Letters — FDA refusal-to-approve decisions), fda_drug_shortages (active drug shortage data).

Parameters (JSON Schema)

Name                Required  Description                                      Default
limit               No        Max results to return (1-500)
offset              No        Result offset for pagination
status              No        Submission status (searches submissions JSONB)
company             No        Company name (fuzzy search)
brand_name          No        Brand name (searches products JSONB)
application_number  No        Application number
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent (safety), while description adds valuable behavioral context: fuzzy matching logic for company names, JSONB search mechanics for status/brand fields, and detailed return structure (products with ingredients, submissions with approval dates). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfect information density. Three sentences cover: 1) Core function, 2) Filter options, 3) Return payload. Related tools section follows as structured metadata. Every clause earns its place; no redundancy with schema or annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing the nested return structure (products with brand/dosage/ingredients, submissions with dates/status). Given 6 optional parameters and complex sibling ecosystem, the description provides sufficient context for invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions ('Company name (fuzzy search)', 'Submission status (searches submissions JSONB)'). Description aggregates these filters but adds minimal semantic depth beyond the schema since the schema already documents fuzzy/JSONB behavior. Baseline 3 appropriate for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states exact verb (Search), resource (Drugs@FDA applications), scope (across all companies), and distinguishes from 4 sibling tools via the 'Related' section which explains each alternative's specific focus (NDC-level, labeling, Complete Response Letters, shortages).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists 4 related tools with parenthetical explanations of their distinct purposes (e.g., 'NDC-level product details' vs 'application details'), helping agents select the correct granularity. Would be 5 with explicit 'Use this when...' phrasing, but the differentiation is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_enforcement · Search Recalls (A)
Read-only · Idempotent

Search FDA enforcement actions (recalls) for drugs, devices, and food across all companies. Filter by company name (fuzzy match), recall classification (Class I=most serious/Class II/Class III), date range, or status (Ongoing/Terminated). Returns recall details including product description, reason, and distribution pattern. Related: fda_recall_facility_trace (trace a recall to its manufacturing facility by recall_number), fda_ires_enforcement (iRES recall data with cross-references), fda_device_recalls (device-specific recall data).

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
offset | No | Result offset for pagination | -
status | No | Recall status | -
company | No | Company or firm name (fuzzy search) | -
to_date | No | End date for report_date range (YYYY-MM-DD) | -
from_date | No | Start date for report_date range (YYYY-MM-DD) | -
classification | No | Recall classification severity | -
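The YYYY-MM-DD date-range parameters above are easy to get wrong, so a client may want to validate them before calling the tool. A hedged sketch, assuming a hypothetical `recall_args` helper and invented sample values ("Teva", the 2023 date range):

```python
from datetime import date

def recall_args(company=None, classification=None,
                from_date=None, to_date=None, limit=100):
    """Build fda_search_enforcement arguments with date-range checks."""
    for d in (from_date, to_date):
        if d is not None:
            date.fromisoformat(d)  # raises ValueError unless YYYY-MM-DD
    # ISO dates compare correctly as strings.
    if from_date and to_date and from_date > to_date:
        raise ValueError("from_date must not be after to_date")
    args = {"company": company, "classification": classification,
            "from_date": from_date, "to_date": to_date, "limit": limit}
    return {k: v for k, v in args.items() if v is not None}

q = recall_args(company="Teva", classification="Class I",
                from_date="2023-01-01", to_date="2023-12-31")
```

Catching a malformed date client-side avoids a wasted round trip and a confusing server error.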
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only/idempotent operations. The description adds valuable behavioral context by disclosing that company name uses fuzzy matching and that Class I recalls are 'most serious' (semantic meaning not in schema). It also compensates for the missing output schema by listing what fields are returned ('product description, reason, and distribution pattern'). Does not mention pagination behavior or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with four efficient sentences: purpose statement, available filters, return value disclosure, and sibling differentiation. Every sentence earns its place. Information is front-loaded with the core function, followed by specific filter capabilities and alternatives.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 7 optional parameters and no output schema, the description is highly complete. It compensates for the lack of output schema by describing return contents and provides sufficient filter context. Could be improved by explicitly stating that all filters are optional (though implied by 'Filter by... or...').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is met. The description adds semantic value beyond the schema by explaining that Class I is 'most serious' (the schema only lists values without severity context) and clarifying the date range filtering capability. It also implies the optional nature of filters by presenting them as available filtering options rather than requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the specific action ('Search FDA enforcement actions (recalls)') and scope ('drugs, devices, and food across all companies'). It effectively distinguishes from siblings in the 'Related' section by contrasting this broad search with facility-specific tracing (fda_recall_facility_trace), iRES data (fda_ires_enforcement), and device-specific recalls (fda_device_recalls).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent guidance provided through the 'Related' section which explicitly names three alternative tools with brief explanations of their specific use cases (e.g., 'trace a recall to its manufacturing facility' vs this general search). This helps agents select the correct tool based on whether they need facility tracing, cross-referenced iRES data, or device-specific results.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_facilities · Search Facilities (A)
Read-only · Idempotent

Search FDA-registered facilities by name, city, state, or country. Searches drug (DECRS) and device registration databases. Returns FEI number, name, address, and source. Use the operations parameter to filter by manufacturing type (e.g., 'Contract Manufacture', 'API', 'Repack'). Use country filter (ISO code: US, DE, IN, CN, IE) to map a company's global manufacturing footprint. Excludes: products at facility, inspection history, enforcement actions. Related: fda_get_facility (full facility detail by FEI including products and operations type), fda_inspections (inspection data by FEI), fda_citations (CFR violations by FEI).

Parameters (JSON Schema)
Name | Required | Description | Default
city | No | City name | -
limit | No | Max results to return (1-500) | -
state | No | State code (e.g. CA, NY) | -
offset | No | Result offset for pagination | -
company | No | Facility or company name (fuzzy search) | -
country | No | ISO country code (e.g. US, DE) | -
operations | No | DECRS operations keyword | -
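Because results are capped at 500 per call, mapping a company's full footprint means paging with `limit`/`offset` until a short page comes back. A sketch of that loop; `fetch_facilities` is a stand-in stub for the real MCP call, and the FEI numbers and "Sun Pharma"/"IN" values are invented for illustration:

```python
def fetch_facilities(args):
    """Stand-in for the real fda_search_facilities call (hypothetical)."""
    data = [{"fei": str(3000000000 + i), "country": "IN"} for i in range(7)]
    off, lim = args.get("offset", 0), args.get("limit", 100)
    return data[off:off + lim]

def all_facilities(company, country, page_size=3):
    """Page through results until a page shorter than page_size appears."""
    results, offset = [], 0
    while True:
        page = fetch_facilities({"company": company, "country": country,
                                 "limit": page_size, "offset": offset})
        results.extend(page)
        if len(page) < page_size:
            return results
        offset += page_size

rows = all_facilities("Sun Pharma", "IN")
```

Stopping on a short page rather than an empty one saves one extra call when the final page is partially filled.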
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations confirm read-only/idempotent nature. Description adds valuable context: specifies data sources (DECRS operations database), discloses limitations (excludes products/inspections/enforcement), and clarifies return structure. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficiently structured with zero waste: search capability → return values → parameter usage tips → exclusions → related tools. Every sentence provides distinct value regarding functionality, parameters, or tool selection.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete despite no output schema: describes return fields (FEI, name, address, source), covers all 7 parameters via schema+description, annotations cover behavioral safety, and sibling relationships are thoroughly mapped. Appropriate for a search utility of this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema has 100% coverage, description adds crucial domain-specific examples not in schema: operations parameter examples ('Contract Manufacture', 'API', 'Repack') and extended country codes (IN, CN, IE) plus usage context ('map global manufacturing footprint'). Adds meaningful value beyond structured definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Search') + resource ('FDA-registered facilities') + scope (by name, city, state, country). Distinguishes from siblings by specifying it searches drug (DECRS) and device databases, and explicitly lists what it returns (FEI number, name, address, source).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent differentiation from alternatives. Explicitly names three related tools (fda_get_facility, fda_inspections, fda_citations) with their specific purposes, and clearly states exclusions ('products at facility, inspection history, enforcement actions') guiding users to the correct tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_family_facilities · Search Family Facilities (A)
Read-only · Idempotent

Search FDA-registered facilities across a parent company and any explicitly linked subsidiaries. Use this when you know the parent company but the FDA records may sit under child entities like Actavis, Watson, or Cephalon. Supports optional city, state, country, and operations filters.

Parameters (JSON Schema)
Name | Required | Description | Default
city | No | City name | -
limit | No | Max results to return (1-500) | -
state | No | State code (e.g. CA, NY) | -
offset | No | Result offset for pagination | -
company | Yes | Parent company name | -
country | No | ISO country code (e.g. US, DE) | -
operations | No | DECRS operations keyword | -
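Conceptually, this tool fans one required parent name out into queries over the parent and its linked child entities. A sketch of that expansion with a hypothetical `family_query` helper; the Teva/Actavis/Watson/Cephalon linkage is taken from the description's own examples, and the per-entity query shape is an assumption, not the server's actual implementation:

```python
def family_query(parent, subsidiaries, city=None):
    """Expand a parent company into per-entity facility queries.

    The subsidiary list is supplied by the caller here; the real tool
    resolves linked entities server-side from its own records.
    """
    names = [parent] + subsidiaries
    base = {"city": city} if city else {}
    return [dict(base, company=name) for name in names]

queries = family_query("Teva", ["Actavis", "Watson", "Cephalon"])
```

Optional filters such as `city` apply uniformly across every entity in the family, which is why the description lists them once rather than per subsidiary.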
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, covering safety profile. Description adds valuable scope limitation 'explicitly linked subsidiaries' (warning that not all subsidiaries are included) and confirms the filterable nature of the search. Does not address rate limits, response format, or empty result behavior, but provides adequate context given annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences with zero waste: action/scope definition, usage context with concrete examples, and filter enumeration. Front-loaded with the core capability and immediately distinguishes the tool's unique value proposition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with 100% schema coverage and present annotations, the description adequately explains the hierarchical search logic (parent + subsidiaries) and filtering capabilities. No output schema exists, but description doesn't need to explain return values per guidelines. Minor gap: doesn't mention pagination behavior (limit/offset), though schema covers it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. Description consolidates the geographic and operational parameters as 'optional city, state, country, and operations filters,' which adds conceptual grouping but no additional semantic detail beyond the schema (e.g., doesn't explain DECRS operations keyword further).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Search' with clear resource 'FDA-registered facilities' and unique scope 'across a parent company and any explicitly linked subsidiaries.' The examples (Actavis, Watson, Cephalon) effectively distinguish this from the sibling tool fda_search_facilities by clarifying the hierarchical expansion behavior.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance: 'Use this when you know the parent company but the FDA records may sit under child entities.' This clearly positions the tool against alternatives (like fda_search_facilities) by specifying the parent-company-known-but-records-scattered use case. Lacks explicit 'when not to use' but the guidance is clear enough to infer the alternative.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_import_alerts · Search FDA Import Alerts (A)
Read-only · Idempotent

Search FDA Import Alerts by firm, alert number, red-list versus green-list status, country, keyword, or date. This is a stronger manufacturing and supplier-risk signal than one-off import refusals because it captures standing alert status and the specific firms currently listed under each alert.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
offset | No | Result offset for pagination | -
country | No | Country name | -
date_to | No | End date for publish-date range (YYYY-MM-DD) | -
keyword | No | Keyword to search in the alert title, reason, charge, or product notes | -
date_from | No | Start date for publish-date range (YYYY-MM-DD) | -
firm_name | No | Firm name (fuzzy match) | -
list_status | No | Whether the firm is on the red list or green list | -
alert_number | No | Import alert number, for example 66-40 | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable domain context beyond these safety flags: it explains that results capture 'standing alert status' and 'specific firms currently listed', and characterizes the data as a risk signal. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero redundancy. First sentence front-loads the core action and search capabilities; second sentence provides domain value proposition and sibling differentiation. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 9 parameters with 100% schema coverage and strong annotations, the description provides adequate domain context. It partially compensates for the missing output schema by describing what the data represents ('standing alert status', 'firms currently listed'), though it could explicitly mention the return structure format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description maps searchable concepts to parameters (e.g., 'red-list versus green-list status' aligns with list_status enum) but does not add syntax details, format examples, or constraints beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and resource ('FDA Import Alerts'), then enumerates exact searchable dimensions. It explicitly distinguishes this from sibling tool fda_import_refusals by contrasting 'standing alert status' with 'one-off import refusals', providing clear scope differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear comparative context by stating this offers a 'stronger manufacturing and supplier-risk signal than one-off import refusals', implicitly guiding when to select this tool over fda_import_refusals. However, it lacks explicit negative guidance (when NOT to use) or prerequisite conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_ndc · Search NDC Directory (A)
Read-only · Idempotent

Search the National Drug Code (NDC) directory by labeler company, brand name, product NDC, or application number. Returns labeler name, brand name, generic name, dosage form, route, active ingredients, DEA schedule, listing type, and packaging details. Drug products are not linked by FEI; use this tool with company name to find drugs at a company. Related: fda_search_drugs (application-level data with submissions), fda_drug_labels (full product labeling), fda_search_nsde (NSDE cross-reference).

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
offset | No | Result offset for pagination | -
company | No | Labeler company name (fuzzy search) | -
brand_name | No | Brand name | -
product_ndc | No | Product NDC | -
application_number | No | Application number | -
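Since NDC rows are not linked by FEI but do carry an application number, an agent combining this tool with fda_search_drugs can join the two result sets on that field. A minimal sketch; the `link_ndc_to_applications` helper and the sample NDC/application rows are invented for illustration:

```python
def link_ndc_to_applications(ndc_rows, app_rows):
    """Join NDC directory rows to application rows on application_number."""
    apps = {a["application_number"]: a for a in app_rows}
    # Attach the matching application (or None) to each NDC row.
    return [dict(r, application=apps.get(r.get("application_number")))
            for r in ndc_rows]

ndc = [{"product_ndc": "0002-1433", "application_number": "NDA021436"}]
apps = [{"application_number": "NDA021436", "status": "Approved"}]
linked = link_ndc_to_applications(ndc, apps)
```

Rows without a matching application keep `application=None`, which preserves the NDC-level detail rather than silently dropping it.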
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover read-only/idempotent safety, so the description adds value by enumerating return fields (labeler name, dosage form, DEA schedule, etc.) since no output schema exists. It also notes the FEI linkage constraint. Could improve by mentioning pagination behavior, but effectively compensates for missing output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: (1) purpose, (2) return values, (3) usage constraint, (4) sibling differentiation. Front-loaded with the core action, highly information-dense without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description comprehensively lists returned data fields. It adequately addresses the complex sibling ecosystem (40+ tools) by explicitly naming and differentiating relevant alternatives. Complete for a search tool with good input schema coverage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description lists the four searchable fields (company, brand_name, product_ndc, application_number) but does not add semantic details beyond the schema (e.g., it omits mention of limit/offset pagination parameters or fuzzy search behavior already documented in schema).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Search') and resource ('National Drug Code (NDC) directory'), clearly defining scope. It distinguishes from siblings by contrasting with fda_search_drugs (application-level), fda_drug_labels (full labeling), and fda_search_nsde (cross-reference), preventing tool selection errors.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use the tool ('use this tool with company name to find drugs at a company') and clarifies limitations ('Drug products are not linked by FEI'). Names three related tools with parenthetical descriptions of their distinct purposes, creating clear decision boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_nsde · Search NSDE (A)
Read-only · Idempotent

Search the National Standard Drug Element (NSDE) database by brand/proprietary name, application number, or package NDC. Returns proprietary name, active ingredients, dosage form, route, and marketing information. Related: fda_search_ndc (NDC directory), fda_search_drugs (Drugs@FDA application data).

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
offset | No | Result offset for pagination | -
package_ndc | No | Package NDC (exact match) | -
proprietary_name | No | Brand/proprietary name (fuzzy match) | -
application_number | No | Application number (exact match) | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety (readOnlyHint, idempotentHint) and scope (openWorldHint). Description adds valuable behavioral context by listing specific return fields (proprietary name, active ingredients, dosage form, route, marketing information) not present in annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient statements: search scope, return values, and related tools. Every sentence earns its place. Information is front-loaded with the core purpose in the first clause.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and comprehensive annotations, the description provides adequate context by explaining the return data structure. Minor gap: could explicitly note that zero parameters are required, though this is inferable from the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear explanations (including exact vs fuzzy match distinctions). Description mentions the searchable fields but doesn't add semantic details beyond what the schema already provides, which is appropriate given complete schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Search' + resource 'National Standard Drug Element (NSDE) database' + exact searchable fields (brand/proprietary name, application number, package NDC). Clearly distinguishes from siblings by name and describes return data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Lists related tools 'fda_search_ndc (NDC directory)' and 'fda_search_drugs (Drugs@FDA application data)' providing clear context for differentiation. However, lacks explicit when-to-use/when-not-to-use guidance or prerequisites (e.g., noting that all parameters are optional).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_oii_records · Search OII Reading Room Records (A)
Read-only · Idempotent

Search recent FDA Office of Inspections and Investigations reading-room records by company, FEI, record type, country, establishment type, or publish date. This is official FDA document-index metadata with direct links to the posted records, plus incremental extracted document text when available, useful for finding recent 483-style inspection evidence by account.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Max results to return (1-500) | -
state | No | State name | -
offset | No | Result offset for pagination | -
country | No | Country name | -
date_to | No | End date for publish_date range (YYYY-MM-DD) | -
keyword | No | Keyword to search in the FDA-provided excerpt and extracted document text | -
date_from | No | Start date for publish_date range (YYYY-MM-DD) | -
fei_number | No | FDA Establishment Identifier (FEI number) | -
record_type | No | Record type, for example 483 | -
company_name | No | Company name (fuzzy match) | -
establishment_type | No | Establishment type, for example Sterile Drug Manufacturer | -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations cover safety (readOnlyHint) and idempotency, the description adds valuable behavioral context: data recency ('recent'), data format ('document-index metadata with direct links'), optional enrichment ('incremental extracted document text when available'), and domain-specific content type ('483-style').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. First sentence front-loads the action and search capabilities; second sentence explains data format and specific use case without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema, the description adequately explains the return value structure (metadata, direct links, extracted text) and domain context (483 forms). Covers the complexity of 11 optional parameters effectively, though specific response fields are not detailed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description lists searchable dimensions (company, FEI, record type, etc.) which confirms the schema structure but does not add substantial semantic meaning beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the verb (Search) and resource (FDA Office of Inspections and Investigations reading-room records) and distinguishes from siblings by specifying '483-style inspection evidence' and 'reading-room records' versus other FDA inspection or enforcement data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use the tool ('useful for finding recent 483-style inspection evidence by account'), establishing the specific use case of searching for Form 483 inspection observations. Lacks explicit mention of alternatives like fda_inspections or fda_inspection_observation_summary.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_opdp_untitled_letters · Search OPDP Untitled Letters (A)
Read-only · Idempotent

Search official FDA OPDP untitled letters for pharmaceutical promotion and advertising issues. Filter by company, product, issue date, close-out availability, or keyword in the extracted untitled-letter text when available.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return (1-500) | |
| offset | No | Result offset for pagination | |
| date_to | No | End date for issued_date range (YYYY-MM-DD) | |
| keyword | No | Keyword to search in the extracted untitled-letter text | |
| date_from | No | Start date for issued_date range (YYYY-MM-DD) | |
| company_name | No | Company name (fuzzy match) | |
| has_close_out | No | Whether the record has a linked close-out letter | |
| product_issue | No | Product or issue text from the OPDP table (partial match) | |
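
A minimal, hypothetical argument payload for this tool, assuming the parameter names in the table above; the company name and date values are invented for illustration.

```python
# Hypothetical MCP tool-call arguments for fda_search_opdp_untitled_letters.
# Every field is an optional filter; values here are illustrative only.
arguments = {
    "keyword": "efficacy claims",      # searched in extracted letter text
    "company_name": "Example Pharma",  # fuzzy match (invented name)
    "date_from": "2022-01-01",         # YYYY-MM-DD
    "date_to": "2023-12-31",           # YYYY-MM-DD
    "has_close_out": True,             # only letters with a close-out letter
    "limit": 25,                       # 1-500
}
```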
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations by noting keyword search works on 'extracted untitled-letter text when available,' alerting agents to potential data gaps. This complements the readOnlyHint/idempotentHint annotations by explaining data availability limitations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. First sentence establishes purpose and domain; second enumerates filtering capabilities. Front-loaded with the most critical information (what the tool searches) before listing parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately covers the tool's purpose and filtering capabilities for a search operation. All 8 parameters are optional (0 required), which the description accommodates by presenting them as optional filters. Could marginally improve by indicating result cardinality or typical response content.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is appropriately met. The description adds semantic grouping by listing filters (company, product, issue date, etc.) that map to schema parameters, but doesn't add syntax details or format guidance beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Search' with clear resource 'FDA OPDP untitled letters' and domain scope 'pharmaceutical promotion and advertising issues.' It effectively distinguishes from sibling tools like fda_search_warning_letters by specifying 'untitled letters' and 'OPDP' (Office of Prescription Drug Promotion).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through specific domain filtering capabilities but lacks explicit when-to-use guidance or named alternatives. It mentions filterable fields (company, product, date, close-out, keyword) which hints at use cases, but doesn't contrast with fda_search_warning_letters or other enforcement tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_pma: Search PMA Approvals (A)
Read-only, Idempotent

Search FDA Pre-Market Approval (PMA) records across all companies. PMA is required for high-risk (Class III) devices. Filter by company name (fuzzy match), product code, and date range. Returns PMA number, applicant, trade name, decision date, and product code. Related: fda_device_class (product code details), fda_search_510k (510(k) clearances for lower-risk devices), fda_product_code_lookup (cross-reference a product code).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return (1-500) | |
| offset | No | Result offset for pagination | |
| company | No | Company name (fuzzy search) | |
| to_date | No | End date for decision_date range (YYYY-MM-DD) | |
| from_date | No | Start date for decision_date range (YYYY-MM-DD) | |
| product_code | No | Device product code | |
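
A hedged sketch of an fda_search_pma call with a date-format check; "ABC" is a hypothetical product code and "Example Medical" an invented company, used only to show the shape of the filters.

```python
from datetime import date

def iso_date(d: str) -> str:
    """Return d unchanged if it parses as YYYY-MM-DD, else raise ValueError."""
    date.fromisoformat(d)
    return d

# Illustrative arguments for fda_search_pma (names from the table above).
arguments = {
    "company": "Example Medical",        # fuzzy match (invented name)
    "product_code": "ABC",               # hypothetical device product code
    "from_date": iso_date("2020-01-01"),
    "to_date": iso_date("2024-12-31"),
    "limit": 100,
}
```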
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare read-only and idempotent properties, the description adds crucial behavioral context: it specifies 'fuzzy match' search behavior for company names and details the exact return fields (PMA number, applicant, trade name, etc.) since no output schema exists. It also clarifies the regulatory scope (Class III devices) which affects search applicability.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description comprises four efficient sentences: purpose statement, regulatory context, filtering capabilities, and return values—each delivering distinct information without redundancy. The 'Related' section adds three targeted cross-references in minimal space, creating a dense but readable structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description compensates by enumerating all returned fields (PMA number, applicant, trade name, decision date, product code). It also provides essential FDA regulatory context (Class III devices) and pagination awareness through parameter descriptions, making it complete for a complex regulatory search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents parameter formats (e.g., 'YYYY-MM-DD' patterns, 'fuzzy search' for company). The description aggregates these capabilities ('Filter by company name...') but does not add semantic meaning, syntax examples, or validation rules beyond what the structured schema provides, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly defines the operation ('Search'), resource ('FDA Pre-Market Approval (PMA) records'), and scope ('across all companies'). It distinguishes this tool from siblings by noting PMA is for 'high-risk (Class III) devices' and explicitly referencing `fda_search_510k` for lower-risk alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit alternative selection guidance by naming `fda_search_510k` with the context 'for lower-risk devices' and `fda_product_code_lookup` for cross-referencing. This clearly signals when to use this tool versus siblings based on device risk classification and specific information needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_recall_text: Full-Text Search Recalls (A)
Read-only, Idempotent

Full-text search across recall reasons and product descriptions using PostgreSQL text search. Finds recalls mentioning specific terms (e.g. 'salmonella contamination', 'mislabeled', 'sterility'). Supports multi-word queries ranked by relevance. Filter by classification, product_type, or date range. Related: fda_search_enforcement (search by company name, classification, status), fda_recall_facility_trace (trace a recall to its manufacturing facility).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return (1-500) | |
| query | Yes | Search terms (e.g. 'salmonella contamination', 'mislabeled dosage') | |
| offset | No | Result offset for pagination | |
| date_to | No | End date for report_date range (YYYY-MM-DD) | |
| date_from | No | Start date for report_date range (YYYY-MM-DD) | |
| product_type | No | Filter by product type | |
| search_field | No | Which field to search: reason_for_recall, product_description, or both (default: both) | both |
| classification | No | Filter by recall classification severity | |
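
Since `query` is the only required parameter here, a minimal call can be very small; the sketch below adds the optional `search_field` narrowing described in the table. Values are illustrative.

```python
# Sketch of arguments for fda_search_recall_text. Only "query" is required;
# search_field defaults to "both" when omitted.
arguments = {
    "query": "salmonella contamination",
    "search_field": "reason_for_recall",  # or "product_description" / "both"
    "date_from": "2023-01-01",            # YYYY-MM-DD
    "limit": 50,
}
```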
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnlyHint=true and idempotentHint=true; the description adds valuable behavioral context by noting results are 'ranked by relevance' and specifying the PostgreSQL text search implementation, which hints at query syntax capabilities. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Five sentences covering mechanism, examples, ranking behavior, filters, and related tools. Every sentence earns its place with zero redundancy. Well-structured flow from core function to specific examples to alternatives.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 well-documented parameters, annotations covering safety/idempotency, and no output schema, the description is appropriately complete. It covers search scope, filtering capabilities, and tool relationships. Minor gap: could explicitly mention pagination support, though offset/limit parameters are well-documented in schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds concrete examples for the query parameter ('salmonella contamination', 'mislabeled', 'sterility') and natural language elaboration on filterable fields (classification, product_type, date range), providing helpful context for formulating searches.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool performs 'Full-text search across recall reasons and product descriptions' using PostgreSQL text search, providing specific searchable content (reasons, product descriptions) and distinguishing itself from siblings by contrasting with fda_search_enforcement (company name search) and fda_recall_facility_trace (facility tracing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names and differentiates alternatives: fda_search_enforcement is for 'search by company name, classification, status' while fda_recall_facility_trace is to 'trace a recall to its manufacturing facility', providing clear guidance on when to use text search versus company-based or facility-based lookups.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_search_warning_letters: Search Warning Letters (A)
Read-only, Idempotent

Search official FDA warning letters with full-text content from the FDA website. Use keyword search for the actual letter body, or filter by company name, issuing office, subject, MARCS-CMS number, product type, or letter issue date. This adds narrative context beyond fda_compliance_actions, which only contains dashboard metadata.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return (1-500) | |
| offset | No | Result offset for pagination | |
| date_to | No | End date for letter_issue_date range (YYYY-MM-DD) | |
| keyword | No | Full-text query for the warning letter body and subject | |
| subject | No | Subject line text (partial match) | |
| date_from | No | Start date for letter_issue_date range (YYYY-MM-DD) | |
| company_name | No | Company name (fuzzy match) | |
| product_type | No | Product type from the letter page (e.g. Drugs, Devices, Food) | |
| issuing_office | No | Issuing office or center name (partial match) | |
| marcs_cms_number | No | MARCS-CMS case number shown on the letter page | |
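
The description distinguishes full-text search of the letter body from metadata filtering; the two call shapes can be sketched like this, with all values invented for illustration.

```python
# Illustrative call shapes for fda_search_warning_letters.
# Shape 1: full-text search of the letter body and subject.
full_text_call = {
    "keyword": "data integrity",     # searched in body and subject
    "product_type": "Drugs",
    "limit": 20,
}

# Shape 2: metadata-only filtering, no body search.
metadata_call = {
    "company_name": "Example Labs",  # fuzzy match (invented name)
    "date_from": "2024-01-01",       # YYYY-MM-DD
    "date_to": "2024-06-30",
}
```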
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, establishing safe read behavior. Description adds data source context ('from the FDA website') and content scope ('full-text'), but does not disclose rate limits, error handling, or pagination behavior details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences each with distinct purpose: tool definition, parameter usage pattern, and sibling differentiation. No redundant words or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a search tool with 100% schema coverage and good annotations. Explicitly addresses the sibling relationship which is critical given the large tool set. Lacks only minor details like noting all parameters are optional or describing pagination behavior explicitly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline applies. Description provides narrative grouping of parameters (keyword for 'letter body' vs filters for metadata fields) but does not add semantic details beyond what schema property descriptions already specify.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Search' and resource 'FDA warning letters with full-text content', explicitly distinguishing from sibling tool fda_compliance_actions by contrasting 'narrative context' vs 'dashboard metadata'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names alternative tool (fda_compliance_actions) and clarifies when to use each: use this tool for 'full-text content' and 'narrative context', use the sibling for 'dashboard metadata' only.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_substance_lookup: Substance Lookup (A)
Read-only, Idempotent

Look up FDA substance data by UNII code (exact match) or substance name (fuzzy match). Returns substance name, UNII, substance class, molecular formula, and related details. Use to identify active pharmaceutical ingredients.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| unii | No | UNII code (exact match) | |
| limit | No | Max results to return (1-500) | |
| offset | No | Result offset for pagination | |
| substance_name | No | Substance name (fuzzy match) | |
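
The either/or lookup pattern (exact UNII versus fuzzy name) can be captured by a small hypothetical helper; the UNII value in the test is a placeholder, not a real code.

```python
from typing import Optional

def substance_query(unii: Optional[str] = None,
                    name: Optional[str] = None) -> dict:
    """Build arguments for fda_substance_lookup from exactly one identifier.

    Hypothetical helper: the tool accepts either an exact-match UNII code
    or a fuzzy-match substance name; passing both (or neither) is ambiguous.
    """
    if (unii is None) == (name is None):
        raise ValueError("pass exactly one of unii or name")
    return {"unii": unii} if unii else {"substance_name": name}
```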
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral details beyond annotations: specifies matching semantics (exact vs fuzzy), and lists return fields (substance name, UNII, class, molecular formula) compensating for missing output schema. Annotations already confirm read-only/idempotent status.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: lookup method, return values, use case. Information is front-loaded with the core action and parameters. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a lookup tool with 100% schema coverage and good annotations, description is adequate. It compensates for missing output schema by listing returned fields. Could note that at least one search parameter is required, but schema optional flags make this inferable.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description reinforces the 'either/or' search pattern (UNII or name) but does not add syntax examples, validation rules, or semantic constraints beyond what schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description specifies exact actions ('Look up'), resource ('FDA substance data'), and lookup methods ('UNII code exact match' vs 'substance name fuzzy match'). It distinguishes from siblings like fda_search_drugs by focusing on substance-level data (ingredients) rather than drug products.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context ('Use to identify active pharmaceutical ingredients'), which hints at when to use it, but lacks explicit when-not guidance or comparisons to sibling tools like fda_search_drugs or fda_drug_labels that also contain ingredient information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_suggest_subsidiaries: Discover Subsidiaries (A)
Read-only, Idempotent

Discover subsidiary and related company names using FDA datasets first, then supplement with external corporate hierarchy sources (SEC EDGAR Exhibit 21 and GLEIF) when available. Costs 2 credits. Returns FDA name candidates, evidence-backed company-record suggestions, EDGAR subsidiaries, GLEIF subsidiaries, existing aliases, and facility coverage stats. The coverage.unlinked_feis count indicates how many facilities may be missing from the current alias set. The workflow is conservative and explainable: it validates candidates against FDA company records instead of auto-linking them. Note: EDGAR and GLEIF may lag recent acquisitions or divestitures, so missing external results do not rule out FDA-visible subsidiaries. Recommended workflow: 1. fda_suggest_subsidiaries, 2. fda_link_subsidiaries for distinct child companies or fda_save_aliases for true same-company variants, 3. fda_manufacturing_risk_summary or fda_search_family_facilities. Related: fda_link_subsidiaries (persist explicit family links), fda_save_aliases (persist same-entity names), fda_manufacturing_risk_summary (family-aware company rollup), fda_search_family_facilities (family-aware FEI search).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| cik | No | SEC CIK (optional, auto-resolved if omitted) | |
| company | Yes | Company name to analyze | |
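
The recommended three-step workflow from the tool description can be sketched as follows. `call_tool` is a stand-in for whatever MCP client invocation your framework provides, and the `parent`/`children` argument names for fda_link_subsidiaries are hypothetical, since only fda_suggest_subsidiaries' parameters are documented here.

```python
# Hypothetical sketch of the suggest -> link/alias -> rollup workflow.
def call_tool(name: str, arguments: dict) -> dict:
    # Stub so the sketch runs standalone; a real MCP client would do I/O here.
    return {"tool": name, "arguments": arguments}

# 1. Discover candidate subsidiaries (costs 2 credits per the description).
suggestions = call_tool("fda_suggest_subsidiaries", {"company": "Example Corp"})

# 2. Persist results: link distinct child companies (argument names assumed),
#    or use fda_save_aliases for true same-company name variants instead.
call_tool("fda_link_subsidiaries",
          {"parent": "Example Corp", "children": ["Example Subsidiary Inc"]})

# 3. Run a family-aware rollup over the now-linked entities.
summary = call_tool("fda_manufacturing_risk_summary", {"company": "Example Corp"})
```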
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnly, openWorld, idempotent), description adds critical cost disclosure ('Costs 2 credits'), data quality warnings ('EDGAR and GLEIF may lag'), validation approach ('conservative and explainable... instead of auto-linking'), and interprets specific return fields ('coverage.unlinked_feis count indicates...').

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Information-dense and well-structured: purpose → cost → return values → behavioral notes → workflow → related tools. Minor redundancy between workflow section and final 'Related' list prevents a 5, but every sentence serves a distinct purpose in guiding the agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent compensation for missing output schema by detailing return structure ('Returns FDA name candidates... facility coverage stats') and explaining field semantics ('unlinked_feis count'). Workflow guidance and sibling differentiation provide complete context for a complex multi-source discovery operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (both 'company' and 'cik' are well-documented in schema), establishing baseline 3. Description provides contextual mention of SEC EDGAR relevance for CIK but does not significantly augment parameter semantics beyond schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Discover') + resource ('subsidiary and related company names') with clear scope (FDA datasets first, then SEC EDGAR/GLEIF). Explicitly distinguishes from siblings by contrasting 'suggest' (this tool) with 'persist' operations (fda_link_subsidiaries, fda_save_aliases).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 3-step workflow ('1. fda_suggest_subsidiaries, 2. fda_link_subsidiaries... or fda_save_aliases...'). Clearly defines when to use each sibling tool ('for distinct child companies' vs 'for true same-company variants'), eliminating ambiguity about tool selection sequence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_tobacco_problems: Search Tobacco Problem Reports (A)
Read-only, Idempotent

Search tobacco problem reports by product type or health problem keyword. Date range in YYYYMMDD format. Returns reports including tobacco product details and reported health problems.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return (1-500) | |
| offset | No | Result offset for pagination | |
| date_to | No | End date_submitted (YYYYMMDD) | |
| date_from | No | Start date_submitted (YYYYMMDD) | |
| product_type | No | Tobacco product type (searches tobacco_products array) | |
| health_problem | No | Reported health problem keyword | |
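
Note the compact YYYYMMDD date format here, unlike the YYYY-MM-DD used by most other tools on this server. A hedged sketch with a format guard (filter values invented):

```python
from datetime import datetime

def yyyymmdd(d: str) -> str:
    """Return d unchanged if it parses as YYYYMMDD, else raise ValueError."""
    datetime.strptime(d, "%Y%m%d")
    return d

# Illustrative arguments for fda_tobacco_problems.
arguments = {
    "product_type": "e-cigarette",   # searches the tobacco_products array
    "health_problem": "seizure",     # keyword (invented example)
    "date_from": yyyymmdd("20230101"),
    "date_to": yyyymmdd("20231231"),
}
```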
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description aligns with annotations (readOnly/search operation) and adds valuable behavioral context not in annotations: the specific date format requirement and the composition of returned data ('reports including tobacco product details and reported health problems'). No contradictions with the readOnlyHint or idempotentHint annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally concise with two efficient sentences. The first establishes the core search capability and filters; the second covers date formatting and return value structure. Every word earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by describing what the search returns (reports with product details and health problems). It covers the optional nature of all 6 parameters implicitly by presenting them as search filters. Could mention pagination behavior explicitly, though limit/offset schema covers this.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is met. The description reinforces the YYYYMMDD date format and groups the semantic concepts (product type vs health problem), but does not add significant meaning beyond what the schema property descriptions already provide for the 6 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('Search') and resource ('tobacco problem reports'), and specifies searchable dimensions (product type, health problem keyword). It effectively distinguishes this from siblings like fda_consumer_events through its specific tobacco focus, though it could explicitly clarify the difference between 'problem reports' and general consumer events.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by specifying the searchable fields (product type, health problem, date range), but lacks explicit 'when to use' guidance or contrasts with alternatives like fda_consumer_events. The date format hint (YYYYMMDD) provides practical usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fda_vet_events: Search Veterinary Adverse Events (A)
Read-only, Idempotent

Search veterinary adverse events (animal drug safety reports) by species, drug name, reaction, serious flag, or date range (YYYYMMDD format). Returns event reports including animal details, drug information, and adverse reactions. Related: fda_search_aphis (veterinary biologics facilities and establishments).

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| limit | No | Max results to return (1-500) | |
| offset | No | Result offset for pagination | |
| date_to | No | End receive_date (YYYYMMDD) | |
| serious | No | Filter by serious adverse event | |
| species | No | Animal species (searches animal JSONB) | |
| date_from | No | Start receive_date (YYYYMMDD) | |
| drug_name | No | Drug name (searches drug JSONB) | |
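
Several reviews above note that pagination is left to the limit/offset parameters. A generic offset-pagination sketch, using this tool as the example; `fetch_page` is a stub standing in for a real MCP call and simulates 120 matching records.

```python
def fetch_page(offset: int, limit: int) -> list:
    """Stub for a paged fda_vet_events call; pretends 120 records match."""
    total = 120
    return [{"id": i} for i in range(offset, min(offset + limit, total))]

def fetch_all(limit: int = 50) -> list:
    """Walk offset pages until a short page signals the last one."""
    results, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        results.extend(page)
        if len(page) < limit:
            break
        offset += limit
    return results
```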
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying what the returned reports contain (animal details, drug information, adverse reactions) to compensate for the missing output schema, and reinforces the YYYYMMDD date format requirement.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences: first defines the search capability and filters, second describes return content, third provides sibling context. Every sentence earns its place with no redundant filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description appropriately describes return contents (animal details, drug info, reactions). With 100% schema coverage and clear annotations, the description provides sufficient context, though it could mention pagination behavior or result limits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description lists searchable fields but incorrectly includes 'reaction', which is not a defined parameter in the schema (only species, drug_name, serious, date range, limit, and offset exist).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Search' with the clear resource 'veterinary adverse events (animal drug safety reports)' and explicitly distinguishes from sibling fda_search_aphis by contrasting 'adverse events' with 'veterinary biologics facilities' in the related section.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit differentiation from fda_search_aphis, clarifying that tool covers facilities/establishments while this tool covers adverse events. However, it does not address when to use this versus fda_consumer_events or other similar reporting tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
