openfda-mcp-server
Server Details
Query FDA data on drugs, food, devices, and recalls via openFDA. STDIO or Streamable HTTP.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: cyanheads/openfda-mcp-server
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 7 of 7 tools scored.
Each tool has a clearly distinct purpose targeting specific FDA data domains: drug labels, NDC lookups, adverse events, device clearances, drug approvals, recalls, and a general count tool. There is no overlap in functionality, making tool selection straightforward for an agent.
All tool names follow a consistent 'openfda_' prefix with descriptive verb_noun patterns (e.g., openfda_get_drug_label, openfda_search_adverse_events). This uniformity enhances predictability and readability across the toolset.
With 7 tools, the server is well-scoped for querying FDA data, covering key areas like drugs, devices, adverse events, and recalls. Each tool serves a distinct and necessary function without bloat or redundancy.
The toolset provides comprehensive coverage for searching and retrieving FDA data across multiple domains, including drugs, devices, and adverse events. A minor gap is the lack of update or delete operations, but this is reasonable given the server's read-only, data-querying purpose.
Available Tools
7 tools

openfda_count (read-only, grade A)
Aggregate and tally unique values for any field across any openFDA endpoint. Returns ranked term-count pairs sorted by count descending.
| Name | Required | Description | Default |
|---|---|---|---|
| count | Yes | Field to count. Append .exact for whole-phrase counting (e.g. "patient.reaction.reactionmeddrapt.exact", "openfda.brand_name.exact") | |
| limit | No | Number of top terms to return (default 100, max 1000) | |
| search | No | Filter query to scope the count (e.g. patient.drug.medicinalproduct:"metformin") | |
| endpoint | Yes | Full openFDA endpoint path (e.g. "drug/event", "device/classification") | |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Response metadata |
| message | No | Guidance when results are empty |
| results | Yes | Term-count pairs sorted by count descending |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotation already declares readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful behavioral context about the output format (ranked term-count pairs, sorted descending) and the aggregation scope ('any field'), but doesn't mention rate limits, authentication needs, or pagination behavior. With annotations covering safety, this provides moderate additional value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and output format. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (aggregation across multiple endpoints), the presence of both comprehensive annotations (readOnlyHint) and an output schema (implied by context signals), the description is complete enough. It clearly states the tool's purpose, scope, and output format, which complements the structured data without needing to explain return values or safety aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all parameters. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain the syntax for 'count' or 'search' fields). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('aggregate and tally unique values'), the resource ('any field across any openFDA endpoint'), and the output format ('ranked term-count pairs sorted by count descending'). It distinguishes itself from sibling tools by focusing on aggregation/counting rather than searching or looking up specific records.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'any field across any openFDA endpoint' and 'aggregate and tally unique values', suggesting this is for statistical analysis rather than retrieving specific records. However, it doesn't explicitly state when to use this versus the search-oriented sibling tools (e.g., openfda_search_adverse_events) or provide explicit exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
openfda_get_drug_label (read-only, grade A)
Look up FDA drug labeling (package inserts / SPL documents). Check indications, warnings, dosage, contraindications, active ingredients, or any structured label section.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Number of results to skip for pagination (0-25000). Default 0. | |
| sort | No | Sort expression (field:asc or field:desc). Example: effective_time:desc. Unrecognized fields are silently ignored by the API — results return in default order. | |
| limit | No | Maximum number of results to return (1-1000). Default 5. Labels are large. | |
| search | Yes | Query targeting label fields. Examples: openfda.brand_name:"aspirin", openfda.generic_name:"metformin", openfda.manufacturer_name:"pfizer", set_id:"uuid". | |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Pagination and freshness metadata. |
| message | No | Human-readable note when the result set is empty. |
| results | Yes | Array of drug label records. |
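Because labels are large and `limit` defaults to 5, paging with `skip` is the usual pattern. A hedged sketch of that pattern, assuming the public openFDA base URL (the `page_label_urls` helper is illustrative, not part of the server):

```python
from urllib.parse import urlencode, quote

def page_label_urls(search: str, pages: int, limit: int = 5):
    """Yield openFDA drug/label URLs for successive pages of results."""
    for page in range(pages):
        params = {"search": search, "limit": limit, "skip": page * limit}
        yield "https://api.fda.gov/drug/label.json?" + urlencode(params, quote_via=quote)

urls = list(page_label_urls('openfda.brand_name:"aspirin"', pages=3))
```

Note the schema's 0-25000 bound on `skip`: deep pagination past 25,000 records is not possible through this parameter.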
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context about what can be looked up (specific label sections) and mentions that 'labels are large' in the schema, but doesn't disclose additional behavioral traits like rate limits, authentication needs, or error handling beyond what annotations provide.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two sentences) and front-loaded with the core purpose. Every word earns its place by specifying the resource, action, and use cases without any wasted text or redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, 1 required), excellent schema coverage (100%), presence of annotations (readOnlyHint), and existence of an output schema, the description provides complete contextual information. It clearly states what the tool does and when to use it, which is sufficient alongside the structured data.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all four parameters with descriptions, defaults, and constraints. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 when schema coverage is complete.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('look up') and resource ('FDA drug labeling'), with explicit examples of what can be checked (indications, warnings, dosage, etc.). It distinguishes this tool from siblings by focusing on drug labeling rather than counts, NDC lookups, adverse events, or other FDA data types.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('look up FDA drug labeling') and implies usage for checking specific label sections. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools, though the context makes sibling differentiation reasonably clear.
openfda_lookup_ndc (read-only, grade A)
Look up drugs in the NDC (National Drug Code) Directory. Identify drug products by NDC code, find active ingredients, packaging details, or manufacturer info.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Number of records to skip for pagination (0-25000, default 0) | |
| sort | No | Sort expression (field:asc or field:desc). Example: listing_expiration_date:desc. Unrecognized fields are silently ignored by the API — results return in default order. | |
| limit | No | Maximum number of records to return (1-1000, default 10) | |
| search | Yes | openFDA search query. Examples: product_ndc:"0363-0218", brand_name:"aspirin", generic_name:"metformin", openfda.manufacturer_name:"walgreen", active_ingredients.name:"ASPIRIN" | |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Response metadata |
| message | No | Guidance when results are empty or search can be refined |
| results | Yes | NDC directory records with product and packaging details |
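Multiple NDC fields can be combined in one `search` expression with openFDA's AND operator. A small sketch using only field names from the table above (the `and_query` helper is illustrative):

```python
def and_query(clauses: dict[str, str]) -> str:
    """Join field:"value" clauses with AND, as openFDA search syntax expects."""
    return " AND ".join(f'{field}:"{value}"' for field, value in clauses.items())

query = and_query({"generic_name": "metformin", "openfda.manufacturer_name": "walgreen"})
# Pass the result as the required `search` argument of openfda_lookup_ndc.
```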
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the types of information retrievable (active ingredients, packaging details, manufacturer info) and the search scope (NDC Directory), which goes beyond the annotation. However, it does not disclose behavioral traits like rate limits, API error handling, or pagination details beyond what the schema covers.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by specific use cases. It uses two concise sentences with zero waste, efficiently covering lookup capabilities without redundancy. Every sentence earns its place by adding clarity on search targets.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, 100% schema coverage, read-only annotation, and output schema), the description is largely complete. It covers the purpose and search scope adequately. However, it could improve by mentioning when to use this tool over siblings or detailing output structure, though the output schema mitigates the latter gap.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters (search, skip, limit, sort). The description adds minimal semantics by mentioning examples of search queries (e.g., product_ndc, brand_name) but does not provide additional meaning beyond the schema's detailed descriptions. Baseline 3 is appropriate as the schema handles parameter documentation effectively.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Look up drugs'), resource ('NDC Directory'), and scope ('Identify drug products by NDC code, find active ingredients, packaging details, or manufacturer info'). It distinguishes itself from siblings like openfda_count or openfda_search_adverse_events by focusing on drug product lookup rather than counting, label retrieval, or adverse event searches.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for drug product lookup in the NDC Directory, but does not explicitly state when to use this tool versus alternatives like openfda_get_drug_label or openfda_search_drug_approvals. It provides context on what can be searched (e.g., NDC code, brand name) but lacks explicit guidance on exclusions or comparative use cases.
openfda_search_adverse_events (read-only, grade A)
Search adverse event reports across drugs, food, and devices. Use to investigate safety signals, find reports for a specific product, or explore reactions by demographics.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Number of records to skip for pagination (0-25000, default 0) | |
| sort | No | Sort expression (field:asc or field:desc). Example: receivedate:desc. Unrecognized fields are silently ignored by the API — results return in default order. | |
| limit | No | Maximum number of records to return (1-1000, default 10) | |
| search | No | Elasticsearch query string. Examples: patient.drug.medicinalproduct:"aspirin", patient.reaction.reactionmeddrapt:"nausea" AND serious:"1". Omit to browse recent. | |
| category | Yes | Product category — each has different field schemas in the response | |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Response metadata |
| message | No | Guidance when results are empty or search can be refined |
| results | Yes | Adverse event records — fields vary by category (drug: patient/reactions/drugs, device: device details/event type, food: products/outcomes) |
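The required `category` parameter selects which openFDA event endpoint is queried. A sketch of the likely mapping, assuming the standard openFDA adverse-event paths (the server's actual routing is not shown in this listing):

```python
# openFDA publishes adverse events under three endpoint paths.
EVENT_ENDPOINTS = {
    "drug": "drug/event",
    "device": "device/event",
    "food": "food/event",
}

def event_endpoint(category: str) -> str:
    """Resolve a product category to its openFDA adverse-event endpoint path."""
    try:
        return EVENT_ENDPOINTS[category]
    except KeyError:
        raise ValueError(f"unknown category: {category!r}") from None
```

The category also determines the result shape, as the output schema notes: drug events carry patient/reaction/drug fields, device events carry device details, food events carry products and outcomes.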
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations include 'readOnlyHint': true, indicating a safe read operation. The description adds value by mentioning the tool's use for 'investigate safety signals' and 'explore reactions by demographics,' which provides context beyond annotations. However, it lacks details on rate limits, authentication needs, or specific behavioral traits like pagination handling (implied by parameters but not described).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, consisting of two sentences that efficiently convey purpose and usage. Every sentence adds value without redundancy, making it easy to understand quickly.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, 1 required), rich schema (100% coverage), annotations (readOnlyHint), and output schema (present), the description is mostly complete. It covers purpose and usage well but could benefit from more behavioral context or sibling differentiation to achieve a perfect score.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters are well-documented in the schema. The description does not add specific parameter details beyond what the schema provides, such as explaining 'category' enums or 'search' query syntax. Thus, it meets the baseline of 3 without compensating for any gaps.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search adverse event reports across drugs, food, and devices.' It specifies the verb ('Search') and resource ('adverse event reports'), and mentions the scope ('across drugs, food, and devices'). However, it does not explicitly differentiate from sibling tools like 'openfda_count' or 'openfda_search_device_clearances', which reduces the score from a 5.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage contexts: 'investigate safety signals, find reports for a specific product, or explore reactions by demographics.' This gives practical scenarios for when to use the tool. It does not explicitly state when not to use it or name alternatives among siblings, so it falls short of a perfect score.
openfda_search_device_clearances (read-only, grade A)
Search FDA device premarket notifications — 510(k) clearances and PMA approvals.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Pagination offset (0-25000). | |
| sort | No | Sort expression (field:asc or field:desc). Example: decision_date:desc. Unrecognized fields are silently ignored by the API — results return in default order. | |
| limit | No | Maximum number of records to return (1-1000). | |
| search | No | openFDA search query. Examples: applicant:"medtronic", advisory_committee_description:"cardiovascular", product_code:"DXN", openfda.device_name:"catheter". Omit to browse recent. | |
| pathway | Yes | Premarket pathway. 510(k) is most common (174K+ records). PMA is for higher-risk devices. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Response metadata |
| message | No | Guidance when results are empty or search can be refined |
| results | Yes | 510(k) clearance or PMA approval records |
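The required `pathway` parameter likely switches between openFDA's two device premarket endpoints. A hedged sketch assuming those standard paths (the helper and its input normalization are illustrative):

```python
def clearance_endpoint(pathway: str) -> str:
    """Map a premarket pathway to its openFDA endpoint path."""
    endpoints = {"510k": "device/510k", "pma": "device/pma"}
    key = pathway.lower().replace("(", "").replace(")", "")  # accept "510(k)" too
    if key not in endpoints:
        raise ValueError(f"pathway must be 510(k) or PMA, got {pathway!r}")
    return endpoints[key]
```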
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, which the description doesn't contradict (searching is a read operation). The description adds value by specifying the resource scope ('device premarket notifications') and the two specific regulatory pathways, which helps the agent understand what data is being accessed beyond just a generic search.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It directly communicates what the tool does in a clear and structured manner.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has annotations (readOnlyHint), a rich input schema with 100% coverage, and an output schema (implied by context signals), the description is reasonably complete. It specifies the resource and pathway types, though it could better differentiate from siblings or provide more behavioral context like result format hints.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description doesn't add any parameter-specific details beyond what's in the schema, such as examples or usage tips for the 'search' parameter. Baseline 3 is appropriate when the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), the resource ('FDA device premarket notifications'), and the specific subtypes ('510(k) clearances and PMA approvals'). It distinguishes this tool from siblings by focusing on device clearances rather than drugs, adverse events, or recalls.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching device clearances, but provides no explicit guidance on when to use this tool versus alternatives like openfda_search_drug_approvals or openfda_search_adverse_events. It mentions the two pathway types but doesn't explain when to choose one over the other beyond the schema's description.
openfda_search_drug_approvals (read-only, grade B)
Search the Drugs@FDA database for drug application approvals (NDAs and ANDAs). Returns application details, sponsor info, and full submission history.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Number of records to skip for pagination (0-25000, default 0) | |
| sort | No | Sort expression (field:asc or field:desc). Example: submissions.submission_status_date:desc. Unrecognized fields are silently ignored by the API — results return in default order. | |
| limit | No | Maximum number of records to return (1-1000, default 10) | |
| search | No | openFDA search query. Examples: openfda.brand_name:"humira", sponsor_name:"pfizer", submissions.submission_type:"ORIG" AND submissions.review_priority:"PRIORITY". Omit to browse recent. | |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Response metadata |
| message | No | Guidance when results are empty or search can be refined |
| results | Yes | Drug application records with submission history |
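Because the API silently ignores unrecognized sort fields and falls back to default order, it can help to build the sort expression from known-good parts rather than hand-typing it. A minimal illustrative sketch (the `sort_expr` helper is hypothetical):

```python
def sort_expr(field: str, descending: bool = True) -> str:
    """Format an openFDA sort expression, e.g. submissions.submission_status_date:desc."""
    return f"{field}:{'desc' if descending else 'asc'}"

sort = sort_expr("submissions.submission_status_date")
```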
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context about what data is returned (application details, sponsor info, full submission history) which goes beyond the readOnlyHint annotation. However, it doesn't mention rate limits, authentication requirements, or other behavioral traits like error handling or response format details.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and return values with zero wasted words. Every element serves a clear purpose in communicating the tool's function.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the readOnlyHint annotation, 100% schema coverage, and presence of an output schema, the description provides adequate context for this search tool. It could be more complete by explaining when to choose this over sibling tools, but covers the essential purpose and return data.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents all 4 parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches the Drugs@FDA database for drug application approvals (NDAs and ANDAs) and returns specific details. It distinguishes the resource (drug approvals) and verb (search), but doesn't explicitly differentiate from siblings like openfda_search_adverse_events beyond the database name.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus sibling tools like openfda_get_drug_label or openfda_search_adverse_events. It doesn't mention prerequisites, alternatives, or exclusions for this specific search functionality.
openfda_search_recalls (read-only, grade B)
Search enforcement reports and recall actions across drugs, food, and devices.
| Name | Required | Description | Default |
|---|---|---|---|
| skip | No | Pagination offset (0-25000). | |
| sort | No | Sort expression (field:asc or field:desc). Example: report_date:desc. Unrecognized fields are silently ignored by the API — results return in default order. | |
| limit | No | Maximum number of records to return (1-1000). | |
| search | No | openFDA search query. Examples: classification:"Class I", recalling_firm:"pfizer", reason_for_recall:"undeclared allergen". | |
| category | Yes | Product category | |
| endpoint | No | Report type. Default enforcement. The recall endpoint is only available for devices. | enforcement |
Output Schema
| Name | Required | Description |
|---|---|---|
| meta | Yes | Response metadata |
| message | No | Guidance when results are empty or search can be refined |
| results | Yes | Enforcement/recall records |
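The `endpoint` default and the device-only restriction noted in the parameter table can be captured in a small guard. An illustrative sketch assuming the standard openFDA paths (e.g. food/enforcement, device/recall); the helper itself is hypothetical:

```python
def recall_endpoint(category: str, endpoint: str = "enforcement") -> str:
    """Resolve category + report type to an openFDA path; 'recall' is device-only."""
    if endpoint not in ("enforcement", "recall"):
        raise ValueError(f"unknown endpoint: {endpoint!r}")
    if endpoint == "recall" and category != "device":
        raise ValueError("the recall endpoint is only available for devices")
    return f"{category}/{endpoint}"
```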
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, indicating a safe read operation. The description adds context about searching 'enforcement reports and recall actions' and the scope ('drugs, food, and devices'), which clarifies the tool's domain. However, it doesn't disclose behavioral traits like rate limits, authentication needs, or error handling beyond what annotations provide. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. It's appropriately sized for a search tool, with every part earning its place by specifying the action, resource, and scope concisely.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters, 1 required), rich annotations (readOnlyHint), and the presence of an output schema, the description is reasonably complete. It covers the basic purpose and scope, and with annotations and schema handling safety and parameters, gaps are minimal. However, it lacks usage guidelines, which slightly reduces completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all parameters well-documented in the input schema (e.g., 'skip' for pagination, 'search' for openFDA queries). The description doesn't add meaning beyond the schema, as it lacks parameter details. With high schema coverage, the baseline score of 3 is appropriate, as the schema carries the burden.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose as 'Search enforcement reports and recall actions across drugs, food, and devices.' This specifies the verb ('search'), resource ('enforcement reports and recall actions'), and scope ('across drugs, food, and devices'). However, it doesn't explicitly differentiate from sibling tools like 'openfda_search_adverse_events' or 'openfda_search_device_clearances' beyond the general domain.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools, specify use cases, or outline prerequisites. The agent must infer usage from the tool name and description alone, which is insufficient for optimal selection.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.