
mcp-techTrend

fda_recalls_recent

Read-only · Idempotent

Retrieves recent FDA medical device recalls via openFDA, optionally filtered by class (1 = most serious, 3 = least). Supports openFDA wildcard queries for partial matches.

Instructions

Recent FDA medical device recalls via openFDA. Optionally filter by class (1=most serious, 3=least). Note: openFDA query syntax uses token-exact matching on string fields — for partial matches use wildcards (e.g. product_description:mammog*).
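As a sketch of that wildcard behavior (the `build_recall_url` helper below is hypothetical, not part of this server), a partial-match query against the device recall endpoint could be assembled like this:

```python
from urllib.parse import quote

# openFDA matches string fields token-exactly, so a partial term needs a
# trailing wildcard: mammog* matches "mammography", "mammogram", etc.
BASE = "https://api.fda.gov/device/recall.json"

def build_recall_url(field: str, term: str, limit: int = 20) -> str:
    # Keep '*' out of percent-encoding so the wildcard survives URL building.
    search = f"{field}:{quote(term, safe='*')}"
    return f"{BASE}?search={search}&limit={limit}"

print(build_recall_url("product_description", "mammog*"))
# → https://api.fda.gov/device/recall.json?search=product_description:mammog*&limit=20
```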

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | No | | |
| days | No | | |
| class_level | No | | |
| max_results | No | | |
| response_format | No | | markdown |
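For concreteness, an arguments payload satisfying the input schema might look like the following (values are illustrative; the field ranges are taken from the validation model documented below):

```python
# Illustrative fda_recalls_recent arguments; every field is optional.
args = {
    "query": "product_description:mammog*",  # openFDA field query; wildcards allowed
    "days": 30,                     # default 90, accepted range 1-365
    "class_level": "1",             # "1"-"3" or "I"-"III"; 1/I is most serious
    "max_results": 10,              # default 20, accepted range 1-100
    "response_format": "markdown",  # default
}
```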

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The async function that implements the fda_recalls_recent tool logic. It queries the openFDA recall API, supports filtering by query, days, and class_level, and returns results formatted per response_format.
    async def fda_recalls_recent(
        query: str | None = None,
        days: int = 90,
        class_level: str | None = None,
        max_results: int = 20,
        response_format: ResponseFormat = ResponseFormat.MARKDOWN,
    ) -> str:
        try:
            args = FDARecallInput(
                query=query,
                days=days,
                class_level=class_level,
                max_results=max_results,
                response_format=response_format,
            )
            end = _utc_now().strftime("%Y%m%d")
            start = (_utc_now() - timedelta(days=args.days)).strftime("%Y%m%d")
            parts = [f"event_date_initiated:[{start}+TO+{end}]"]
            if args.query:
                parts.append(args.query)
            if args.class_level:
                roman = _RECALL_CLASS_TO_ROMAN.get(args.class_level, args.class_level)
                parts.append(f"classification:Class+{roman}")
            search = _build_openfda_search(parts)
            api_key = os.environ.get("OPENFDA_API_KEY")
            url = f"{OPENFDA_RECALL}?search={search}&limit={args.max_results}&sort=event_date_initiated:desc"
            if api_key:
                url += f"&api_key={api_key}"
            data = await _http_get_json(url, ttl=TTL_STATIC)
            items = data.get("results", []) if isinstance(data, dict) else []
            cls_tag = f" · Class {args.class_level}" if args.class_level else ""
            header = f"FDA Recalls — last {args.days} days{cls_tag} ({len(items)} results)"
            return _format(items, args.response_format, render_md=lambda x: _render_recall_md(x, header))
        except httpx.HTTPStatusError as e:
            if e.response.status_code == 404:
                cls_tag = f" · Class {args.class_level}" if args.class_level else ""
                header = f"FDA Recalls — last {args.days} days{cls_tag} (0 results)"
                return _format([], args.response_format, render_md=lambda x: _render_recall_md(x, header))
            return _handle_error(e, "fda_recalls_recent")
        except Exception as e:
            return _handle_error(e, "fda_recalls_recent")
  • Pydantic model FDARecallInput — validates input parameters for fda_recalls_recent: query (optional), days (1-365), class_level (Arabic 1-3 or Roman I-III), max_results (1-100), response_format.
    class FDARecallInput(BaseModel):
        model_config = ConfigDict(str_strip_whitespace=True, extra="forbid")
        query: str | None = Field(None, max_length=300)
        days: int = Field(90, ge=1, le=365)
        # Accept Arabic ("1","2","3") or Roman ("I","II","III") — normalized internally.
        class_level: str | None = Field(None, pattern=r"^(?:[123]|I{1,3})$")
        max_results: int = Field(20, ge=1, le=100)
        response_format: ResponseFormat = ResponseFormat.MARKDOWN
  • The @_maybe_tool decorator registers fda_recalls_recent with FastMCP, gated on the 'fda_recalls' source: the tool is only registered when 'fda_recalls' appears in TRENDS_ENABLED_SOURCES.
    @_maybe_tool(
        source="fda_recalls",
        name="fda_recalls_recent",
        description=(
            "Recent FDA medical device recalls via openFDA. Optionally filter by class "
            "(1=most serious, 3=least). Note: openFDA query syntax uses token-exact "
            "matching on string fields — for partial matches use wildcards "
            "(e.g. `product_description:mammog*`)."
        ),
        annotations={
            "readOnlyHint": True,
            "destructiveHint": False,
            "openWorldHint": True,
            "idempotentHint": True,
        },
    )
  • Lookup dict normalizing class_level input (Arabic '1','2','3' or Roman 'I','II','III') to Roman numeral for querying the openFDA recall API.
    _RECALL_CLASS_TO_ROMAN: dict[str, str] = {
        "1": "I", "2": "II", "3": "III",
        "I": "I", "II": "II", "III": "III",
    }
  • Renders the FDA recall results as markdown. Shows product description, recall number, classification, date, company, status, and reason for recall.
    def _render_recall_md(items: list[dict[str, Any]], header: str) -> str:
        if not items:
            return f"# {header}\n\n_No results_"
        lines = [f"# {header}", f"_{len(items)} results total_", ""]
        for i, r in enumerate(items, 1):
            lines.append(
                f"## {i}. {_trim(r.get('product_description'), 120)}\n"
                f"- Recall no. `{r.get('recall_number', '?')}` · {r.get('classification', '?')} · "
                f"{r.get('event_date_initiated', '?')}\n"
                f"- Firm: {r.get('recalling_firm', '?')} · Status: {r.get('recall_status', '?')}\n"
                f"- Reason: {_trim(r.get('reason_for_recall'), 400)}\n"
            )
        return "\n".join(lines)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only and idempotent behavior. The description adds value by revealing openFDA's token-exact matching and wildcard usage, which is critical for querying. It does not mention the response format, but the output schema covers that.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, no unnecessary words. Efficiently communicates core intent and a key nuance (query syntax).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 5 optional parameters and an output schema, the description covers the main purpose and critical query behavior. Days and max_results are intuitive, and response_format is handled by the schema. Slightly lacking in explicit return value description, but output schema fills that gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description compensates by explaining class_level (1 = most serious, 3 = least) and query wildcards. However, days, max_results, and response_format are not described, though they are somewhat inferable from their defaults.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states tool retrieves recent FDA medical device recalls via openFDA, and mentions optional class filter. This distinguishes it from siblings like fda_510k_recent which deals with 510k clearances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides guidance on filtering by class and explains query syntax with wildcards for partial matches. It does not explicitly say when not to use the tool, but given the distinct sibling tools, the usage context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/salwks/mcp-techTrend'
