
DeepRecall - Product Safety Intelligence

Server Details

Search 120,000+ recalled products from 8 global safety agencies using AI-powered similarity search.

Status: Healthy
Transport: Streamable HTTP
Repository: adrida/deeprecall-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

2 tools
get_data_sources
Get information about available recall data sources.

Returns a list of all supported regulatory agencies and their coverage.
This is a free call that does not consume API credits.

Returns:
    Dictionary with data sources and their descriptions
Parameters

No parameters
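
As a rough illustration of how a client might call this tool over streamable HTTP, here is a minimal sketch using the official MCP Python SDK (pip install mcp). It is not taken from the repository, and the endpoint URL is a placeholder, since the listing does not show it.

    # Hedged sketch: connect over streamable HTTP and call get_data_sources.
    # SERVER_URL is a hypothetical placeholder, not the real endpoint.
    import asyncio

    from mcp import ClientSession
    from mcp.client.streamable_http import streamablehttp_client

    SERVER_URL = "https://example.com/mcp"

    async def main() -> None:
        async with streamablehttp_client(SERVER_URL) as (read, write, _):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Free call: lists supported agencies and their coverage
                # without consuming API credits.
                result = await session.call_tool("get_data_sources", {})
                print(result.content)

    asyncio.run(main())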

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds context beyond the input schema by stating that it is 'a free call that does not consume API credits,' which clarifies cost and rate-limit implications. It also describes the return format ('Dictionary with data sources and their descriptions'), though it does not cover error handling or authentication requirements. For a tool with zero annotations, this is a useful level of behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: it starts with the core purpose, then details the return value and behavioral traits (free call), all in three concise sentences. Every sentence adds value without repetition or fluff, making it efficient and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no annotations, no output schema), the description is complete enough. It covers purpose, return format, and cost behavior. However, it does not state whether the tool is read-only or safe (though both are implied by 'get' and 'free'), and with no output schema, it could benefit from more detail on the dictionary structure (e.g., key-value examples). Still, it is largely adequate for this simple tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters with 100% coverage, so the description has no missing parameter information to compensate for. It adds no parameter semantics (there are none to add), which is appropriate. The baseline score for a zero-parameter tool is 4, and the description rightly focuses on output and usage without unnecessary parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get information about available recall data sources' and specifies it returns 'a list of all supported regulatory agencies and their coverage.' This is specific (verb+resource) and distinguishes it from the sibling 'search_recalls,' which likely searches actual recall data rather than metadata about sources. It doesn't explicitly contrast with the sibling, but the distinction is implied by the different resources (sources vs. recalls).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating it 'Returns a list of all supported regulatory agencies and their coverage' and notes it's 'a free call that does not consume API credits,' suggesting it's safe for frequent use. However, it lacks explicit guidance on when to use this tool versus the sibling 'search_recalls' (e.g., for metadata lookup vs. actual data retrieval) or any prerequisites. The context is clear but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_recalls
Search for recalled products similar to your query.

This tool searches DeepRecall's global product safety database using AI-powered
multimodal matching. Provide a text description and/or product images to find
similar recalled products.

Use Cases:
- Pre-purchase safety checks: Before buying, verify if similar products were recalled
- Supplier vetting: Check if a supplier's products have safety issues
- Marketplace compliance: Verify products against recall databases
- Consumer protection: Identify potentially hazardous products

Data Sources:
- us_cpsc: US Consumer Product Safety Commission
- us_fda: US Food and Drug Administration
- safety_gate: EU Safety Gate (Europe)
- uk_opss: UK Office for Product Safety & Standards
- canada_recalls: Health Canada Recalls
- oecd: OECD GlobalRecalls portal
- rappel_conso: French Consumer Recalls
- accc_recalls: Australian Competition and Consumer Commission

Cost: 1 API credit per search

Args:
    content_description: Text description of the product (e.g., "children's toy with small parts")
    image_urls: List of product image URLs for visual matching (1-10 images)
    filter_by_data_sources: Limit search to specific agencies (optional)
    top_k: Number of results (1-100, default: 10)
    model_name: Fusion model - fuse_max (recommended), fuse_flex, or fuse
    input_weights: Weights for [text, images], must sum to 1.0
    api_key: Your DeepRecall API key (optional if provided via X-API-Key header)

Returns:
    Search results with matched recalls, scores, and product details

Example:
    search_recalls(
        content_description="baby crib with drop-side rails",
        top_k=5
    )
Parameters

Name                      Required  Default
top_k                     No
api_key                   No
image_urls                No
model_name                No        fuse_max
input_weights             No
content_description       No
filter_by_data_sources    No
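
To make the parameter constraints concrete, here is a hedged sketch of a multimodal search, continuing inside the async main() from the get_data_sources sketch above. Argument names and constraints follow the Args section; the image URL and API key are hypothetical placeholders.

    # Hedged sketch (inside the async main() from the previous example).
    # The image URL and API key are placeholders, not real values.
    result = await session.call_tool(
        "search_recalls",
        {
            "content_description": "magnetic building tiles for toddlers",
            "image_urls": ["https://example.com/product.jpg"],  # 1-10 URLs
            "input_weights": [0.7, 0.3],  # [text, images]; must sum to 1.0
            "filter_by_data_sources": ["us_cpsc", "safety_gate"],
            "top_k": 5,                   # 1-100, default 10
            "model_name": "fuse_max",     # recommended fusion model
            "api_key": "YOUR_API_KEY",    # or send via the X-API-Key header
        },
    )
    # Each search consumes 1 API credit.

Weighting [0.7, 0.3] presumably biases matching toward the text description; whatever split you choose, the two weights must sum to 1.0.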
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure and does well by stating the cost (1 API credit per search), the data sources, and the multimodal nature of the search. It could improve by mentioning rate limits, authentication requirements beyond the API key, or error conditions.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured, with clear sections (overview, use cases, data sources, cost, args, returns, example). There is some redundancy (the API key information is repeated in the args), but overall it is efficient, with each section earning its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a seven-parameter tool with no annotations and no output schema, the description provides comprehensive parameter explanations, use cases, and data sources. It could improve by describing the return format more specifically, since there is no output schema to fall back on.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates excellently by explaining all 7 parameters in the 'Args' section, providing meaning, constraints, and examples. It adds significant value beyond the bare schema, which has no titles or descriptions.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for recalled products using AI-powered multimodal matching, specifying both text and image inputs. It distinguishes itself from the sibling 'get_data_sources' by focusing on search functionality rather than data source retrieval.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit use cases (pre-purchase safety checks, supplier vetting, marketplace compliance, consumer protection) and lists specific data sources. It clearly indicates when this tool is appropriate versus the sibling tool which retrieves data sources.
