DeepRecall - Product Safety Intelligence
Server Details
Search 120,000+ recalled products from 8 global safety agencies using AI similarity.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: adrida/deeprecall-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
2 tools
get_data_sources
Get information about available recall data sources.
Returns a list of all supported regulatory agencies and their coverage.
This is a free call that does not consume API credits.
Returns:
Dictionary with data sources and their descriptions

| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
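Because get_data_sources takes no parameters, a call reduces to a bare MCP `tools/call` request. A minimal sketch of the JSON-RPC 2.0 payload follows; the request `id` is illustrative, and the tool name comes from the listing above:

```python
import json

# MCP tool calls are JSON-RPC 2.0 requests with method "tools/call".
# get_data_sources accepts no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "get_data_sources",
        "arguments": {},
    },
}

payload = json.dumps(request)
print(payload)
```

Since this call consumes no API credits, an agent can safely issue it first to discover which agencies are covered before running paid searches.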
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond the input schema by stating it's 'a free call that does not consume API credits,' which informs about cost and rate limit implications. It also describes the return format ('Dictionary with data sources and their descriptions'), though it doesn't detail error handling or authentication needs. This provides useful behavioral insights for a tool with zero annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: it starts with the core purpose, then details the return value and behavioral traits (free call), all in three concise sentences. Every sentence adds value without repetition or fluff, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no annotations, no output schema), the description is complete enough. It covers purpose, return format, and cost behavior. However, it doesn't specify if the tool is read-only or safe (though implied by 'get' and 'free'), and with no output schema, it could benefit from more detail on the dictionary structure (e.g., key-value examples). Still, it's largely adequate for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so the description doesn't need to compensate for missing param info. The description adds no parameter semantics (as there are none), which is appropriate. Baseline for 0 params is 4, as the description focuses on output and usage without unnecessary param details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get information about available recall data sources' and specifies it returns 'a list of all supported regulatory agencies and their coverage.' This is specific (verb+resource) and distinguishes it from the sibling 'search_recalls,' which likely searches actual recall data rather than metadata about sources. It doesn't explicitly contrast with the sibling, but the distinction is implied by the different resources (sources vs. recalls).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by stating it 'Returns a list of all supported regulatory agencies and their coverage' and notes it's 'a free call that does not consume API credits,' suggesting it's safe for frequent use. However, it lacks explicit guidance on when to use this tool versus the sibling 'search_recalls' (e.g., for metadata lookup vs. actual data retrieval) or any prerequisites. The context is clear but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_recalls
Search for recalled products similar to your query.
This tool searches DeepRecall's global product safety database using AI-powered
multimodal matching. Provide a text description and/or product images to find
similar recalled products.
Use Cases:
- Pre-purchase safety checks: Before buying, verify if similar products were recalled
- Supplier vetting: Check if a supplier's products have safety issues
- Marketplace compliance: Verify products against recall databases
- Consumer protection: Identify potentially hazardous products
Data Sources:
- us_cpsc: US Consumer Product Safety Commission
- us_fda: US Food and Drug Administration
- safety_gate: EU Safety Gate (Europe)
- uk_opss: UK Office for Product Safety & Standards
- canada_recalls: Health Canada Recalls
- oecd: OECD GlobalRecalls portal
- rappel_conso: French Consumer Recalls
- accc_recalls: Australian Competition and Consumer Commission
Cost: 1 API credit per search
Args:
content_description: Text description of the product (e.g., "children's toy with small parts")
image_urls: List of product image URLs for visual matching (1-10 images)
filter_by_data_sources: Limit search to specific agencies (optional)
top_k: Number of results (1-100, default: 10)
model_name: Fusion model - fuse_max (recommended), fuse_flex, or fuse
input_weights: Weights for [text, images], must sum to 1.0
api_key: Your DeepRecall API key (optional if provided via X-API-Key header)
Returns:
Search results with matched recalls, scores, and product details
Example:
search_recalls(
content_description="baby crib with drop-side rails",
top_k=5
)

| Name | Required | Description | Default |
|---|---|---|---|
| top_k | No | | |
| api_key | No | | |
| image_urls | No | | |
| model_name | No | | fuse_max |
| input_weights | No | | |
| content_description | No | | |
| filter_by_data_sources | No | | |
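The argument constraints documented above (top_k in 1-100, 1-10 image URLs, input_weights summing to 1.0) can be checked client-side before spending an API credit. A small sketch, assuming the parameter semantics from the Args section; the helper function is hypothetical, not part of the server:

```python
import json
import math

def build_search_recalls_args(content_description=None, image_urls=None,
                              top_k=10, model_name="fuse_max",
                              input_weights=None):
    """Validate search_recalls arguments against the documented constraints."""
    if content_description is None and not image_urls:
        raise ValueError("provide a text description and/or product images")
    if not 1 <= top_k <= 100:
        raise ValueError("top_k must be between 1 and 100")
    if image_urls is not None and not 1 <= len(image_urls) <= 10:
        raise ValueError("image_urls accepts 1-10 image URLs")
    if input_weights is not None and not math.isclose(sum(input_weights), 1.0):
        raise ValueError("input_weights for [text, images] must sum to 1.0")
    args = {"top_k": top_k, "model_name": model_name}
    if content_description is not None:
        args["content_description"] = content_description
    if image_urls is not None:
        args["image_urls"] = image_urls
    if input_weights is not None:
        args["input_weights"] = input_weights
    return args

# Mirrors the example above: text-only search returning 5 results.
request = {
    "jsonrpc": "2.0",
    "id": 2,  # illustrative request id
    "method": "tools/call",
    "params": {
        "name": "search_recalls",
        "arguments": build_search_recalls_args(
            content_description="baby crib with drop-side rails", top_k=5),
    },
}
print(json.dumps(request))
```

Failing fast on invalid weights or an out-of-range top_k avoids wasting the 1-credit cost of a rejected search.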
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing cost (1 API credit per search), data sources, and the multimodal nature of the search. It could improve by mentioning rate limits, authentication requirements beyond the API key, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (description, use cases, data sources, cost, args, returns, example). Some redundancy exists (repeating API key info in args), but overall efficient with each section earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 7-parameter tool with no annotations and no output schema, the description provides comprehensive parameter explanations, use cases, and data sources. It could improve by describing the return format more specifically since there's no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates excellently by explaining all 7 parameters in the 'Args' section, providing meaning, constraints, and examples. It adds significant value beyond the bare schema with no titles or descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches for recalled products using AI-powered multimodal matching, specifying both text and image inputs. It distinguishes itself from the sibling 'get_data_sources' by focusing on search functionality rather than data source retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases (pre-purchase safety checks, supplier vetting, marketplace compliance, consumer protection) and lists specific data sources. It clearly indicates when this tool is appropriate versus the sibling tool which retrieves data sources.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
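The claim file can be generated and sanity-checked before deployment. A minimal sketch; the email is a placeholder and must be replaced with the address tied to your Glama account:

```python
import json
from pathlib import Path

# Build the claim file described above; the email is a placeholder.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

# The file must ultimately be served at /.well-known/glama.json
# on your server's domain; here we just write it locally.
out = Path(".well-known")
out.mkdir(exist_ok=True)
(out / "glama.json").write_text(json.dumps(claim, indent=2))

# Sanity check: the written file round-trips and lists a maintainer email.
loaded = json.loads((out / "glama.json").read_text())
assert loaded["maintainers"][0]["email"]
```

After deploying the file, verification is automatic; no manual submission step is described beyond publishing it at the well-known path.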
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.