Glama › domdomegg

openfoodfacts-mcp

Search products (standard)

search_products_standard
Read-only

Search food products by keyword, brand, category, or nutrition grade using structured filters. Returns exact result counts with strict AND matching for accurate product lookup.

Instructions

Search Open Food Facts with structured filters. Best for simple keyword queries and brand/category filtering. Returns exact result counts and well-populated products. If you have a barcode, use get_product instead.

How search works: queries are matched with strict AND against a keyword index built from product_name, generic_name, brands, categories, origins, and labels. One unmatched query word → zero results.

Tips:

  • Prefer 2-3 distinctive words over the full product name

  • Put brand names in brands_tags, not the query text

  • Brand normalization is generous: "sainsburys", "sainsbury's", "sainsbury-s" all match

  • For fresh produce, use brands_tags + categories_tags rather than text search

  • sort_by=popularity works well here (not supported in search_products_lucene)

If you get zero results, try dropping words, or use search_products_lucene, which has more flexible text matching.
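The strict-AND query plus tag filters described above can be sketched as a request to the public Open Food Facts API, which this server presumably wraps. The endpoint and query-parameter names below are assumptions modelled on that API, not confirmed by this listing; the snippet only assembles and prints a URL.

```shell
# Sketch: build a hypothetical Open Food Facts search URL from the
# structured filters described above. Endpoint and parameter names are
# assumptions; nothing is sent over the network.
base="https://world.openfoodfacts.org/api/v2/search"

query="porridge oats"              # 2-3 distinctive words; strict AND applies
brands="sainsburys"                # brand normalization makes the exact slug forgiving
categories="en:breakfast-cereals"  # tag slug, not free text

query_enc=$(printf '%s' "$query" | tr ' ' '+')  # minimal URL encoding for spaces
url="${base}?search_terms=${query_enc}&brands_tags=${brands}&categories_tags=${categories}&sort_by=popularity"
echo "$url"
```

Note how the brand goes into brands_tags rather than the query text, per the tips above.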

Input Schema

All parameters are optional.

  • query: Search terms. Strict AND: every word must exist in the product's indexed keywords, so prefer 2-3 distinctive words over the full product name. Use words as they appear on the pack (don't strip plurals or possessives — the search normalizes both sides). Put brand names in brands_tags instead of here.

  • categories_tags: Filter by category tag (e.g. "en:breakfast-cereals", "en:tomatoes"). Best way to find fresh produce: text-searching "banana" matches thousands of banana-flavoured products, but categories_tags "en:bananas" finds actual bananas.

  • brands_tags: Filter by brand. Input is normalized, so "sainsburys", "sainsbury's", "sainsbury-s" all match the same brand — no need to know the exact tag slug. More reliable than putting the brand in the query text.

  • nutrition_grades_tags: Filter by Nutri-Score grade (a, b, c, d, e).

  • sort_by: Sort order.

  • page: Page number (default: 1).

  • page_size: Results per page (default: 24, max: 100).

  • fields: Fields to return per product. Defaults to: code, product_name, brands, categories, nutriscore_grade, nova_group, image_url, quantity.
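Putting the schema together, a hypothetical set of arguments for search_products_standard might look like the following. Only the parameter names come from the schema above; the values, and the comma-separated format for fields, are illustrative assumptions.

```shell
# Hypothetical arguments for search_products_standard. Parameter names are
# from the schema; values and the fields format are assumed for illustration.
args='{
  "query": "wholewheat biscuits",
  "brands_tags": "sainsburys",
  "nutrition_grades_tags": "a",
  "page_size": 10,
  "fields": "code,product_name,nutriscore_grade"
}'

# Sanity-check that the payload is valid JSON before handing it to the tool
echo "$args" | python3 -m json.tool > /dev/null && echo "JSON OK"
```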
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond the readOnlyHint annotation, the description details the strict AND search behavior, the index fields, normalization specifics, and tips for fresh produce. It fully discloses the query mechanics and potential pitfalls, giving the agent a thorough understanding of what to expect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections (purpose, how search works, tips) and bullet points for clarity. While somewhat lengthy, every part adds useful information, and the structure makes it easy to scan. A minor deduction for length, but justified by the need to cover complex behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 8 parameters, no output schema, and multiple siblings, the description is thorough. It explains search mechanics, tips, and sibling differentiation. It could briefly note the pagination structure in the response, but the input schema covers page/page_size, and the focus is on usage optimization. Overall, it provides sufficient context for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All parameters have schema descriptions (100% coverage), so baseline is 3. The description adds significant value by explaining the strict AND behavior for 'query', giving concrete examples for 'categories_tags' (en:bananas vs text search) and 'brands_tags' (normalization examples), and noting default fields. This enriches the schema's meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Search Open Food Facts with structured filters' and distinguishes from sibling tools like get_product (for barcode) and search_products_lucene (for flexible text matching). It provides a specific verb and resource along with use case differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises when to use: 'Best for simple keyword queries and brand/category filtering' and when not: 'If you have a barcode, use get_product instead'. Also includes fallback guidance: if you get zero results, try dropping words or using search_products_lucene. This provides clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/domdomegg/openfoodfacts-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.