Glama
cfpramod

open-museum-mcp

search_artworks

Search multiple open-access museum collections to find reuse-safe artwork records with verified rights. Filter by museum, image availability, and creation date range for precise queries like 'Dutch genre painting 1640–1680'.

Instructions

Search across registered open-access museum collections. Returns artwork records that pass source-specific rights verification (ambiguous records excluded by default). Supports an optional date-range filter for researcher queries like "Dutch genre painting 1640–1680".

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| query | Yes | Free-text query. | |
| museum | No | Museum code. Currently registered: met, cleveland, aic, wikimedia (Commons), europeana (federated European institutions; requires the EUROPEANA_API_KEY env var). | |
| has_image | No | Restrict results to records with an image URL. Note: some museums (e.g. The Met) only expose an images-only search server-side. | true |
| limit | No | | |
| year_min | No | Inclusive lower bound on the artwork's creation year. Negative for BCE (e.g. -500 = 500 BCE). Records with no parseable date are excluded when any year bound is set. | |
| year_max | No | Inclusive upper bound on the artwork's creation year. Negative for BCE. Records with no parseable date are excluded when any year bound is set. | |
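To make the schema concrete, here is a minimal sketch of how an agent might assemble an arguments payload for the researcher query used as the running example. The helper function and its validation are hypothetical; only the field names and the BCE convention come from the schema above.

```python
def build_search_arguments(query, museum=None, has_image=True,
                           limit=None, year_min=None, year_max=None):
    """Assemble a search_artworks arguments dict, omitting unset optionals.

    Field names follow the input schema; the bounds check is an
    illustrative assumption, not documented server behavior.
    """
    if year_min is not None and year_max is not None and year_min > year_max:
        raise ValueError("year_min must not exceed year_max")
    args = {"query": query, "has_image": has_image}
    if museum is not None:
        args["museum"] = museum
    if limit is not None:
        args["limit"] = limit
    if year_min is not None:
        args["year_min"] = year_min  # negative means BCE, e.g. -500 = 500 BCE
    if year_max is not None:
        args["year_max"] = year_max
    return args

# The date-range example from the description
payload = build_search_arguments("Dutch genre painting", museum="met",
                                 year_min=1640, year_max=1680)
```

Note that `has_image` is always emitted here because the schema gives it a default of true; the other optionals are dropped when unset.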
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that ambiguous records are excluded by default, that rights verification is applied, and that records with no parseable date are excluded when year bounds are set. The parameter description for has_image also notes museum-specific behavior (The Met only exposes an images-only search server-side). This is good coverage, but the description could also mention authentication requirements or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
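The exclusion-by-default behavior the description discloses can be pictured roughly as a filter over returned records. The record shape and field names below are hypothetical (the tool publishes no output schema); only the rule itself — ambiguous records dropped unless explicitly requested — comes from the description.

```python
def passes_rights_check(record, include_ambiguous=False):
    """Keep only records whose rights status is verifiably open.

    Mirrors the documented default: ambiguous records are excluded
    unless explicitly requested. "rights_status" is an assumed field.
    """
    status = record.get("rights_status")
    if status == "open":
        return True
    if status == "ambiguous":
        return include_ambiguous
    return False  # restricted or unknown

records = [
    {"id": 1, "rights_status": "open"},
    {"id": 2, "rights_status": "ambiguous"},
    {"id": 3, "rights_status": "restricted"},
]
kept = [r["id"] for r in records if passes_rights_check(r)]
```

Under the default, only the verifiably open record survives; opting in to ambiguous records would also keep the second.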

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with two sentences front-loading the main purpose and then adding a usage example. No unnecessary words or repetition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With six parameters and no output schema, the description covers core functionality and filtering but omits the return format, pagination, and result ordering. It is adequate but not fully complete for a search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is high (83%), so the baseline is 3. The description adds value with a specific date-range usage example ('Dutch genre painting 1640–1680') that illustrates the year_min and year_max parameters. It also provides context about rights verification, though not on a per-parameter basis.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
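The one parameter interaction the schema does document — any year bound excludes records with no parseable date — could be sketched like so. The function below is an illustrative assumption about the server's filtering rule, stated in the schema but not published as code.

```python
def within_year_bounds(year, year_min=None, year_max=None):
    """Apply the documented rule for year_min/year_max.

    Bounds are inclusive and negative years mean BCE. If either bound
    is set, records with no parseable year (year is None) are excluded;
    with no bounds, undated records pass through.
    """
    if year_min is None and year_max is None:
        return True  # no bounds set: undated records are kept
    if year is None:
        return False  # any bound set: undated records are dropped
    if year_min is not None and year < year_min:
        return False
    if year_max is not None and year > year_max:
        return False
    return True

# A work from 500 BCE (-500) against a 600-400 BCE window
in_window = within_year_bounds(-500, year_min=-600, year_max=-400)
```

This is the non-obvious interaction an agent must anticipate: adding a year bound silently shrinks the candidate pool to dated records only.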

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool searches across registered open-access museum collections and returns artwork records with rights verification. It uses a specific verb and resource, distinguishing it from siblings like get_artwork (retrieval by ID) and discover_random (random selection).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides a concrete example of when to use the date-range filter ('researcher queries like Dutch genre painting 1640–1680'), which helps with usage context. However, it does not explicitly state when not to use this tool or contrast it with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
