Server Details

Art MCP — Metropolitan Museum of Art Collection API (free, no auth)

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: pipeworx-io/mcp-art
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 3.7/5 across 3 of 3 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: get_artwork retrieves details for a specific artwork by ID, get_departments lists all departments, and search_artworks finds artworks by keyword. There is no overlap in functionality, making it easy for an agent to select the correct tool without confusion.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case: get_artwork, get_departments, and search_artworks. The naming is predictable and readable, with no deviations in style or convention.

Tool Count: 3/5

With only 3 tools, the server feels thin for an art collection domain, as it lacks operations like creating, updating, or deleting artworks, which might be expected for a full CRUD lifecycle. However, for a read-only public API, the count is borderline but reasonable.

Completeness: 3/5

The tools cover basic read operations (get and search) and department listing, but there are notable gaps such as no update or delete capabilities, and no tools for managing collections or artists beyond retrieval. This limits agents to querying only, which may cause failures in more complex workflows.

Available Tools

3 tools
get_artwork: A

Get full details for a Metropolitan Museum artwork by its object ID, including title, artist, date, medium, department, and image URL.

Parameters

object_id (required): Met Museum object ID (e.g., 436535)
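The server's header says it wraps the Met's free, no-auth collection API, so the lookup behind this tool is presumably a single GET against the per-object endpoint. A minimal Python sketch of that lookup; the endpoint path and field names are taken from the Met's public API and are an assumption about how this particular server is implemented:

```python
BASE = "https://collectionapi.metmuseum.org/public/collection/v1"

def artwork_url(object_id: int) -> str:
    # Guard against obviously invalid IDs before issuing a request,
    # since the description does not say how the tool handles bad input.
    if object_id <= 0:
        raise ValueError("object_id must be a positive Met Museum object ID")
    return f"{BASE}/objects/{object_id}"

# Fetching this URL (with urllib.request or requests) returns JSON whose
# fields include title, artistDisplayName, objectDate, medium, department,
# and primaryImage, roughly the fields the tool description lists.
print(artwork_url(436535))
```

The explicit validation is one way to close the "invalid ID" gap the Behavior score calls out: failing fast locally is clearer for an agent than surfacing a raw upstream error.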
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool retrieves details (a read operation) and lists specific fields returned (title, artist, etc.), which is useful context. However, it lacks information on error handling (e.g., invalid ID), rate limits, or authentication needs, leaving behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, key parameter, and returned fields without any redundant information, making it front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the basic operation and fields returned, but lacks details on output format (e.g., JSON structure), error cases, or performance considerations, which could aid the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'object_id' parameter with an example. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or validation rules, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get full details') and resource ('Metropolitan Museum artwork'), and distinguishes from sibling tools by specifying it retrieves details for a single artwork by object ID, unlike 'get_departments' (list departments) or 'search_artworks' (search multiple artworks).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying 'by its object ID,' which suggests this tool is for retrieving known artworks, not searching. However, it does not explicitly state when to use alternatives like 'search_artworks' for unknown IDs or 'get_departments' for department-level data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_departments: B

Get the list of all departments in the Metropolitan Museum of Art.

Parameters

No parameters
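Since the description never states the return format (a gap the Behavior and Completeness scores flag), here is a sketch of consuming the response, assuming the server forwards the Met API's /departments payload, which pairs a numeric departmentId with a displayName per entry; the sample data below is illustrative, not a captured response:

```python
# Hypothetical sample mirroring the shape of the Met API's /departments payload.
sample = {
    "departments": [
        {"departmentId": 1, "displayName": "American Decorative Arts"},
        {"departmentId": 6, "displayName": "Asian Art"},
    ]
}

def department_names(payload: dict) -> list[str]:
    # Tolerate a missing or empty key rather than raising on an
    # unexpected payload, since the tool documents no error behavior.
    return [d["displayName"] for d in payload.get("departments", [])]

print(department_names(sample))
```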

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves a list but doesn't mention whether it's read-only, has rate limits, requires authentication, or describes the return format. For a tool with zero annotation coverage, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It's appropriately sized for a simple tool with no parameters, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate but lacks depth. It doesn't explain the return format or behavioral traits, which are important for a tool with no annotations. This leaves gaps in understanding how to use the output effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and the schema description coverage is 100%, so no parameter information is needed. The description appropriately doesn't discuss parameters, earning a high baseline score for not adding unnecessary details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and resource ('list of all departments in the Metropolitan Museum of Art'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_artwork' or 'search_artworks', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_artwork' or 'search_artworks'. It lacks context about use cases, prerequisites, or exclusions, leaving the agent without directional cues for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_artworks: A

Search the Metropolitan Museum of Art collection by keyword. Returns details for up to 5 matching artworks including title, artist, date, medium, and image URL.

Parameters

query (required): Search query (e.g., "sunflowers", "ancient egypt", "monet")
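The description implies a two-step flow: search by keyword, then return details for at most 5 hits. A Python sketch of that flow, assuming the server uses the Met API's /search endpoint (which returns a list of objectIDs) followed by per-object fetches; the cap of 5 comes from the tool description, while the endpoint and flow are assumptions:

```python
from urllib.parse import urlencode

BASE = "https://collectionapi.metmuseum.org/public/collection/v1"
MAX_RESULTS = 5  # the tool description caps results at 5 artworks

def search_url(query: str) -> str:
    # urlencode handles spaces and quotes in free-text queries.
    return f"{BASE}/search?{urlencode({'q': query})}"

def cap_hits(object_ids: list[int]) -> list[int]:
    # /search can return thousands of objectIDs; keep only the first
    # few, then fetch each via the per-object endpoint for details.
    return object_ids[:MAX_RESULTS]

print(search_url("ancient egypt"))
print(cap_hits([101, 102, 103, 104, 105, 106]))
```

Capping before the detail fetches keeps the tool's latency bounded at up to six HTTP requests per call, one search plus at most five lookups, under these assumptions.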
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read operation (implied by 'Search'), returns up to 5 results (limitation), and specifies the return format details (title, artist, date, medium, image URL). However, it doesn't mention rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first sentence states purpose and scope, second sentence specifies return format and limitation. Perfectly front-loaded and appropriately sized for this simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple search tool with 1 parameter, 100% schema coverage, and no output schema, the description is quite complete: it explains what the tool does, what it returns, and result limitations. The main gap is lack of explicit sibling differentiation, but overall it provides sufficient context for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the single parameter 'query' is well-described in schema), so baseline is 3. The description adds no additional parameter semantics beyond what's in the schema, but doesn't need to compensate for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search'), target resource ('Metropolitan Museum of Art collection'), and scope ('by keyword'). It distinguishes from siblings get_artwork (likely retrieves single artwork) and get_departments (likely lists departments) by specifying search functionality with keyword matching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context (searching by keyword) but doesn't explicitly state when to use this tool versus alternatives like get_artwork or get_departments. No guidance on when-not-to-use scenarios or prerequisites is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
