Server Details
Art MCP — Metropolitan Museum of Art Collection API (free, no auth)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-art |
| GitHub Stars | 0 |
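Because the transport is Streamable HTTP and the collection API requires no auth, any standard MCP client should be able to connect directly. Below is a minimal sketch using the official `mcp` Python SDK; the endpoint URL is a placeholder, since the listing above leaves the URL blank.

```python
# Minimal connection sketch using the official `mcp` Python SDK (pip install mcp).
# The endpoint URL is a placeholder -- substitute the server's real URL.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main() -> None:
    async with streamablehttp_client("https://example.com/mcp") as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect the 3 tools below

            # Call one of the listed tools with its documented argument.
            result = await session.call_tool("search_artworks", {"query": "sunflowers"})
            print(result.content)


asyncio.run(main())
```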
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 3.7/5 across all 3 tools.
Each tool has a clearly distinct purpose: get_artwork retrieves details for a specific artwork by ID, get_departments lists all departments, and search_artworks finds artworks by keyword. There is no overlap in functionality, making it easy for an agent to select the correct tool without confusion.
All tool names follow a consistent verb_noun pattern using snake_case: get_artwork, get_departments, and search_artworks. The naming is predictable and readable, with no deviations in style or convention.
With only 3 tools, the server feels thin for an art collection domain, as it lacks operations like creating, updating, or deleting artworks, which might be expected for a full CRUD lifecycle. However, for a read-only public API, the count is borderline but reasonable.
The tools cover basic read operations (get and search) and department listing, but there are notable gaps such as no update or delete capabilities, and no tools for managing collections or artists beyond retrieval. This limits agents to querying only, which may cause failures in more complex workflows.
Available Tools
3 tools

get_artwork (Grade A)
Get full details for a Metropolitan Museum artwork by its object ID, including title, artist, date, medium, department, and image URL.
| Name | Required | Description | Default |
|---|---|---|---|
| object_id | Yes | Met Museum object ID (e.g., 436535) | — |
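The server description (L3) says it exposes the Met's free, no-auth Collection API, so the equivalent direct call is easy to sketch. The endpoint and field names below follow the public Met API documentation; the MCP tool's exact response shape may differ.

```python
import requests

MET_API = "https://collectionapi.metmuseum.org/public/collection/v1"


def get_artwork(object_id: int) -> dict:
    """Fetch full details for one artwork by its Met object ID."""
    resp = requests.get(f"{MET_API}/objects/{object_id}", timeout=10)
    resp.raise_for_status()  # an unknown object ID typically returns a 404
    return resp.json()


art = get_artwork(436535)
print(art["title"], "-", art["artistDisplayName"])
```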
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool retrieves details (a read operation) and lists specific fields returned (title, artist, etc.), which is useful context. However, it lacks information on error handling (e.g., invalid ID), rate limits, or authentication needs, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, key parameter, and returned fields without any redundant information, making it front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the basic operation and fields returned, but lacks details on output format (e.g., JSON structure), error cases, or performance considerations, which could aid the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'object_id' parameter with an example. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or validation rules, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details') and resource ('Metropolitan Museum artwork'), and distinguishes from sibling tools by specifying it retrieves details for a single artwork by object ID, unlike 'get_departments' (list departments) or 'search_artworks' (search multiple artworks).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'by its object ID,' which suggests this tool is for retrieving known artworks, not searching. However, it does not explicitly state when to use alternatives like 'search_artworks' for unknown IDs or 'get_departments' for department-level data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_departments (Grade B)
Get the list of all departments in the Metropolitan Museum of Art.
No parameters.
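Under the same assumption that the server wraps the public Met Collection API, the equivalent direct call also takes no arguments and returns a `departments` array:

```python
import requests

resp = requests.get(
    "https://collectionapi.metmuseum.org/public/collection/v1/departments",
    timeout=10,
)
resp.raise_for_status()
for dept in resp.json()["departments"]:
    print(dept["departmentId"], dept["displayName"])
```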
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves a list but doesn't mention whether it's read-only, has rate limits, requires authentication, or describes the return format. For a tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It's appropriately sized for a simple tool with no parameters, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate but lacks depth. It doesn't explain the return format or behavioral traits, which are important for a tool with no annotations. This leaves gaps in understanding how to use the output effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100%, so no parameter information is needed. The description appropriately doesn't discuss parameters, earning a high baseline score for not adding unnecessary details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('list of all departments in the Metropolitan Museum of Art'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_artwork' or 'search_artworks', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_artwork' or 'search_artworks'. It lacks context about use cases, prerequisites, or exclusions, leaving the agent without directional cues for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_artworks (Grade A)
Search the Metropolitan Museum of Art collection by keyword. Returns details for up to 5 matching artworks including title, artist, date, medium, and image URL.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (e.g., "sunflowers", "ancient egypt", "monet") | — |
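The "up to 5 matching artworks" cap likely reflects how the underlying Met API works: the search endpoint returns only object IDs, and each ID requires a separate details call. A hedged sketch of that two-step pattern, assuming the public Met API endpoints:

```python
import requests

MET_API = "https://collectionapi.metmuseum.org/public/collection/v1"


def search_artworks(query: str, limit: int = 5) -> list[dict]:
    """Keyword search, then hydrate the first `limit` hits with full details."""
    hits = requests.get(f"{MET_API}/search", params={"q": query}, timeout=10).json()
    results = []
    for object_id in (hits.get("objectIDs") or [])[:limit]:
        obj = requests.get(f"{MET_API}/objects/{object_id}", timeout=10).json()
        results.append({
            "title": obj.get("title"),
            "artist": obj.get("artistDisplayName"),
            "date": obj.get("objectDate"),
            "medium": obj.get("medium"),
            "image": obj.get("primaryImage"),
        })
    return results


for hit in search_artworks("sunflowers"):
    print(hit["title"])
```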
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it's a read operation (implied by 'Search'), returns up to 5 results (a stated limitation), and specifies the returned fields (title, artist, date, medium, image URL). However, it doesn't mention rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states purpose and scope, the second specifies the return format and result limit. Perfectly front-loaded and appropriately sized for this simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with 1 parameter, 100% schema coverage, and no output schema, the description is quite complete: it explains what the tool does, what it returns, and result limitations. The main gap is lack of explicit sibling differentiation, but overall it provides sufficient context for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the single parameter 'query' is well described in the schema), so the baseline score is 3. The description adds no parameter semantics beyond what's in the schema, but it doesn't need to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), target resource ('Metropolitan Museum of Art collection'), and scope ('by keyword'). It distinguishes from siblings get_artwork (likely retrieves single artwork) and get_departments (likely lists departments) by specifying search functionality with keyword matching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (searching by keyword) but doesn't explicitly state when to use this tool versus alternatives like get_artwork or get_departments. No guidance on when-not-to-use scenarios or prerequisites is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
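Before waiting on Glama's automatic verification, you can sanity-check that the file is served correctly; the domain below is hypothetical, so replace it with the host that serves your MCP server.

```python
import json
import urllib.request

# Hypothetical domain -- replace with your server's actual host.
url = "https://your-domain.example/.well-known/glama.json"
with urllib.request.urlopen(url, timeout=10) as resp:
    manifest = json.load(resp)

print(manifest["$schema"])
print([m["email"] for m in manifest["maintainers"]])  # must match your Glama account
```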
Claiming this server lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.