Server Details
Art MCP — Metropolitan Museum of Art Collection API (free, no auth)
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-art |
| GitHub Stars | 0 |
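The listing reports Streamable HTTP as the transport but does not publish the endpoint URL. As a rough sketch (assuming the official MCP Python SDK, `pip install mcp`), connecting and listing this server's tools would look like the following; SERVER_URL is a placeholder, not the real endpoint:

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder; the listing omits the URL

async def main() -> None:
    # The streamable HTTP client yields read/write streams plus a
    # session-ID getter; only the streams are needed for a session.
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```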
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 8 of 8 tools scored. Lowest: 2.9/5.
The tools have clear distinctions in their core purposes, such as art-related queries (get_artwork, search_artworks, get_departments) versus general utility (ask_pipeworx, discover_tools) and memory operations (remember, recall, forget). However, there is some overlap between ask_pipeworx and discover_tools, as both help find information or tools, which could cause confusion about when to use each. Additionally, the art tools are well-separated from others, reducing ambiguity within that subset.
Most tools follow a consistent verb_noun pattern, such as get_artwork, search_artworks, get_departments, recall, and forget, which makes them predictable and easy to understand. The main deviation is ask_pipeworx, which uses a verb_noun format but with a proprietary name, and discover_tools, which fits the pattern but stands out slightly due to its broader scope. Overall, the naming is largely consistent with only minor inconsistencies.
With 8 tools, the count is reasonable and well-scoped for a server that combines art museum access with general query and memory functions. It avoids being too thin or overly complex, though the inclusion of both ask_pipeworx and discover_tools might be slightly redundant. The number supports diverse tasks without overwhelming an agent, fitting the server's purpose effectively.
For the art domain, the server provides good coverage with search, retrieval, and department listing, but lacks update or delete operations for artworks, which is acceptable since it's a read-only public API. However, the broader utility tools (ask_pipeworx, discover_tools) and memory functions create a mixed domain, making it harder to assess overall completeness. There are no major gaps for the stated functionalities, but the integration of different domains feels somewhat incomplete.
Available Tools
8 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
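As a hedged sketch, calling ask_pipeworx through an already-initialized ClientSession (see the connection sketch under Server Details) takes only the 'question' argument from the table above; the response shape is unpublished, so this just prints any text blocks:

```python
from mcp import ClientSession

async def ask(session: ClientSession, question: str) -> None:
    # ask_pipeworx takes a single required argument, "question".
    result = await session.call_tool("ask_pipeworx", {"question": question})
    # The output schema is unpublished; print whatever text comes back.
    for block in result.content:
        text = getattr(block, "text", None)
        if text:
            print(text)
```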
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool interprets natural language questions, selects appropriate data sources, and returns results. However, it lacks details on limitations such as rate limits, error handling, or specific data source constraints, which would be helpful for an agent to anticipate potential issues.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and concise, with three sentences that each serve a clear purpose: stating the tool's function, explaining its advantage over alternatives, and providing examples. There is no redundant information, and it is front-loaded with the core functionality, making it easy for an agent to quickly understand the tool's role.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (interpreting natural language to select data sources) and the absence of annotations and output schema, the description does a good job of explaining the tool's behavior and use cases. However, it could be more complete by mentioning potential limitations or the types of data sources available, which would help an agent better assess when to use this tool versus more specific siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the parameter 'question' fully documented in the schema as 'Your question or request in natural language.' The description adds minimal value beyond this, only reinforcing that questions should be in 'plain English' without providing additional syntax or format details. This meets the baseline score of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer'), and mechanism ('Pipeworx picks the right tool, fills the arguments'), distinguishing it from sibling tools like discover_tools or search_artworks by emphasizing natural language input without manual tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It contrasts with alternatives by implying that other tools require browsing or schema knowledge, and includes examples like 'What is the US trade deficit with China?' to illustrate appropriate use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
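For reference, MCP tool calls are JSON-RPC 2.0 `tools/call` requests under the hood; a discover_tools invocation with both parameters would serialize roughly like the dictionary below (the SDK normally builds and sends this for you):

```python
# Approximate wire format of a discover_tools call; the example query
# is taken from the parameter table above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "discover_tools",
        "arguments": {
            "query": "find trade data between countries",
            "limit": 10,  # optional; the server defaults to 20, max 50
        },
    },
}
```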
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior ('search... by describing what you need', 'returns the most relevant tools with names and descriptions'), though it lacks details on potential limitations like rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with two clear sentences that efficiently convey purpose and usage guidelines without unnecessary elaboration or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is largely complete for a search tool, covering purpose and usage well, though it could benefit from more behavioral details like error handling or result format specifics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (query and limit). The description adds minimal value beyond the schema by implying natural language input for the query but doesn't provide additional syntax or format details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search', 'returns') and resource ('Pipeworx tool catalog'), distinguishing it from sibling tools like get_artwork or search_artworks by focusing on tool discovery rather than artwork or department data retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a specific threshold (500+ tools) and context (finding tools for a task), with no misleading or contradictory advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is a deletion operation, implying it's destructive, but doesn't specify whether deletions are permanent, reversible, require specific permissions, or have side effects (e.g., cascading deletions). This is a significant gap for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it immediately scannable and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is inadequate. It doesn't explain what happens after deletion (e.g., confirmation message, error if key doesn't exist), behavioral constraints, or implications. Given the complexity of a deletion operation, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'key' parameter. The description adds no additional semantic context beyond what the schema provides (e.g., what constitutes a valid key, key format, or examples). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'recall' or 'remember', but the verb 'Delete' provides a clear distinction from retrieval operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory to delete), exclusions, or relationships with sibling tools like 'recall' (which likely retrieves memories) or 'remember' (which likely creates memories).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_artwork (Grade: A)
Get full details for a specific artwork. Provide the object ID from search results. Returns title, artist, date, medium, department, and high-resolution image URL.
| Name | Required | Description | Default |
|---|---|---|---|
| object_id | Yes | Met Museum object ID (e.g., 436535) | |
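The fields this tool returns (title, artist, date, medium, department, image URL) line up with the Met's free public collection API, which the server appears to wrap. A direct call against that documented endpoint, for comparison:

```python
import httpx

OBJECT_ID = 436535  # the example ID from the parameter table

# Documented Met endpoint; field names below come from the Met API docs.
resp = httpx.get(
    f"https://collectionapi.metmuseum.org/public/collection/v1/objects/{OBJECT_ID}"
)
resp.raise_for_status()
art = resp.json()
print(art["title"], "-", art["artistDisplayName"])
print(art["objectDate"], "|", art["medium"], "|", art["department"])
print(art["primaryImage"])  # high-resolution image URL
```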
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool retrieves details (a read operation) and lists specific fields returned (title, artist, etc.), which is useful context. However, it lacks information on error handling (e.g., invalid ID), rate limits, or authentication needs, leaving behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose, resource, key parameter, and returned fields without any redundant information, making it front-loaded and appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, no output schema, no annotations), the description is adequate but incomplete. It covers the basic operation and fields returned, but lacks details on output format (e.g., JSON structure), error cases, or performance considerations, which could aid the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'object_id' parameter with an example. The description adds no additional parameter semantics beyond what the schema provides, such as format constraints or validation rules, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details') and resource ('Metropolitan Museum artwork'), and distinguishes from sibling tools by specifying it retrieves details for a single artwork by object ID, unlike 'get_departments' (list departments) or 'search_artworks' (search multiple artworks).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by instructing 'Provide the object ID from search results,' which signals that a search should precede this call. However, it does not explicitly state when to prefer 'search_artworks' for unknown IDs or 'get_departments' for department-level data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_departments (Grade: B)
List all departments in the Met collection (e.g., "Paintings", "Sculpture", "Photographs"). Use department names to filter search_artworks results.
No parameters.
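Assuming the same underlying API, the Met's public departments endpoint returns the id/name pairs this tool lists:

```python
import httpx

resp = httpx.get(
    "https://collectionapi.metmuseum.org/public/collection/v1/departments"
)
resp.raise_for_status()
# Each entry pairs a numeric departmentId with a display name.
for dept in resp.json()["departments"]:
    print(dept["departmentId"], dept["displayName"])
```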
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves a list but doesn't mention whether it's read-only, has rate limits, requires authentication, or describes the return format. For a tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It's appropriately sized for a simple tool with no parameters, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is minimally adequate but lacks depth. It doesn't explain the return format or behavioral traits, which are important for a tool with no annotations. This leaves gaps in understanding how to use the output effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100%, so no parameter information is needed. The description appropriately doesn't discuss parameters, earning a high baseline score for not adding unnecessary details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all departments in the Met collection'), making the purpose unambiguous, and its pointer to search_artworks hints at how the tools relate. It stops short of contrasting itself with 'get_artwork', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Beyond noting that department names can be used to filter search_artworks results, the description offers little guidance on when to choose this tool over alternatives like 'get_artwork'. It lacks prerequisites and exclusions, leaving the agent with only a single directional cue for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions that memories can be retrieved from 'earlier in the session or in previous sessions,' which adds useful context about persistence. However, it doesn't cover potential limitations like error handling, rate limits, or what happens if a key doesn't exist, leaving gaps in behavioral understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core functionality in the first sentence and follows with usage guidance. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does a decent job covering purpose and usage. However, it lacks details on return values (e.g., format of retrieved memories or listed keys) and doesn't address potential errors or constraints, which are important for a tool with session persistence. This leaves some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds value by explaining the semantics of omitting the key: 'omit to list all keys,' which clarifies the dual functionality beyond the schema's technical definition. This extra context justifies a score above the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a previously stored memory by key, or list all stored memories (omit key).' This specifies the verb ('retrieve'/'list') and resource ('memory'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'remember' or 'forget', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Use this to retrieve context you saved earlier in the session or in previous sessions.' It also includes conditional usage instructions: 'omit key' to list all memories, which helps distinguish between retrieval modes. This covers both context and alternatives effectively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
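A sketch exercising the full memory lifecycle (remember, then recall, then forget, all documented above) through an initialized ClientSession; the key and value are arbitrary examples:

```python
from mcp import ClientSession

async def memory_roundtrip(session: ClientSession) -> None:
    # Store a value, read it back, list all keys, then delete it.
    await session.call_tool("remember", {"key": "target_ticker", "value": "AAPL"})
    stored = await session.call_tool("recall", {"key": "target_ticker"})
    print(stored.content)
    listing = await session.call_tool("recall", {})  # omit key to list all keys
    print(listing.content)
    await session.call_tool("forget", {"key": "target_ticker"})
```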
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a storage operation (implying mutation), specifies persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and hints at session scope. However, it doesn't cover error cases or limits on key/value size.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, and subsequent sentences add valuable context without redundancy. Every sentence earns its place by providing essential usage guidelines and behavioral details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (storage with session/persistence nuances), no annotations, and no output schema, the description is largely complete. It covers purpose, usage, and key behavioral aspects like persistence rules. A minor gap is the lack of output details (e.g., confirmation message), but this is acceptable without an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the two parameters (key and value). The description adds no additional parameter-specific semantics beyond what the schema provides, such as examples or constraints not in the schema. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Store') and resource ('key-value pair in your session memory'), and distinguishes it from sibling tools like 'recall' (which likely retrieves) and 'forget' (which likely removes). It explicitly mentions what gets stored and where.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('save intermediate findings, user preferences, or context across tool calls') and includes context about alternatives by distinguishing it from sibling tools like 'recall'. It also specifies usage conditions for authenticated vs. anonymous users.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_artworks (Grade: A)
Search the Met's collection by keyword or department (e.g., "Paintings", "Sculpture"). Returns up to 5 matching artworks with title, artist, date, medium, and image URL.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Search query (e.g., "sunflowers", "ancient egypt", "monet") | |
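The Met's public search endpoint returns matching object IDs; the tool's 5-result cap and field extraction appear to be layered on top of it (an assumption; only the endpoint itself is documented):

```python
import httpx

resp = httpx.get(
    "https://collectionapi.metmuseum.org/public/collection/v1/search",
    params={"q": "sunflowers"},  # example query from the parameter table
)
resp.raise_for_status()
data = resp.json()
ids = data["objectIDs"] or []  # objectIDs is null when nothing matches
print(data["total"], "matches; first five IDs:", ids[:5])
```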
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a read operation (implied by 'Search'), returns up to 5 results (limitation), and specifies the return format details (title, artist, date, medium, image URL). However, it doesn't mention rate limits, authentication needs, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first sentence states purpose and scope, second sentence specifies return format and limitation. Perfectly front-loaded and appropriately sized for this simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple search tool with 1 parameter, 100% schema coverage, and no output schema, the description is quite complete: it explains what the tool does, what it returns, and result limitations. The main gap is lack of explicit sibling differentiation, but overall it provides sufficient context for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the single parameter 'query' is well-described in schema), so baseline is 3. The description adds no additional parameter semantics beyond what's in the schema, but doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Search'), target resource ('the Met's collection'), and scope ('by keyword or department'). It distinguishes itself from siblings get_artwork (retrieves a single artwork) and get_departments (lists departments) by specifying multi-result keyword search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (searching by keyword) but doesn't explicitly state when to use this tool versus alternatives like get_artwork or get_departments. No guidance on when-not-to-use scenarios or prerequisites is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
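Before waiting on Glama's automatic verification, a quick self-check that the claim file is reachable and well-formed can save a round trip; YOUR_DOMAIN is a placeholder:

```python
import httpx

resp = httpx.get("https://YOUR_DOMAIN/.well-known/glama.json")
resp.raise_for_status()
claim = resp.json()
# The structure above requires a maintainers list with an email entry.
assert claim["maintainers"][0]["email"], "missing maintainer email"
print("glama.json looks valid:", claim)
```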
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.