Jira
Server Details
Jira MCP Pack
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-jira
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 9 of 9 tools scored. Lowest: 2.9/5.
The set mixes general-purpose memory tools (forget, recall, remember) and a meta-tool (ask_pipeworx) with Jira-specific tools, causing confusion. 'ask_pipeworx' overlaps with the entire Jira toolset by claiming to automatically handle tasks, while 'discover_tools' is only useful for large tool catalogs, not this server.
Jira tools follow a jira_verb_noun pattern, but the other tools (ask_pipeworx, discover_tools, forget, recall, remember) use plain verb or verb_noun without prefix, breaking consistency. The mix of snake_case and plain words is a minor inconsistency.
9 tools is within the typical range, but the server includes 5 non-Jira tools (memory, meta) that seem out of scope for a Jira-specific server. This dilutes the focus and suggests unnecessary tools.
The Jira subset covers basic get/list/search for issues and projects, but lacks critical operations like create, update, delete issues, or manage comments, transitions, and users. The additional tools do not address these gaps, making the surface incomplete for Jira management.
Available Tools
9 tools

ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains that Pipeworx picks the right tool and fills arguments, revealing its internal orchestration behavior. With no annotations provided, the description carries full burden and does a good job disclosing that the tool may invoke other tools and return results automatically.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences that front-load the core purpose, then explain the mechanism, and provide examples. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is complete enough. It covers what the tool does, how it works, and provides examples. The only minor gap is that it doesn't mention potential limitations (e.g., scope of questions), but it's not necessary for this simple tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already provides a clear description for the single parameter 'question' (100% coverage). The description adds value by providing examples of valid questions and clarifying that it accepts natural language, but the baseline of 3 is appropriate since the schema already explains the parameter sufficiently.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: it answers questions in plain English by selecting the best data source. The verb 'ask' and resource 'answer' are specific, and the examples ('What is the US trade deficit with China?') distinguish it from sibling tools that are more specialized (e.g., Jira tools).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'No need to browse tools or learn schemas — just describe what you need,' which indicates when to use this tool (as a general-purpose question-answer tool) and implies it should be skipped when a specific tool can be called directly. However, it does not explicitly state when not to use it or name alternatives; the context of sibling tools provides only implicit guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
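Since `limit` has both a default and a hard cap, a client can clamp it before sending. A minimal sketch (the `discover_args` helper is hypothetical; only the `query` and `limit` parameters come from the schema):

```python
# Hypothetical helper: build a discover_tools arguments dict,
# clamping limit to the documented bounds (default 20, max 50).
def discover_args(query: str, limit: int = 20) -> dict:
    """Return an arguments dict for discover_tools with a clamped limit."""
    return {"query": query, "limit": max(1, min(limit, 50))}

args = discover_args("find trade data between countries", limit=200)
print(args["limit"])  # clamped to 50
```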
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains that the tool 'returns the most relevant tools with names and descriptions,' which sets expectations about the response format. Since no annotations are provided, the description carries the full burden of behavioral disclosure. It lacks details on ordering, latency, or whether the query is semantic or keyword-based, but the overall behavior is sufficiently clear for an agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, each adding distinct value: the first states the core purpose, the second explains the output, and the third gives a clear usage directive. No extraneous information. Front-loaded with the action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple search with two parameters, no output schema, no nested objects), the description is complete. It explains the input format, output content, and the strategic context (call first). The return value is described as 'names and descriptions,' which is sufficient for a discovery tool. No output schema is needed because the output is a list of tool definitions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both 'query' and 'limit' are documented in the schema). The description adds value by explaining the format of 'query' with examples ('natural language description of what you want to do (e.g., "analyze housing market trends")') and providing context for 'limit' (default and max values). This goes beyond the schema's bare descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search the Pipeworx tool catalog by describing what you need.' It specifies the action ('search'), the resource ('Pipeworx tool catalog'), and the return value ('most relevant tools with names and descriptions'). It also differentiates from siblings by implying a discovery/filtering role, which is distinct from the other tools like ask_pipeworx, recall, or jira_* tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides a clear when-to-use directive and indicates that it's a preliminary step before selecting a specific tool. No exclusions or alternatives are needed as it is a unique discovery tool among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool deletes a memory, but does not clarify side effects (e.g., whether related data is also affected) or confirm whether deletion is permanent and irreversible.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the core action without extraneous words. It is front-loaded and earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple (one required parameter, no output schema), but the description lacks context about the return value (e.g., success indication), error cases (e.g., key not found), or concurrency implications. Given its simplicity, more completeness is expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already describes the 'key' parameter. The description does not add further meaning beyond what the schema provides, which is acceptable given full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a stored memory by key' uses a specific verb ('Delete') and resource ('stored memory'), making the purpose clear. However, it does not differentiate from sibling tools like 'remember' or 'recall', which could also be related to memory management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'remember' or 'recall'. The description does not mention prerequisites, caveats, or scenarios where deletion is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jira_get_issue (A)
Get full details for a Jira issue by key (e.g., 'PROJ-123'). Returns description, status, assignee, priority, comments, attachments, and linked issues.
| Name | Required | Description | Default |
|---|---|---|---|
| fields | No | Comma-separated field names to include (optional) | |
| issue_key | Yes | Issue key (e.g., "PROJ-123") | |
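For illustration, a standard MCP `tools/call` request for this tool might look like the following sketch (the issue key and field list are example values; the JSON-RPC envelope follows the MCP specification, not anything specific to this server):

```python
import json

# Illustrative MCP tools/call payload for jira_get_issue.
# "PROJ-123" and the fields value are example inputs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "jira_get_issue",
        "arguments": {
            "issue_key": "PROJ-123",
            # Optional: restrict the response to specific fields
            "fields": "summary,status,assignee",
        },
    },
}
print(json.dumps(request, indent=2))
```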
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It states the tool returns full issue details, but does not disclose whether authentication is required, whether rate limits apply, or any side effects. With no annotations, more detail would be helpful.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is a single sentence, concise and front-loaded with the main purpose. Could be slightly more structured but no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read operation with 2 parameters and no output schema, the description is adequate. It states the input and what is returned. However, it could mention that the optional 'fields' parameter allows selecting specific fields to reduce response size.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so both parameters are described in the schema. The description does not add additional meaning beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool gets a single Jira issue by its key, includes an example, and specifies 'Returns full issue details.' This distinguishes it from siblings like jira_search (which searches) and jira_list_projects (which lists projects).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies when to use (when you have an issue key and want full details), but does not explicitly say when not to use or mention alternatives like jira_search for listing issues.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jira_get_project (A)
Get details for a specific Jira project by key (e.g., 'PROJ') or ID. Returns name, description, lead, issue types, and custom fields.
| Name | Required | Description | Default |
|---|---|---|---|
| project_key | Yes | Project key (e.g., "PROJ") or numeric project ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavior. It states it retrieves details, which is a read operation, but does not mention any side effects or permission requirements. Adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, concise and front-loaded with the purpose. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, description does not explain what fields are returned (e.g., name, description, lead). For a simple project get, this might be acceptable, but more detail could help an agent judge if the tool meets its needs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'project_key' described as 'Project key (e.g., "PROJ") or numeric project ID'. Description does not add further meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses clear verb 'get' and specifies resource 'specific Jira project by key or ID'. Distinguishes from sibling 'jira_list_projects' which retrieves multiple projects. Could be more precise about output (e.g., fields returned).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use when a specific project's details are needed, but does not explicitly contrast with jira_search or jira_get_issue. No guidance on when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jira_list_projects (A)
List all accessible Jira projects. Returns project keys, names, descriptions, and types. Use before searching to discover available projects.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the scope ('accessible to the authenticated user') and indicates it's a list operation. With no annotations provided, the description carries the burden of behavioral transparency and does so adequately, though it doesn't mention pagination or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded with the key action and resource. Every word adds value, with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is complete enough. It states what the tool does and its scope. The absence of return value description is acceptable since there is no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and there are zero parameters, so the description needs to add no additional parameter info. A baseline of 4 is appropriate since the schema fully covers the parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') and a clear resource ('all Jira projects') and adds the scope 'accessible to the authenticated user', which clearly distinguishes it from sibling tools like jira_get_project (single project) or jira_search (issues).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool: to list all projects. It doesn't explicitly state when not to use it or mention alternatives, but the context of sibling tools (e.g., jira_get_project for a single project) provides implicit differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
jira_search (A)
Search Jira issues using JQL queries. Returns issue keys, summaries, status, assignee, and priority. Use to find tasks by project, status, assignee, or custom criteria.
| Name | Required | Description | Default |
|---|---|---|---|
| jql | Yes | JQL query (e.g., "project = PROJ AND status = Open ORDER BY created DESC") | |
| fields | No | Comma-separated field names to include (e.g., "summary,status,assignee") | |
| max_results | No | Maximum results to return (default 20, max 100) | |
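Because the `jql` string may embed user-supplied values, a client should quote and escape them rather than interpolating raw text. A hedged sketch (the `jql_quote` helper is hypothetical; it assumes JQL's convention of backslash-escaping quotes inside double-quoted strings):

```python
# Hypothetical helper: quote a user-supplied value for a JQL clause,
# escaping backslashes and embedded double quotes.
def jql_quote(value: str) -> str:
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

# Example: build a query without raw string interpolation
assignee = 'jane "JD" doe'
jql = f"project = PROJ AND assignee = {jql_quote(assignee)} ORDER BY created DESC"
print(jql)
```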
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It states the tool searches and returns results, but does not mention that it is read-only, whether user-supplied values in JQL need escaping (an injection concern), or any side effects. The behavioral disclosure is generic.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded with the purpose. Every word adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description does not specify the exact fields returned or the structure of results. For a search tool, this might be acceptable, but it lacks details on error handling or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so each parameter is described. The description does not add significant meaning beyond the schema; it repeats 'JQL' but does not elaborate on the query language beyond the example in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches Jira issues using JQL and returns key fields. It distinguishes itself from sibling tools like jira_get_issue (single issue) and jira_list_projects (list projects).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching issues via JQL but does not explicitly contrast with other tools or provide when-not-to-use guidance. No alternatives or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It discloses that the tool is read-only (retrieve/list) and specifies that it works across sessions. This is sufficient for a simple memory retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the action and resource. No superfluous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional parameter, no output schema, no nested objects), the description is complete. It covers usage and behavior. Could add a note about the return format, but not essential.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter. The description adds context by explaining that omitting 'key' lists all memories, which is beyond the schema description. This is clear and useful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Retrieve' and the resource 'stored memory', and distinguishes between retrieving by key and listing all memories. It effectively differentiates from sibling tools like 'remember' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool ('retrieve context you saved earlier') and covers the list-all case (omit key). However, it does not explicitly state when not to use it or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
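The three memory tools form a simple store/retrieve/delete lifecycle. As an illustrative sketch of the argument shapes an MCP client would send (the key and value are example data, not anything prescribed by the server):

```python
# Illustrative argument payloads for the memory tool lifecycle.
# "target_ticker" and "AAPL" are example data.
remember_args = {"key": "target_ticker", "value": "AAPL"}  # store
recall_args = {"key": "target_ticker"}                     # retrieve one
recall_all_args: dict = {}                                 # omit key to list all
forget_args = {"key": "target_ticker"}                     # delete

# remember requires both fields; recall's key is optional
assert set(remember_args) == {"key", "value"}
assert "key" not in recall_all_args
```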
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: authenticated users get persistent memory, anonymous sessions last 24 hours. This adds value beyond what schema provides.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences, no waste. It is front-loaded with the core action and then adds usage context and behavioral notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple key-value nature with 2 required parameters, no output schema, and no nested objects, the description provides sufficient context about memory persistence and session types. It could mention that values are overwritten on same key, but overall it is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add additional parameter meaning beyond the schema examples, but it is not necessary given full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, using specific verbs 'store' and 'save'. It distinguishes itself from siblings like 'recall' and 'forget' by specifying the action of saving data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use the tool: to save intermediate findings, user preferences, or context across tool calls. It does not explicitly exclude alternatives, but it does imply usage scenarios that differentiate it from 'forget' and 'recall'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a `/.well-known/glama.json` file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
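Before publishing, you can sanity-check the file locally. A minimal sketch (the checks simply mirror the structure shown above; Glama's server-side verification remains authoritative):

```python
import json

# Parse and check the example /.well-known/glama.json structure
doc = json.loads("""
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
""")

assert doc["$schema"] == "https://glama.ai/mcp/schemas/connector.json"
assert isinstance(doc["maintainers"], list) and doc["maintainers"]
assert all("email" in m for m in doc["maintainers"])
print("glama.json structure looks valid")
```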
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!