changeoracle
Server Details
ChangeOracle - 10 change management tools: CAB workflow, risk, impact, post-implementation.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | ToolOracle/changeoracle |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.7/5 across 10 of 10 tools scored. Lowest: 1.8/5.
Each tool has a clearly distinct purpose, covering different aspects of change management such as risk assessment, compliance, rollback, testing, and health checks. No two tools overlap in functionality.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., approval_check, register_change, list_changes). This makes the tool set predictable and easy to navigate for both agents and humans.
With 10 tools, the server is well-scoped for the change management domain. Each tool addresses a specific operation without being excessive or insufficient, covering registration, analysis, compliance, and reporting.
The tool set covers the core lifecycle of change management: registration, listing, risk assessment, approval, rollback planning, test evidence, and compliance. A minor gap is the lack of a tool to update an existing change, but this is often handled via re-registration or external workflows.
Available Tools
10 tools

approval_check (Grade: C)
Verify approval chain, SoD compliance, and prerequisites.
| Name | Required | Description | Default |
|---|---|---|---|
| change_id | Yes | | |
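Since the schema documents nothing beyond a required change_id, a first call is easiest to picture concretely. Below is a minimal sketch using the official MCP TypeScript SDK over the Streamable HTTP transport the listing declares; the endpoint URL and the change-ID format are placeholders, since the listing publishes neither.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// The listing declares Streamable HTTP but publishes no URL, so this endpoint
// is a placeholder.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "changeoracle-demo", version: "0.1.0" });
await client.connect(transport);

// The schema only requires change_id; "CHG-2024-001" is a hypothetical ID
// format, since the server does not document one.
const result = await client.callTool({
  name: "approval_check",
  arguments: { change_id: "CHG-2024-001" },
});
console.log(result.content); // response shape is undocumented (no output schema)
```

Because no output schema is published, an agent has to inspect `result.content` at runtime to learn what a verified approval chain actually looks like.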
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The verb 'verify' implies a read-only operation, but no annotations exist to confirm safety (e.g., readOnlyHint). The description does not disclose behavioral traits like authentication requirements, side effects, or error states.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, concise and front-loaded with key verbs. However, given the tool's complexity and sibling overlap, it is too terse, missing critical details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema or parameter descriptions, and with a sibling tool (sod_check) that overlaps, the description is incomplete. It fails to explain return values, how it differs from sod_check, or what 'prerequisites' entails.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage and the tool description does not explain the 'change_id' parameter. No additional meaning is provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool verifies approval chain, SoD compliance, and prerequisites, distinguishing it from the sibling sod_check which likely focuses only on SoD. However, 'prerequisites' is vague, slightly reducing clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool vs alternatives like sod_check or change_risk_assess. The description lacks usage context, exclusions, or prerequisites for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
change_risk_assess (Grade: C)
Automated risk assessment based on type, scope, and controls.
| Name | Required | Description | Default |
|---|---|---|---|
| change_id | Yes | | |
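Under the same assumptions as the approval_check sketch above (a connected `client`, a hypothetical change ID), an invocation would look like the following; note that only change_id is sent, despite the description's mention of type, scope, and controls.

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// Only change_id is accepted; the type, scope, and controls named in the
// description are presumably resolved server-side from the change record.
const assessment = await client.callTool({
  name: "change_risk_assess",
  arguments: { change_id: "CHG-2024-001" }, // hypothetical ID format
});
console.log(assessment.content);
```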
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description does not disclose side effects, permissions needed, rate limits, or what the risk assessment entails. Behavioral transparency is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, not verbose, but overly vague. Conciseness is acceptable but at the expense of informativeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without output schema or behavioral details, the description is incomplete for an agent to correctly use the tool. Does not explain what the risk assessment returns or how to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%; the description does not explain the change_id parameter. It mentions 'type, scope, and controls', which are not in the schema, adding confusion rather than clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Automated risk assessment based on type, scope, and controls' but the only parameter is change_id, not type/scope/controls. Purpose is somewhat clear but misleading because the mentioned aspects are not parameters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like approval_check or change_stats. No when-not-to-use or alternatives provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
change_stats (Grade: B)
Dashboard: volume, risk distribution, test/rollback coverage, emergency ratio.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
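For a zero-parameter tool like this one, a hedged sketch (reusing the connected `client` from the approval_check example) is just a bare call with an empty arguments object.

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// change_stats takes no inputs, so pass an empty arguments object.
const stats = await client.callTool({ name: "change_stats", arguments: {} });
// The aggregation period is undocumented; inspect the payload to find out.
console.log(stats.content);
```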
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must carry full behavioral disclosure. It states that it returns dashboard metrics but does not mention that it is read-only, nor does it note permissions, data freshness, or side effects. The name 'stats' hints at safety, but the description does not explicitly confirm it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is one sentence and concise, listing key metrics. However, it could be structured for easier parsing (e.g., bullet points). It is appropriately sized but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description explains the tool returns a dashboard with volume, risk distribution, test/rollback coverage, and emergency ratio. Without output schema, it provides adequate but incomplete context (e.g., aggregation period not specified).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters (100% coverage), so description need not add parameter details. It lists the metrics returned, adding value beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Dashboard' and lists metrics (volume, risk distribution, etc.), clearly indicating it provides an overview of change statistics. It distinguishes from sibling tools like list_changes (individual changes) and change_risk_assess (focused assessment). However, it lacks an explicit verb like 'retrieve' or 'get'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use for high-level stats, but offers no explicit guidance on when to use this tool versus siblings. No exclusions or alternatives mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (Grade: D)
Server status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavior. 'Server status.' does not convey whether the operation is read-only, requires authentication, or what side effects (if any) occur.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While only two words, the description is too terse to be useful. It appears under-specified rather than intentionally concise, missing opportunities to add value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (no params, no output schema) and the presence of siblings, the description fails to position the tool's role in the broader context (e.g., as a preliminary check before changes).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the input schema is trivially complete. The description adds no meaning beyond the schema, but the baseline score for a zero-parameter tool is 4. It does not mislead.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Server status.' adds minimal information beyond the tool name 'health_check'. It does not specify what type of status is checked (e.g., uptime, latency) or what response to expect, which is vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidelines are provided about when to use this tool versus siblings like 'approval_check' or 'change_risk_assess'. There is no context for prerequisites or typical usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_changes (Grade: C)
List changes with optional filters.
| Name | Required | Description | Default |
|---|---|---|---|
| search | No | ||
| status | No | ||
| risk_level | No | ||
| change_type | No |
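A sketch of a filtered call, again reusing the connected `client` from the approval_check example; every filter value shown is a guess, since the schema declares enums without documenting their members.

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// The schema declares enums for these filters but documents no members, so
// every value below is hypothetical.
const changes = await client.callTool({
  name: "list_changes",
  arguments: {
    search: "database",
    status: "pending",  // hypothetical enum value
    risk_level: "high", // hypothetical enum value
  },
});
console.log(changes.content); // pagination and return format are undocumented
```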
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only says 'list changes', which implies a read operation, but does not explicitly state read-only, safety, side effects, auth requirements, or rate limits. This is insufficient for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (4 words), which makes it easy to parse. However, it is too terse to be informative. A better balance between conciseness and content would be achieved by adding a few more words of context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 filter parameters with enums) and lack of output schema and annotations, the description is incomplete. It does not explain return format, pagination, default behavior, or the meaning of filters. The agent lacks sufficient context to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 4 parameters with 0% description coverage, meaning no parameter descriptions exist in the schema. The description adds only 'optional filters', which does not explain the purpose or effect of each parameter (search, status, risk_level, change_type), so it adds minimal meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and the resource 'changes', and mentions optional filters. However, it lacks specificity about what a 'change' is (e.g., change request, record), which could cause ambiguity. The name and description distinguish it from sibling tools that perform different actions like approval_check or change_stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or context for selecting this tool over siblings like change_stats (which might also list data).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
patch_compliance (Grade: B)
Patch lag analysis — overdue patches against scheduled deadlines.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It says 'analysis' but does not disclose whether the tool is read-only or has side effects. The behavior is ambiguous, and the agent cannot determine if it is safe to invoke without further context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no unnecessary words. It is concise and front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has no output schema, so the description should explain what the tool returns. It does not mention return values or format. For a simple analysis tool with no inputs, this is a gap, but the complexity is low.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline score is 4. The description adds no parameter details, but none are needed since no parameters exist. Schema coverage is 100% with an empty object.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's for patch lag analysis, comparing overdue patches against deadlines. The verb 'analysis' and resource 'patch compliance' are specific. However, it does not explicitly differentiate from sibling tools, though the siblings are distinct enough that confusion is unlikely.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like approval_check or change_risk_assess. The context is implied by the name, but the description offers no explicit when or when-not instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_change (Grade: C)
Register an ICT change request with full DORA traceability.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | ||
| title | Yes | ||
| tester | No | ||
| approver | No | ||
| patch_id | No | ||
| priority | No | ||
| change_id | No | ||
| requestor | No | ||
| test_plan | No | ||
| risk_level | No | ||
| change_type | No | ||
| description | No | ||
| implementer | No | ||
| cve_reference | No | ||
| justification | No | ||
| rollback_plan | No | ||
| scheduled_date | No | ||
| affected_systems | No | ||
| affected_services | No | ||
| implementation_plan | No |
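Given that only `title` is required, a minimal registration might look like the sketch below (connected `client` as in the approval_check example); every field value is a hypothetical illustration, and the date format is assumed to be ISO 8601.

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// Only `title` is required; every value here is a hypothetical illustration.
const registration = await client.callTool({
  name: "register_change",
  arguments: {
    title: "Upgrade payment gateway TLS certificates",
    change_type: "standard",      // hypothetical enum value
    risk_level: "medium",         // hypothetical enum value
    requestor: "j.doe",
    scheduled_date: "2025-07-01", // date format undocumented; ISO 8601 assumed
    rollback_plan: "Reinstall the previous certificate bundle from backup",
    affected_systems: "payment-gw-01", // could be a list; the schema does not say
  },
});
// The return value (e.g., an assigned change_id) is undocumented.
console.log(registration.content);
```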
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It only says 'Register', implying creation, but lacks details on side effects (e.g., triggering approvals, notifications), required permissions, or any irreversible actions. This is insufficient for a mutation tool with 20 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence that conveys the core purpose efficiently. However, it is arguably too terse given the tool's complexity, missing opportunities to provide further clarity without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's high complexity (20 parameters, no output schema, no annotations), the description is severely incomplete. It does not explain what happens upon successful registration, what the return value is, or any constraints (e.g., required fields beyond title). The agent has insufficient information to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning the schema provides no explanations for the 20 parameters. The description does not compensate by explaining any parameter's meaning, role, or format. Parameter names like 'cve_reference' or 'rollback_plan' are somewhat self-explanatory, but the tool fails to add value beyond the raw names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Register') and the resource ('an ICT change request') with a specific quality ('full DORA traceability'). It distinguishes itself from siblings like 'approval_check' or 'change_risk_assess', which handle different aspects of change management.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, exclusions, or typical use cases. The description only states what the tool does, not when to choose it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rollback_plan (Grade: C)
Validate rollback plan readiness for a change.
| Name | Required | Description | Default |
|---|---|---|---|
| change_id | Yes | | |
| rollback_tested | No | | |
| backup_confirmed | No | | |
| communication_plan | No | | |
| estimated_rollback_minutes | No | | |
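A plausible readiness check, with parameter types inferred purely from their names (connected `client` as in the approval_check example):

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// Parameter types are inferred from the parameter names; the schema documents
// none of them.
const readiness = await client.callTool({
  name: "rollback_plan",
  arguments: {
    change_id: "CHG-2024-001", // hypothetical ID format
    rollback_tested: true,
    backup_confirmed: true,
    estimated_rollback_minutes: 30,
  },
});
console.log(readiness.content);
```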
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description does not disclose behavioral traits like side effects or whether the tool is read-only. Given no annotations, it should clarify if validation modifies state or triggers actions, which it fails to do.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no unnecessary words. However, it might be too brief for the complexity of 5 parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 5 parameters, no output schema, and no annotations, the description is severely incomplete. It does not address return values, prerequisites, or how to interpret results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description does not explain any parameter. For 5 parameters, the description must add meaning but provides none, leaving agents to infer parameter purposes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'validate' and the resource 'rollback plan readiness for a change', which is specific and distinct from sibling tools like 'approval_check' or 'test_evidence'. However, it could be more explicit about what 'readiness' entails.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, such as 'approval_check' or 'test_evidence'. The description lacks context for appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sod_check (Grade: A)
Detect Segregation of Duties violations across all changes.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
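Because the scan is fleet-wide rather than per-change, the call mirrors change_stats: no arguments at all (connected `client` as in the approval_check example).

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// Unlike approval_check, which inspects one change, sod_check scans all
// changes and therefore takes no arguments.
const violations = await client.callTool({ name: "sod_check", arguments: {} });
console.log(violations.content);
```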
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose whether the operation is read-only, triggers side effects, or how results are returned, though the purpose is clear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, direct sentence with no wasted words. Front-loaded with the key action and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is minimal and lacks details about what constitutes a violation, output format, or prerequisites. However, for a simple detection tool with no parameters, it adequately conveys the core purpose.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so the description cannot add meaning beyond the schema. The baseline score for a zero-parameter tool is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'detect' and the resource 'Segregation of Duties violations' with scope 'across all changes'. This distinguishes it from sibling tools like approval_check and change_risk_assess.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for detecting SoD violations but does not explicitly state when to use this tool versus alternatives or provide exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test_evidence (Grade: D)
Record or view test execution evidence for a change.
| Name | Required | Description | Default |
|---|---|---|---|
| add | No | ||
| notes | No | ||
| result | No | ||
| tester | No | ||
| change_id | Yes | ||
| test_date | No | ||
| test_type | No | ||
| environment | No |
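A guess at a "record" invocation (connected `client` as in the approval_check example); whether `add: true` actually switches the tool into record mode is an inference from the parameter name, not documented behavior.

```typescript
// `client` is the connected MCP Client from the approval_check sketch above.
// Whether `add: true` switches the tool into record mode is inferred from the
// parameter name, not documented behavior.
const evidence = await client.callTool({
  name: "test_evidence",
  arguments: {
    change_id: "CHG-2024-001", // hypothetical ID format
    add: true,
    result: "pass",          // schema enum: 'pass' | 'fail' | 'partial'
    test_type: "regression", // hypothetical enum value
    environment: "staging",  // hypothetical enum value
    tester: "q.analyst",
  },
});
console.log(evidence.content);
```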
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It only says 'Record or view', but does not explain side effects (e.g., does recording overwrite?), required permissions, or any constraints. The agent gets no insight into what happens beyond the action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is short (one sentence), but conciseness should not sacrifice utility. It lacks structure and essential details. A better description would add parameter hints while remaining efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 8 parameters (1 required), no output schema, no annotations, and no parameter descriptions, the description is severely incomplete. The agent cannot determine how to use the tool effectively, especially for parameters with enums.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description adds no meaning to parameters. For example, 'result' enum ('pass', 'fail', 'partial') is not explained, nor is 'test_type' or 'environment'. The agent cannot infer correct values without additional context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Record or view test execution evidence for a change', which indicates the tool handles test evidence, but the dual nature ('record or view') is ambiguous. It distinguishes from sibling tools like approval_check or list_changes by specifying 'test execution evidence', but could be more precise.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. With siblings like approval_check and list_changes, the description does not clarify whether to use this for recording vs. viewing or when to prefer another tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.