incidentoracle
Server Details
IncidentOracle - 12-tool incident management MCP: triage, BaFin DORA reporting, RCA.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ToolOracle/incidentoracle
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.2/5 across 12 of 12 tools scored.
Each tool targets a distinct step in the DORA incident management lifecycle, from logging and classification to reporting and deadline tracking. There is no ambiguity between tools like classify_incident, log_incident, and major_incident_check.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., classify_incident, deadline_tracker, final_report). The naming is predictable and aligns with the domain language.
With 12 tools, the set is well-scoped for a DORA incident management server. Each tool serves a clear purpose, covering the full incident workflow without unnecessary duplication or gaps.
The tool set covers the entire incident lifecycle: logging, classification, reclassification, major notifications, reports, and monitoring. Minor gaps exist (e.g., no explicit tool to update non-classification incident details), but these are not critical for core DORA compliance.
Available Tools
12 tools
classify_incident (grade C)
Classify an incident against the 6 DORA criteria (RTS 2024/1772). Determines if MAJOR (triggers 4h/72h/1m reporting) or NON-MAJOR.
| Name | Required | Description | Default |
|---|---|---|---|
| data_losses | No | Confidential/personal data affected? | |
| incident_id | Yes | ||
| duration_hours | No | ||
| clients_affected | No | % of clients affected | |
| geographic_spread | No | Number of EU member states | |
| economic_impact_eur | No | ||
| criticality_of_services | No | Critical functions affected? |
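To make the parameter mapping concrete, here is a minimal sketch of a call over MCP. The envelope is the standard MCP tools/call request; the incident ID and criteria values are hypothetical, and the boolean/numeric types are inferred from the parameter descriptions above rather than confirmed by the schema.
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "classify_incident",
    "arguments": {
      "incident_id": "INC-2025-0042",
      "duration_hours": 6,
      "clients_affected": 12.5,
      "geographic_spread": 2,
      "economic_impact_eur": 250000,
      "data_losses": true,
      "criticality_of_services": true
    }
  }
}
If the RTS 2024/1772 thresholds are met, the expected outcome is a MAJOR classification, which starts the 4h/72h/1-month reporting clock.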
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full responsibility for behavioral disclosure. It explains the classification logic but omits whether the tool mutates state, whether it is idempotent, and what it returns. No side effects or authorization needs are mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the primary purpose, and efficiently includes specific reporting thresholds. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters, no output schema, and no annotations, the description is insufficient. It does not cover parameter usage, return values, or behavioral impacts. Sibling tools exist but are not differentiated.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 57% with some parameters lacking descriptions. The tool description does not explain how the parameters map to the 6 DORA criteria, nor does it compensate for the missing schema descriptions. It adds no value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'classify', the resource 'incident', and the specific criteria (DORA RTS 2024/1772). It also distinguishes the outcome as MAJOR or NON-MAJOR with concrete reporting thresholds. However, it does not differentiate from sibling tools like 'major_incident_check' or 'reclassify'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It lacks context on prerequisites, exclusions, or specific scenarios where classification is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cyber_threat_notify (grade C)
Voluntary notification of a significant cyber threat (Art. 19(2)). Uses ITS 2025/302 Annex III template.
| Name | Required | Description | Default |
|---|---|---|---|
| iocs | No | ||
| ttps | No | ||
| title | Yes | ||
| source | No | ||
| mitigation | No | ||
| description | No | ||
| threat_type | No | ||
| affected_systems | No |
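As a sketch only, a plausible arguments object is shown below, to be wrapped in the same tools/call envelope shown under classify_incident. All values are hypothetical, and whether iocs and ttps expect a single string or a list is an assumption, since the schema provides no descriptions.
{
  "name": "cyber_threat_notify",
  "arguments": {
    "title": "Credential-stuffing campaign against customer portal",
    "threat_type": "credential stuffing",
    "source": "SOC monitoring",
    "description": "Elevated failed-login volume from rotating IP ranges.",
    "iocs": "203.0.113.0/24",
    "ttps": "MITRE ATT&CK T1110.004",
    "affected_systems": "customer-portal",
    "mitigation": "Rate limiting and forced password resets for targeted accounts"
  }
}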
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose any behavioral traits such as side effects, authentication needs, rate limits, or what happens upon submission (e.g., logging, alerting). The description is extremely minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise with two sentences that front-load the purpose and template reference. However, it sacrifices necessary detail for brevity, making it less informative than it could be.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 parameters, no output schema, and no annotations, the description is highly incomplete. It does not cover what the tool returns, how to fill parameters, or the process after submission. Critical information is missing for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 8 parameters with 0% description coverage in the schema. The description does not explain any parameter meanings, formats, or relationships. It only states the tool's purpose, leaving the agent with no guidance on how to populate the fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: voluntary notification of a significant cyber threat. It references a specific legal article (Art. 19(2)) and a template (ITS 2025/302 Annex III), distinguishing it from sibling tools like initial_notification or intermediate_report.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for voluntary notifications of significant cyber threats, but does not explicitly state when to use it versus alternatives or when not to use it. It lacks guidance on prerequisites or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deadline_tracker (grade A)
Track all active MAJOR incident reporting deadlines. Shows overdue and upcoming.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It implies read-only behavior (tracking, showing) but does not disclose any behavioral traits like real-time updates, caching, or limitations. Minimal value beyond the obvious.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences. Front-loaded with main action in first sentence. No redundant words, every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, description adequately covers the tool's purpose and output (overdue/upcoming deadlines). Slightly lacking in details like what counts as 'active' or how deadlines are defined, but acceptable for a simple list tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Tool has 0 parameters, so schema coverage is 100% vacuously. Per guidelines, baseline is 4 for no params. Description adds no parameter info, but none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it tracks active MAJOR incident reporting deadlines and shows overdue/upcoming. Specific verb ('Track') and resource ('deadlines') with scope ('MAJOR incident reporting') distinguishes it from sibling tools like classify_incident or health_check.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, sibling 'major_incident_check' might be related but no comparison or when-not-to-use advice is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
final_report (grade C)
Generate the 1-month final report with root cause analysis and lessons learned.
| Name | Required | Description | Default |
|---|---|---|---|
| incident_id | Yes | ||
| resolved_at | No | ||
| total_cost_eur | No | ||
| lessons_learned | No | ||
| recovery_actions | No | ||
| root_cause_final | No | ||
| preventive_measures | No | ||
| client_communication | No |
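A hedged sketch of a final_report call (arguments only, same tools/call envelope as above; all values hypothetical). Treating resolved_at as an ISO datetime is an assumption by analogy with detected_at in log_incident.
{
  "name": "final_report",
  "arguments": {
    "incident_id": "INC-2025-0042",
    "resolved_at": "2025-03-18T14:30:00Z",
    "root_cause_final": "Misconfigured failover left the primary database without a healthy replica.",
    "recovery_actions": "Rebuilt the replica and replayed the transaction log.",
    "lessons_learned": "Failover configuration was not covered by automated checks.",
    "preventive_measures": "Added failover drills to quarterly BCM testing.",
    "total_cost_eur": 180000,
    "client_communication": "Status-page updates plus direct email to affected clients."
  }
}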
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full responsibility for behavioral disclosure. It only states the action without any mention of side effects, authorization needs, idempotency, or output format. It fails to inform whether the report generation is read-only or has other implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that immediately states the action and key content. It is concise, front-loaded, and contains no superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 parameters (1 required), no output schema, and no schema descriptions, the description is woefully inadequate. It does not explain what parameters are needed, their formats, or what the report contains. Critical context for correct invocation is missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'root cause analysis' and 'lessons learned', which correspond to 'root_cause_final' and 'lessons_learned' parameters, but ignores the other six parameters like 'resolved_at' and 'total_cost_eur'. The description adds minimal meaning and does not explain parameter roles or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Generate' and the resource '1-month final report' and specifies content 'root cause analysis and lessons learned'. It distinguishes from sibling tools like 'intermediate_report' by indicating 'final' and '1-month', though it does not explicitly mention incidents despite schema and sibling context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'intermediate_report' or 'initial_notification'. There are no context notes or exclusions, leaving the agent without decision support for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (grade C)
Server status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility. 'Server status' implies a read operation but does not disclose potential side effects, authentication needs, or rate limits. The return format is also unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short at two words, which is concise but sacrifices informativeness. It is front-loaded, but more content could improve clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is minimally adequate. However, it could better specify the server context or the expected output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the baseline score is 4. The description adds no parameter information, but none is needed since there are no parameters to document.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Server status' indicates the tool returns status information about a server, which distinguishes it from sibling incident tools. However, it lacks a specific verb and doesn't detail what aspects of status are covered.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No usage guidelines are provided. The description does not specify when to use this tool versus alternatives, nor any predecessor or context requirements.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
incident_log (grade C)
Full incident register with filters (status, classification, severity, search).
| Name | Required | Description | Default |
|---|---|---|---|
| search | No | ||
| status | No | ||
| severity | No | ||
| classification | No |
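An illustrative filter query (arguments only; values hypothetical). The accepted values for status, severity, and classification are assumptions, and whether multiple filters combine as AND or OR is undocumented.
{
  "name": "incident_log",
  "arguments": {
    "status": "open",
    "classification": "MAJOR",
    "severity": "high",
    "search": "database"
  }
}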
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavior. It states the tool provides a 'full register with filters', implying a safe read operation, but lacks details on performance, pagination, or any side effects. The description does not contradict any annotations (none provided).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that front-loads the purpose and key filters. It is concise, though could benefit from a brief addition on usage context without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and no annotations, the description is too sparse. It does not specify the return format, how the filters combine, or how this tool differs from siblings. Agents lack sufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists the filter parameters but provides no additional semantics (e.g., how 'search' is applied, whether filters are AND/OR). The parameter names and enum values in the schema are self-explanatory, but the description adds minimal value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly communicates that the tool retrieves a full incident register with filtering capabilities, listing the filter fields (status, classification, severity, search). It implies a read operation, though it does not explicitly use a verb like 'list' or 'query'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus its siblings (e.g., log_incident for creating, incident_stats for aggregates). The description assumes the agent infers the use case from the name and context, but explicit differentiation is missing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
incident_stats (grade A)
Dashboard: total/open/major incidents, overdue deadlines, by severity/status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only aggregation operation (dashboard), which is transparent given the context. It does not explicitly state that it is non-destructive or requires no authentication, but the behavior is clearly non-mutating and safe.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single-sentence description that front-loads the key function ('Dashboard') and lists the specific statistics provided. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters and no output schema, the description adequately explains what statistics are returned. It lacks details on output format or pagination, but given the simplicity, it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, and the schema coverage is 100%. The description does not need to add parameter meaning. Baseline for no parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a dashboard providing statistics: total, open, major incidents, overdue deadlines, and breakdowns by severity/status. It uses a specific resource ('incident stats') and clearly distinguishes from siblings that handle individual incidents or classifications.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. It does not specify that it is for high-level overviews or compare to other tools that may provide similar information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
initial_notification (grade B)
Generate the 4h initial notification for a MAJOR incident (ITS 2025/302 Annex I). Must be submitted within 4h of classification, max 24h after detection.
| Name | Required | Description | Default |
|---|---|---|---|
| authority | No | Competent authority (e.g., BaFin, FMA) | |
| entity_lei | No | ||
| entity_name | No | ||
| incident_id | Yes | ||
| affected_states | No | Comma-separated EU member states | |
| discovery_method | No |
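A sketch of the call (arguments only). The comma-separated format for affected_states comes from the schema description; the entity name, LEI, and everything else are placeholder values.
{
  "name": "initial_notification",
  "arguments": {
    "incident_id": "INC-2025-0042",
    "authority": "BaFin",
    "entity_name": "Example Bank AG",
    "entity_lei": "529900EXAMPLE0000042",
    "affected_states": "DE, AT",
    "discovery_method": "internal monitoring"
  }
}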
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden for behavioral disclosure. It only states that the tool generates a notification without explaining side effects, required permissions, rate limits, or what happens after invocation (e.g., if the notification is sent or stored). This leaves significant ambiguity for the agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences covering action, regulatory context, and timing. Every sentence adds value without redundancy. It is front-loaded with the core purpose, making it easy for the agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should provide a more complete picture. It omits what the notification contains, how optional parameters affect output, and any response details. The agent may be left guessing about the tool's full behavior and dependencies.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 33% (2 of 6 parameters have descriptions), and the description adds no parameter-level explanation beyond the schema. The agent lacks guidance on how to fill optional fields like 'entity_lei' or 'discovery_method', which is critical given their absence from the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool generates the 4-hour initial notification for MAJOR incidents, referencing a specific regulation (ITS 2025/302 Annex I). The verb 'Generate' and resource 'initial notification' are specific, and the tool is clearly distinguished from siblings like 'intermediate_report' and 'final_report' by its focus on the initial notification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides timing constraints (submission within 4h of classification, max 24h after detection) but does not explicitly state when to use this tool versus alternatives like 'cyber_threat_notify' or 'classify_incident'. Usage context is implied but not clarified with exclusions or alternative recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
intermediate_report (grade B)
Generate the 72h intermediate report for a MAJOR incident (ITS 2025/302). Must include action plan if incident is not yet resolved.
| Name | Required | Description | Default |
|---|---|---|---|
| root_cause | No | ||
| action_plan | No | ||
| incident_id | Yes | ||
| recovery_status | No | ||
| description_update | No | ||
| containment_actions | No | ||
| expected_resolution | No |
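A sketch under the same assumptions (arguments only; values hypothetical). Because the description says an action plan is required while the incident is unresolved, the example includes one; the expected formats of recovery_status and expected_resolution are guesses.
{
  "name": "intermediate_report",
  "arguments": {
    "incident_id": "INC-2025-0042",
    "description_update": "Core banking restored; batch processing still degraded.",
    "root_cause": "Suspected failover misconfiguration, under investigation.",
    "containment_actions": "Traffic rerouted to the secondary data centre.",
    "recovery_status": "partial",
    "action_plan": "Rebuild the replica, verify data integrity, resume batch jobs.",
    "expected_resolution": "2025-03-19T08:00:00Z"
  }
}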
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears the full burden of disclosing behavioral traits. It mentions a conditional requirement (action plan if unresolved) but omits whether the tool is destructive, idempotent, or requires permissions. The verb 'Generate' suggests creation, but side effects are unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence with no redundancy. It is appropriately concise, though it could benefit from additional structure (e.g., separating purpose from conditions).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no annotations or output schema), the description is incomplete. It does not explain parameter purposes, return values, or prerequisites, leaving significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description provides no parameter-level explanations. It only mentions 'action plan' in the context of a condition. The 7 parameters remain undocumented, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it generates a 72h intermediate report for a MAJOR incident, with specific condition ('Must include action plan if not resolved'). This distinguishes it from siblings like initial_notification or final_report.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for major incidents at the 72-hour mark, but does not explicitly state when to use or avoid this tool, nor does it mention alternative tools (e.g., major_incident_check). Context is clear but lacks exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_incident (grade B)
Log a new ICT-related incident. First step in the DORA incident management process (Art. 17).
| Name | Required | Description | Default |
|---|---|---|---|
| team | No | ||
| notes | No | ||
| owner | No | ||
| title | Yes | ||
| severity | No | ||
| data_losses | No | ||
| description | No | ||
| detected_at | No | ISO datetime of detection | |
| incident_id | No | ||
| bcm_activated | No | ||
| duration_hours | No | ||
| affected_systems | No | ||
| clients_affected | No | Percentage of clients affected | |
| affected_services | No | ||
| geographic_spread | No | ||
| economic_impact_eur | No | ||
| criticality_of_services | No |
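Only title is required, so a minimal call could pass a single field; the fuller sketch below (arguments only) uses hypothetical values, with types inferred from the two described parameters (detected_at as an ISO datetime, clients_affected as a percentage).
{
  "name": "log_incident",
  "arguments": {
    "title": "Core banking outage after failed database failover",
    "description": "Primary database failover did not complete; online banking unavailable.",
    "detected_at": "2025-03-17T09:15:00Z",
    "severity": "high",
    "team": "SRE",
    "owner": "on-call lead",
    "affected_systems": "core-banking-db",
    "affected_services": "online banking, payments",
    "clients_affected": 12.5,
    "bcm_activated": true,
    "data_losses": false
  }
}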
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It only states 'Log a new incident' without disclosing any behavioral traits such as permissions required, side effects, or whether the tool can overwrite existing incidents. This leaves the agent underinformed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences) and front-loaded with the key purpose. However, for a tool with many parameters, slightly more structure (e.g., listing key fields) could improve readability without harming conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (17 parameters, no output schema, no annotations), the description is insufficient. It does not explain what the tool returns, how errors are handled, or how to interpret the many parameters. A more complete description would include guidance on required fields and typical workflows.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 17 parameters and only 12% schema description coverage, the description adds no additional meaning to the parameters. It does not explain any critical fields like 'title', 'severity', or 'team', leaving the agent to guess their semantics beyond the minimal schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Log'), the resource ('ICT-related incident'), and the context (first step in DORA incident management). This differentiates it from sibling tools like 'classify_incident' and 'final_report' which occur later in the process.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly positions the tool as the first step in the DORA incident management process, guiding the agent to use it for initial logging. It does not explicitly list alternative tools for when not to use it, but the context implies it is for new incidents only.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
major_incident_check (grade A)
Quick check: would these criteria values classify as a MAJOR incident? No incident record needed — use for pre-assessment.
| Name | Required | Description | Default |
|---|---|---|---|
| data_losses | No | ||
| duration_hours | No | ||
| clients_affected | No | ||
| geographic_spread | No | ||
| economic_impact_eur | No | ||
| criticality_of_services | No |
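The arguments mirror the six DORA criteria of classify_incident minus the incident_id, consistent with the pre-assessment purpose. A hypothetical pre-check (arguments only):
{
  "name": "major_incident_check",
  "arguments": {
    "duration_hours": 6,
    "clients_affected": 12.5,
    "geographic_spread": 2,
    "economic_impact_eur": 250000,
    "data_losses": true,
    "criticality_of_services": true
  }
}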
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It indicates the tool is a 'quick check' with no record creation, but does not describe return type or internal logic (e.g., how criteria are combined). It provides basic behavioral insight but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently conveys the tool's core purpose and key distinction (no record needed). No superfluous words; front-loaded with the action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description does not explain what the tool returns (e.g., boolean, severity). It also omits details on how missing parameters affect classification, making the tool incomplete for an agent to use correctly without further inference.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 6 parameters with 0% description coverage, and the description adds no individual parameter explanations. It only contextualizes them as 'criteria values', but fails to specify what each field (e.g., data_losses, duration_hours) represents or its range.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool is for pre-assessing whether criteria classify as a major incident without creating a record. It uses specific language like 'Quick check' and contrasts with 'No incident record needed', distinguishing it from sibling tools like classify_incident and log_incident.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for pre-assessment but does not explicitly state when to use this tool versus alternatives like classify_incident. It lacks clear when-not or exclusion criteria, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reclassify (grade A)
Reclassify an incident (MAJOR to NON-MAJOR or vice versa). Competent authority must be notified of reclassification.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | ||
| incident_id | Yes | ||
| new_classification | Yes |
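A sketch of a downgrade (arguments only). The accepted values for new_classification are assumed to be 'MAJOR' and 'NON-MAJOR' based on the description; the schema listing does not confirm them.
{
  "name": "reclassify",
  "arguments": {
    "incident_id": "INC-2025-0042",
    "new_classification": "NON-MAJOR",
    "reason": "Revised impact assessment puts affected clients below the RTS threshold."
  }
}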
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses that the tool changes classification (a mutating operation) and requires notification. No annotations exist, so the description carries the full burden. However, it does not mention whether the change is reversible, what output it returns, or any side effects beyond notification.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first concisely states the action, the second adds a critical requirement. No redundant or unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers the core action and a key procedural note, but lacks details about the optional 'reason' parameter, the fate of the old classification, and the output format. With no output schema and a mutation operation, more context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description hints at the new_classification parameter by mentioning 'MAJOR to NON-MAJOR or vice versa', but provides no explanation for 'incident_id' or 'reason'. With 3 parameters and 0% schema coverage, the description does not adequately compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states verb 'Reclassify' and resource 'incident', and specifies the two possible directions (MAJOR to NON-MAJOR or vice versa). This distinguishes it from sibling 'classify_incident', which handles initial classification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a clear contextual note about notifying competent authority after reclassification, implying a required follow-up step. Does not explicitly state when not to use, but the sibling set suggests it is for changing existing classifications, not initial ones.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.