
Server Details

IncidentOracle - 12-tool incident management MCP: triage, BaFin DORA reporting, RCA.

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/incidentoracle
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average 3.2/5 across 12 of 12 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct step in the DORA incident management lifecycle, from logging and classification to reporting and deadline tracking. There is no ambiguity between tools like classify_incident, log_incident, and major_incident_check.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., classify_incident, deadline_tracker, final_report). The naming is predictable and aligns with the domain language.

Tool Count: 5/5

With 12 tools, the set is well-scoped for a DORA incident management server. Each tool serves a clear purpose, covering the full incident workflow without unnecessary duplication or gaps.

Completeness: 4/5

The tool set covers the entire incident lifecycle: logging, classification, reclassification, major notifications, reports, and monitoring. Minor gaps exist (e.g., no explicit tool to update non-classification incident details), but these are not critical for core DORA compliance.

Available Tools

12 tools
classify_incident: C

Classify an incident against the 6 DORA criteria (RTS 2024/1772). Determines if MAJOR (triggers 4h/72h/1m reporting) or NON-MAJOR.

Parameters (JSON Schema):
- data_losses (optional): Confidential/personal data affected?
- incident_id (required)
- duration_hours (optional)
- clients_affected (optional): % of clients affected
- geographic_spread (optional): Number of EU member states
- economic_impact_eur (optional)
- criticality_of_services (optional)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full responsibility for behavioral disclosure. It states the classification logic but omits whether the tool mutates state, is idempotent, or what it returns. No side effects or authorization needs are mentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
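
One way to close this gap is to publish MCP tool annotations alongside the description. The sketch below is hypothetical (this server publishes no annotations today); the hint fields come from the MCP specification's tool annotations, while the values are assumptions about how a classification tool would plausibly behave.

```python
# Hypothetical MCP tool annotations for classify_incident.
# Field names follow the MCP spec's ToolAnnotations; the values are
# assumptions, not the server's documented behavior.
classify_incident_annotations = {
    "title": "Classify incident against DORA criteria",
    "readOnlyHint": False,     # assumed: the classification is persisted on the record
    "destructiveHint": False,  # does not delete or overwrite unrelated data
    "idempotentHint": True,    # same inputs yield the same classification
    "openWorldHint": False,    # operates only on the server's own incident register
}

def discloses_behavior(annotations: dict) -> bool:
    """Minimal check: are the core behavioral hints present?"""
    required = {"readOnlyHint", "destructiveHint", "idempotentHint"}
    return required.issubset(annotations)
```

With hints like these attached, an agent can see before calling that the tool mutates state but is safe to retry.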

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the primary purpose, and efficiently includes specific reporting thresholds. No redundant words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters, no output schema, and no annotations, the description is insufficient. It does not cover parameter usage, return values, or behavioral impacts. Sibling tools exist but are not differentiated.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 57% with some parameters lacking descriptions. The tool description does not explain how the parameters map to the 6 DORA criteria, nor does it compensate for the missing schema descriptions. It adds no value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
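
The 57% figure can be reproduced from the parameter list above. A small sketch (parameter names are from the schema; which of them carry descriptions is taken from the table):

```python
# Parameter -> schema-level description (None = undocumented),
# per the classify_incident parameter table.
params = {
    "data_losses": "Confidential/personal data affected?",
    "incident_id": None,
    "duration_hours": None,
    "clients_affected": "% of clients affected",
    "geographic_spread": "Number of EU member states",
    "economic_impact_eur": None,
    "criticality_of_services": "Critical functions affected?",
}

def coverage(schema_params: dict) -> int:
    """Percent of parameters that carry a schema-level description."""
    described = sum(1 for desc in schema_params.values() if desc)
    return round(100 * described / len(schema_params))
```

Four of seven parameters are described, which rounds to 57%; giving incident_id, duration_hours, and economic_impact_eur one-line descriptions would bring coverage to 100%.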

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'classify', the resource 'incident', and the specific criteria (DORA RTS 2024/1772). It also distinguishes the outcome as MAJOR or NON-MAJOR with concrete reporting thresholds. However, it does not differentiate from sibling tools like 'major_incident_check' or 'reclassify'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks context on prerequisites, exclusions, or specific scenarios where classification is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
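
A sketch of what such guidance could look like for this tool. The wording below is illustrative, not the server's actual description; the sibling tool names are real, but the workflow guidance is an assumption.

```python
# Illustrative rewrite of the classify_incident description with explicit
# usage guidance. The referenced siblings (log_incident, reclassify,
# major_incident_check) exist on this server; the guidance itself is assumed.
improved_description = (
    "Classify an incident against the 6 DORA criteria (RTS 2024/1772). "
    "Determines if MAJOR (triggers 4h/72h/1m reporting) or NON-MAJOR. "
    "Use after log_incident has created the record; use reclassify to "
    "change an existing classification, and major_incident_check for a "
    "quick threshold check without persisting a result."
)
```

Two extra sentences are enough to tell an agent which of the three classification-adjacent tools applies in a given situation.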

cyber_threat_notify: C

Voluntary notification of a significant cyber threat (Art. 19(2)). Uses ITS 2025/302 Annex III template.

Parameters (JSON Schema):
- iocs (optional)
- ttps (optional)
- title (required)
- source (optional)
- mitigation (optional)
- description (optional)
- threat_type (optional)
- affected_systems (optional)

Behavior: 1/5

No annotations are provided, and the description fails to disclose any behavioral traits such as side effects, authentication needs, rate limits, or what happens upon submission (e.g., logging, alerting). The description is extremely minimal.

Conciseness: 4/5

The description is very concise: two sentences that front-load the purpose and template reference. However, it sacrifices necessary detail for brevity, making it less informative than it could be.

Completeness: 1/5

With 8 parameters, no output schema, and no annotations, the description is highly incomplete. It does not cover what the tool returns, how to fill parameters, or the process after submission. Critical information for correct invocation is missing.

Parameters: 1/5

The input schema has 8 parameters with 0% description coverage. The description does not explain any parameter meanings, formats, or relationships; it only states the tool's purpose, leaving the agent with no guidance on how to populate the fields.

Purpose: 5/5

The description clearly states the tool's purpose: voluntary notification of a significant cyber threat. It references a specific legal article (Art. 19(2)) and a template (ITS 2025/302 Annex III), distinguishing it from sibling tools like initial_notification or intermediate_report.

Usage Guidelines: 3/5

The description implies usage for voluntary notification of significant cyber threats, but does not explicitly state when to use it versus alternatives, or when not to use it. It lacks guidance on prerequisites or exclusion criteria.

deadline_tracker: A

Track all active MAJOR incident reporting deadlines. Shows overdue and upcoming.

Parameters (JSON Schema): none

Behavior: 2/5

No annotations are provided, so the description carries the full burden. It implies read-only behavior (tracking, showing) but does not disclose traits like real-time updates, caching, or limitations. Minimal value beyond the obvious.

Conciseness: 5/5

Two concise sentences, front-loaded with the main action. No redundant words; every sentence adds value.

Completeness: 4/5

Given zero parameters and no output schema, the description adequately covers the tool's purpose and output (overdue/upcoming deadlines). It is slightly vague on what counts as 'active' or how deadlines are defined, but acceptable for a simple list tool.

Parameters: 4/5

The tool has no parameters, so schema coverage is vacuously 100%. Per guidelines, the baseline score for parameterless tools is 4. The description adds no parameter information, and none is needed.

Purpose: 5/5

The description clearly states that the tool tracks active MAJOR incident reporting deadlines and shows overdue/upcoming ones. A specific verb ('Track') and resource ('deadlines'), scoped to 'MAJOR incident reporting', distinguish it from sibling tools like classify_incident or health_check.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives. For example, the sibling 'major_incident_check' might be related, but no comparison or when-not-to-use advice is given.

final_report: C

Generate the 1-month final report with root cause analysis and lessons learned.

Parameters (JSON Schema):
- incident_id (required)
- resolved_at (optional)
- total_cost_eur (optional)
- lessons_learned (optional)
- recovery_actions (optional)
- root_cause_final (optional)
- preventive_measures (optional)
- client_communication (optional)

Behavior: 2/5

No annotations exist, so the description carries full responsibility for behavioral disclosure. It only states the action, with no mention of side effects, authorization needs, idempotency, or output format. It fails to inform whether the report generation is read-only or has other implications.

Conciseness: 5/5

The description is a single sentence that immediately states the action and key content. It is concise, front-loaded, and contains no superfluous words.

Completeness: 1/5

Given the tool has 8 parameters (1 required), no output schema, and no schema descriptions, the description is woefully inadequate. It does not explain what parameters are needed, their formats, or what the report contains. Critical context for correct invocation is missing.

Parameters: 2/5

Schema description coverage is 0%, so the description must compensate. It mentions 'root cause analysis' and 'lessons learned', which correspond to the 'root_cause_final' and 'lessons_learned' parameters, but ignores the other six parameters like 'resolved_at' and 'total_cost_eur'. The description adds minimal meaning and does not explain parameter roles or constraints.

Purpose: 4/5

The description clearly states the verb 'Generate', the resource '1-month final report', and the content 'root cause analysis and lessons learned'. It distinguishes itself from sibling tools like 'intermediate_report' by indicating 'final' and '1-month', though it does not explicitly mention incidents despite the schema and sibling context.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives like 'intermediate_report' or 'initial_notification'. There are no context notes or exclusions, leaving the agent without decision support for tool selection.

health_check: C

Server status.

Parameters (JSON Schema): none

Behavior: 2/5

With no annotations, the description bears full responsibility. 'Server status' implies a read operation but does not disclose potential side effects, authentication needs, or rate limits. The return format is also unspecified.

Conciseness: 3/5

At two words, the description is extremely short; concise, but at the cost of informativeness. It is front-loaded, but more content could improve clarity.

Completeness: 3/5

Given the tool's simplicity (no parameters, no output schema), the description is minimally adequate. However, it could better specify the server context or the expected output format.

Parameters: 4/5

The tool has zero parameters, so the baseline score is 4. The description adds no parameter information, but none is needed since the schema already covers everything.

Purpose: 3/5

The description 'Server status' indicates the tool returns status information about a server, which distinguishes it from the sibling incident tools. However, it lacks a specific verb and doesn't detail which aspects of status are covered.

Usage Guidelines: 2/5

No usage guidelines are provided. The description does not specify when to use this tool versus alternatives, nor any predecessor or context requirements.

incident_log: C

Full incident register with filters (status, classification, severity, search).

Parameters (JSON Schema):
- search (optional)
- status (optional)
- severity (optional)
- classification (optional)

Behavior: 2/5

With no annotations, the description must disclose behavior. It states the tool provides a 'full register with filters', implying a safe read operation, but lacks details on performance, pagination, or any side effects.

Conciseness: 4/5

The description is a single sentence that front-loads the purpose and key filters. It is concise, though it could benefit from a brief addition on usage context without becoming verbose.

Completeness: 2/5

Given the tool has 4 parameters, no output schema, and no annotations, the description is too sparse. It does not specify the return format, how the filters combine, or how this tool differs from its siblings. Agents lack sufficient context for correct invocation.

Parameters: 2/5

Schema description coverage is 0%, so the description must compensate. It lists the filter parameters but provides no additional semantics (e.g., how 'search' is applied, or whether filters combine with AND or OR). The parameter names and enum values in the schema are self-explanatory, but the description adds minimal value.
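
The AND/OR question is exactly what a one-line note in the description could settle. Below is a sketch of the common convention (all supplied filters narrow the result set, and 'search' is a substring match on the title); this is an assumption about plausible semantics, not the server's documented behavior.

```python
# Hypothetical filter semantics for incident_log: every supplied filter
# narrows the result set (AND), and 'search' is a case-insensitive
# substring match. Field names mirror the tool's schema; the incident
# record shape is assumed.
def filter_incidents(incidents, search=None, status=None,
                     severity=None, classification=None):
    results = []
    for inc in incidents:
        if status and inc["status"] != status:
            continue
        if severity and inc["severity"] != severity:
            continue
        if classification and inc["classification"] != classification:
            continue
        if search and search.lower() not in inc["title"].lower():
            continue
        results.append(inc)
    return results
```

Under these assumed semantics, passing both status='open' and severity='high' returns only open, high-severity incidents; stating this in one sentence would remove the ambiguity entirely.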

Purpose: 4/5

The description clearly communicates that the tool retrieves a full incident register with filtering capabilities, listing the filter fields (status, classification, severity, search). It implies a read operation, though it does not explicitly use a verb like 'list' or 'query'.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus its siblings (e.g., log_incident for creating, incident_stats for aggregates). The description assumes the agent infers the use case from the name and context, but explicit differentiation is missing.

incident_stats: A

Dashboard: total/open/major incidents, overdue deadlines, by severity/status.

Parameters (JSON Schema): none

Behavior: 4/5

The description implies a read-only aggregation operation (a dashboard), which is transparent given the context. It does not explicitly state that it is non-destructive or requires no authentication, but the behavior is clearly non-mutating and safe.

Conciseness: 5/5

An extremely concise single-sentence description that front-loads the key function ('Dashboard') and lists the specific statistics provided. No unnecessary words.

Completeness: 4/5

For a tool with no parameters and no output schema, the description adequately explains what statistics are returned. It lacks details on output format or pagination, but given the simplicity, it is reasonably complete.

Parameters: 4/5

The tool has zero parameters, and schema coverage is vacuously 100%. The description does not need to add parameter meaning; the baseline for parameterless tools is 4.

Purpose: 5/5

The description clearly states it is a dashboard providing statistics: total, open, and major incidents, overdue deadlines, and breakdowns by severity/status. It names a specific resource ('incident stats') and clearly distinguishes itself from siblings that handle individual incidents or classifications.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives. It does not specify that it is for high-level overviews, nor compare itself to other tools that may provide similar information.

initial_notification: B

Generate the 4h initial notification for a MAJOR incident (ITS 2025/302 Annex I). Must be submitted within 4h of classification, max 24h after detection.

Parameters (JSON Schema):
- authority (optional): Competent authority (e.g., BaFin, FMA)
- entity_lei (optional)
- entity_name (optional)
- incident_id (required)
- affected_states (optional): Comma-separated EU member states
- discovery_method (optional)

Behavior: 2/5

No annotations are provided, so the description carries the full burden for behavioral disclosure. It only states that the tool generates a notification, without explaining side effects, required permissions, rate limits, or what happens after invocation (e.g., whether the notification is sent or stored). This leaves significant ambiguity for the agent.

Conciseness: 5/5

The description is extremely concise: two sentences covering action, regulatory context, and timing. Every sentence adds value without redundancy, and the core purpose is front-loaded, making it easy for the agent to parse quickly.

Completeness: 2/5

Given no output schema and no annotations, the description should provide a more complete picture. It omits what the notification contains, how optional parameters affect output, and any response details. The agent may be left guessing about the tool's full behavior and dependencies.

Parameters: 2/5

Schema coverage is only 33% (2 of 6 parameters have descriptions), and the description adds no parameter-level explanation beyond the schema. The agent lacks guidance on how to fill optional fields like 'entity_lei' or 'discovery_method', which is critical given their absence from the description.

Purpose: 5/5

The description clearly states that the tool generates the 4-hour initial notification for MAJOR incidents, referencing a specific regulation (ITS 2025/302 Annex I). The verb 'Generate' and resource 'initial notification' are specific, and the tool is clearly distinguished from siblings like 'intermediate_report' and 'final_report' by its focus on the initial notification.

Usage Guidelines: 3/5

The description provides timing constraints (submission within 4h of classification, at most 24h after detection) but does not explicitly state when to use this tool versus alternatives like 'cyber_threat_notify' or 'classify_incident'. Usage context is implied but not clarified with exclusions or alternative recommendations.
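
The timing rule in the tool's description (within 4h of classification, at most 24h after detection) implies a concrete deadline computation. A sketch of that arithmetic, assuming both timestamps are known; the server's actual deadline logic is not documented, so this is an illustration of the stated rule, not its implementation.

```python
from datetime import datetime, timedelta

def initial_notification_deadline(detected_at: datetime,
                                  classified_at: datetime) -> datetime:
    """Earliest of: 4h after MAJOR classification, 24h after detection."""
    return min(classified_at + timedelta(hours=4),
               detected_at + timedelta(hours=24))

# Example: detected 09:00, classified MAJOR at 10:30.
# The 4h-after-classification rule binds: deadline 14:30 the same day.
deadline = initial_notification_deadline(
    datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 10, 30))
```

If classification is slow, the 24h-after-detection cap binds instead, which is why both timestamps matter to an agent filing the notification.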

intermediate_report: B

Generate the 72h intermediate report for a MAJOR incident (ITS 2025/302). Must include action plan if incident is not yet resolved.

Parameters (JSON Schema):
- root_cause (optional)
- action_plan (optional)
- incident_id (required)
- recovery_status (optional)
- description_update (optional)
- containment_actions (optional)
- expected_resolution (optional)

Behavior: 3/5

With no annotations, the description bears the full burden for behavioral traits. It mentions a conditional requirement (action plan if unresolved) but omits whether the tool is destructive, idempotent, or requires permissions. The verb 'Generate' suggests creation, but side effects are unspecified.

Conciseness: 4/5

The description is two focused sentences with no redundancy. It is appropriately concise, though it could benefit from additional structure (e.g., separating purpose from conditions).

Completeness: 2/5

Given the tool's complexity (7 parameters, no annotations or output schema), the description is incomplete. It does not explain parameter purposes, return values, or prerequisites, leaving significant gaps for an AI agent.

Parameters: 1/5

Schema description coverage is 0%, yet the description provides no parameter-level explanations. It mentions 'action plan' only in the context of a condition; the 7 parameters remain undocumented, failing to compensate for the schema gap.

Purpose: 5/5

The description clearly states that the tool generates the 72h intermediate report for a MAJOR incident, with a specific condition (must include an action plan if the incident is not yet resolved). This distinguishes it from siblings like initial_notification or final_report.

Usage Guidelines: 3/5

The description implies use for major incidents at the 72-hour mark, but does not explicitly state when to use or avoid this tool, nor does it mention alternative tools (e.g., major_incident_check). Context is clear, but exclusions are lacking.

log_incident: B

Log a new ICT-related incident. First step in the DORA incident management process (Art. 17).

Parameters (JSON Schema):
- team (optional)
- notes (optional)
- owner (optional)
- title (required)
- severity (optional)
- data_losses (optional)
- description (optional)
- detected_at (optional): ISO datetime of detection
- incident_id (optional)
- bcm_activated (optional)
- duration_hours (optional)
- affected_systems (optional)
- clients_affected (optional): Percentage of clients affected
- affected_services (optional)
- geographic_spread (optional)
- economic_impact_eur (optional)
- criticality_of_services (optional)

Behavior: 2/5

No annotations are provided, so the description must carry the full burden. It only states 'Log a new incident', without disclosing behavioral traits such as required permissions, side effects, or whether the tool can overwrite existing incidents. This leaves the agent underinformed.

Conciseness: 4/5

The description is concise (two sentences) and front-loaded with the key purpose. However, for a tool with many parameters, slightly more structure (e.g., listing key fields) could improve readability without harming conciseness.

Completeness: 2/5

Given the complexity (17 parameters, no output schema, no annotations), the description is insufficient. It does not explain what the tool returns, how errors are handled, or how to interpret the many parameters. A more complete description would include guidance on required fields and typical workflows.

Parameters: 1/5

With 17 parameters and only 12% schema description coverage, the description adds no additional meaning to the parameters. It does not explain critical fields like 'title', 'severity', or 'team', leaving the agent to guess their semantics beyond the minimal schema definitions.

Purpose: 5/5

The description clearly states the action ('Log'), the resource ('ICT-related incident'), and the context (first step in DORA incident management). This differentiates it from sibling tools like 'classify_incident' and 'final_report', which occur later in the process.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly positions the tool as the first step in the DORA incident management process, guiding the agent to use it for initial logging. It does not explicitly list alternative tools for when not to use it, but the context implies it is for new incidents only.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

major_incident_check (A)

Quick check: would these criteria values classify as a MAJOR incident? No incident record needed — use for pre-assessment.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| data_losses | No | | |
| duration_hours | No | | |
| clients_affected | No | | |
| geographic_spread | No | | |
| economic_impact_eur | No | | |
| criticality_of_services | No | | |
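The 0% parameter coverage noted below can be made concrete with a sketch of a pre-assessment call. All six criteria fields come from the schema above; the value types (boolean data losses, numeric hours and euros, string spread and criticality) are assumptions, since the schema describes none of them.

```python
# Hypothetical criteria for a major_incident_check pre-assessment.
# All fields are optional in the schema; types shown here are guesses.
criteria = {
    "data_losses": True,                     # assumed boolean
    "duration_hours": 6,                     # assumed numeric
    "clients_affected": 15.0,                # assumed percentage
    "geographic_spread": "multi-region",     # assumed free-text
    "economic_impact_eur": 250_000,          # assumed numeric, EUR
    "criticality_of_services": "critical",   # assumed free-text
}

# The schema exposes exactly these six criteria fields.
schema_fields = {
    "data_losses", "duration_hours", "clients_affected",
    "geographic_spread", "economic_impact_eur", "criticality_of_services",
}
```

Because every field is optional and untyped, two agents could send semantically different payloads (e.g. clients_affected as a count versus a percentage) and get incomparable answers; documenting the expected types and units would close that gap.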
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries the full burden. It indicates the tool is a 'quick check' with no record creation, but does not describe return type or internal logic (e.g., how criteria are combined). It provides basic behavioral insight but lacks depth.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's core purpose and key distinction (no record needed). No superfluous words; front-loaded with the action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description does not explain what the tool returns (e.g., boolean, severity). It also omits details on how missing parameters affect classification, making the tool incomplete for an agent to use correctly without further inference.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 6 parameters with 0% description coverage, and the description adds no individual parameter explanations. It only contextualizes them as 'criteria values', but fails to specify what each field (e.g., data_losses, duration_hours) represents or its range.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool is for pre-assessing whether criteria classify as a major incident without creating a record. It uses specific language like 'Quick check' and contrasts with 'No incident record needed', distinguishing it from sibling tools like classify_incident and log_incident.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for pre-assessment but does not explicitly state when to use this tool versus alternatives like classify_incident. It lacks clear when-not or exclusion criteria, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reclassify (A)

Reclassify an incident (MAJOR to NON-MAJOR or vice versa). Competent authority must be notified of reclassification.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| reason | No | | |
| incident_id | Yes | | |
| new_classification | Yes | | |
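A sketch of a reclassify call, under the assumptions that new_classification accepts the two values named in the tool description ('MAJOR' and 'NON-MAJOR') and that incident_id is a string; neither is confirmed by the schema, and the identifier format shown is invented.

```python
# Hypothetical arguments for the reclassify tool.
reclassify_args = {
    "incident_id": "INC-2024-001",      # required; format assumed
    "new_classification": "NON-MAJOR",  # required; values inferred from description
    "reason": "Impact re-assessed below the major-incident thresholds",  # optional
}

# Values inferred from "MAJOR to NON-MAJOR or vice versa" in the description.
allowed_classifications = {"MAJOR", "NON-MAJOR"}
```

Since reclassification triggers a mandatory notification to the competent authority, an agent would benefit from the description stating whether the optional 'reason' is included in that notification.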
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that the tool changes classification (a mutating operation) and requires notification. No annotations exist, so the description carries the full burden. However, it does not mention whether the change is reversible, what output it returns, or any side effects beyond notification.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: the first concisely states the action, the second adds a critical requirement. No redundant or unnecessary information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the core action and a key procedural note, but lacks details about the optional 'reason' parameter, the fate of the old classification, and the output format. With no output schema and a mutation operation, more context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description hints at the new_classification parameter by mentioning 'MAJOR to NON-MAJOR or vice versa', but provides no explanation for 'incident_id' or 'reason'. With 3 parameters and 0% schema coverage, the description does not adequately compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states verb 'Reclassify' and resource 'incident', and specifies the two possible directions (MAJOR to NON-MAJOR or vice versa). This distinguishes it from sibling 'classify_incident', which handles initial classification.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a clear contextual note about notifying competent authority after reclassification, implying a required follow-up step. Does not explicitly state when not to use, but the sibling set suggests it is for changing existing classifications, not initial ones.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
