Glama

Server Details

DriftOracle - 15 tools for model/data drift monitoring: PSI, KS-test, alerts, evidence packs.

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/driftoracle
GitHub Stars: 0
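The server's headline metrics are PSI and the KS-test. For orientation, the Population Stability Index between a baseline sample and a current sample can be sketched as follows; this is a generic illustration of the metric, not DriftOracle's actual implementation:

```python
import math
from collections import Counter

def psi(expected, actual, bins=10, eps=1e-6):
    """Population Stability Index between two numeric samples.

    Bins are taken from the expected (baseline) sample's range,
    and eps floors empty bins so the log term stays defined.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        # Map each value to a bin index, clamped to [0, bins - 1].
        counts = Counter(
            min(max(int((x - lo) / width), 0), bins - 1) for x in sample
        )
        n = len(sample)
        return [max(counts.get(b, 0) / n, eps) for b in range(bins)]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A common rule of thumb reads PSI below 0.1 as stable, 0.1 to 0.25 as moderate drift, and above 0.25 as significant drift; identical distributions score near zero.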

Tool Descriptions: C

Average 2.8/5 across 15 of 15 tools scored. Lowest: 1.8/5.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose, from specific detect methods (cloud, control, enterprise, evidence, regulatory) to status, history, configuration, and remediation. No overlapping functionality.

Naming Consistency: 3/5

Mixed naming patterns: some use verb_noun (detect_*, configure_*, mark_*, resolve_*, trigger_*), others noun_noun (drift_history, drift_status, health_check, obligation_map), and one single-word (ping). Inconsistent but still readable.

Tool Count: 5/5

15 tools is well-scoped for a drift detection server, covering all necessary detection types, status, history, configuration, and remediation without excess.

Completeness: 5/5

Covers full lifecycle: multiple detection methods, manual marking, status/history tracking, threshold configuration, resolution, and remediation. No obvious gaps for the domain.

Available Tools (15 tools)

configure_thresholds (Grade: C)

View/set drift detection thresholds.

Parameters (JSON Schema):
- set (optional): True to update
- evidence_max_age_days (optional): no description
- assessment_max_age_days (optional): no description
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must convey behavioral traits. It indicates both read and write capabilities ('view/set'), but does not disclose side effects of setting thresholds (e.g., impact on ongoing drift detection), required permissions, or whether changes are reversible. The description is too brief to provide adequate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
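As one illustration of this point, a reworked definition for configure_thresholds could surface behavior both in the description and via MCP-style tool annotations. The annotation field names follow the MCP tool-annotations convention; the description text and hint values are hypothetical, not DriftOracle's actual metadata:

```python
# Hypothetical reworked tool definition for configure_thresholds.
# The annotation keys (readOnlyHint, destructiveHint, idempotentHint)
# follow the MCP tool-annotations convention; the description text
# and hint values are illustrative guesses.
configure_thresholds_tool = {
    "name": "configure_thresholds",
    "description": (
        "View or update drift detection thresholds. Read-only when "
        "'set' is false or omitted; with set=true, changes apply "
        "immediately to subsequent drift scans and can be reverted "
        "by calling the tool again with the previous values."
    ),
    "annotations": {
        "readOnlyHint": False,     # set=true mutates server state
        "destructiveHint": False,  # changes are reversible
        "idempotentHint": True,    # same inputs, same end state
    },
}
```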

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at one short sentence, but it is overly terse. While it is front-loaded with the verb and resource, it sacrifices valuable information that could be included without being verbose, such as parameter explanations or usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and low schema description coverage, the description is incomplete. It fails to explain return values, side effects of setting thresholds, or how the parameters interact, leaving the agent with insufficient information to use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 33%; only the 'set' parameter is described ('True to update'). The description does not explain the semantics of evidence_max_age_days or assessment_max_age_days, which are critical for correct usage. It adds no meaning beyond the schema for these parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
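A sketch of what full schema description coverage could look like for this tool follows; the parameter wording, types, and bounds are assumptions made for illustration, and only the three property names come from the server's actual schema:

```python
# Hypothetical enriched input schema for configure_thresholds:
# every property carries a type and a description, so an agent
# need not guess units or semantics.
input_schema = {
    "type": "object",
    "properties": {
        "set": {
            "type": "boolean",
            "default": False,
            "description": "False: return current thresholds. "
                           "True: update them with the values below.",
        },
        "evidence_max_age_days": {
            "type": "integer",
            "minimum": 1,
            "description": "Days before evidence counts as expired.",
        },
        "assessment_max_age_days": {
            "type": "integer",
            "minimum": 1,
            "description": "Days before an assessment counts as stale.",
        },
    },
}

# Coverage check: every property now has a description.
covered = ["description" in p for p in input_schema["properties"].values()]
```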

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: viewing and setting drift detection thresholds. It uses a specific verb ('view/set') and a specific resource ('drift detection thresholds'), distinguishing it from sibling tools that detect or manage drift rather than configure thresholds.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus its siblings (e.g., detect_cloud_drift, drift_status) or when to view versus set. It lacks context for appropriate usage scenarios, such as prerequisites or recommended sequence.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
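One hypothetical rewrite of the configure_thresholds description that adds this kind of routing guidance; the sibling tool names are real, but the advice itself is invented for illustration:

```python
# Hypothetical description string adding explicit "use X instead
# of Y when Z" guidance. The routing advice is illustrative only.
DESCRIPTION = (
    "View or update drift detection thresholds. Use this tool to "
    "change how detect_evidence_drift judges staleness; use "
    "drift_status instead if you only want to see current drift "
    "events. Call with set=false first to inspect current values."
)
```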

detect_cloud_drift (Grade: B)

Cloud provider status changes — live check.

Parameters (JSON Schema):
- entity_id (optional): Optional

Behavior: 2/5

With no annotations provided, the description must disclose behavioral traits. It states 'live check', implying real-time status, but does not mention side effects (e.g., read-only), required permissions, rate limits, or what happens upon invocation. This is insufficient for safe tool selection.

Conciseness: 4/5

The description is a single, front-loaded sentence with no wasted words. It is appropriately concise, though slightly more detail could be included without harming conciseness.

Completeness: 2/5

Given the presence of many sibling tools with similar names, the description is too brief to guide an agent effectively. It lacks output format, behavior details, and usage context. The tool has only one optional parameter and no output schema, so completeness is low.

Parameters: 3/5

Schema description coverage is 100% (the single optional 'entity_id' parameter is described as 'Optional'). Baseline is 3 because the schema already documents the parameter. The description adds no additional meaning or examples, so it does not exceed the baseline.

Purpose: 5/5

The description 'Cloud provider status changes — live check' clearly states the tool's purpose: detecting cloud drift via a live status check. It uses a specific verb (detect) and resource (cloud drift), and distinguishes it from sibling tools like detect_control_drift or detect_enterprise_drift.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives. Given many sibling tools with similar 'detect_*_drift' names, the agent needs explicit context to choose correctly. No when-not or alternative suggestions are given.

detect_control_drift (Grade: C)

Controls that degraded from GREEN to YELLOW/RED.

Parameters (JSON Schema):
- entity_id (optional): no description

Behavior: 2/5

With no annotations provided, the description must disclose behavioral traits. It only states a filtering condition, implying a read operation, but gives no information about side effects, authorization needs, or return behavior.

Conciseness: 2/5

Extremely concise, to the point of being insufficient. A single fragment does not provide enough substance for an agent to understand the tool's full scope.

Completeness: 2/5

Given the low complexity (1 optional parameter, no output schema), the description should still explain what 'controls' are, what the output represents, and how the parameter affects results. It fails to provide a complete picture.

Parameters: 1/5

The description does not mention the sole parameter entity_id, leaving its purpose unclear. Schema coverage is 0%, so no value is added beyond the schema definition.

Purpose: 4/5

The description specifies the exact condition (controls degraded from GREEN to YELLOW/RED), which clarifies the verb 'detect' and resource 'control drift'. It distinguishes from sibling tools like detect_cloud_drift or detect_enterprise_drift by focusing on control-specific state transitions.

Usage Guidelines: 2/5

No guidance on when to use this tool over alternatives or what context (e.g., entity_id) is appropriate. The description is a fragment with no usage hints.

detect_enterprise_drift (Grade: A)

Detect compliance drift across 6 enterprise dimensions: NIS2 (CyberShield), ISO 27001 (CyberShield), LkSG (SupplyChainOracle), Contract DORA Art.28 (LegalTechOracle), DAC6 Tax (TaxOracle), Healthcare MDR (HealthGuard). Auto-logs drift events.

Parameters (JSON Schema):
- entity_id (optional): no description
- categories (optional): Comma-separated: nis2, iso27001, lksg, contracts, tax, healthcare. Omit for all.

Behavior: 3/5

With no annotations, the description carries the full burden. It mentions 'auto-logs drift events', a useful behavioral trait, but does not disclose what triggers logging, whether it's read-only, or any side effects. More detail would be beneficial.

Conciseness: 5/5

Two concise sentences: the first states the purpose and lists the dimensions, the second mentions auto-logging. No fluff, front-loaded with critical information.

Completeness: 3/5

Absent output schema and annotations, the description is adequate for a simple drift detection tool. It names the dimensions and mentions logging, but lacks details on return format, error handling, or performance implications. Could be more complete.

Parameters: 3/5

Schema coverage is 50% (only 'categories' has a description). The description reiterates valid category values and implies 'omit for all' from the schema. For a tool with two parameters, the description adds marginal value beyond the schema.

Purpose: 5/5

The description clearly states the tool's purpose: detecting compliance drift across 6 named enterprise dimensions. It includes specific regulatory frameworks (NIS2, ISO 27001, etc.), distinguishing it from sibling drift-detection tools like detect_cloud_drift or detect_regulatory_drift.

Usage Guidelines: 3/5

The description implies usage for enterprise compliance drift detection but does not provide explicit guidance on when to use this tool versus alternatives (e.g., detect_regulatory_drift). No when-not or prerequisite information is included.
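The 'categories' parameter is the only one on this server with a constrained value set. A sketch of client-side validation for it, assuming the comma-separated format the schema describes; the helper name and error handling are assumptions:

```python
# Hypothetical guard for detect_enterprise_drift's 'categories'
# argument: a comma-separated subset of six known keys, where
# omission means 'scan all categories'.
VALID_CATEGORIES = {"nis2", "iso27001", "lksg", "contracts",
                    "tax", "healthcare"}

def build_categories(selected):
    """Return a categories string, or None to request all."""
    if not selected:
        return None
    unknown = set(selected) - VALID_CATEGORIES
    if unknown:
        raise ValueError(f"unknown categories: {sorted(unknown)}")
    return ",".join(sorted(selected))
```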

detect_evidence_drift (Grade: C)

Find stale assessments + expired evidence + degraded controls.

Parameters (JSON Schema):
- entity_id (optional): no description

Behavior: 2/5

With no annotations, the description carries the full burden. It implies a read-only operation (find) but does not disclose authorization needs, side effects, or return format. Terms like 'stale', 'expired', 'degraded' are vague without further context.

Conciseness: 2/5

The description is extremely concise (one line), but this brevity comes at the cost of essential information. It is under-specified rather than efficiently informative.

Completeness: 1/5

Given the tool has one optional parameter, no output schema, no annotations, and many similar siblings, the description is completely inadequate. The agent cannot reliably determine how to use the tool or interpret results.

Parameters: 1/5

The schema has one parameter (entity_id) with 0% description coverage. The description does not mention entity_id at all, so the agent gains no insight into what it represents or how to use it.

Purpose: 4/5

The description clearly states the tool finds specific items: stale assessments, expired evidence, and degraded controls. It uses a specific verb and resource. However, it does not explicitly differentiate from closely related siblings like detect_control_drift, which may also deal with controls.

Usage Guidelines: 2/5

No guidance on when to use this tool versus other drift detection siblings (e.g., detect_control_drift, detect_regulatory_drift). The agent is left to infer usage without explicit context.

detect_regulatory_drift (Grade: C)

Scan for regulation changes affecting DORA. Checks RegWatch + LawOracle + EU feeds.

Parameters (JSON Schema):
- entity_id (optional): Optional

Behavior: 2/5

No annotations exist, so the description must cover behavioral traits. It mentions three data sources but omits any side effects, authentication needs, rate limits, or whether the operation is a read-only scan. The behavioral profile is insufficient.

Conciseness: 4/5

The description is a single sentence that front-loads the tool's purpose and key data sources. It is efficient with no wasted words. A slight expansion (e.g., explaining entity_id) would not harm conciseness.

Completeness: 2/5

Given the absence of annotations and an output schema, the description fails to provide enough context about the output format, parameter usage, or differentiation from siblings. The agent cannot confidently select or invoke this tool correctly.

Parameters: 2/5

The only parameter, entity_id, has a schema description of 'Optional' with no further meaning. The tool description does not explain what entity_id does, leaving the agent to guess its purpose. Baseline is 3 due to 100% schema coverage, but the param description is vacuous, and the tool description adds nothing.

Purpose: 4/5

The description clearly states it scans for regulation changes affecting DORA and specifies the data sources (RegWatch, LawOracle, EU feeds). However, it doesn't explicitly distinguish itself from sibling drift tools like detect_control_drift, relying on the name to imply regulatory focus.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. With many sibling drift detection tools, the agent receives no hints about the appropriate context (e.g., regulatory compliance vs. cloud or control drift).

drift_history (Grade: C)

History of all drift events.

Parameters (JSON Schema):
- limit (optional): no description
- entity_id (optional): Optional

Behavior: 2/5

No annotations are provided, so the description must disclose behavioral traits. It implies a read-only retrieval of history but does not state whether it requires special permissions, whether results are paginated, or if it returns all events or only recent ones.

Conciseness: 3/5

The description is a single sentence, which is concise but arguably under-specified. It front-loads the core purpose but lacks any structure or additional context that would help the agent.

Completeness: 2/5

Given the lack of output schema, missing annotations, and only 2 parameters (one described), the description is too sparse. It does not explain what the 'history' contains (e.g., timestamps, drift types, severity), nor how 'limit' or 'entity_id' affect results. This leaves significant gaps for correct invocation.

Parameters: 2/5

The input schema has 2 parameters: 'limit' (no description) and 'entity_id' (description: 'Optional'). The description adds no meaning beyond the schema; for 'limit', there is no clarification on units or defaults. With 50% schema description coverage, the description does not compensate for the missing schema details.

Purpose: 3/5

The description 'History of all drift events' communicates that it returns historical drift data, but it does not specify if it's a list, summary, or details, nor how it differs from 'drift_status'. It is adequate but not particularly distinctive among sibling tools.

Usage Guidelines: 2/5

No guidance on when to use this tool versus alternatives like 'drift_status' or 'detect_*' tools. The description does not mention any preconditions or exclusions, leaving the agent to infer usage from the name alone.

drift_status (Grade: C)

Current open drift events per entity.

Parameters (JSON Schema):
- entity_id (optional): Optional — empty for all

Behavior: 2/5

With no annotations, the description must convey behavior. It indicates a read operation ('open drift events') but does not disclose if the tool is read-only, whether it requires specific permissions, or any side effects. The term 'open' is undefined.

Conciseness: 4/5

The description is a single sentence that is front-loaded with the key action and resource. It is concise without unnecessary words.

Completeness: 2/5

Given the tool's simplicity (one optional parameter, no output schema), the description is too minimal. It does not explain what 'drift events' are or the format of the response, which is needed for correct invocation.

Parameters: 3/5

The input schema already provides a clear description for the only parameter ('Optional — empty for all'). The description adds no extra meaning beyond what the schema states, so the baseline score is appropriate.

Purpose: 3/5

The description states it returns 'current open drift events per entity,' which gives a general idea but lacks specificity on what constitutes a drift event or the exact output format. It is not a tautology but does not fully distinguish from sibling tools like 'drift_history' or 'detect_*'.

Usage Guidelines: 2/5

No guidance is provided on when to use this tool versus alternatives. For example, it is unclear whether 'drift_status' should be used instead of 'drift_history' for historical queries or 'detect_*' for new detections.

full_drift_scan (Grade: A)

Run ALL 5 drift detections in one call (regulatory + evidence + cloud + control + ENTERPRISE). Enterprise dimension scans NIS2, LkSG, ISO 27001, Contracts, Tax, Healthcare via MEGA MCP oracles.

Parameters (JSON Schema):
- entity_id (optional): no description

Behavior: 2/5

No annotations are provided, so the description bears full responsibility. It only states the action (running detections) without disclosing behavioral traits like safety, permissions, side effects, or return behavior.

Conciseness: 5/5

The description is two sentences with no wasted words. The first sentence gives the main purpose, and the second adds important detail on the enterprise dimension.

Completeness: 4/5

The description adequately covers the tool's purpose and contrasts with siblings, but it omits details about the entity_id parameter's role and the output format, which are not provided elsewhere.

Parameters: 1/5

Schema description coverage is 0% for the single parameter entity_id. The description does not mention or explain the parameter, providing no additional meaning beyond the schema.

Purpose: 5/5

The description clearly states the tool runs all 5 drift detections in one call and lists the five types (regulatory, evidence, cloud, control, enterprise). This is a specific verb+resource that distinguishes it from the individual detection siblings.

Usage Guidelines: 4/5

The description implies using this tool when a comprehensive scan is needed, contrasting with the individual detection siblings. However, it lacks explicit guidance on when not to use it or specific alternatives.

health_checkCInspect

Server status.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It only states 'Server status.' and does not indicate whether the operation is read-only, destructive, or has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with no wasted words. It is highly concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should explain what the tool returns (e.g., status details). 'Server status.' is too vague for an agent to know how to interpret the result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are zero parameters and schema coverage is 100% (empty schema). Baseline is 3. The description adds no parameter information, but none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Server status.' indicates the tool checks server health but does not elaborate on what status entails or differentiate from sibling tools like 'ping'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives such as 'ping'. The description is silent on context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
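As an illustration of the guidance this review keeps asking for, here is a hedged sketch of descriptions that cross-reference the two probe tools. The wording, behavior claims, and component list are invented; only the tool names come from the server listing.

```python
# Illustrative only: descriptions that add behavior disclosure and
# "use X instead of Y" guidance. All specifics below are invented.
TOOL_DESCRIPTIONS = {
    "ping": (
        "Connectivity test. Read-only, no side effects. Returns a simple "
        "acknowledgement if the server is reachable. Use this for a fast "
        "liveness probe; use health_check when you need component detail."
    ),
    "health_check": (
        "Report server status. Read-only, no side effects. Returns overall "
        "health plus per-component detail (e.g. detectors, alert queue). "
        "Prefer ping when you only need to confirm reachability."
    ),
}

# Each description names its sibling so an agent can choose between them.
for name, desc in TOOL_DESCRIPTIONS.items():
    sibling = "health_check" if name == "ping" else "ping"
    assert sibling in desc
```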

mark_driftCInspect

Manually mark an article as DRIFT.

Parameters (JSON Schema)
reason (optional; no schema description)
article (optional; no schema description)
entity_id (optional; no schema description)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only says 'mark an article as DRIFT' without indicating side effects, permissions needed, or whether it is destructive. This is insufficient for an agent to gauge consequences.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single brief phrase: concise, but lacking substance. Its brevity earns its place, yet valuable information is missing; a score of 3 reflects minimal viable conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of three parameters and no output schema or annotations, the description is highly incomplete. It does not cover parameter semantics, return values, or operational effects, leaving significant gaps for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has three parameters (reason, article, entity_id) with 0% schema description coverage. The tool description mentions only 'article' and adds no meaning for the other parameters, leaving the agent with no guidance on what each one expects.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
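To make the gap concrete, here is a sketch of what a fully documented mark_drift input schema could look like, along with the kind of coverage computation the "0% schema description coverage" figure implies. The parameter names come from the listing above; every description string is invented.

```python
# Hypothetical mark_drift input schema. Parameter names match the server
# listing; all description text is illustrative, not the server's.
mark_drift_schema = {
    "type": "object",
    "properties": {
        "entity_id": {
            "type": "string",
            "description": "Identifier of the monitored entity.",
        },
        "article": {
            "type": "string",
            "description": "Article to flag as DRIFT.",
        },
        "reason": {
            "type": "string",
            "description": "Free-text justification for the manual flag.",
        },
    },
}

# Description coverage: fraction of parameters carrying a description.
props = mark_drift_schema["properties"]
coverage = sum("description" in p for p in props.values()) / len(props)
print(f"coverage: {coverage:.0%}")  # 100% here; the live tool scores 0%
```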

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (mark) and resource (article) and the label 'DRIFT'. However, it does not differentiate from sibling tools like 'detect_cloud_drift' or 'resolve_drift', which also deal with drift. A 4 reflects clarity but lack of sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus other drift-related tools. It does not mention prerequisites, exclusions, or typical scenarios. This makes it hard for an agent to select it appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

obligation_mapDInspect

Drift detection obligations (DORA-DFT-01..04).

Parameters (JSON Schema)

No parameters

Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits such as side effects, permissions, or idempotency. The agent cannot infer whether this is a read or write operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely short yet fails to convey the necessary information. It is under-specified rather than concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and a terse description, the tool lacks essential context. An AI agent cannot understand what this tool returns or how to integrate it into a workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters, so the schema coverage is trivially 100%. Per guidelines, zero parameters baseline is 4. The description does not add any parameter meaning, but none is needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description mentions 'drift detection obligations' and a DORA reference but lacks an action verb or clear outcome. It's ambiguous whether this tool retrieves, lists, or maps obligations, and the purpose is not distinctly separated from sibling tools like 'detect_cloud_drift'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. With many drift-related siblings, the description offers no context for selection, leaving the agent to guess.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

pingAInspect

Connectivity test.

Parameters (JSON Schema)

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior. It only says 'Connectivity test.' and does not mention whether the operation is safe, idempotent, or free of side effects, leaving behavioral traits implicit.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two words, front-loaded, no wasted text. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no parameters and no output schema, the description is adequate. It could mention the expected response (e.g., success/failure indication), but the simplicity makes it nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the schema is trivially covered. The description adds no parameter information, but none is needed; the baseline of 4 applies per guidelines for zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Connectivity test.' uses a specific verb ('test') and resource ('connectivity'), clearly distinguishing from sibling tools like health_check which imply broader checks.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as health_check. The description simply states what it does without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

resolve_driftCInspect

Mark drift event as resolved.

Parameters (JSON Schema)
drift_id (optional; no schema description)
resolved_by (optional; no schema description)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It only states 'mark as resolved' and does not mention side effects, irreversibility, authorization needs, or any other behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is concise and front-loaded, but at the cost of missing essential information. Acceptable minimal structure but insufficient content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 2 undocumented parameters, no output schema, no annotations, and only a one-line description, the tool completely fails to provide enough context for correct agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Both parameters (drift_id, resolved_by) have no schema descriptions (0% coverage), and the tool description adds no meaning. The agent has no indication of format, purpose, or required values.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states 'Mark drift event as resolved.' specifying the action (resolve) and resource (drift event). It distinguishes from sibling tools like detect_* or mark_drift.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus siblings. No mention of prerequisites, when not to use, or alternatives among the many drift-related tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
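The sibling distinctions these critiques keep circling can be read as a rough lifecycle, inferred purely from the tool names and one-line descriptions quoted in this review; the server itself documents no ordering or prerequisites.

```python
# Inferred drift-event lifecycle. The ordering is an assumption built from
# tool names and one-line descriptions; the server documents none of it.
DRIFT_LIFECYCLE = [
    ("detect_*", "automated detection surfaces a drift event"),
    ("mark_drift", "manual escalation: flag an article as DRIFT"),
    ("trigger_remediation", "self-healing: sync, escalate, collect evidence"),
    ("resolve_drift", "close the event, recording drift_id and resolved_by"),
]

for tool, role in DRIFT_LIFECYCLE:
    print(f"{tool:20} {role}")
```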

trigger_remediationCInspect

Self-healing: sync all data, run escalation, collect evidence, auto-resolve drift events.

Parameters (JSON Schema)
entity_id (optional; no schema description)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Lists high-level actions (sync, escalation, collect evidence, auto-resolve), providing some behavioral insight. However, lacks details on side effects, destructive nature, or required permissions, which is critical given no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A concise single sentence with colon-separated actions, front-loaded with 'Self-healing' for quick understanding. Bullet points could improve scanning, but it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and a single parameter, the description omits when to use the tool, what entity_id means, and the detailed outcome of remediation. It also does not sufficiently differentiate this tool from similar ones.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter, entity_id, is not described anywhere. Schema coverage is 0%, so the agent has no guidance on what value to pass or its format.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
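For illustration, a tools/call request against trigger_remediation follows the standard MCP JSON-RPC shape; since the server never documents entity_id, the string value below is purely a guess at its format.

```python
# Hypothetical MCP tools/call payload for trigger_remediation.
# "tools/call" is the standard MCP method; the entity_id value and its
# string type are assumptions, since the server documents neither.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "trigger_remediation",
        "arguments": {"entity_id": "svc-payments-prod"},  # format is a guess
    },
}
```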

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it is a self-healing tool that syncs data, runs escalation, collects evidence, and auto-resolves drift events. It distinguishes from sibling detection tools by focusing on remediation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. It implies use after drift detection but does not specify prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
