Server Details
TLPTOracle — 17-tool TIBER-EU TLPT framework: scope, threat intel, scenarios, reports.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.7/5 across 14 of 17 tools scored. Lowest: 1.8/5.
Most tools have distinct purposes, but potential confusion exists between 'auto_simulate' and 'generate_scenarios' (both involve scenario generation) and between 'health_check' and 'ping' (both test connectivity). Descriptions help mitigate but do not eliminate ambiguity.
Tool names consistently use snake_case but mix verb-based (generate_scenarios, register_exercise) and noun-based (evidence_bundle, threat_profile) patterns. 'ping' is a single word, breaking the pattern. This inconsistency may confuse agents about whether a tool performs an action or represents a resource.
17 tools cover the breadth of TLPT activities (scoping, simulation, findings, compliance) without being overwhelming. Each tool serves a distinct part of the workflow, making the count well-scoped for domain complexity.
The tool set covers major TLPT lifecycle phases: preparation (threat_profile), scenario generation (generate_scenarios, auto_simulate), execution mapping (attack_chain, mitre_map), findings (finding_register, remediation_plan), compliance (obligation_map, sync_to_ampel), and scheduling (test_calendar). Minor gaps like a dedicated reporting tool exist but can be worked around.
Available Tools
17 tools

attack_chain (Grade: C)
Generate kill chain for a TIBER-EU scenario with MITRE mapping.
| Name | Required | Description | Default |
|---|---|---|---|
| scenario_id | Yes | TIBER-01 to TIBER-08 | |
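A minimal sketch of what an agent's call to this tool could look like, assuming the standard MCP JSON-RPC `tools/call` method; "TIBER-01" is one of the eight documented scenario IDs, and the response shape is not documented:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "attack_chain",
    "arguments": { "scenario_id": "TIBER-01" }
  }
}
```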
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden for behavioral disclosure. It only states it generates a kill chain without mentioning side effects, permissions, or response characteristics. For a generation tool, this lack of transparency is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: a single short sentence with no redundant information. However, it sacrifices some clarity by omitting the output format, making it slightly less effective than a modestly longer version would be.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With one parameter, no output schema, and no annotations, the description lacks completeness. It does not specify the return value (e.g., what a kill chain looks like), how to interpret results, or any constraints. This is insufficient for an AI agent to use the tool reliably.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% for the single parameter 'scenario_id', with the description 'TIBER-01 to TIBER-08' providing format constraints. The tool description adds context that the scenario is for TIBER-EU, but no additional semantic value beyond the schema. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Generate' and the resource 'kill chain for a TIBER-EU scenario with MITRE mapping'. It distinguishes from siblings like 'generate_scenarios' and 'mitre_map' by specifying the type of generation. However, it does not clarify the output format (e.g., diagram, list), leaving slight ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of when not to use it or references to sibling tools like 'mitre_map' or 'threat_profile'. The usage context is only implied by the name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
auto_simulate (Grade: C)
Autonomous TLPT simulation: generates TIBER-EU scenarios, maps MITRE ATT&CK, creates findings, rates detection. Art. 26.
| Name | Required | Description | Default |
|---|---|---|---|
| entity_id | Yes | | |
| scenarios | No | Number of scenarios | 3 |
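A sketch of an invocation over MCP's `tools/call` method. Because the schema leaves `entity_id` undocumented, the value "entity-042" is a placeholder assumption; omitting `scenarios` should fall back to the documented default of 3:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "auto_simulate",
    "arguments": { "entity_id": "entity-042", "scenarios": 5 }
  }
}
```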
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears the full burden. It fails to disclose whether the tool is destructive, requires specific permissions, or has side effects like creating or modifying records. The autonomous nature is mentioned but not elaborated in terms of behavioral implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently lists the tool's capabilities. It is concise and contains no fluff, though it could benefit from clearer structure, such as bullet points.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should explain return values and side effects. It omits what the tool returns, how to interpret 'Art. 26', and any prerequisites beyond entity_id. For a multi-step autonomous tool, this is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only 'scenarios' has a description). The tool description does not explain the 'entity_id' parameter, which is required and lacks documentation. It adds no clarification beyond the schema, leaving the agent uncertain about what entity_id refers to.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: autonomous TLPT simulation with specific outputs (TIBER-EU scenarios, MITRE ATT&CK mapping, findings, detection rating). It distinguishes itself from siblings like 'generate_scenarios' and 'mitre_map' by implying an end-to-end automated process.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'generate_scenarios' or 'attack_chain'. There is no mention of prerequisites, limitations, or cases where another tool would be more appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
evidence_bundle (Grade: C)
Compile TLPT evidence for authority attestation.
| Name | Required | Description | Default |
|---|---|---|---|
| exercise_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must convey behavioral traits, but only states the action; no information on side effects, authorization needs, or output behavior is given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The one-sentence description is concise but overly terse, lacking the detail needed to understand the tool effectively; its brevity is not backed by high information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of output schema and annotations, the description fails to provide adequate completeness; the agent is left unsure of the tool's full capabilities and effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter (exercise_id) with no schema description (0% coverage), and the description omits any explanation of what exercise_id represents or how to use it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states a specific verb 'Compile' and a distinct resource 'TLPT evidence for authority attestation', effectively differentiating it from siblings like attack_chain or finding_register.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, leaving the agent to infer context from the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
finding_register (Grade: C)
Log a pentest/TLPT finding.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | | |
| severity | Yes | | |
| attack_path | No | | |
| exercise_id | No | | |
| recommendation | No | | |
| mitre_technique | No | | |
| system_affected | No | | |
| detection_status | No | | |
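Since none of the eight parameters carry schema descriptions, the following `tools/call` sketch is a best guess: every value is hypothetical, and `severity` in particular may expect an enumerated vocabulary the schema does not reveal:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "finding_register",
    "arguments": {
      "title": "Domain admin compromise via phishing",
      "severity": "high",
      "mitre_technique": "T1566",
      "detection_status": "undetected"
    }
  }
}
```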
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits, but it only says 'Log a finding'—implying a create operation. It does not mention side effects, idempotency, permissions, or what happens on duplicate entries, leaving significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, which is highly concise. However, it is so minimal that it sacrifices usefulness for brevity; it could include more context without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of annotations, output schema, and any parameter documentation, the description is severely incomplete. It fails to explain important aspects like required fields beyond the schema, return values, or error handling, making it inadequate for reliable tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no meaning beyond the input schema, which itself has 0% coverage (no descriptions for any of the 8 parameters). The agent must infer parameter usage entirely from names and types, making the description unhelpful for parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Log a pentest/TLPT finding' uses a specific verb 'log' and a clear resource 'finding' with domain context 'pentest/TLPT'. This clearly distinguishes it from sibling tools like 'attack_chain' or 'evidence_bundle', making the purpose unmistakable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention when not to use it. The purpose is implied but entirely lacks context for decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
generate_scenarios (Grade: A)
List all 8 TIBER-EU scenarios available for autonomous simulation.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
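With no parameters, a call over MCP's `tools/call` method reduces to an empty arguments object:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": { "name": "generate_scenarios", "arguments": {} }
}
```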
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It correctly implies a read-only operation ('list'), but does not disclose any other behavioral traits such as data freshness, caching behavior, or rate limits. Given the simple nature, it is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that conveys all necessary information without extraneous words. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters and no output schema, the description sufficiently covers what the tool does and its scope. It could mention output format or return structure, but given the simplicity, it is reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, so schema coverage is trivially complete and the description has nothing to compensate for. The baseline of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists 'all 8 TIBER-EU scenarios available for autonomous simulation.' It specifies the exact number, context, and the verb 'List' indicates a read operation. This distinguishes it from sibling tools like attack_chain or auto_simulate which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It simply states what it does without mentioning prerequisites, exclusions, or related tools that might be more appropriate for filtering or specific scenario retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check (Grade: C)
Server status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description bears full burden for behavioral transparency. 'Server status.' does not disclose what the tool does (e.g., check connectivity, database, or latency), nor its side effects or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise at two words, but potentially too terse. While brevity is valued, a slightly more descriptive sentence (e.g., 'Returns current server health status.') would improve clarity without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description is minimally complete but fails to specify what 'server status' entails (e.g., return format, possible values, or implications). A simple tool still benefits from clarity on output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are zero parameters, so baseline is 4. The description adds nothing beyond the schema, but this is acceptable since no parameter details are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Server status.' is a noun phrase, not a verb+resource. It vaguely indicates the tool returns server status but lacks specificity about the action performed. Compared to sibling tools like 'ping' which also imply health checks, this does not clearly distinguish its purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like 'ping' or other health-related tools. The description provides no context for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mitre_map (Grade: A)
Map MITRE ATT&CK techniques to DORA TLPT. Filter by technique ID or tactic.
| Name | Required | Description | Default |
|---|---|---|---|
| tactic | No | | |
| technique | No | | |
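A sketch of a filtered call; whether the tool expects MITRE tactic names ("initial-access") or tactic IDs ("TA0001") is undocumented, so the value shown is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "mitre_map",
    "arguments": { "tactic": "initial-access" }
  }
}
```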
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose behavioral traits like read-only nature, output format, or side effects. It only states the mapping operation, which is insufficient for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. Every component serves a purpose, and it is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (2 optional params, no output schema), the description is mostly complete. It covers the mapping and filtering, but could benefit from mentioning what the mapping produces (e.g., a list of DORA TLPT mappings).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description adds that parameters are for filtering ('Filter by technique ID or tactic'), but does not detail valid values or format, providing minimal guidance beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Map') and the resources ('MITRE ATT&CK techniques to DORA TLPT'), distinguishing it from siblings like 'attack_chain' or 'obligation_map'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for mapping with optional filtering ('Filter by technique ID or tactic'), but lacks explicit guidance on when to use this tool versus alternatives, such as 'attack_chain' or 'obligation_map'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
obligation_map (Grade: D)
TLPT obligations DORA-TST-06 to TST-10.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose any behavioral traits (e.g., read-only, mutation, authentication needs). It is completely opaque.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Very short, but under-specified. While concise, it fails to convey essential meaning, so its brevity reads as incompleteness rather than efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, no annotations, and a cryptic description. The tool's purpose, behavior, and output are entirely unclear, leaving the agent without sufficient information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and schema coverage is trivially 100%. The description does not need to add parameter meaning. Baseline 4 for zero parameters applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'TLPT obligations DORA-TST-06 to TST-10' gives a reference to specific regulatory obligation IDs but lacks a verb explaining what the tool does (e.g., list, map, retrieve). It is ambiguous whether it displays, exports, or links these obligations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus siblings like 'attack_chain' or 'threat_profile'. The description does not specify context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
phase_tracker (Grade: C)
Track TIBER-EU 8-phase lifecycle.
| Name | Required | Description | Default |
|---|---|---|---|
| exercise_id | Yes | | |
| phase_status | No | | |
| update_phase | No | Phase 1-8 to update | |
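A sketch of an update call, assuming `exercise_id` is a string identifier (the value is hypothetical) and `update_phase` takes an integer 1-8 per its description; how `phase_status` should be encoded is not documented, so the string shown is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "phase_tracker",
    "arguments": {
      "exercise_id": "EX-2025-01",
      "update_phase": 3,
      "phase_status": "in_progress"
    }
  }
}
```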
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full burden. It does not disclose whether the tool is read-only, destructive, or what side effects occur. 'Track' implies non-destructive monitoring but is insufficient for a tool that likely updates phase status based on the schema's 'update_phase' parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, making it concise but too vague. It front-loads the main domain but sacrifices critical details, resulting in under-specification that forces reliance on the schema, which is also sparse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters, no output schema, and no annotations, the description is grossly incomplete. It fails to explain return values, update behavior, or how parameters interact. The agent cannot confidently use this tool based solely on the description and schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (33%, only 'update_phase' described). The description adds no meaning beyond the schema: it does not explain the role of 'exercise_id' (e.g., identifier for which exercise) or 'phase_status' (e.g., whether it's a filter or a value to set). The agent must guess parameter relationships.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Track TIBER-EU 8-phase lifecycle,' which provides a domain-specific resource and action verb. However, it is vague and does not clarify whether the tool reads or updates phase status, lacking specificity compared to sibling tools like 'attack_chain' or 'register_exercise.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'generate_scenarios' or 'remediation_plan.' The description offers no context for appropriate usage, leaving the agent to infer from the schema alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
ping (Grade: A)
Connectivity test.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden for behavioral disclosure. 'Connectivity test' implies a safe, read-only action, but it does not specify what happens on failure (e.g., timeout) or what the response looks like. Adequate for a trivial tool but could be more explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise at two words, with no wasted text. It is front-loaded and to the point, earning its place with minimal verbiage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description is largely complete. However, it lacks any indication of the return value or expected outcome, which would be beneficial for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so the input schema fully covers parameter semantics. The description does not need to add parameter information, and it appropriately avoids extraneous detail.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Connectivity test' is a terse noun phrase, but it states the tool's function unambiguously. It effectively distinguishes from sibling tools like health_check and attack_chain by focusing on basic network connectivity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description does not mention any context, prerequisites, or exclusions, leaving the agent without direction on appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_exercise (Grade: C)
Register a TLPT/pentest exercise.
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | | |
| scope | No | | |
| status | No | | |
| test_type | No | | |
| budget_eur | No | | |
| start_date | No | | |
| exercise_id | No | | |
| ti_provider | No | | |
| exercise_name | Yes | | |
| target_end_date | No | | |
| red_team_provider | No | | |
| authority_notified | No | | |
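Only `exercise_name` is required. The sketch below adds a few of the optional fields with hypothetical values; the date format, status vocabulary, and numeric type of `budget_eur` are all assumptions, since the schema documents none of them:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "register_exercise",
    "arguments": {
      "exercise_name": "TIBER-DE Pilot 2025",
      "test_type": "TLPT",
      "start_date": "2025-09-01",
      "budget_eur": 250000,
      "authority_notified": true
    }
  }
}
```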
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden for behavioral disclosure. It only states 'Register', implying a create operation, but gives no details on side effects, required permissions, or constraints. This is insufficient for an agent to understand the tool's impact.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (one sentence) but at the expense of completeness. It fails to front-load critical information such as the purpose of the tool in relation to its many parameters, making it under-specified rather than efficiently concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 12 parameters, no annotations, and no output schema, the description is severely inadequate. An agent would have no understanding of how to use the tool correctly, what is required, or what the result will be.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no information about any of the 12 parameters. Schema description coverage is 0%, and the description does not compensate by explaining key parameters like 'exercise_name', 'status', or 'test_type'. The agent has no guidance on how to fill the input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the verb 'Register' and the resource 'a TLPT/pentest exercise', which is specific. However, it does not differentiate from sibling tools like 'team_assignment' or 'test_calendar', missing a chance to clarify its unique role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, context, or when not to use it, leaving the agent without decision support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remediation_plan (Grade: C)
Track remediation of TLPT findings. Set add=true to create.
| Name | Required | Description | Default |
|---|---|---|---|
| add | No | | |
| owner | No | | |
| action | No | | |
| due_date | No | | |
| finding_id | No | | |
| exercise_id | No | | |
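Per the description, `add=true` switches the tool into create mode. The remaining values in this sketch are hypothetical, since the schema describes none of the parameters:

```json
{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "remediation_plan",
    "arguments": {
      "add": true,
      "finding_id": "F-001",
      "action": "Enforce MFA on all admin accounts",
      "owner": "IAM team",
      "due_date": "2025-12-31"
    }
  }
}
```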
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must disclose behavioral traits. It states that setting 'add=true' creates, but it does not explain what happens when 'add' is false or missing, nor does it reveal other behavioral aspects like persistence, side effects, or access requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (one sentence), which is concise, but it sacrifices necessary detail. While brevity is valued, the lack of structure and missing information reduces effectiveness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with no schema descriptions, no output schema, and no annotations, the description is far from complete. It fails to provide sufficient context for the agent to understand the tool's full behavior and parameter requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, and the description only briefly mentions the 'add' parameter. The other five parameters ('owner', 'action', 'due_date', 'finding_id', 'exercise_id') are completely undocumented, leaving the agent without semantic meaning.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description mentions 'track remediation of TLPT findings' and 'set add=true to create', indicating both tracking and creation. However, it is ambiguous whether the tool primarily tracks or creates, and the verb 'track' is vague without further context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings like 'finding_register' or 'phase_tracker'. The description does not mention conditions, alternatives, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sync_to_ampel (Grade: C)
Push TLPT results to AmpelOracle Art. 26 checks.
| Name | Required | Description | Default |
|---|---|---|---|
| entity_id | Yes | | |
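A sketch of the push call. The `entity_id` format is undocumented, so the value is a placeholder; and because 'Push' implies a write, an agent should treat the call as potentially non-idempotent until the server documents otherwise:

```json
{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "sync_to_ampel",
    "arguments": { "entity_id": "entity-042" }
  }
}
```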
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavior. It only indicates a write operation ('Push') but fails to state whether it is idempotent, whether it overwrites existing data, is reversible, or requires specific permissions. Critical transparency gaps remain.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, but it is not well structured; the phrase 'AmpelOracle Art. 26 checks' is cryptic and may confuse agents. Some additional explanation would improve clarity without sacrificing brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and no output schema, the description should explain entity_id and any side effects. It does neither, leaving the agent without enough context to correctly invoke the tool or interpret its effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has a single required parameter entity_id with 0% description coverage. The tool description never mentions or explains the parameter, leaving its purpose and expected format entirely ambiguous. This fails to add value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Push') and identifies the resource ('TLPT results') and target ('AmpelOracle Art. 26 checks'), making the action clear. It does not explicitly differentiate itself from sibling tools, though the name is distinctive enough on its own.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions. The agent has no context for deciding between sync_to_ampel and other sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
team_assignment (Grade: C)
Manage Red/Blue/White/Purple team assignments.
| Name | Required | Description | Default |
|---|---|---|---|
| red_team | No | | |
| blue_team | No | | |
| white_team | No | | |
| exercise_id | Yes | | |
| purple_team | No | | |
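A sketch assuming the four team parameters take free-text provider or team names; the values are hypothetical, since the schema gives no format:

```json
{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "team_assignment",
    "arguments": {
      "exercise_id": "EX-2025-01",
      "red_team": "RedCo GmbH",
      "white_team": "Internal control team"
    }
  }
}
```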
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description fails to disclose behavioral traits such as whether the tool creates or modifies assignments, any side effects, or permissions needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short but under-specified; it is concise at the cost of completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters and no output schema or annotations, the description is severely incomplete. It does not explain the effect of the operation, return values, or prerequisites.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, yet the description only lists team colors without explaining the meaning or expected format of any parameter (e.g., what a string value represents). The agent gets no guidance on how to fill parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses 'manage' which is vague; it lists team colors but does not specify whether the tool creates, updates, or deletes assignments. This leaves ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like register_exercise or other sibling tools. The agent is given no context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
test_calendar (Grade: C)
Annual testing schedule and compliance check.
| Name | Required | Description | Default |
|---|---|---|---|
| year | No | | |
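A sketch assuming `year` is a four-digit calendar year; omitting it presumably falls back to the current year, though the schema does not say:

```json
{
  "jsonrpc": "2.0",
  "id": 11,
  "method": "tools/call",
  "params": {
    "name": "test_calendar",
    "arguments": { "year": 2025 }
  }
}
```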
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of disclosing behavior. It implies a read operation (schedule/compliance check) but does not confirm whether it is read-only, whether it modifies data, or if it requires special permissions. No behavioral traits are disclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very short (one sentence) but lacks necessary detail, making it under-specified rather than efficiently concise. It does not front-load critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low schema richness, no annotations, and no output schema, the description is insufficient for an agent to understand the tool's full functionality. It omits return values, error cases, and behavioral details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter, 'year', with 0% description coverage. The description does not explain what the 'year' parameter represents (e.g., calendar year, fiscal year) or how it should be formatted. This provides no added meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description indicates it relates to an annual testing schedule and compliance check, but lacks a specific verb (e.g., retrieve, create, update) to clarify the action. It distinguishes from sibling tools like attack_chain or mitre_map by suggesting a calendar or compliance focus, but the purpose is still vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like health_check or register_exercise. The description does not specify context, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
threat_profile (Grade: C)
Define threat landscape for TLPT scoping.
| Name | Required | Description | Default |
|---|---|---|---|
| sector | No | | |
| geography | No | | |
| exclusions | No | | |
| exercise_id | Yes | | |
| crown_jewels | No | | |
| threat_actors | No | | |
| attack_scenarios | No | | |
| critical_systems | No | | |
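A sketch with hypothetical values. Whether list-like parameters such as `crown_jewels` and `threat_actors` expect arrays or delimited strings is not documented, so arrays are assumed here:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "tools/call",
  "params": {
    "name": "threat_profile",
    "arguments": {
      "exercise_id": "EX-2025-01",
      "sector": "banking",
      "crown_jewels": ["core payment platform", "SWIFT gateway"],
      "threat_actors": ["FIN7"]
    }
  }
}
```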
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. 'Define' implies mutation but no details on side effects, permissions, idempotency, or what happens on repeated calls. The behavior is opaque.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, which is efficient but comes at the cost of completeness. It lacks structure such as usage examples or clarifications. Not all concise descriptions are good; this one is under-specified.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 8 parameters, no output schema, and no annotations, the one-line description is grossly incomplete. The agent has no way to understand what the tool returns, how parameters interact, or how it fits with sibling tools like 'tlpt_readiness'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It does not explain any of the 8 parameters (e.g., sector, geography, crown_jewels). The parameter names provide some hint, but the description adds no semantic value beyond the names themselves.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a verb 'define' and a noun 'threat landscape for TLPT scoping', making the general purpose clear. However, it doesn't differentiate from sibling tools like 'attack_chain' or 'generate_scenarios', which could also relate to threat scoping.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, context, or exclusion of other tools, leaving the agent without decision support.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tlpt_readiness (Grade: C)
Assess organizational readiness for a TLPT exercise.
| Name | Required | Description | Default |
|---|---|---|---|
| scope_defined | No | | |
| budget_approved | No | | |
| legal_framework | No | | |
| authority_engaged | No | | |
| red_team_selected | No | | |
| white_team_formed | No | | |
| ti_provider_selected | No | | |
| crown_jewels_identified | No | | |
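All eight parameters appear to be boolean readiness flags, so a partial assessment might look like the following sketch; unset flags presumably default to false, though that is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 13,
  "method": "tools/call",
  "params": {
    "name": "tlpt_readiness",
    "arguments": {
      "scope_defined": true,
      "budget_approved": true,
      "authority_engaged": false
    }
  }
}
```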
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The verb 'assess' suggests a read-only operation, but the description does not confirm whether the tool modifies state or has side effects. With no annotations, the description should disclose behavioral traits like idempotency or authentication needs. The lack of detail leaves ambiguity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words. However, it is too brief for the complexity of the tool (8 parameters), missing additional context that could improve utility.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 8 boolean parameters, no output schema, and no annotations, the description should explain what 'readiness' involves, how results are returned, or any dependencies. The current description is insufficient for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 8 boolean parameters with 0% schema description coverage, and the tool description provides no explanations for any parameter. Parameter names like 'scope_defined' are somewhat self-explanatory, but the description adds no value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'assess' and the resource 'organizational readiness for a TLPT exercise', directly matching the tool name. It is distinguishable from sibling tools like 'health_check' by specifying 'TLPT exercise', but lacks explicit differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'health_check' or 'ping'. There is no mention of prerequisites, context, or scenarios where this tool is appropriate or not.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.