Glama

preview_profile_changes

Preview configuration changes before applying profiles to understand impact on virtual machines, Ansible roles, and resource requirements.

Instructions

Preview changes that would be made by applying a profile without modifying the config.

This is a dry-run tool that shows what would happen if you applied a profile, without actually modifying the configuration. Useful for understanding the impact before committing to changes.

Args:
    config: The Ludus range configuration to analyze
    profile_type: Type of profile to preview ("adversary" or "defender")
    profile_level: Level to preview (threat level for adversary, monitoring level for defender)

Returns: Dictionary containing:
    - status: "success"
    - profile_type: The profile type previewed
    - profile_level: The level previewed
    - affected_vms: List of VMs that would be modified
    - changes_summary: Summary of changes
    - ansible_roles_added: List of Ansible roles that would be added
    - estimated_impact: Estimated resource and complexity impact
    - recommendations: Recommendations based on the preview

Examples:

# Preview medium adversary profile
preview = await preview_profile_changes(
    config=my_config,
    profile_type="adversary",
    profile_level="medium"
)

# Preview advanced defender profile
preview = await preview_profile_changes(
    config=my_config,
    profile_type="defender",
    profile_level="advanced"
)
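Because the preview result is a plain dictionary, downstream code can summarize it directly. The sketch below is illustrative only: the field names follow the Returns section above, but the example values (VM names, role names, impact figures) are hypothetical.

```python
def summarize_preview(preview: dict) -> str:
    """Render a short human-readable summary of a preview result."""
    if preview.get("status") != "success":
        return "Preview failed"
    lines = [
        f"{preview['profile_type']} profile at level {preview['profile_level']}:",
        f"  VMs affected: {', '.join(preview['affected_vms']) or 'none'}",
        f"  Roles added:  {len(preview['ansible_roles_added'])}",
    ]
    return "\n".join(lines)

# Hypothetical result shaped like the documented Returns dictionary.
example = {
    "status": "success",
    "profile_type": "adversary",
    "profile_level": "medium",
    "affected_vms": ["dc01", "web01"],
    "changes_summary": "2 VMs gain attack-simulation tooling",
    "ansible_roles_added": ["example.sysmon", "example.atomic_red_team"],
    "estimated_impact": {"ram_gb": 4},
    "recommendations": [],
}
print(summarize_preview(example))
```

A helper like this is useful when the preview is surfaced to a human operator before deciding whether to call the apply_* tools.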

Input Schema

Name           Required  Description  Default
config         Yes
profile_type   Yes
profile_level  Yes
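The page renders the schema only as a table with no per-parameter descriptions. A plausible JSON Schema equivalent is sketched below; the types are assumptions, and the enum for profile_type follows the quoted values in the tool description.

```json
{
  "type": "object",
  "properties": {
    "config": { "type": "object" },
    "profile_type": { "type": "string", "enum": ["adversary", "defender"] },
    "profile_level": { "type": "string" }
  },
  "required": ["config", "profile_type", "profile_level"]
}
```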
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly states that this is a 'dry-run tool' that operates 'without actually modifying the configuration', which effectively communicates a read-only, non-destructive operation. It also describes the return structure in detail, though it doesn't mention potential errors, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It begins with a clear purpose statement, follows with usage context, then details parameters and returns in organized sections, and concludes with practical examples. Every sentence adds value without redundancy, and the information is front-loaded with the most important details first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 3-parameter tool with no annotations and no output schema, the description provides comprehensive context. It explains the tool's purpose, when to use it, what each parameter means, and details the return structure with specific fields. The examples further clarify usage. This is complete enough for an agent to understand and invoke the tool correctly despite the lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 3 parameters, the description compensates well by explaining each parameter's meaning: 'config: The Ludus range configuration to analyze', 'profile_type: Type of profile to preview ("adversary" or "defender")', and 'profile_level: Level to preview (threat level for adversary, monitoring level for defender)'. This adds significant semantic value beyond the bare schema, though it doesn't specify format constraints or validation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
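Since the schema documents no format constraints, a caller could enforce the documented values client-side before invoking the tool. The sketch below assumes only what the description states: profile_type is limited to the two quoted strings, while valid profile_level values are not enumerated, so only a non-empty check is possible.

```python
VALID_PROFILE_TYPES = {"adversary", "defender"}

def validate_preview_args(profile_type: str, profile_level: str) -> None:
    """Client-side checks derived from the tool description, not the schema."""
    if profile_type not in VALID_PROFILE_TYPES:
        raise ValueError(
            f"profile_type must be one of {sorted(VALID_PROFILE_TYPES)}, got {profile_type!r}"
        )
    if not profile_level.strip():
        raise ValueError("profile_level must be a non-empty string")

validate_preview_args("adversary", "medium")  # passes silently
```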

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('preview changes', 'shows what would happen') and resources ('profile', 'config'), and explicitly distinguishes it from actual modification tools by emphasizing that it is a 'dry-run tool' operating 'without actually modifying the configuration'. This differentiates it from sibling tools like apply_adversary_profile and apply_defender_profile.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: it states when to use this tool ('Useful for understanding the impact before committing to changes') and implicitly when not to use it (when you want to actually apply changes, use the apply_* sibling tools instead). The examples further reinforce the appropriate contexts for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tjnull/Ludus-FastMCP'
