
TwinCAT Validator MCP Server

autofix_file

Automatically fix common TwinCAT XML issues in TwinCAT 3 files by applying safe corrections and formatting improvements to ensure code quality and compliance.

Instructions

Automatically fix common TwinCAT XML issues.

Args:
    file_path: Path to TwinCAT file
    create_backup: Create backup before fixing
    fixes_to_apply: List of fix IDs, or None for all
    profile: "full" (default) verbose response, "llm_strict" minimal response
    format_profile: "default" or "twincat_canonical" formatting pass
    strict_contract: If True, fail closed on generation-contract violations
    create_implicit_files: If True, auto-create missing implicit dependency files (currently interface .TcIO files for IMPLEMENTS I_* clauses)
    orchestration_hints: If True, include next_action/terminal/no_change hints and content fingerprints for loop prevention in weak agents
    intent_profile: Programming paradigm intent ("auto" by default, "procedural", or "oop"); controls which check families are used in post-fix validation
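As a sketch of how a client might assemble arguments for this tool, the following helper merges caller overrides over the documented defaults. Only the parameter names and default values come from the schema below; the helper itself, and the example file name, are illustrative assumptions rather than part of the server's API.

```python
# Hypothetical sketch: build an autofix_file argument payload with the
# documented defaults filled in. Names/defaults mirror the input schema;
# the function itself is illustrative, not part of the server.
DEFAULTS = {
    "create_backup": True,
    "fixes_to_apply": None,
    "profile": "full",
    "format_profile": "default",
    "strict_contract": False,
    "create_implicit_files": False,
    "orchestration_hints": False,
    "enforcement_mode": "strict",
    "intent_profile": "auto",
}

def build_autofix_args(file_path: str, **overrides) -> dict:
    """Merge caller overrides over the documented defaults."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    return {"file_path": file_path, **DEFAULTS, **overrides}

args = build_autofix_args("FB_Motor.TcPOU", profile="llm_strict")
```

A payload built this way would be passed as the arguments object of an MCP tools/call request.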

Input Schema

Name                   Required  Description  Default
file_path              Yes
create_backup          No
fixes_to_apply         No
profile                No                     full
format_profile         No                     default
strict_contract        No
create_implicit_files  No
orchestration_hints    No
enforcement_mode       No                     strict
intent_profile         No                     auto

Output Schema

Name    Required  Description  Default
result  Yes

Implementation Reference

  • The actual implementation of the autofix_file MCP tool, which applies fixes to a TwinCAT file based on specified profiles and enforcement modes.
    def autofix_file(
        file_path: str,
        create_backup: bool = True,
        fixes_to_apply: Optional[list[str]] = None,
        profile: str = "full",
        format_profile: str = "default",
        strict_contract: bool = False,
        create_implicit_files: bool = False,
        orchestration_hints: bool = False,
        enforcement_mode: str = DEFAULT_ENFORCEMENT_MODE,
        intent_profile: str = "auto",
    ) -> str:
        """Automatically fix common TwinCAT XML issues.
    
        Args:
            file_path: Path to TwinCAT file
            create_backup: Create backup before fixing
            fixes_to_apply: List of fix IDs, or None for all
            profile: "full" (default) verbose response, "llm_strict" minimal response
            format_profile: "default" or "twincat_canonical" formatting pass
            strict_contract: If True, fail closed on generation-contract violations
            create_implicit_files: If True, auto-create missing implicit dependency files
                (currently interface .TcIO files for IMPLEMENTS I_* clauses)
            orchestration_hints: If True, include next_action/terminal/no_change hints
                and content fingerprints for loop prevention in weak agents.
            intent_profile: Programming paradigm intent — "auto" (default), "procedural",
                or "oop".  Controls which check families are used in post-fix validation.
        """
        _t0 = time.monotonic()
        ctx = None
        try:
            mode_error = _validate_enforcement_mode(enforcement_mode, start_time=_t0)
            if mode_error:
                return mode_error
            ctx = _resolve_execution_context(file_path, enforcement_mode=enforcement_mode)
            if intent_profile not in _VALID_INTENT_PROFILES:
                return _tool_error(
                    f"Invalid intent_profile: {intent_profile}",
                    file_path=file_path,
                    start_time=_t0,
                    execution_context=ctx,
                    valid_intent_profiles=list(_VALID_INTENT_PROFILES),
                )
            profile_error = _validate_profile(profile, start_time=_t0, execution_context=ctx)
            if profile_error:
                return profile_error
            format_profile_error = _validate_format_profile(
                format_profile, start_time=_t0, execution_context=ctx
            )
            if format_profile_error:
                return format_profile_error
    
            path, error = _validate_file_path(file_path, start_time=_t0, execution_context=ctx)
            if error:
                return error
    
            file = TwinCATFile.from_path(path)
            _intent_resolved = _resolve_intent_profile(file.content, intent_profile)
            _exclude_cats = frozenset({"oop"}) if _intent_resolved == "procedural" else None
            implicit_files_created: list[str] = []
            original_content = file.content
            content_fingerprint_before = _sha256_text(original_content)
    
            pre_canon_invalid_guids = _count_invalid_guid_tokens(original_content)
    
            implicit_creation_enabled = create_implicit_files or profile == "llm_strict"
            if implicit_creation_enabled:
                implicit_files_created = _create_missing_implicit_files(file)
    
            if file.suffix == ".TcPOU":
                _canonicalize_tcpou_method_layout(file)
            elif file.suffix == ".TcIO":
                _normalize_interface_inline_methods(file)
                _canonicalize_tcio_layout(file)
            elif file.suffix == ".TcDUT":
                _canonicalize_tcdut_layout(file)
    
            if format_profile == "twincat_canonical":
                _ensure_tcplcobject_attrs(file)
                _canonicalize_getter_declarations(file)
                _canonicalize_ids(file)
                if file.suffix == ".TcPOU":
                    _rebuild_pou_lineids(file)
                _normalize_line_endings_and_trailing_ws(file)
    
            if strict_contract:
                contract_errors = _check_generation_contract(file)
                if contract_errors:
                    blockers = [
                        {
                            "check": "generation_contract",
                            "line": None,
                            "message": msg,
                            "fixable": False,
                        }
                        for msg in contract_errors
                    ]
                    if profile == "llm_strict":
                        content_changed = file.content != original_content
                        if content_changed:
                            file.save(create_backup=create_backup)
                        post_canon_invalid_guids, contract_violations = _artifact_sanity_violations(
                            file, strict_contract=True
                        )
                        invalid_guid_count = max(pre_canon_invalid_guids, post_canon_invalid_guids)
                        result = {
                            "success": True,
                            "file_path": str(file.filepath),
                            "safe_to_import": False,
                            "safe_to_compile": False,
                            "content_changed": content_changed,
                            "fixes_applied": [],
                            "blocking_count": len(blockers),
                            "blockers": blockers,
                            "contract_passed": False,
                            "contract_errors": contract_errors,
                            "requires_regeneration": True,
                            "implicit_files_created": implicit_files_created,
                            "invalid_guid_count": invalid_guid_count,
                            "contract_violations": contract_violations,
                        }
                        if orchestration_hints:
                            issue_fingerprint = _compute_issue_fingerprint(blockers)
                            no_progress_count = _update_no_progress_count(
                                str(file.filepath),
                                issue_fingerprint,
                                content_changed,
                            )
                            next_action, terminal = _derive_next_action(
                                safe_to_import=False,
                                safe_to_compile=False,
                                blockers=blockers,
                                no_change_detected=not content_changed,
                                no_progress_count=no_progress_count,
                                contract_failed=True,
                            )
                            result.update(
                                {
                                    "no_change_detected": not content_changed,
                                    "content_fingerprint_before": content_fingerprint_before,
                                    "content_fingerprint_after": _sha256_text(file.content),
                                    "issue_fingerprint": issue_fingerprint,
                                    "no_progress_count": no_progress_count,
                                    "next_action": next_action,
                                    "terminal": terminal,
                                }
                            )
                        return _with_meta(result, _t0, execution_context=ctx)
    
                    return _with_meta(
                        {
                            "success": True,
                            "file_path": str(file.filepath),
                            "content_changed": file.content != original_content,
                            "fixes_applied": [],
                            "validation_after_fix": None,
                            "contract_passed": False,
                            "contract_errors": contract_errors,
                            "requires_regeneration": True,
                            "implicit_files_created": implicit_files_created,
                            "invalid_guid_count": max(
                                pre_canon_invalid_guids,
                                _count_invalid_guid_tokens(file.content),
                            ),
                            "contract_violations": contract_errors,
                        },
                        _t0,
                        execution_context=ctx,
                    )
    
            resolved_fix_ids = fixes_to_apply
            # In canonical profile, LineIds are rebuilt deterministically after fixes.
            # Skipping the experimental lineids fixer avoids duplicate/unstable edits.
            if format_profile == "twincat_canonical":
                if resolved_fix_ids is None:
                    resolved_fix_ids = [
                        fix_id for fix_id in config.fix_capabilities.keys() if fix_id != "lineids"
                    ]
                else:
                    resolved_fix_ids = [
                        fix_id for fix_id in resolved_fix_ids if fix_id != "lineids"
                    ]
    
            fix_result = fix_engine.apply_fixes(file, fix_ids=resolved_fix_ids)
    
            if format_profile == "twincat_canonical":
                _ensure_tcplcobject_attrs(file)
                if file.suffix == ".TcPOU":
                    _canonicalize_tcpou_method_layout(file)
                elif file.suffix == ".TcIO":
                    _normalize_interface_inline_methods(file)
                    _canonicalize_tcio_layout(file)
                elif file.suffix == ".TcDUT":
                    _canonicalize_tcdut_layout(file)
                _canonicalize_getter_declarations(file)
                _canonicalize_ids(file)
                if file.suffix == ".TcPOU":
                    _rebuild_pou_lineids(file)
                _normalize_line_endings_and_trailing_ws(file)
    
            content_changed = file.content != original_content
            content_fingerprint_after = _sha256_text(file.content)
    
            backup_path = None
            if content_changed:
                backup_path = file.save(create_backup=create_backup)
    
            validation_result_all = validation_engine.validate(
                file, "all", exclude_categories=_exclude_cats
            )
    
            if profile == "llm_strict":
                validation_result_blockers = validation_engine.validate(
                    file, "critical", exclude_categories=_exclude_cats
                )
            else:
                validation_result_blockers = validation_result_all
    
            # Build policy-enforcement blockers from issues (special-cased serialisation).
            policy_blockers: list[dict] = []
            for issue in validation_result_blockers.issues:
                if issue.severity not in ERROR_SEVERITIES or issue.fix_available:
                    continue
                if str(issue.category).lower() == "policy_enforcement":
                    rule_match = re.search(r"\[rule_id:([a-z0-9_]+)\]", issue.message)
                    rule_id = (
                        rule_match.group(1)
                        if rule_match
                        else "enforce_interface_contract_integrity"
                    )
                    clean_message = re.sub(r"^\[rule_id:[a-z0-9_]+\]\s*", "", issue.message)
                    policy_blockers.append(
                        {
                            "check": "policy_enforcement",
                            "rule_id": rule_id,
                            "line": issue.line_num,
                            "message": clean_message,
                            "severity": "error",
                            "fixable": False,
                        }
                    )
    
            post_canon_invalid_guids, contract_violations = _artifact_sanity_violations(
                file, strict_contract=strict_contract
            )
            invalid_guid_count = max(pre_canon_invalid_guids, post_canon_invalid_guids)
            issue_records = _engine_issues_to_records(validation_result_all)
            sanity_blockers: list[dict] = []
            if invalid_guid_count > 0:
                sanity_blockers.append(
                    {
                        "check": "artifact_sanity",
                        "line": None,
                        "message": (
                            f"Found {invalid_guid_count} malformed GUID token(s) in Id attributes."
                        ),
                        "fixable": False,
                    }
                )
            if contract_violations:
                for violation in contract_violations:
                    sanity_blockers.append(
                        {
                            "check": "generation_contract",
                            "line": None,
                            "message": violation,
                            "fixable": False,
                        }
                    )
    
            # Derive canonical contract state (RC-1: single source of truth).
            # Pass policy_blockers + sanity_blockers as extra_blockers so that
            # the canonical derivation accounts for them without duplicating issues.
            all_extra_blockers = policy_blockers + sanity_blockers
            cs = derive_contract_state(
                validation_result_blockers.issues,
                extra_blockers=all_extra_blockers if all_extra_blockers else None,
                profile=profile,
            )
            # Override blockers list: use cs.blockers which has the canonical merged set,
            # but replace any policy-enforcement issues with the specially-formatted dicts.
            # Strategy: build blockers as cs.blockers but substitute policy_blockers.
            non_policy_issue_blockers = [
                b
                for b in cs.blockers
                if isinstance(b, dict)
                and b.get("check") != "policy_enforcement"
                and b not in sanity_blockers
            ]
            blockers = non_policy_issue_blockers + policy_blockers + sanity_blockers
            safe_to_import = cs.safe_to_import
            safe_to_compile = cs.safe_to_compile
            if sanity_blockers:
                issue_records.extend(sanity_blockers)
    
            if profile == "llm_strict":
                result = {
                    "success": True,
                    "file_path": str(file.filepath),
                    "safe_to_import": safe_to_import,
                    "safe_to_compile": safe_to_compile,
                    "content_changed": content_changed,
                    "fixes_applied": fix_result.applied_fixes,
                    "blocking_count": len(blockers),
                    "blockers": blockers,
                    "invalid_guid_count": invalid_guid_count,
                    "contract_violations": contract_violations,
                }
                if create_implicit_files:
                    result["implicit_files_created"] = implicit_files_created
                if orchestration_hints:
                    no_change_detected = not content_changed
                    issue_fingerprint = _compute_issue_fingerprint(issue_records)
                    no_progress_count = _update_no_progress_count(
                        str(file.filepath),
                        issue_fingerprint,
                        content_changed,
                    )
                    next_action, terminal = _derive_next_action(
                        safe_to_import=safe_to_import,
                        safe_to_compile=safe_to_compile,
                        blockers=blockers,
                        no_change_detected=no_change_detected,
                        no_progress_count=no_progress_count,
                        contract_failed=False,
                    )
                    result.update(
                        {
                            "no_change_detected": no_change_detected,
                            "content_fingerprint_before": content_fingerprint_before,
                            "content_fingerprint_after": content_fingerprint_after,
                            "issue_fingerprint": issue_fingerprint,
                            "no_progress_count": no_progress_count,
                            "next_action": next_action,
                            "terminal": terminal,
                        }
                    )
            else:
                validation_after_fix = {
                    "status": "passed" if validation_result_all.passed else "failed",
                    "remaining_issues": len(validation_result_all.issues),
                    "error_count": validation_result_all.errors,
                    "warning_count": validation_result_all.warnings,
                }
    
                result = {
                    "success": True,
                    "file_path": str(file.filepath),
                    "backup_created": create_backup and content_changed,
                    "backup_path": str(backup_path) if backup_path else None,
                    "content_changed": content_changed,
                    "fixes_applied": [
                        {"type": fix_id, "description": f"Applied {fix_id} fix", "count": 1}
                        for fix_id in fix_result.applied_fixes
                    ],
                    "validation_after_fix": validation_after_fix,
                    "invalid_guid_count": invalid_guid_count,
                    "contract_violations": contract_violations,
                }
                if create_implicit_files:
                    result["implicit_files_created"] = implicit_files_created
    
            return _with_meta(result, _t0, execution_context=ctx)
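The loop-prevention mechanics behind orchestration_hints in the snippet above (hash the content, hash the blocker list, and count consecutive no-progress runs) can be sketched as follows. Only the idea and some names mirror the snippet; the bodies here are assumptions, not the server's actual implementation.

```python
import hashlib
import json

def sha256_text(text: str) -> str:
    # Stable fingerprint of file content, used to detect "no change" runs.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def compute_issue_fingerprint(blockers: list[dict]) -> str:
    # Hash a canonical serialisation so identical issue sets across runs
    # yield identical fingerprints regardless of dict ordering.
    canonical = json.dumps(blockers, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Per-file state: (last issue fingerprint, consecutive no-progress count).
_NO_PROGRESS: dict[str, tuple[str, int]] = {}

def update_no_progress_count(path: str, fingerprint: str, changed: bool) -> int:
    # Increment only when the same issue set recurs with no content change;
    # any change or new issue set resets the counter.
    prev_fp, count = _NO_PROGRESS.get(path, ("", 0))
    count = count + 1 if (prev_fp == fingerprint and not changed) else 0
    _NO_PROGRESS[path] = (fingerprint, count)
    return count
```

A weak agent can then treat a rising no-progress count as a signal to stop retrying and escalate instead of looping.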
  • Registration of the fix tools in the server facade.
    from twincat_validator.mcp_tools_fix import register_fix_tools
    from twincat_validator.mcp_tools_batch import register_batch_tools
    from twincat_validator.mcp_tools_orchestration import register_orchestration_tools
    
    register_resources()
    register_validation_tools()
    register_fix_tools()
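The facade pattern shown above (a series of register_* calls that attach tool groups to a shared server) can be sketched minimally as follows. The real code presumably registers on an MCP server object with its own decorator; here a plain dict-based registry stands in, so every name in this sketch is illustrative.

```python
from typing import Callable

# Illustrative stand-in for the MCP server's tool registry.
TOOL_REGISTRY: dict[str, Callable] = {}

def tool(fn: Callable) -> Callable:
    """Register a function as a callable tool under its own name."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

def register_validation_tools() -> None:
    @tool
    def validate_file(file_path: str) -> str:
        return f"validated {file_path}"

def register_fix_tools() -> None:
    @tool
    def autofix_file(file_path: str) -> str:
        return f"fixed {file_path}"

# Mirrors the registration order in the facade above.
register_validation_tools()
register_fix_tools()
```

Grouping registrations per module keeps each tool family self-contained while the facade controls which families a given server deployment exposes.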
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses specific complex behaviors (contract violation handling, implicit file creation, orchestration hints for agents) but fails to explicitly state the fundamental safety profile: that this is a destructive file-modifying operation. Given zero annotations, this omission is significant despite the detailed parameter explanations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Appropriately structured with a clear one-line summary followed by detailed Args documentation. Length is justified given 10 parameters with zero schema descriptions. Indented Args format is readable, though slightly non-standard for MCP.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Handles high complexity reasonably well given lack of annotations and schema descriptions, but gaps remain: missing enforcement_mode parameter, no mention of how to discover valid 'fix IDs' for fixes_to_apply, and no warning about destructive side effects despite output schema existing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, the Args section compensates heavily for 9/10 parameters with detailed semantics (e.g., profile options, strict_contract behavior). However, it completely omits the 'enforcement_mode' parameter present in the schema, leaving that parameter undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('fix') and resource ('TwinCAT XML issues'). Distinguishes from validation/suggestion siblings by emphasizing automatic fixing action, though could explicitly contrast with autofix_batch for scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to choose this over siblings (autofix_batch for multiple files, suggest_fixes for preview-only) or prerequisites. The 'Automatically' prefix implies hands-off repair but doesn't state conditions like 'use when file is corrupt but parseable'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
