agenticcontrolio

TwinCAT Validator MCP Server

process_twincat_batch

Validate and automatically fix TwinCAT 3 XML files in batch using deterministic quality checks and IEC 61131-3 OOP standards to ensure code quality in industrial automation projects.

Instructions

Run enforced deterministic batch TwinCAT workflow.

Steps:

  1. validate_batch (pre-check)

  2. autofix_batch (strict pipeline)

  3. validate_batch (post-check)

Args:
    file_patterns: Glob patterns (e.g., ["*.TcPOU"])
    directory_path: Base directory
    create_backup: Create backup files before fixing
    validation_level: "all", "critical", or "style"
    enforcement_mode: Policy enforcement mode ("strict" or "compat")
    response_mode: "summary" (minimal, default), "compact" (no pre/post blobs), or "full" (all detail sections included)
    include_sections: In summary mode only — optional list of heavy sections to add. Supported: "blockers", "issues", "pre_validation", "autofix", "post_validation", "effective_oop_policy", "meta_detailed". Unknown names are ignored with a warning in the response. Has no effect in compact or full mode.
    include_knowledge_hints: Include recommended_check_ids from blockers (when not done).
    intent_profile: Programming paradigm intent — "auto" (default), "procedural", or "oop". Controls which check families run:
        - "procedural": OOP checks are skipped.
        - "oop": Full OOP check family is enforced.
        - "auto": Scans matched .TcPOU declarations for EXTENDS/IMPLEMENTS; resolves to "oop" if any are found, otherwise "procedural".
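The three steps compose into a fail-fast pipeline: any failing step short-circuits the run. A minimal sketch, with `validate` and `autofix` as hypothetical stand-ins for the real `validate_batch`/`autofix_batch` tools (each assumed to return a dict with a "success" flag):

```python
# Minimal sketch of the validate -> autofix -> validate pipeline.
# `validate` and `autofix` are hypothetical stand-ins for the real
# validate_batch / autofix_batch tools.
def run_pipeline(validate, autofix, patterns, directory="."):
    pre = validate(patterns, directory)
    if not pre.get("success"):
        return {"failed_step": "validate_batch_pre", "step_error": pre}
    fixed = autofix(patterns, directory)
    if not fixed.get("success"):
        return {"failed_step": "autofix_batch", "step_error": fixed}
    post = validate(patterns, directory)
    if not post.get("success"):
        return {"failed_step": "validate_batch_post", "step_error": post}
    return {"success": True, "post_validation": post}
```

The post-check matters because autofix rewrites files; only a fresh validation pass proves the fixed files actually conform.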

Input Schema

Name                     Required  Default
file_patterns            Yes       (none)
directory_path           No        "."
create_backup            No        false
validation_level         No        "all"
enforcement_mode         No        "strict"
response_mode            No        "summary"
include_sections         No        (none)
include_knowledge_hints  No        false
intent_profile           No        "auto"
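A representative call payload for this schema; values are illustrative, and only `file_patterns` is required (everything else falls back to its default):

```python
# Illustrative arguments for a process_twincat_batch call; only
# file_patterns is required, the rest use the documented defaults.
arguments = {
    "file_patterns": ["POUs/**/*.TcPOU"],
    "directory_path": ".",
    "create_backup": True,
    "validation_level": "critical",
    "response_mode": "summary",
    "include_sections": ["blockers"],
    "intent_profile": "auto",
}
```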

Output Schema

Name    Required
result  Yes
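The `result` value is a single JSON object. An illustrative summary-mode shape, with field names taken from the implementation reference and values invented for the example:

```python
# Illustrative summary-mode `result` payload; field names follow the
# handler implementation, values are made up.
result = {
    "success": True,
    "workflow": "batch_strict_pipeline",
    "response_mode": "summary",
    "intent_profile_requested": "auto",
    "intent_profile_resolved": "oop",
    "check_categories_executed": ["core", "oop"],
    "safe_to_import": True,
    "safe_to_compile": True,
    "done": True,
    "status": "done",
    "blocking_count": 0,
    "blockers": [],
}
```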

Implementation Reference

  • The implementation of the process_twincat_batch tool handler.
    # Excerpt from the server module: helpers such as _tool_error, _with_meta,
    # _resolve_execution_context, _shape_batch_response and the module-level
    # `config` / DEFAULT_ENFORCEMENT_MODE are defined elsewhere in that module.
    import json
    import time
    from pathlib import Path

    async def process_twincat_batch(
        file_patterns: list[str],
        directory_path: str = ".",
        create_backup: bool = False,
        validation_level: str = "all",
        enforcement_mode: str = DEFAULT_ENFORCEMENT_MODE,
        response_mode: str = "summary",
        include_sections: list[str] | None = None,
        include_knowledge_hints: bool = False,
        intent_profile: str = "auto",
    ) -> str:
        """Run enforced deterministic batch TwinCAT workflow.
    
        Steps:
        1. validate_batch (pre-check)
        2. autofix_batch (strict pipeline)
        3. validate_batch (post-check)
    
        Args:
            file_patterns: Glob patterns (e.g., ["*.TcPOU"])
            directory_path: Base directory
            create_backup: Create backup files before fixing
            validation_level: "all", "critical", or "style"
            enforcement_mode: Policy enforcement mode ("strict" or "compat")
            response_mode: "summary" (minimal, default), "compact" (no pre/post blobs),
                or "full" (all detail sections included).
            include_sections: In summary mode only — optional list of heavy sections to add.
                Supported: "blockers", "issues", "pre_validation", "autofix", "post_validation",
                "effective_oop_policy", "meta_detailed". Unknown names are ignored with a warning
                in the response. Has no effect in compact or full mode.
            include_knowledge_hints: Include recommended_check_ids from blockers (when not done).
            intent_profile: Programming paradigm intent — "auto" (default), "procedural",
                or "oop".  Controls which check families run:
                - "procedural": OOP checks are skipped.
                - "oop": Full OOP check family is enforced.
                - "auto": Scans matched .TcPOU declarations for EXTENDS/IMPLEMENTS; resolves
                  to "oop" if any are found, otherwise "procedural".
        """
        _t0 = time.monotonic()
        ctx = None
        try:
            mode_error = _validate_enforcement_mode(enforcement_mode, start_time=_t0)
            if mode_error:
                return mode_error
            ctx = _resolve_execution_context(directory_path, enforcement_mode=enforcement_mode)
            from glob import glob as _glob
    
            from twincat_validator.server import autofix_batch, validate_batch
    
            if validation_level not in ["all", "critical", "style"]:
                return _tool_error(
                    f"Invalid validation_level: {validation_level}",
                    start_time=_t0,
                    execution_context=ctx,
                    valid_levels=["all", "critical", "style"],
                )
            if response_mode not in ["full", "compact", "summary"]:
                return _tool_error(
                    f"Invalid response_mode: {response_mode}",
                    start_time=_t0,
                    execution_context=ctx,
                    valid_response_modes=["full", "compact", "summary"],
                )
            if intent_profile not in _VALID_INTENT_PROFILES:
                return _tool_error(
                    f"Invalid intent_profile: {intent_profile}",
                    start_time=_t0,
                    execution_context=ctx,
                    valid_intent_profiles=list(_VALID_INTENT_PROFILES),
                )
    
            # Resolve intent by scanning matched files so "auto" detects OOP content.
            _base_path = Path(directory_path)
            _all_files: set[Path] = set()
            for _pattern in file_patterns:
                _matches = _glob(str(_base_path / _pattern), recursive=True)
                _all_files.update(Path(f) for f in _matches)
            _tc_files = [f for f in _all_files if f.suffix in config.supported_extensions]
            intent_profile_resolved = _batch_auto_resolve_intent(_tc_files, intent_profile)
            check_categories_executed = (
                ["core", "oop"] if intent_profile_resolved == "oop" else ["core"]
            )
            pre_validation = json.loads(
                await validate_batch(
                    file_patterns=file_patterns,
                    directory_path=directory_path,
                    validation_level=validation_level,
                    enforcement_mode=enforcement_mode,
                    intent_profile=intent_profile_resolved,
                )
            )
            if not pre_validation.get("success", False):
                return _with_meta(
                    {
                        "success": False,
                        "workflow": "batch_strict_pipeline",
                        "failed_step": "validate_batch_pre",
                        "step_error": pre_validation,
                        "done": False,
                        "terminal_mode": False,
                        "next_action": "inspect_error",
                    },
                    _t0,
                    execution_context=ctx,
                )
    
            autofix_result = json.loads(
                await autofix_batch(
                    file_patterns=file_patterns,
                    directory_path=directory_path,
                    create_backup=create_backup,
                    profile="llm_strict",
                    format_profile="twincat_canonical",
                    strict_contract=True,
                    create_implicit_files=True,
                    orchestration_hints=True,
                    enforcement_mode=enforcement_mode,
                    intent_profile=intent_profile_resolved,
                )
            )
            if not autofix_result.get("success", False):
                return _with_meta(
                    {
                        "success": False,
                        "workflow": "batch_strict_pipeline",
                        "failed_step": "autofix_batch",
                        "step_error": autofix_result,
                        "done": False,
                        "terminal_mode": False,
                        "next_action": "inspect_error",
                    },
                    _t0,
                    execution_context=ctx,
                )
    
            post_validation = json.loads(
                await validate_batch(
                    file_patterns=file_patterns,
                    directory_path=directory_path,
                    validation_level=validation_level,
                    enforcement_mode=enforcement_mode,
                    intent_profile=intent_profile_resolved,
                )
            )
            if not post_validation.get("success", False):
                return _with_meta(
                    {
                        "success": False,
                        "workflow": "batch_strict_pipeline",
                        "failed_step": "validate_batch_post",
                        "step_error": post_validation,
                        "done": False,
                        "terminal_mode": False,
                        "next_action": "inspect_error",
                    },
                    _t0,
                    execution_context=ctx,
                )
    
            workflow_compliance_warnings = _collect_intent_mismatch_warnings(
                intent_profile_resolved,
                steps=[
                    ("validate_batch_pre", pre_validation),
                    ("autofix_batch", autofix_result),
                    ("validate_batch_post", post_validation),
                ],
            )
    
            batch_summary = post_validation.get("batch_summary", {})
            file_summaries = _build_batch_file_summaries(post_validation, autofix_result)
            safe_to_import = (
                all(item["safe_to_import"] for item in file_summaries) if file_summaries else False
            )
            safe_to_compile = (
                all(item["safe_to_compile"] for item in file_summaries) if file_summaries else False
            )
            done = batch_summary.get("failed", 0) == 0 and safe_to_import and safe_to_compile
            blockers = _aggregate_blockers_from_files(file_summaries)
            result = {
                "success": True,
                "workflow": "batch_strict_pipeline",
                "tools_used": ["validate_batch", "autofix_batch", "validate_batch"],
                "file_patterns": file_patterns,
                "directory_path": directory_path,
                "response_mode": response_mode,
                "intent_profile_requested": intent_profile,
                "intent_profile_resolved": intent_profile_resolved,
                "check_categories_executed": check_categories_executed,
                "workflow_compliance_warnings": workflow_compliance_warnings,
                "batch_summary": batch_summary,
                "safe_to_import": safe_to_import,
                "safe_to_compile": safe_to_compile,
                "files": file_summaries,
                "done": done,
                "status": "done" if done else "blocked",
                "blocking_count": len(blockers),
                "blockers": blockers,
                "effective_oop_policy": {
                    "policy_source": ctx.policy_source,
                    "policy": ctx.effective_oop_policy,
                },
            }
            if response_mode == "full":
                result["pre_validation"] = pre_validation
                result["autofix"] = autofix_result
                result["post_validation"] = post_validation
            if done:
                result["terminal_mode"] = True
                result["next_action"] = "done_no_further_autofix"
                result["allow_followup_autofix_without_user_request"] = False
            else:
                result["terminal_mode"] = False
                result["next_action"] = "manual_intervention_or_targeted_fix"
    
                if include_knowledge_hints:
                    result["recommended_check_ids"] = sorted(
                        set(b["check_id"] for b in blockers if b.get("check_id"))
                    )
    
            _assert_orchestration_contract(result, is_batch=True)
    
            # Apply response shaping (summary mode projects to minimal payload).
            shaped, unknown_sections = _shape_batch_response(
                result, response_mode, include_sections
            )
            if unknown_sections:
                shaped["unknown_include_sections"] = unknown_sections
            return _with_meta(shaped, _t0, execution_context=ctx)
        except Exception as e:
            error_kwargs = {"execution_context": ctx}
            if ctx is None:
                error_kwargs.update(unresolved_policy_fields(enforcement_mode))
            return _tool_error(str(e), start_time=_t0, **error_kwargs)
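The `_batch_auto_resolve_intent` helper is not shown above. A plausible sketch, assuming it simply scans the matched file contents for EXTENDS/IMPLEMENTS as the docstring describes (the real helper may parse the XML declarations more carefully):

```python
from pathlib import Path

# Hypothetical reconstruction of _batch_auto_resolve_intent: honour an
# explicit profile, otherwise flip to "oop" on the first file whose text
# contains an EXTENDS or IMPLEMENTS keyword.
def resolve_intent(files: list[Path], requested: str) -> str:
    if requested != "auto":
        return requested
    for f in files:
        try:
            text = f.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than fail the batch
        if "EXTENDS" in text or "IMPLEMENTS" in text:
            return "oop"
    return "procedural"
```

With no matched files, "auto" conservatively resolves to "procedural", which matches the documented fallback.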
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the 3-step pipeline, the file-modification nature (implied by 'autofix' and 'backup' parameters), and complex behavioral traits like `intent_profile` auto-detection. It lacks explicit safety warnings about mutation, though the backup option hints at risk.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with a front-loaded summary, clear step enumeration, and organized Args section. It is appropriately sized given the 9 undocumented parameters, though dense. The 'include_sections' parameter description is particularly verbose but necessary given the complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (9 params, workflow logic) and poor schema coverage, the description is comprehensive. It covers all parameters and explains the OOP vs Procedural logic. Since an output schema exists, omitting return value descriptions is acceptable. Minor gap: no mention of error handling behavior if autofix fails.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage (titles only). The Args section fully compensates by providing detailed semantics for all 9 parameters, including enum values ('all'/'critical'/'style'), examples (['*.TcPOU']), and behavioral context (response modes, intent profile logic). This is exemplary compensation for schema deficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Run') and resource ('TwinCAT workflow'), and the enumerated steps (validate/autofix/validate) clearly distinguish this composite tool from siblings like `validate_batch` or `autofix_batch` alone. However, it assumes domain knowledge of 'deterministic' without referencing the sibling `verify_determinism_batch`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The three-step workflow implies this is for comprehensive batch processing, but there is no explicit guidance on when to choose this over `process_twincat_single` or standalone `validate_batch`. The description explains 'what' it does but not 'when' to prefer it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
