FlexSim MCP Server

by SethGame

flexsim_evaluate

Execute FlexScript code to analyze and manipulate FlexSim simulation models, enabling digital twin analysis, parameter studies, and real-time model control.

Instructions

Execute FlexScript code.

Args:
    script: FlexScript code to evaluate

Examples:
    script='Model.find("Queue1").subnodes.length'  # Count items in Queue1
    script='getmodeltime()'  # Get simulation time
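
For orientation, here is a minimal sketch of invoking this tool from an MCP client, assuming the official mcp Python SDK over stdio; the launch command shown is hypothetical and depends on how the server is installed:

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Hypothetical launch command; substitute the real entry point
        # for your FlexSim MCP server installation.
        server = StdioServerParameters(command="python", args=["-m", "mcp_flexsim"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Note the nested argument shape: a single "params" object
                # wrapping the FlexScript source string.
                result = await session.call_tool(
                    "flexsim_evaluate",
                    arguments={"params": {"script": "getmodeltime()"}},
                )
                print(result.content)

    asyncio.run(main())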

Input Schema

Name     Required   Description   Default
params   Yes
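
The table rendering above collapses the nested model. Based on the EvaluateScriptInput model in the Implementation Reference below, the expanded shape of params is roughly this sketch (not the server's literal schema output):

    # Approximate JSON Schema for the "params" object, inferred from
    # EvaluateScriptInput; the length constraints come from the Pydantic model.
    PARAMS_SCHEMA = {
        "type": "object",
        "properties": {
            "script": {"type": "string", "minLength": 1, "maxLength": 10000},
        },
        "required": ["script"],
    }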

Output Schema

Name     Required   Description   Default
result   Yes
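
The handler in the Implementation Reference returns a plain string, so result carries either a "Result: ..." or "Script error: ..." message. Illustrative shapes, inferred from the handler's return statements rather than captured server output:

    # Success and failure forms of the result string.
    success = {"result": "Result: 120.0"}
    failure = {"result": "Script error: FlexScript syntax error: ..."}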

Implementation Reference

  • Main handler function for the flexsim_evaluate tool. The @mcp.tool() decorator both defines and registers it with the server. It executes FlexScript code via the FlexSim controller's evaluate method and returns the result, with error handling.
    @mcp.tool()
    async def flexsim_evaluate(params: EvaluateScriptInput) -> str:
        """Execute FlexScript code.
    
        Args:
            script: FlexScript code to evaluate
    
        Examples:
        script='Model.find("Queue1").subnodes.length'  # Count items in Queue1
            script='getmodeltime()'  # Get simulation time
        """
        try:
            controller = await get_controller()
            result = controller.evaluate(params.script)
    
            return f"Result: {result}"
        except Exception as e:
            return f"Script error: {format_error(e)}"
  • Input schema for the flexsim_evaluate tool using Pydantic BaseModel. Defines the script parameter with validation (required, min_length=1, max_length=10000).
    class EvaluateScriptInput(BaseModel):
        """Input for evaluating FlexScript."""
        script: str = Field(..., min_length=1, max_length=10000)
  • Helper function that formats exceptions into user-friendly error messages. Used by flexsim_evaluate to provide clear error feedback for script execution failures.
    def format_error(e: Exception) -> str:
        """Format exception as user-friendly error message."""
        msg = str(e)
        if "not found" in msg.lower():
            return f"Not found: {msg}"
        elif "syntax" in msg.lower():
            return f"FlexScript syntax error: {msg}"
        elif "license" in msg.lower():
            return f"License error: {msg}"
        elif "permission" in msg.lower():
            return f"Permission denied: {msg}"
        return f"Error: {msg}"
  • Helper function that manages the FlexSim controller singleton instance. Used by flexsim_evaluate to get or create the controller for executing FlexScript commands. The module-level state it relies on is sketched after this list.
    async def get_controller():
        """Get or create the FlexSim controller instance."""
        global _controller
    
        async with _controller_lock:
            if _controller is None:
                _controller = await launch_flexsim()
            return _controller
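
The get_controller helper references module-level state that these excerpts omit. A minimal sketch of what that state presumably looks like, assuming a standard asyncio lock:

    import asyncio

    # Module-level singleton state assumed by get_controller(). The real
    # module presumably defines these alongside launch_flexsim(), which
    # starts a FlexSim process and returns a controller object.
    _controller = None
    _controller_lock = asyncio.Lock()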

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool executes code, implying it's a mutation operation that could affect the simulation state, but doesn't specify permissions needed, side effects (e.g., whether it modifies model data), error handling, or performance implications. The examples hint at read-only queries, but the tool's name 'evaluate' suggests broader execution capabilities without clarifying limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by structured sections for Args and Examples. Every sentence earns its place by clarifying parameters or demonstrating usage. It could be slightly more concise by integrating the examples into the Args section, but overall it's efficient and well-organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (executing arbitrary code in a simulation environment) and the presence of an output schema (which likely documents return values), the description is moderately complete. It covers the basic purpose and parameter usage but lacks critical context: no annotations mean safety/behavior traits are undocumented, and it doesn't explain how this tool relates to its siblings or what happens on execution failure. The examples help but don't fully address the tool's scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains that the 'script' parameter is 'FlexScript code to evaluate' and provides two concrete examples showing syntax and common use cases (querying queue content and simulation time). This compensates well for the schema's lack of documentation, though it doesn't detail constraints like the 1-10000 character length from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Execute FlexScript code.' This is a specific verb ('Execute') + resource ('FlexScript code'), making it immediately understandable. However, it doesn't explicitly differentiate from sibling tools like flexsim_compile (which might compile rather than execute) or flexsim_get_node_value (which might use FlexScript internally), so it falls short of a perfect 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to prefer flexsim_evaluate over other tools (e.g., flexsim_get_node_value for specific values, flexsim_run for simulation execution, or flexsim_compile for code compilation). The examples show usage but don't provide contextual decision-making criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
