run_contam_simulation

Run CONTAM airflow and contaminant transport simulations to validate building models and collect generated output files.

Instructions

Use this when you want to validate or run a CONTAM .prj model with ContamX and collect the generated files.

Input Schema

| Name             | Required | Description | Default |
|------------------|----------|-------------|---------|
| projectPath      | Yes      |             |         |
| workingDirectory | No       |             |         |
| timeoutSeconds   | No       |             |         |
| testInputOnly    | No       |             |         |
| bridgeAddress    | No       |             |         |
| windFromBridge   | No       |             |         |
| volumeFlowBridge | No       |             |         |

Implementation Reference

  • The 'run_contam_simulation' tool is implemented in 'server.js'. It resolves the 'contamx' executable, prepares command-line arguments, runs the process, and diffs before/after snapshots of the project directory to identify generated artifacts.
    server.tool(
      "run_contam_simulation",
      "Use this when you want to validate or run a CONTAM .prj model with ContamX and collect the generated files.",
      {
        projectPath: z.string(),
        workingDirectory: z.string().optional(),
        timeoutSeconds: z.number().int().min(1).max(3600).optional(),
        testInputOnly: z.boolean().optional(),
        bridgeAddress: z.string().optional(),
        windFromBridge: z.boolean().optional(),
        volumeFlowBridge: z.boolean().optional()
      },
      async ({
        projectPath,
        workingDirectory,
        timeoutSeconds,
        testInputOnly,
        bridgeAddress,
        windFromBridge,
        volumeFlowBridge
      }) => {
        const executablePath = await resolveExecutable("contamx");
        const resolvedProjectPath = asAbsolutePath(projectPath);

        if (!(await fileExists(resolvedProjectPath))) {
          throw new Error(`Project file not found: ${resolvedProjectPath}`);
        }

        const projectDirectory = path.dirname(resolvedProjectPath);
        const resolvedWorkingDirectory = asAbsolutePath(workingDirectory ?? projectDirectory);
        const args = [resolvedProjectPath];

        // Map the optional inputs onto ContamX command-line flags.
        if (testInputOnly) {
          args.push("-t");
        }
        if (bridgeAddress) {
          args.push("-b", bridgeAddress);
        }
        if (windFromBridge) {
          args.push("-w");
        }
        if (volumeFlowBridge) {
          args.push("-f");
        }

        // Snapshot the project directory before and after the run so the
        // diff reveals files ContamX generated.
        const before = await snapshotDirectory(projectDirectory);
        const result = await runProcess(executablePath, args, {
          cwd: resolvedWorkingDirectory,
          timeoutSeconds: timeoutSeconds ?? DEFAULT_TIMEOUT_SECONDS
        });
        const after = await snapshotDirectory(projectDirectory);

        return toolResponse(
          result.ok ? "ContamX completed successfully." : "ContamX finished with errors or a non-zero exit code.",
          {
            executablePath,
            projectPath: resolvedProjectPath,
            workingDirectory: resolvedWorkingDirectory,
            args,
            ...result,
            fileChanges: diffSnapshots(before, after),
            artifacts: await collectProjectArtifacts(resolvedProjectPath)
          }
        );
      }
    );
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses that files are generated/collected and mentions the external ContamX engine. However, it omits critical behavioral details: timeout behavior, whether file collection is destructive (overwrites), bridge integration semantics, and error handling for failed simulations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of 18 words with no redundancy. However, it uses the indirect phrasing 'Use this when you want to' instead of the more direct imperative style ('Validates or runs...'), which slightly weakens the front-loading.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a complex simulation tool with 7 parameters (including bridge integration and timeout controls), no output schema, and no annotations, the description is insufficient. It lacks explanations for the majority of parameters and doesn't describe the return value or file collection mechanism despite the complexity of the domain.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage across 7 parameters. The description only implicitly hints at 'testInputOnly' through the word 'validate', but provides no semantics for 'timeoutSeconds', 'workingDirectory', or the three bridge-related parameters ('bridgeAddress', 'windFromBridge', 'volumeFlowBridge'). It fails to compensate for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the action ('validate or run'), the specific resource ('CONTAM .prj model'), the engine used ('ContamX'), and the side effect ('collect the generated files'). However, it doesn't explicitly distinguish this execution tool from siblings like diagnose_contam_project or inspect_contam_project.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'Use this when you want to' provides implied context, and it distinguishes between 'validate' and 'run' modes. However, it lacks explicit guidance on when to choose validation over full simulation, and doesn't mention alternatives like diagnose_contam_project for error checking without execution.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
