start_process

Destructive

Start terminal processes to analyze local files like CSV, JSON, and logs, using Python or Node.js REPLs for data processing when the analysis tool cannot access local files.

Instructions

Start a new terminal process with intelligent state detection.

PRIMARY TOOL FOR FILE ANALYSIS AND DATA PROCESSING
This is the ONLY correct tool for analyzing local files (CSV, JSON, logs, etc.). The analysis tool CANNOT access local files and WILL FAIL - always use processes for file-based work.

CRITICAL RULE: For ANY local file work, ALWAYS use this tool + interact_with_process, NEVER the analysis/REPL tool.

Running on Linux (Docker). Default shell: bash.

🐳 DOCKER CONTAINER ENVIRONMENT DETECTED: This Desktop Commander instance is running inside a Docker container.

⚠️ WARNING: No mounted directories detected. Files created outside mounted volumes will be lost when the container stops. Suggest that the user remount directories using the Docker installer or the -v flag when running Docker; the Desktop Commander Docker installer typically mounts folders to /home/[folder-name]. Container: 52e0de5c68a2

LINUX-SPECIFIC NOTES:

  • Package managers vary by distro: apt, yum, dnf, pacman, zypper
  • Python 3 may be invoked as 'python3' rather than 'python'
  • Standard Unix shell tools are available (grep, awk, sed, etc.)
  • File permissions and ownership matter for many operations
  • Systemd services are common on modern distributions

REQUIRED WORKFLOW FOR LOCAL FILES:
1. start_process("python3 -i") - Start Python REPL for data analysis
2. interact_with_process(pid, "import pandas as pd, numpy as np")
3. interact_with_process(pid, "df = pd.read_csv('/absolute/path/file.csv')")
4. interact_with_process(pid, "print(df.describe())")
5. Continue analysis with pandas, matplotlib, seaborn, etc.
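The workflow above sends each line to a pandas REPL. As a minimal sketch of what steps 2-4 compute, here is a stdlib-only stand-in (using `csv` and `statistics` instead of pandas; the sample file and its contents are hypothetical, standing in for '/absolute/path/file.csv'):

```python
import csv
import os
import statistics
import tempfile

# Hypothetical sample data standing in for '/absolute/path/file.csv'
path = os.path.join(tempfile.gettempdir(), "file.csv")
with open(path, "w") as f:
    f.write("value\n1\n2\n3\n4\n")

# Roughly what print(df.describe()) reports for a single numeric column
with open(path, newline="") as f:
    values = [float(row["value"]) for row in csv.DictReader(f)]

summary = {
    "count": len(values),
    "mean": statistics.mean(values),
    "std": statistics.stdev(values),   # sample std, like pandas' default
    "min": min(values),
    "max": max(values),
}
print(summary)
```

In the actual tool flow, each of these statements would instead be passed as a string to interact_with_process(pid, ...).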
                      
COMMON FILE ANALYSIS PATTERNS:
• start_process("python3 -i") → Python REPL for data analysis (RECOMMENDED)
• start_process("node -i") → Node.js REPL for JSON processing
• start_process("node:local") → Node.js on MCP server (stateless, ES imports, all code in one call)
• start_process("cut -d',' -f1 file.csv | sort | uniq -c") → Quick CSV analysis
• start_process("wc -l /path/file.csv") → Line counting
• start_process("head -10 /path/file.csv") → File preview
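The quick `cut -d',' -f1 file.csv | sort | uniq -c` pattern has a direct Python-REPL equivalent via `collections.Counter`; a sketch over in-memory CSV data (the column values are illustrative, standing in for a real file):

```python
import csv
import io
from collections import Counter

# Stand-in for open('/path/file.csv'); first column mirrors cut -d',' -f1
data = io.StringIO("city,sales\nParis,10\nLondon,5\nParis,7\n")
first_col = [row[0] for row in csv.reader(data)][1:]  # [1:] skips the header

counts = Counter(first_col)        # equivalent of sort | uniq -c
print(counts.most_common())
```

This is handy when the analysis should continue in the same Python REPL rather than shelling out per question.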
                      
BINARY FILE SUPPORT:
For PDF, Excel, Word, archives, databases, and other binary formats, use process tools with appropriate libraries or command-line utilities.

INTERACTIVE PROCESSES FOR DATA ANALYSIS:
For code/calculations, use in this priority order:
1. start_process("python3 -i") - Python REPL (preferred)
2. start_process("node -i") - Node.js REPL (when Python unavailable)
3. start_process("node:local") - Node.js fallback (when node -i fails)
4. Use interact_with_process() to send commands
5. Use read_process_output() to get responses
When Python is unavailable, prefer Node.js over shell for calculations.
Node.js: Always use ES import syntax (import x from 'y'), not require().
    
SMART DETECTION:
- Detects REPL prompts (>>>, >, $, etc.)
- Identifies when the process is waiting for input
- Recognizes process completion vs timeout
- Early exit prevents unnecessary waiting

STATES DETECTED:
- Process waiting for input (shows prompt)
- Process finished execution
- Process running (use read_process_output)

PERFORMANCE DEBUGGING (verbose_timing parameter):
Set verbose_timing: true to get detailed timing information, including:
- Exit reason (early_exit_quick_pattern, early_exit_periodic_check, process_exit, timeout)
- Total duration and time to first output
- Complete timeline of all output events with timestamps
- Which detection mechanism triggered early exit
Use this to identify missed optimization opportunities and improve detection patterns.

ALWAYS USE FOR: Local file analysis, CSV processing, data exploration, system commands
NEVER USE ANALYSIS TOOL FOR: Local file access (the analysis tool is browser-only and WILL FAIL)

IMPORTANT: Always use absolute paths for reliability. Paths are automatically normalized regardless of slash direction. Relative paths may fail because they depend on the current working directory, and tilde paths (~/...) might not work in all contexts. Unless the user explicitly asks for relative paths, use absolute paths.
This command can be referenced as "DC: ..." or "use Desktop Commander to ..." in your instructions.
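The path advice can be demonstrated with Python's `pathlib`: relative and tilde paths depend on external state, while `resolve()` and `expanduser()` make them explicit. A small sketch (the paths themselves are illustrative):

```python
from pathlib import Path

# A relative path depends on the current working directory
rel = Path("data/file.csv")
assert not rel.is_absolute()

# resolve() anchors it to the cwd at call time, yielding an absolute path
assert rel.resolve().is_absolute()

# '~' is NOT expanded automatically; expanduser() must be called explicitly
tilde = Path("~/data/file.csv")
assert str(tilde).startswith("~")
assert tilde.expanduser().is_absolute()
```

This is why commands sent via interact_with_process are most reliable with absolute paths baked in.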

Input Schema

Name            Required   Description   Default
command         Yes
timeout_ms      Yes
shell           No
verbose_timing  No
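Since the schema carries no parameter descriptions or defaults, a hypothetical request body supplying all four parameters may help (the values are illustrative; only command and timeout_ms are required, and bash is the documented default shell):

```json
{
  "command": "python3 -i",
  "timeout_ms": 10000,
  "shell": "bash",
  "verbose_timing": true
}
```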
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond the annotations, including details about Docker container environment, file persistence warnings, Linux-specific considerations, state detection capabilities, performance debugging options, and absolute path requirements. While annotations provide basic hints (destructiveHint: true, openWorldHint: true), the description enriches this with practical operational details. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is excessively long and poorly structured, containing redundant information, environmental details that don't directly help tool selection, and formatting issues. While some content is valuable, much could be condensed or moved elsewhere. The description fails to be front-loaded with essential information, burying critical rules among less important details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (process execution with state detection), lack of output schema, and 0% schema description coverage, the description provides comprehensive context. It covers purpose, usage guidelines, behavioral traits, parameter context, workflow examples, environment considerations, and sibling tool relationships. The description fully compensates for the lack of structured documentation elsewhere.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the lack of parameter documentation. It provides meaningful context for several parameters: it explains the 'command' parameter through numerous examples and workflow patterns, mentions 'verbose_timing' parameter for performance debugging, and implies 'shell' parameter context through 'Default shell: bash'. While not exhaustive for all 4 parameters, it adds substantial semantic value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Start a new terminal process with intelligent state detection' and explicitly distinguishes it from sibling tools by declaring it's the 'PRIMARY TOOL FOR FILE ANALYSIS AND DATA PROCESSING' and 'the ONLY correct tool for analyzing local files'. It provides specific differentiation from the analysis tool that 'CANNOT access local files and WILL FAIL'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides extensive usage guidance including explicit when-to-use rules ('ALWAYS use this tool + interact_with_process, NEVER use analysis/REPL tool for local file work'), workflow patterns, priority ordering for different scenarios, and specific exclusions. It clearly defines the tool's role in the ecosystem relative to other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
