Glama
wonderwhy-er

Claude Desktop Commander MCP

start_process

Destructive

Start a new terminal process for local file analysis and command execution. Uses intelligent state detection to manage REPLs, scripts, and system commands. Ideal for CSV, JSON, log processing, and system operations.

Instructions

                    Start a new terminal process with intelligent state detection.
                    
                    PRIMARY TOOL FOR FILE ANALYSIS AND DATA PROCESSING
                    This is the ONLY correct tool for analyzing local files (CSV, JSON, logs, etc.).
                    The analysis tool CANNOT access local files and WILL FAIL - always use processes for file-based work.
                    
                    CRITICAL RULE: For ANY local file work, ALWAYS use this tool + interact_with_process, NEVER use analysis/REPL tool.
                    
                    Running on Linux (Docker). Default shell: bash.

🐳 DOCKER CONTAINER ENVIRONMENT DETECTED: This Desktop Commander instance is running inside a Docker container.

⚠️ WARNING: No mounted directories detected. Files created outside mounted volumes will be lost when the container stops. Suggest the user remount directories with the Docker installer or the -v flag when running Docker; the Desktop Commander Docker installer typically mounts folders to /home/[folder-name]. Container: 5abbadb5b10c

LINUX-SPECIFIC NOTES:

  • Package managers vary by distro: apt, yum, dnf, pacman, zypper

  • Python 3 might be 'python3' command, not 'python'

  • Standard Unix shell tools available (grep, awk, sed, etc.)

  • File permissions and ownership important for many operations

  • Systemd services common on modern distributions

                      REQUIRED WORKFLOW FOR LOCAL FILES:
                      1. start_process("python3 -i") - Start Python REPL for data analysis
                      2. interact_with_process(pid, "import pandas as pd, numpy as np")
                      3. interact_with_process(pid, "df = pd.read_csv('/absolute/path/file.csv')")
                      4. interact_with_process(pid, "print(df.describe())")
                      5. Continue analysis with pandas, matplotlib, seaborn, etc.
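The summary statistics in step 4 come from pandas; when pandas is unavailable, the same idea can be sketched with the standard library alone (the CSV contents below are a hypothetical stand-in for the real file):

```python
import csv
import io
import statistics

# Hypothetical CSV contents standing in for /absolute/path/file.csv
raw = "price,qty\n10.0,2\n12.0,3\n11.0,1\n"

rows = list(csv.DictReader(io.StringIO(raw)))
prices = [float(r["price"]) for r in rows]

# Rough equivalent of df.describe() for a single numeric column
summary = {
    "count": len(prices),
    "mean": statistics.mean(prices),
    "min": min(prices),
    "max": max(prices),
}
print(summary)
```

In the actual workflow these lines would be sent one at a time via interact_with_process rather than run as a script.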
                      
                      COMMON FILE ANALYSIS PATTERNS:
                      • start_process("python3 -i") → Python REPL for data analysis (RECOMMENDED)
                      • start_process("node -i") → Node.js REPL for JSON processing
                      • start_process("node:local") → Node.js on MCP server (stateless, ES imports, all code in one call)
                      • start_process("cut -d',' -f1 file.csv | sort | uniq -c") → Quick CSV analysis
                      • start_process("wc -l /path/file.csv") → Line counting
                      • start_process("head -10 /path/file.csv") → File preview
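The cut | sort | uniq -c pattern counts distinct values in the first CSV column; the same count can be sketched inside the Python REPL with the stdlib (the sample data is hypothetical):

```python
import csv
import io
from collections import Counter

# Stand-in for file.csv from the cut/sort/uniq example
raw = "city,amount\nOslo,10\nBergen,5\nOslo,7\n"

reader = csv.reader(io.StringIO(raw))
next(reader)  # skip the header row; cut -f1 would include it
counts = Counter(row[0] for row in reader)
print(counts.most_common())  # [('Oslo', 2), ('Bergen', 1)]
```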
                      
                      BINARY FILE SUPPORT:
                      For PDF, Excel, Word, archives, databases, and other binary formats, use process tools with appropriate libraries or command-line utilities.
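As one concrete case, a SQLite database can be inspected from the Python REPL with the standard library alone (the table and rows below are hypothetical):

```python
import sqlite3

# In-memory DB standing in for a real .db file opened via a process tool
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (level TEXT, msg TEXT)")
conn.executemany("INSERT INTO logs VALUES (?, ?)",
                 [("ERROR", "disk full"), ("INFO", "started")])
count = conn.execute(
    "SELECT COUNT(*) FROM logs WHERE level = 'ERROR'"
).fetchone()[0]
print(count)  # 1
conn.close()
```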
                      
                      INTERACTIVE PROCESSES FOR DATA ANALYSIS:
                      For code/calculations, use in this priority order:
                      1. start_process("python3 -i") - Python REPL (preferred)
                      2. start_process("node -i") - Node.js REPL (when Python unavailable)
                      3. start_process("node:local") - Node.js fallback (when node -i fails)
                      4. Use interact_with_process() to send commands
                      5. Use read_process_output() to get responses
                      When Python is unavailable, prefer Node.js over shell for calculations.
                      Node.js: Always use ES import syntax (import x from 'y'), not require().
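The priority order above can be sketched as a small probe that checks which interpreters exist on PATH (a sketch only; the helper name is made up):

```python
import shutil

def pick_repl_command() -> str:
    """Return a REPL start command following the priority order above."""
    if shutil.which("python3"):
        return "python3 -i"      # 1. Python REPL (preferred)
    if shutil.which("node"):
        return "node -i"         # 2. Node.js REPL
    return "node:local"          # 3. server-side Node fallback

print(pick_repl_command())
```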
    
                      SMART DETECTION:
                      - Detects REPL prompts (>>>, >, $, etc.)
                      - Identifies when process is waiting for input
                      - Recognizes process completion vs timeout
                      - Early exit prevents unnecessary waiting
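Prompt detection along these lines can be approximated with a regex over the last line of output (a simplified sketch, not the server's actual implementation):

```python
import re

# Prompts to look for at the end of the latest output chunk
PROMPT_RE = re.compile(r"(>>>|>|\$|%)\s*$")

def looks_like_prompt(output: str) -> bool:
    """Heuristic: does the output end in a REPL/shell prompt?"""
    if not output.strip():
        return False
    last_line = output.rstrip("\n").splitlines()[-1]
    return bool(PROMPT_RE.search(last_line))

print(looks_like_prompt("Python 3.11.4\n>>> "))  # True
print(looks_like_prompt("done in 2.1s"))         # False
```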
                      
                      STATES DETECTED:
                      Process waiting for input (shows prompt)
                      Process finished execution
                      Process running (use read_process_output)
    
                      PERFORMANCE DEBUGGING (verbose_timing parameter):
                      Set verbose_timing: true to get detailed timing information including:
                      - Exit reason (early_exit_quick_pattern, early_exit_periodic_check, process_exit, timeout)
                      - Total duration and time to first output
                      - Complete timeline of all output events with timestamps
                      - Which detection mechanism triggered early exit
                      Use this to identify missed optimization opportunities and improve detection patterns.
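The timing report might be shaped roughly like the record below (field names are inferred from this description and are hypothetical, not the server's real schema):

```python
from dataclasses import dataclass, field

# Hypothetical shape of a verbose_timing report, based on the fields
# the description mentions (exit reason, durations, output timeline).
@dataclass
class TimingReport:
    exit_reason: str                 # e.g. "early_exit_quick_pattern"
    total_duration_ms: float
    time_to_first_output_ms: float
    timeline: list = field(default_factory=list)  # (timestamp_ms, chunk)

report = TimingReport(
    exit_reason="process_exit",
    total_duration_ms=412.0,
    time_to_first_output_ms=35.0,
    timeline=[(35.0, "hello\n")],
)
print(report.exit_reason, report.total_duration_ms)
```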
    
                      ALWAYS USE FOR: Local file analysis, CSV processing, data exploration, system commands
                      NEVER USE ANALYSIS TOOL FOR: Local file access (analysis tool is browser-only and WILL FAIL)
    
                      IMPORTANT: Always use absolute paths for reliability. Paths are automatically normalized regardless of slash direction. Relative paths may fail as they depend on the current working directory. Tilde paths (~/...) might not work in all contexts. Unless the user explicitly asks for relative paths, use absolute paths.
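Slash-direction normalization and the absolute-path check can be sketched with pathlib (a sketch, not the server's code):

```python
from pathlib import PurePosixPath

def normalize(path: str) -> str:
    """Accept either slash direction, collapse '.' and doubled slashes."""
    return str(PurePosixPath(path.replace("\\", "/")))

print(normalize("/home//user/./data.csv"))  # /home/user/data.csv
print(normalize("\\home\\user\\data.csv"))  # /home/user/data.csv
```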
                      This command can be referenced as "DC: ..." or "use Desktop Commander to ..." in your instructions.

Input Schema

Name            Required
command         Yes
timeout_ms      Yes
shell           No
verbose_timing  No

(No parameter descriptions or defaults are provided in the schema.)
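A call under this schema might look like the following (a hypothetical argument payload; the comments reflect inferred parameter intent, and only command and timeout_ms are required):

```python
import json

# Hypothetical start_process arguments matching the schema above
args = {
    "command": "python3 -i",   # required: what to run
    "timeout_ms": 10_000,      # required: how long to wait for output
    "shell": "bash",           # optional: overrides the default shell
    "verbose_timing": True,    # optional: include the timing report
}
print(json.dumps(args))
```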
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (destructiveHint=true, openWorldHint=true), the description details REPL prompt detection, process state identification (waiting, finished, running), early exit logic, and performance debugging via verbose_timing. It also warns about Docker environment and file persistence. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is verbose, with multiple sections and repeated emphases (e.g., 'PRIMARY TOOL' and 'ALWAYS USE FOR'). While it is well structured with headings, it could be more concise: most sentences add value, but there is redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (interactive processes, Docker, REPL detection, multiple parameters, binary file support), the description is remarkably complete. It covers environment, workflow, patterns, path handling, performance debugging, and states detected. No output schema, but details on reading output via read_process_output are provided.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema defines 4 parameters, none of which has a description. The tool description explains verbose_timing in detail and implies shell from the environment (Linux, bash default) and command from examples. However, timeout_ms is not described, and command format lacks explicit guidance. The description partially compensates for the missing schema text, but not completely.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it starts a terminal process with intelligent state detection. It explicitly distinguishes itself as the primary tool for file analysis and data processing, contrasting with analysis/REPL tools that cannot access local files. The verb+resource 'start a new terminal process' is specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Extensive guidance is provided: when to use (local file analysis, data processing), when not to use (analysis tool will fail), and explicit alternatives. It includes a required workflow and common patterns, and states 'NEVER USE ANALYSIS TOOL FOR: Local file access'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
