System Information MCP Server

by batteryshark

get_running_processes

Retrieve running processes with resource usage and executable paths to analyze performance, troubleshoot resource issues, and conduct security audits.

Instructions

Get list of running processes with resource usage and executable paths.

Shows top processes by CPU/memory usage with PIDs, names, and executable paths. Essential for performance analysis, troubleshooting resource issues, and security auditing.

Input Schema

No arguments.
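The tool takes no input; its result is a single markdown document. Pieced together from the handler and collector code below, a response has roughly this shape (all values illustrative):

```markdown
# Running Processes
*Generated: 2024-01-01 12:00:00*

## ⚙️ Running Processes

### Top Processes (by CPU/Memory usage)
| PID | Name | CPU% | Memory% | Path |
|-----|------|------|---------|------|
| 1234 | example_proc | 12.3% | 4.5% | /usr/bin/example_proc |

**Summary**: 312 total processes, 87 active
```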

Implementation Reference

  • MCP tool handler for get_running_processes. Wraps the collector function, formats output as ToolResult with markdown sections.
    # Requires: from datetime import datetime
    # ToolResult and text_response come from the server's helper module (not shown here).
    @mcp.tool
    def get_running_processes() -> ToolResult:
        """Get list of running processes with resource usage and executable paths.
        
        Shows top processes by CPU/memory usage with PIDs, names, and executable paths.
        Essential for performance analysis, troubleshooting resource issues, and security auditing.
        """
        info_sections = []
        info_sections.append("# Running Processes")
        info_sections.append(f"*Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}*\n")
        
        try:
            from .collectors import get_running_processes as get_processes_data
            info_sections.extend(get_processes_data())
        except Exception as e:
            info_sections.append(f"⚠️ **Process detection error**: {str(e)}")
        
        return text_response("\n".join(info_sections))
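The handler depends on `ToolResult` and `text_response`, which are not shown in this excerpt. A minimal, hypothetical stand-in that makes the snippet self-contained might look like this (the real helpers come from the server's MCP framework, and their actual definitions may differ):

```python
from dataclasses import dataclass


@dataclass
class ToolResult:
    """Hypothetical stand-in for the framework's tool result type."""
    text: str


def text_response(text: str) -> ToolResult:
    """Wrap a plain-text (markdown) payload in a ToolResult."""
    return ToolResult(text=text)


# Mirrors how the handler assembles its response
sections = ["# Running Processes", "*Generated: 2024-01-01 00:00:00*"]
result = text_response("\n".join(sections))
print(result.text.splitlines()[0])
```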
  • Core helper function that iterates over processes using psutil, collects CPU/memory usage, sorts top processes, and formats as markdown table with summary.
    # Requires: import psutil; from typing import List
    def get_running_processes() -> List[str]:
        """Get list of running processes with resource usage."""
        info = []
        info.append("## ⚙️ Running Processes")
        
        try:
            # Get all processes and sort by CPU usage
            processes = []
            for proc in psutil.process_iter(['pid', 'name', 'cpu_percent', 'memory_percent', 'exe', 'cmdline']):
                try:
                    pinfo = proc.info
                    if pinfo['name']:
                        # Note: the first cpu_percent() call for a process returns 0.0;
                        # only subsequent calls yield a meaningful measurement.
                        pinfo['cpu_percent'] = proc.cpu_percent()
                        processes.append(pinfo)
                except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
                    continue
            
            # Sort by CPU usage, then memory usage
            processes.sort(key=lambda x: (x.get('cpu_percent', 0), x.get('memory_percent', 0)), reverse=True)
            
            # Display top 20 processes
            info.append("\n### Top Processes (by CPU/Memory usage)")
            info.append("| PID | Name | CPU% | Memory% | Path |")
            info.append("|-----|------|------|---------|------|")
            
            count = 0
            for proc in processes:
                if count >= 20:
                    break
                    
                pid = proc.get('pid', 'N/A')
                name = proc.get('name', 'Unknown')[:20]  # Truncate long names
                cpu = proc.get('cpu_percent', 0)
                memory = proc.get('memory_percent', 0)
                
                # Get executable path
                exe_path = proc.get('exe', '')
                if not exe_path and proc.get('cmdline'):
                    # Fallback to first command line argument
                    cmdline = proc.get('cmdline', [])
                    if cmdline:
                        exe_path = cmdline[0]
                
                # Truncate path for display
                if exe_path:
                    if len(exe_path) > 40:
                        exe_path = "..." + exe_path[-37:]
                else:
                    exe_path = "N/A"
                
                # Only show processes with some activity or important system processes
                if cpu > 0.1 or memory > 0.5 or name.lower() in ['kernel_task', 'systemd', 'init', 'launchd']:
                    info.append(f"| {pid} | {name} | {cpu:.1f}% | {memory:.1f}% | {exe_path} |")
                    count += 1
            
            # Add process summary
            total_processes = len(processes)
            active_processes = len([p for p in processes if p.get('cpu_percent', 0) > 0])
            info.append(f"\n**Summary**: {total_processes} total processes, {active_processes} active")
            
        except Exception as e:
            info.append(f"⚠️ **Error collecting process information**: {str(e)}")
        
        return info
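A caveat the collector above inherits from psutil: `cpu_percent()` returns 0.0 the first time it is called for a given process, because psutil needs two samples to compute a percentage. A single-pass loop therefore mostly reports zeros. A small stand-alone sketch of the usual two-pass workaround (not part of the server code):

```python
import time

import psutil

# Pass 1: prime the per-process CPU counters (these calls return 0.0).
procs = list(psutil.process_iter(['pid', 'name']))
for p in procs:
    try:
        p.cpu_percent()  # records the baseline CPU times
    except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
        pass

time.sleep(0.2)  # let the counters accumulate

# Pass 2: read meaningful percentages over the elapsed interval.
usage = []
for p in procs:
    try:
        usage.append((p.info['pid'], p.info['name'], p.cpu_percent()))
    except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
        pass

usage.sort(key=lambda t: t[2], reverse=True)
for pid, name, cpu in usage[:5]:
    print(f"{pid:>7} {name:<20} {cpu:.1f}%")
```

The 0.2 s sleep is a trade-off: longer intervals give smoother readings at the cost of tool latency.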
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It describes what the tool returns (list of processes with specific attributes) and its purpose, but lacks details on potential limitations (e.g., real-time vs. cached data, permissions required, or system-specific behavior). The description adds value by specifying the focus on 'top processes' and use cases, but does not fully cover behavioral traits like performance impact or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by additional details in a structured manner. Each sentence adds value: the first defines the tool, the second elaborates on output specifics, and the third provides usage contexts. There is no wasted text, making it highly efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a read-only operation with no parameters) and lack of annotations or output schema, the description is reasonably complete. It explains what the tool does, what information it returns, and when to use it. However, it could be more complete by detailing the output format (e.g., structure of the list) or any system dependencies, which would help an agent invoke it correctly without an output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately does not discuss parameters, focusing instead on the tool's output and usage. This meets the baseline for tools with no parameters, as it avoids unnecessary repetition from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get list of running processes') and resource ('with resource usage and executable paths'), distinguishing it from sibling tools like get_hardware_details or get_network_status. It explicitly mentions what information is included (PIDs, names, executable paths) and the scope (top processes by CPU/memory usage).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Essential for performance analysis, troubleshooting resource issues, and security auditing'), giving practical scenarios. However, it does not explicitly state when NOT to use it or name specific alternatives among the sibling tools, such as get_full_system_report for broader analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
