Unix Manual Server

by tizee

list_common_commands

Discover available Unix commands to quickly find documentation and usage information within the chat interface.

Instructions

List common Unix commands available on the system.

Returns: A list of common Unix commands

Input Schema

No arguments

Implementation Reference

  • The handler function for the 'list_common_commands' tool. It scans common Unix directories (/bin, /usr/bin, /usr/local/bin) for executable files, collects unique commands, categorizes them into File Operations, Text Processing, System Information, and Networking, and returns a formatted string listing them. Registered via @mcp.tool() decorator.
    # Imports and setup implied by the excerpt (server name assumed here)
    import os
    import logging

    from mcp.server.fastmcp import FastMCP

    logger = logging.getLogger(__name__)
    mcp = FastMCP("unix-manual")  # assumed server name

    @mcp.tool()
    def list_common_commands() -> str:
        """
        List common Unix commands available on the system.
    
        Returns:
            A list of common Unix commands
        """
        logger.info("Listing common commands")
        # Define common directories in PATH that contain commands
        common_dirs = ['/bin', '/usr/bin', '/usr/local/bin']
        logger.debug(f"Searching in directories: {common_dirs}")
    
        commands = []
        for directory in common_dirs:
            if os.path.exists(directory) and os.path.isdir(directory):
                logger.debug(f"Scanning directory: {directory}")
                # List only executable files
                try:
                    for file in os.listdir(directory):
                        file_path = os.path.join(directory, file)
                        if os.path.isfile(file_path) and os.access(file_path, os.X_OK):
                            commands.append(file)
                except Exception as e:
                    logger.error(f"Error listing directory {directory}: {str(e)}")
    
        # Remove duplicates and sort
        commands = sorted(set(commands))
        logger.info(f"Found {len(commands)} unique commands")
    
        # Return a formatted string with command categories
        result = "Common Unix commands available on this system:\n\n"
    
        # File operations
        file_cmds = [cmd for cmd in commands if cmd in ['ls', 'cp', 'mv', 'rm', 'mkdir', 'touch', 'chmod', 'chown', 'find', 'grep']]
        if file_cmds:
            logger.debug(f"File operation commands found: {len(file_cmds)}")
            result += "File Operations:\n" + ", ".join(file_cmds) + "\n\n"
    
        # Text processing
        text_cmds = [cmd for cmd in commands if cmd in ['cat', 'less', 'more', 'head', 'tail', 'grep', 'sed', 'awk', 'sort', 'uniq', 'wc']]
        if text_cmds:
            logger.debug(f"Text processing commands found: {len(text_cmds)}")
            result += "Text Processing:\n" + ", ".join(text_cmds) + "\n\n"
    
        # System information
        sys_cmds = [cmd for cmd in commands if cmd in ['ps', 'top', 'htop', 'df', 'du', 'free', 'uname', 'uptime', 'who', 'whoami']]
        if sys_cmds:
            logger.debug(f"System info commands found: {len(sys_cmds)}")
            result += "System Information:\n" + ", ".join(sys_cmds) + "\n\n"
    
        # Network tools
        net_cmds = [cmd for cmd in commands if cmd in ['ping', 'netstat', 'ifconfig', 'ip', 'ssh', 'scp', 'curl', 'wget']]
        if net_cmds:
            logger.debug(f"Networking commands found: {len(net_cmds)}")
            result += "Networking:\n" + ", ".join(net_cmds) + "\n\n"
    
        # Show total count
        result += f"Total commands found: {len(commands)}\n"
        result += "Use get_command_documentation() to learn more about any command."
    
        return result
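The scanning logic above can be exercised on its own, independent of the MCP wrapper. A minimal sketch (directory list copied from the handler; the helper name is ours):

```python
import os

def scan_commands(dirs=('/bin', '/usr/bin', '/usr/local/bin')):
    """Collect unique executable names from the given directories, sorted."""
    found = set()
    for directory in dirs:
        if os.path.isdir(directory):
            for name in os.listdir(directory):
                path = os.path.join(directory, name)
                # Keep only regular files the current user can execute
                if os.path.isfile(path) and os.access(path, os.X_OK):
                    found.add(name)
    return sorted(found)

commands = scan_commands()
print(f"{len(commands)} unique commands, e.g. {commands[:5]}")
```

On a typical system the deduplication matters: /bin is often a symlink to /usr/bin, so the same names appear in multiple directories.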

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns a list but doesn't specify details like format, size, or whether it's static or dynamic. For a tool with no annotations, this is insufficient to inform the agent about key behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
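One way to close this gap, sketched here as an assumption about what the server could declare (this server currently declares no annotations; the field names follow the MCP tool-annotations spec):

```python
# Hypothetical annotations for list_common_commands -- the server does not
# currently declare any; field names follow the MCP ToolAnnotations spec.
annotations = {
    "readOnlyHint": True,       # only reads directory listings
    "destructiveHint": False,   # never creates, modifies, or deletes files
    "idempotentHint": True,     # repeated calls return equivalent results
    "openWorldHint": False,     # touches only the local filesystem
}
```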

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well-structured, with two sentences that directly state the purpose and return value. It's front-loaded with the main action and avoids unnecessary details, though the 'Returns:' section could be integrated more smoothly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate. It covers the basic purpose but lacks depth in behavioral context and usage guidelines, which are needed for full agent understanding in this server context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters, and schema description coverage is 100%, so there is no need for parameter details in the description. The score of 4 is the baseline for this case: the description appropriately omits redundant parameter information and focuses on the tool's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List common Unix commands available on the system.' It specifies the verb ('List') and resource ('common Unix commands'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'check_command_exists' or 'get_command_documentation', which prevents a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where this tool is preferred, such as for general exploration versus specific checks. This lack of comparative usage information limits its helpfulness for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
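A revised description along these lines might read as follows. This is a hypothetical rewrite, not the server's actual text; the sibling tool names are taken from elsewhere on this page:

```python
# Hypothetical improved description -- not the server's actual text.
IMPROVED_DESCRIPTION = (
    "List common Unix commands available on the system, grouped into "
    "File Operations, Text Processing, System Information, and Networking. "
    "Use this for general exploration of what is installed; use "
    "check_command_exists to test for a specific command, and "
    "get_command_documentation to read the manual for one."
)
```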
