get_jstat_output

Execute jstat commands to monitor JVM performance metrics like garbage collection, class loading, and compiler statistics for Java processes.

Instructions

Execute a jstat monitoring command.

        Args:
            pid (str): Process ID, as a string (e.g. "12345")
            option (Optional[str]): jstat option, such as gc, class, or compiler
            interval (str): Sampling interval in milliseconds, as a string (e.g. "1000" for one second)
            count (str): Number of samples, as a string (e.g. "10")

        Returns:
            Dict: Dictionary with the jstat execution result, containing:
                - raw_output (str): Raw jstat output
                - timestamp (float): Timestamp of the call
                - success (bool): Whether the command succeeded
                - error (Optional[str]): Error message, if any
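The return shape documented above can be sketched as follows; the field values here are hypothetical examples, not actual server output:

```python
import time

# Illustrative examples of the documented return dictionary.
success_response = {
    "raw_output": "Loaded  Bytes  Unloaded  Bytes     Time",  # e.g. a jstat -class header line
    "timestamp": time.time(),
    "success": True,
    "error": None,
}

failure_response = {
    "raw_output": "",
    "timestamp": time.time(),
    "success": False,
    "error": "Invalid process ID",
}
```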

Input Schema

Name      Required  Description  Default
pid       No
option    No
interval  No
count     No
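All four fields are optional in the schema and carry no schema-level descriptions; a request that exercises all of them might look like this sketch (values illustrative):

```python
# Illustrative arguments for get_jstat_output; note that every value,
# including interval and count, is passed as a string.
example_input = {
    "pid": "12345",      # target Java process ID
    "option": "gc",      # jstat statistics option
    "interval": "1000",  # sampling interval in milliseconds
    "count": "10",       # number of samples
}
```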

Implementation Reference

  • Main handler function for the 'get_jstat_output' MCP tool. Validates inputs (PID, interval, count), executes JstatCommand via the executor, and formats the response dictionary with raw output, success status, and error handling.
    @self.mcp.tool()
    def get_jstat_output(pid: str = "", 
                        option: Optional[str] = None, 
                        interval: str = "", 
                        count: str = "") -> Dict:
        """执行 jstat 监控命令
    
        Args:
            pid (str): 进程ID,使用字符串形式(如:"12345")
            option (Optional[str]): jstat选项,如gc、class、compiler等
            interval (str): 采样间隔(毫秒),使用字符串形式(如:"1000"表示1秒)
            count (str): 采样次数,使用字符串形式(如:"10")
    
        Returns:
            Dict: 包含jstat执行结果的字典,包含以下字段:
                - raw_output (str): 原始输出
                - timestamp (float): 时间戳
                - success (bool): 是否成功
                - error (Optional[str]): 错误信息
        """
        try:
            validated_pid = self._validate_and_convert_id(pid if pid else None, "process ID")
            if validated_pid is None:
                return {
                    "raw_output": "",
                    "timestamp": time.time(),
                    "success": False,
                    "error": "Invalid process ID"
                }
            
            validated_interval = self._validate_and_convert_id(interval if interval else None, "interval")
            validated_count = self._validate_and_convert_id(count if count else None, "count")
            
        except ValueError as e:
            return {
                "raw_output": "",
                "timestamp": time.time(),
                "success": False,
                "error": str(e)
            }
        
        cmd = JstatCommand(self.executor, JstatFormatter())
        result = cmd.execute(str(validated_pid), option=option, interval=validated_interval, count=validated_count)
        return {
            "raw_output": result.get('output', ''),
            "timestamp": time.time(),
            "success": result.get('success', False),
            "error": result.get('error')
        }
  • Helper classes JstatCommand and JstatFormatter. JstatCommand constructs the jstat command line (jstat -option pid [interval [count]]). JstatFormatter formats the command result into a simple dictionary with success, output/error, and metadata.
    class JstatCommand(BaseCommand):
        """Jstat命令实现"""
    
        def __init__(self, executor, formatter):
            super().__init__(executor, formatter)
            self.timeout = 30
    
        def get_command(
                self,
                pid: str,
                option: Optional[str] = None,
                interval: Optional[int] = None,
                count: Optional[int] = None,
                *args,
                **kwargs) -> str:
            # option: gc, gccapacity, class, compiler, util, ...
            cmd = 'jstat'
            if option:
                cmd += f' -{option}'
            cmd += f' {pid}'
            if interval is not None:
                cmd += f' {interval}'
                if count is not None:
                    cmd += f' {count}'
            return cmd
    
    class JstatFormatter(OutputFormatter):
        """Jstat输出格式化器(仅文本输出)"""
    
        def format(self, result: CommandResult) -> Dict[str, Any]:
            if not result.success:
                return {
                    "success": False,
                    "error": result.error,
                    "timestamp": result.timestamp.isoformat()
                    }
            return {
                "success": True,
                "output": result.output,
                "execution_time": result.execution_time,
                "timestamp": result.timestamp.isoformat()
                }
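The command line that `JstatCommand.get_command` builds can be reproduced with a standalone sketch; `build_jstat_command` below is a hypothetical helper mirroring the logic above, not part of the server:

```python
def build_jstat_command(pid, option=None, interval=None, count=None):
    """Build a command string of the form: jstat [-option] pid [interval [count]]."""
    cmd = 'jstat'
    if option:
        cmd += f' -{option}'
    cmd += f' {pid}'
    if interval is not None:
        cmd += f' {interval}'
        # count is only meaningful together with an interval
        if count is not None:
            cmd += f' {count}'
    return cmd

# Ten GC samples from PID 12345 at one-second intervals:
print(build_jstat_command("12345", option="gc", interval="1000", count="10"))
# → jstat -gc 12345 1000 10

# One-shot class-loading statistics:
print(build_jstat_command("12345", option="class"))
# → jstat -class 12345
```

Note that when `interval` is omitted, `count` is silently ignored, matching the `jstat` CLI convention where the sample count only follows a sampling interval.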
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the return format in detail (raw_output, timestamp, success, error), which is helpful. However, it lacks critical behavioral information such as whether this is a read-only operation, potential side effects, error conditions beyond what's in the return dict, or performance implications of the sampling parameters. The description adds some value but leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns) and uses bullet points for the return fields. It's appropriately sized for a 4-parameter tool with detailed return specifications. However, the initial purpose statement is somewhat terse, and there's minor redundancy in specifying '使用字符串形式' (use string form) for multiple parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, monitoring command execution) and lack of both annotations and output schema, the description does a reasonable job but has gaps. It thoroughly documents parameters and return format, which is good. However, it lacks context about jstat itself (what it monitors, typical use cases), doesn't explain the relationship between interval and count for continuous monitoring, and provides no error handling guidance beyond the error field in returns.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must fully compensate. It provides excellent parameter semantics: it explains each parameter's purpose (pid for process ID, option for jstat options like gc, interval for sampling interval in milliseconds, count for sampling count), includes examples ('12345', '1000', '10'), and clarifies data types (all as strings). This goes well beyond what the bare schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: '执行 jstat 监控命令' (execute jstat monitoring command). It specifies the verb ('执行' - execute) and resource ('jstat 监控命令' - jstat monitoring command), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_jcmd_output' or 'get_jvm_info', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_jcmd_output' or 'get_jvm_info', nor does it specify scenarios where jstat is preferred over other monitoring tools. The only implicit context is that it's for monitoring Java processes, but this is insufficient for effective tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
