get_session_metrics

Read-only · Idempotent

Retrieve comprehensive metrics for all interactive OpenROAD sessions to analyze performance and resource usage.

Instructions

Get comprehensive metrics for all interactive OpenROAD sessions.

Input Schema

No arguments.

Output Schema

Name: result (required)
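A hypothetical example of the JSON string an agent receives back (the shape follows the SessionMetricsResult model shown in the implementation reference below; all field values here are illustrative, not real output):

```python
import json

# Illustrative payload only: values are made up, the envelope shape follows
# SessionMetricsResult (metrics: dict | None, error: str | None).
payload = json.dumps(
    {
        "metrics": {
            "manager": {"total_sessions": 2, "active_sessions": 2},
            "aggregate": {"total_commands": 17, "total_memory_mb": 512.0},
            "sessions": [],
        },
        "error": None,
    },
    indent=2,
)

parsed = json.loads(payload)
```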

Implementation Reference

  • SessionMetricsTool class - the execute() method calls self.manager.session_metrics() and formats the result as JSON via SessionMetricsResult.
    class SessionMetricsTool(BaseTool):
        """Tool for retrieving comprehensive session metrics."""
    
        async def execute(self) -> str:
            """Get comprehensive metrics for all sessions."""
            try:
                metrics = await self.manager.session_metrics()
                return self._format_result(SessionMetricsResult(metrics=metrics))
    
            except Exception as e:
                logger.exception("Failed to get session metrics")
                return self._format_result(
                    SessionMetricsResult(
                        metrics=None,
                        error=f"Metrics retrieval failed: {str(e)}",
                    )
                )
  • session_metrics() method on OpenROADManager - gathers aggregate and per-session metrics by calling session.get_detailed_metrics() on each active session.
    async def session_metrics(self) -> dict:
        """Get comprehensive metrics for all sessions."""
        await self._cleanup_terminated_sessions_with_lock()
    
        total_sessions = len(self._sessions)
        active_sessions = self.get_active_session_count()
        terminated_sessions = total_sessions - active_sessions
    
        session_details = []
        total_commands = 0
        total_cpu_time = 0.0
        total_memory_mb = 0.0
    
        for _, session in self._iter_initialized_sessions():
            try:
                metrics = await session.get_detailed_metrics()
                session_details.append(metrics)
                total_commands += metrics["commands"]["total_executed"]
                total_cpu_time += metrics["performance"]["total_cpu_time"]
                total_memory_mb += metrics["performance"]["current_memory_mb"]
            except Exception as e:
                self.logger.warning(f"Failed to get metrics for session {session.session_id}: {e}")
    
        return {
            "manager": {
                "total_sessions": total_sessions,
                "active_sessions": active_sessions,
                "terminated_sessions": terminated_sessions,
                "max_sessions": self._max_sessions,
                "utilization_percent": (active_sessions / self._max_sessions) * 100 if self._max_sessions > 0 else 0,
            },
            "aggregate": {
                "total_commands": total_commands,
                "total_cpu_time": total_cpu_time,
                "total_memory_mb": total_memory_mb,
                "avg_memory_per_session": total_memory_mb / active_sessions if active_sessions > 0 else 0,
            },
            "sessions": session_details,
        }
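The accumulation loop above reduces to a pure function over the per-session metrics dicts. A sketch, assuming the key layout returned by get_detailed_metrics():

```python
def aggregate(session_details: list[dict], active: int, max_sessions: int) -> dict:
    # Sum the same three counters session_metrics() accumulates in its loop.
    total_commands = sum(m["commands"]["total_executed"] for m in session_details)
    total_cpu = sum(m["performance"]["total_cpu_time"] for m in session_details)
    total_mem = sum(m["performance"]["current_memory_mb"] for m in session_details)
    return {
        "total_commands": total_commands,
        "total_cpu_time": total_cpu,
        "total_memory_mb": total_mem,
        "avg_memory_per_session": total_mem / active if active > 0 else 0,
        "utilization_percent": (active / max_sessions) * 100 if max_sessions > 0 else 0,
    }


sample = [
    {"commands": {"total_executed": 10},
     "performance": {"total_cpu_time": 1.5, "current_memory_mb": 200.0}},
    {"commands": {"total_executed": 7},
     "performance": {"total_cpu_time": 0.5, "current_memory_mb": 312.0}},
]
agg = aggregate(sample, active=2, max_sessions=4)
```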
  • SessionMetricsResult Pydantic model - defines the result schema: metrics: dict | None, plus error: str | None inherited from BaseResult.
    class SessionMetricsResult(BaseResult):
        """Result from session metrics retrieval."""
    
        metrics: dict | None = None
  • MCP tool registration with @mcp.tool decorator and annotations (readOnly, idempotent). The handler delegates to session_metrics_tool.execute().
    @mcp.tool(
        annotations=ToolAnnotations(
            readOnlyHint=True,
            destructiveHint=False,
            idempotentHint=True,
            openWorldHint=False,
        )
    )
    async def get_session_metrics() -> str:
        """Get comprehensive metrics for all interactive OpenROAD sessions."""
        return await session_metrics_tool.execute()
  • get_detailed_metrics() on the InteractiveSession class - collects per-session state, commands, performance (CPU/memory), buffer, and timeout info.
    async def get_detailed_metrics(self) -> dict:
        """Get detailed performance and state metrics."""
        await self._update_performance_metrics()
        uptime = (datetime.now() - self.created_at).total_seconds()
        idle_time = (datetime.now() - self.last_activity).total_seconds()
        buffer_size = await self.output_buffer.get_size()
    
        return {
            "session_id": self.session_id,
            "state": self.state.value,
            "is_alive": self.is_alive(),
            "created_at": self.created_at.isoformat(),
            "last_activity": self.last_activity.isoformat(),
            "uptime_seconds": uptime,
            "idle_seconds": idle_time,
            "commands": {
                "total_executed": self.total_commands_executed,
                "current_count": self.command_count,
                "history_length": len(self.command_history),
            },
            "performance": {
                "total_cpu_time": self.total_cpu_time,
                "peak_memory_mb": self.peak_memory_mb,
                "current_memory_mb": await self._get_current_memory_usage(),
            },
            "buffer": {
                "current_size": buffer_size,
                "max_size": self.output_buffer.max_size,
                "utilization_percent": (buffer_size / self.output_buffer.max_size) * UTILIZATION_PERCENTAGE_BASE
                if self.output_buffer.max_size > 0
                else 0,
            },
            "timeout": {
                "configured_seconds": self.session_timeout_seconds,
                "is_timed_out": await self._check_session_timeout(),
            },
        }
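The derived fields in this dict are straightforward arithmetic over session state. A sketch of the buffer-utilization and idle-time math, assuming UTILIZATION_PERCENTAGE_BASE is 100:

```python
from datetime import datetime, timedelta


def buffer_utilization(current_size: int, max_size: int) -> float:
    # Mirrors buffer["utilization_percent"]; assumes the constant is 100.
    return (current_size / max_size) * 100 if max_size > 0 else 0


def idle_seconds(last_activity: datetime, now: datetime) -> float:
    # Mirrors the idle_time computation above.
    return (now - last_activity).total_seconds()


now = datetime(2024, 1, 1, 12, 0, 0)
util = buffer_utilization(256, 1024)                    # 25.0
idle = idle_seconds(now - timedelta(seconds=90), now)   # 90.0
```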
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the description adds scope ('all sessions') but no additional behavioral detail. There is no contradiction with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence that is front-loaded with the key action and resource. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With zero parameters and an existing output schema, the description provides enough context: it states what the tool returns (metrics) and the scope (all interactive sessions). The output schema fills in the details of the return value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the baseline score is 4. The description does not need to add parameter information, and it does not; schema coverage is 100% by definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('get') and the resource ('comprehensive metrics for all interactive OpenROAD sessions'). It is specific enough to differentiate it from sibling tools such as 'get_session_history' and 'inspect_interactive_session', though it does not contrast with them explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus its siblings. Given many related tools exist, the absence of usage context makes it harder for an agent to choose correctly.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
