ShallowCodeResearch_get_performance_metrics

Collect and analyze performance metrics for the MCP Hub research system, including execution times, success rates, error counts, and resource utilization, to monitor system efficiency.

Instructions

Get performance metrics and analytics for the MCP Hub system. Collects and returns performance metrics including execution times, success rates, error counts, and resource utilization. Provides basic information if advanced metrics collection is not available. Returns: A dictionary containing performance metrics and statistics
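
For orientation, the handler referenced under Implementation Reference below produces one of three response shapes. A minimal sketch; the field values are illustrative and the metric name is a hypothetical example:

    # 1. Advanced features unavailable (basic mode):
    {
        "status": "basic_mode",
        "message": "Performance metrics not available. Install 'pip install psutil aiohttp' to enable advanced monitoring.",
        "basic_info": {"system_working": True, "features_loaded": False}
    }

    # 2. Advanced features available: per-metric summary statistics, e.g.
    {
        "agent_execution_time": {          # hypothetical metric name
            "count": 12,
            "average": 1.8,
            "min": 0.4,
            "max": 4.2,
            "latest": 1.1,
            "last_updated": "2025-01-01T12:00:00"
        }
    }

    # 3. Metrics collection raised an exception:
    {"error": "Performance metrics failed: <exception message>"}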

Input Schema

No arguments

Implementation Reference

  • app.py:301-324 (handler)
    Main handler function for the performance metrics tool. Checks if advanced features are available and delegates to metrics_collector.get_metrics_summary() or returns basic status.
    def get_performance_metrics() -> Dict[str, Any]:
        """
        Get performance metrics and analytics for the MCP Hub system.
    
        Collects and returns performance metrics including execution times,
        success rates, error counts, and resource utilization. Provides
        basic information if advanced metrics collection is not available.
    
        Returns:
            Dict[str, Any]: A dictionary containing performance metrics and statistics
        """
        if not ADVANCED_FEATURES_AVAILABLE:
            return {
                "status": "basic_mode",
                "message": "Performance metrics not available. Install 'pip install psutil aiohttp' to enable advanced monitoring.",
                "basic_info": {
                    "system_working": True,
                    "features_loaded": False
                }
            }
        try:
            return metrics_collector.get_metrics_summary()
        except Exception as e:
            return {"error": f"Performance metrics failed: {str(e)}"}
  • app.py:1072-1076 (registration)
    Gradio MCP registration of the get_performance_metrics tool via a button click handler with api_name 'get_performance_metrics_service'. This exposes it as an MCP tool, likely prefixed as ShallowCodeResearch_get_performance_metrics in the context of the ShallowCodeResearch HF space. A hedged sketch of the complete click registration appears after this reference list.
        fn=get_performance_metrics,
        inputs=[],
        outputs=metrics_output,
        api_name="get_performance_metrics_service"
    )
  • Core implementation of metrics summary calculation in MetricsCollector class. Computes average, min, max, etc. from recent metric points over the last N minutes.
    def get_metrics_summary(self, 
                          metric_name: Optional[str] = None, 
                          last_minutes: int = 5) -> Dict[str, Any]:
        """Get summary statistics for metrics."""
        cutoff_time = datetime.now() - timedelta(minutes=last_minutes)
        
        with self.lock:
            if metric_name:
                metrics_to_analyze = {metric_name: self.metrics[metric_name]}
            else:
                metrics_to_analyze = dict(self.metrics)
        
        summary = {}
        
        for name, points in metrics_to_analyze.items():
            recent_points = [p for p in points if p.timestamp >= cutoff_time]
            
            if not recent_points:
                continue
            
            values = [p.value for p in recent_points]
            summary[name] = {
                "count": len(values),
                "average": sum(values) / len(values),
                "min": min(values),
                "max": max(values),
                "latest": values[-1] if values else 0,
                "last_updated": recent_points[-1].timestamp.isoformat() if recent_points else None
            }
        
        return summary
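
get_metrics_summary relies on self.metrics, self.lock, and points exposing .value and .timestamp attributes, which implies roughly the following surrounding structure. This is a reconstruction for context only; everything beyond get_metrics_summary itself (class layout, MetricPoint, record) is an assumption, not the project's actual code:

    import threading
    from collections import defaultdict
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class MetricPoint:
        """Single timestamped measurement (name and fields inferred from get_metrics_summary)."""
        value: float
        timestamp: datetime = field(default_factory=datetime.now)

    class MetricsCollector:
        def __init__(self) -> None:
            # metric name -> recorded points; guarded by a lock for thread safety
            self.metrics: Dict[str, List[MetricPoint]] = defaultdict(list)
            self.lock = threading.Lock()

        def record(self, name: str, value: float) -> None:
            """Hypothetical recording helper; the real project may expose a different API."""
            with self.lock:
                self.metrics[name].append(MetricPoint(value=value))

        # get_metrics_summary(...) as shown above would be defined here.

With that assumed API, collector.record("agent_execution_time", 1.3) followed by collector.record("agent_execution_time", 2.1) would make get_metrics_summary() report count=2, average=1.7, min=1.3, max=2.1 for that metric over the last five minutes.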
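
The app.py:1072-1076 excerpt above omits the component that triggers the call. Assuming a Gradio Blocks layout with a button and a JSON output (the component and variable names below are guesses, not taken from app.py), the complete registration would plausibly look like this:

    import gradio as gr

    with gr.Blocks() as demo:
        metrics_btn = gr.Button("Get Performance Metrics")      # hypothetical button
        metrics_output = gr.JSON(label="Performance Metrics")   # hypothetical output component

        # Registration as in app.py:1072-1076; api_name exposes the handler
        # (defined in app.py:301-324 above) as a callable MCP tool endpoint.
        metrics_btn.click(
            fn=get_performance_metrics,
            inputs=[],
            outputs=metrics_output,
            api_name="get_performance_metrics_service",
        )

    demo.launch(mcp_server=True)  # assumes the Space launches Gradio's MCP server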

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It states that the tool 'Collects and returns' metrics and notes fallback behavior ('Provides basic information if advanced metrics collection is not available'), which adds some behavioral context. However, it lacks details on permissions, rate limits, data freshness, or whether this is a read-only operation. For a metrics tool with zero annotation coverage, this is insufficient disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with three sentences: purpose, scope/fallback, and return value. It's front-loaded with the core purpose. However, the third sentence 'Returns: A dictionary containing performance metrics and statistics' is somewhat redundant with the first two sentences and could be more integrated.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 0 parameters, no annotations, and no output schema, the description provides adequate purpose and scope but lacks behavioral details needed for full transparency. It explains what metrics are collected and mentions fallback behavior, but doesn't cover response format details, error handling, or system impact. For a metrics tool with minimal structured data, it's moderately complete but has gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage (empty schema). The description doesn't need to explain parameters, so it appropriately focuses on what the tool does rather than inputs. No parameter information is missing or needed, meeting the baseline for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get performance metrics and analytics for the MCP Hub system' with specific metrics listed (execution times, success rates, error counts, resource utilization). It distinguishes from siblings like get_cache_status or get_health_status by focusing on performance analytics rather than cache/health status. However, it doesn't explicitly contrast with these siblings in the description text.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'Provides basic information if advanced metrics collection is not available,' suggesting fallback behavior. However, it doesn't explicitly state when to use this tool versus alternatives like get_health_status or get_cache_status, nor does it mention prerequisites or exclusions. The guidance is implied rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/CodeHalwell/gradio-mcp-agent-hack'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.