# get_log_stats
Analyze log files to extract overall statistics, helping users understand patterns and trends in JSON log data.
## Instructions
Get overall statistics for log files
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| files | No | Log files to analyze | all files |
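
For reference, the two shapes of a valid arguments payload are sketched below. The `jsonschema` package and the example file names are assumptions used only for illustration; neither is part of the server.

```python
# Sketch: checking example argument payloads against the input schema above.
# The jsonschema dependency and the file names are illustrative assumptions.
import jsonschema

input_schema = {
    "type": "object",
    "properties": {
        "files": {
            "type": "array",
            "items": {"type": "string"},
            "description": "Log files to analyze (default: all files)",
        }
    },
}

# Explicit file list...
jsonschema.validate(instance={"files": ["app.log", "worker.log"]}, schema=input_schema)
# ...or an empty object, which asks the server to analyze every known log file.
jsonschema.validate(instance={}, schema=input_schema)
```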
## Implementation Reference
- `json_logs_mcp_server.py:222-271` (handler): the handler function that executes the `get_log_stats` tool logic. It processes the specified log files (or all cached files), counts entries by level, collects unique module and function names, and computes time-range statistics (an illustrative result is sketched after this list).

  ```python
  def get_log_stats(self, files: Optional[List[str]] = None) -> Dict[str, Any]:
      """Get overall statistics for log files"""
      if files is None:
          files = list(self.log_files_cache.keys())

      total_entries = 0
      levels = {}
      modules = set()
      functions = set()
      earliest_time = None
      latest_time = None

      for filename in files:
          try:
              entries = self.read_log_file(filename)
              total_entries += len(entries)

              for entry in entries:
                  # Count levels
                  level = entry.get("level", "UNKNOWN")
                  levels[level] = levels.get(level, 0) + 1

                  # Collect modules and functions
                  modules.add(entry.get("module", "UNKNOWN"))
                  functions.add(entry.get("function", "UNKNOWN"))

                  # Track time range
                  timestamp = entry.get("parsed_timestamp")
                  if timestamp:
                      if earliest_time is None or timestamp < earliest_time:
                          earliest_time = timestamp
                      if latest_time is None or timestamp > latest_time:
                          latest_time = timestamp
          except (FileNotFoundError, RuntimeError):
              continue

      return {
          "total_files": len(files),
          "total_entries": total_entries,
          "levels": levels,
          "unique_modules": sorted(list(modules)),
          "unique_functions": len(functions),
          "time_range": {
              "earliest": earliest_time.isoformat() if earliest_time else None,
              "latest": latest_time.isoformat() if latest_time else None,
              "span_hours": round((latest_time - earliest_time).total_seconds() / 3600, 2) if earliest_time and latest_time else None
          }
      }
  ```
- `json_logs_mcp_server.py:412-421` (schema): input schema definition for the `get_log_stats` tool, specifying an optional array of log file names.

  ```python
  inputSchema={
      "type": "object",
      "properties": {
          "files": {
              "type": "array",
              "items": {"type": "string"},
              "description": "Log files to analyze (default: all files)"
          }
      }
  }
  ```
- `json_logs_mcp_server.py:409-422` (registration): registration of the `get_log_stats` tool in the MCP server's `list_tools()` function.

  ```python
  types.Tool(
      name="get_log_stats",
      description="Get overall statistics for log files",
      inputSchema={
          "type": "object",
          "properties": {
              "files": {
                  "type": "array",
                  "items": {"type": "string"},
                  "description": "Log files to analyze (default: all files)"
              }
          }
      }
  ),
  ```
- `json_logs_mcp_server.py:459-466` (dispatch): tool dispatch logic in the MCP server's `call_tool()` function, which invokes the handler and returns the results as JSON text (a client-side invocation sketch follows this list).

  ```python
  elif name == "get_log_stats":
      results = log_analyzer.get_log_stats(arguments.get("files"))
      return [
          types.TextContent(
              type="text",
              text=json.dumps(results, indent=2, default=str)
          )
      ]
  ```
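
The dictionary returned by the handler has roughly the shape sketched below. Every value is an invented placeholder, not output from real logs; note that `unique_modules` is a sorted list of names while `unique_functions` is only a count.

```python
# Illustrative shape of a get_log_stats result; all values are made-up
# placeholders, not data from an actual log set.
example_stats = {
    "total_files": 2,
    "total_entries": 5321,
    "levels": {"INFO": 4800, "WARNING": 400, "ERROR": 121},
    "unique_modules": ["api", "auth", "db"],   # sorted list of module names
    "unique_functions": 37,                    # count only, unlike unique_modules
    "time_range": {
        "earliest": "2024-01-01T00:00:00",
        "latest": "2024-01-01T06:30:00",
        "span_hours": 6.5,
    },
}
```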
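A minimal end-to-end invocation might look like the sketch below, assuming the official `mcp` Python client package and a stdio transport. The script name comes from the references above; the interpreter command and the choice to pass no arguments are assumptions.

```python
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Start the server as a stdio subprocess (command/args are assumptions).
    params = StdioServerParameters(command="python", args=["json_logs_mcp_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # No "files" argument: the handler falls back to all cached log files.
            result = await session.call_tool("get_log_stats", arguments={})
            for item in result.content:
                stats = json.loads(item.text)
                print(stats["total_entries"], stats["time_range"])


asyncio.run(main())
```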