
JSON Logs MCP Server

by mfreeman451

get_log_stats

Analyze log files to extract overall statistics, helping users understand patterns and trends in JSON log data.

Instructions

Get overall statistics for log files

Input Schema

Name    Required    Description             Default
files   No          Log files to analyze    all files

Implementation Reference

  • The handler function that executes the get_log_stats tool logic. It processes the specified log files (or all cached files when none are given), counts entries by level, collects unique modules and functions, and computes time-range statistics.
    def get_log_stats(self, files: Optional[List[str]] = None) -> Dict[str, Any]:
        """Get overall statistics for log files"""
        if files is None:
            files = list(self.log_files_cache.keys())
    
        total_entries = 0
        levels = {}
        modules = set()
        functions = set()
        earliest_time = None
        latest_time = None
    
        for filename in files:
            try:
                entries = self.read_log_file(filename)
                total_entries += len(entries)
    
                for entry in entries:
                    # Count levels
                    level = entry.get("level", "UNKNOWN")
                    levels[level] = levels.get(level, 0) + 1
    
                    # Collect modules and functions
                    modules.add(entry.get("module", "UNKNOWN"))
                    functions.add(entry.get("function", "UNKNOWN"))
    
                    # Track time range
                    timestamp = entry.get("parsed_timestamp")
                    if timestamp:
                        if earliest_time is None or timestamp < earliest_time:
                            earliest_time = timestamp
                        if latest_time is None or timestamp > latest_time:
                            latest_time = timestamp
    
            except (FileNotFoundError, RuntimeError):
                continue
    
        return {
            "total_files": len(files),
            "total_entries": total_entries,
            "levels": levels,
            "unique_modules": sorted(list(modules)),
            "unique_functions": len(functions),
            "time_range": {
                "earliest": earliest_time.isoformat() if earliest_time else None,
                "latest": latest_time.isoformat() if latest_time else None,
                "span_hours": round((latest_time - earliest_time).total_seconds() / 3600,
                                    2) if earliest_time and latest_time else None
            }
        }
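To see how the per-entry aggregation behaves, here is a minimal standalone sketch of the same counting and time-range logic run over a few in-memory entries (the sample entries and their values are illustrative, not taken from the server):

```python
from datetime import datetime

# Sample parsed entries, shaped like those read_log_file() is assumed to return
entries = [
    {"level": "INFO", "module": "auth", "function": "login",
     "parsed_timestamp": datetime(2024, 1, 1, 10, 0)},
    {"level": "ERROR", "module": "db", "function": "query",
     "parsed_timestamp": datetime(2024, 1, 1, 12, 30)},
    {"level": "INFO", "module": "auth", "function": "logout",
     "parsed_timestamp": datetime(2024, 1, 1, 11, 0)},
]

levels = {}
modules, functions = set(), set()
earliest = latest = None
for entry in entries:
    # Count entries per level, defaulting to UNKNOWN like the handler
    level = entry.get("level", "UNKNOWN")
    levels[level] = levels.get(level, 0) + 1
    modules.add(entry.get("module", "UNKNOWN"))
    functions.add(entry.get("function", "UNKNOWN"))
    # Track the earliest and latest timestamps seen
    ts = entry.get("parsed_timestamp")
    if ts:
        if earliest is None or ts < earliest:
            earliest = ts
        if latest is None or ts > latest:
            latest = ts

span_hours = round((latest - earliest).total_seconds() / 3600, 2)
print(levels)        # {'INFO': 2, 'ERROR': 1}
print(span_hours)    # 2.5
```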
  • Input schema definition for the get_log_stats tool, specifying optional array of log file names.
    inputSchema={
        "type": "object",
        "properties": {
            "files": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Log files to analyze (default: all files)"
            }
        }
    }
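The schema leaves `files` optional. A minimal check mirroring its intent (a hypothetical helper, not part of the server) looks like:

```python
def validate_args(args: dict) -> bool:
    """Minimal check mirroring the schema: 'files', if present,
    must be an array of strings; omitting it entirely is valid."""
    files = args.get("files")
    if files is None:
        return True  # the tool then falls back to all cached log files
    return isinstance(files, list) and all(isinstance(f, str) for f in files)

print(validate_args({}))                        # True  (analyze all files)
print(validate_args({"files": ["app.log"]}))    # True
print(validate_args({"files": "app.log"}))      # False (must be an array)
```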
  • Registration of the get_log_stats tool in the MCP server's list_tools() function.
    types.Tool(
        name="get_log_stats",
        description="Get overall statistics for log files",
        inputSchema={
            "type": "object",
            "properties": {
                "files": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Log files to analyze (default: all files)"
                }
            }
        }
    ),
  • Tool dispatch logic in the MCP server's call_tool() function, invoking the handler and returning JSON results.
    elif name == "get_log_stats":
        results = log_analyzer.get_log_stats(arguments.get("files"))
        return [
            types.TextContent(
                type="text",
                text=json.dumps(results, indent=2, default=str)
            )
        ]
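One detail worth noting in the dispatch above: `default=str` lets `json.dumps` serialize values that are not natively JSON-encodable. The handler already calls `isoformat()` on its timestamps, so this is a safety net for anything else that might slip through, such as a stray `datetime` (the dict below is illustrative):

```python
import json
from datetime import datetime

results = {"earliest": datetime(2024, 1, 1, 10, 0)}

# Without default=str this would raise TypeError, because datetime is not
# JSON-serializable; with it, json.dumps falls back to str(value).
text = json.dumps(results, indent=2, default=str)
print(text)
```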
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'gets' statistics, implying a read-only operation, but doesn't clarify aspects like performance impact, rate limits, authentication needs, or what 'overall statistics' entail (e.g., counts, averages, summaries). This leaves significant gaps in understanding the tool's behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
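If the MCP Python SDK's `ToolAnnotations` type is available, the read-only nature of this tool could be declared at registration time. A sketch, assuming the `annotations` field of `types.Tool` and hint names from the MCP specification (not taken from this server's code):

```python
types.Tool(
    name="get_log_stats",
    description="Get overall statistics for log files",
    annotations=types.ToolAnnotations(
        readOnlyHint=True,      # the tool only reads log files
        destructiveHint=False,  # nothing is modified or deleted
        openWorldHint=False,    # operates on local files only
    ),
    inputSchema=input_schema,   # the schema shown above
)
```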

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's front-loaded and appropriately sized for a simple tool, with every part earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't specify what 'overall statistics' include (e.g., format, data types) or behavioral traits like error handling. For a tool that likely returns aggregated data, more context is needed to help the agent use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
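For reference, the handler shown earlier returns a dict of the following shape (values here are illustrative; note the asymmetry that `unique_modules` is a sorted list of names while `unique_functions` is only a count):

```python
# Illustrative result shape, reconstructed from the handler's return statement
example_result = {
    "total_files": 2,
    "total_entries": 1532,
    "levels": {"INFO": 1400, "ERROR": 132},
    "unique_modules": ["auth", "db"],   # sorted list of module names
    "unique_functions": 17,             # a count, not a list
    "time_range": {
        "earliest": "2024-01-01T10:00:00",
        "latest": "2024-01-01T12:30:00",
        "span_hours": 2.5,
    },
}
print(sorted(example_result))
```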

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'files' parameter documented as 'Log files to analyze (default: all files)'. The description adds no additional meaning beyond this, as it doesn't explain parameter usage or constraints. With high schema coverage, the baseline score of 3 is appropriate, as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('overall statistics for log files'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'aggregate_logs' or 'query_logs', which might also involve log analysis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'aggregate_logs' or 'query_logs'. It lacks context about scenarios where overall statistics are preferred over detailed queries or aggregation, leaving the agent to infer usage based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
