get_error_log

Retrieve Home Assistant error logs to identify issues, count errors and warnings, and analyze integration mentions for troubleshooting.

Instructions

Get the Home Assistant error log for troubleshooting

Returns a dictionary containing:

- log_text: The full error log text
- error_count: Number of ERROR entries found
- warning_count: Number of WARNING entries found
- integration_mentions: Map of integration names to mention counts
- error: Error message if retrieval failed

Examples:

- Returns error and warning counts and integration mentions

Best Practices:

- Use this tool when troubleshooting specific Home Assistant errors
- Look for patterns in repeated errors
- Pay attention to timestamps to correlate errors with events
- Focus on integrations with many mentions in the log
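To illustrate the shape of the result, here is a minimal sketch of a client consuming the returned dictionary; the payload values shown are fabricated for the example:

```python
# Hypothetical result matching the documented fields; values are illustrative.
result = {
    "log_text": "2024-01-01 12:00:00 ERROR (MainThread) [mqtt] Connection lost",
    "error_count": 1,
    "warning_count": 0,
    "integration_mentions": {"mqtt": 1},
}

if result.get("error"):
    print(f"Retrieval failed: {result['error']}")
else:
    # Surface the noisiest integrations first, per the best practices above.
    noisiest = sorted(result["integration_mentions"].items(),
                      key=lambda kv: kv[1], reverse=True)
    print(f"{result['error_count']} errors, {result['warning_count']} warnings")
    for integration, count in noisiest:
        print(f"  [{integration}] mentioned {count}x")
```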

Input Schema


No arguments

Implementation Reference

  • Main tool handler function for 'get_error_log'. Registered via @async_handler decorator. Schema defined via @mcp.tool() decorator and comprehensive docstring. Delegates execution to the helper function.
    @mcp.tool()
    @async_handler("get_error_log")
    async def get_error_log() -> Dict[str, Any]:
        """
        Get the Home Assistant error log for troubleshooting
        
        Returns:
            A dictionary containing:
            - log_text: The full error log text
            - error_count: Number of ERROR entries found
            - warning_count: Number of WARNING entries found
            - integration_mentions: Map of integration names to mention counts
            - error: Error message if retrieval failed
            
        Examples:
            Returns error and warning counts and integration mentions
        Best Practices:
            - Use this tool when troubleshooting specific Home Assistant errors
            - Look for patterns in repeated errors
            - Pay attention to timestamps to correlate errors with events
            - Focus on integrations with many mentions in the log    
        """
        logger.info("Getting Home Assistant error log")
        return await get_hass_error_log()
  • Core helper function implementing the error log retrieval logic. Fetches from HA /api/error_log endpoint, parses for ERROR/WARNING counts, extracts [integration] mentions via regex, handles HTTP and exception errors gracefully.
    @handle_api_errors
    async def get_hass_error_log() -> Dict[str, Any]:
        """
        Get the Home Assistant error log for troubleshooting
        
        Returns:
            A dictionary containing:
            - log_text: The full error log text
            - error_count: Number of ERROR entries found
            - warning_count: Number of WARNING entries found
            - integration_mentions: Map of integration names to mention counts
            - error: Error message if retrieval failed
        """
        try:
            # Call the Home Assistant API error_log endpoint
            url = f"{HA_URL}/api/error_log"
            headers = get_ha_headers()
            
            async with httpx.AsyncClient() as client:
                response = await client.get(url, headers=headers, timeout=30)
                
                if response.status_code == 200:
                    log_text = response.text
                    
                    # Count errors and warnings
                    error_count = log_text.count("ERROR")
                    warning_count = log_text.count("WARNING")
                    
                    # Extract integration mentions
                    import re
                    integration_mentions = {}
                    
                    # Look for patterns like [mqtt], [zwave], etc.
                    for match in re.finditer(r'\[([a-zA-Z0-9_]+)\]', log_text):
                        integration = match.group(1).lower()
                        if integration not in integration_mentions:
                            integration_mentions[integration] = 0
                        integration_mentions[integration] += 1
                    
                    return {
                        "log_text": log_text,
                        "error_count": error_count,
                        "warning_count": warning_count,
                        "integration_mentions": integration_mentions
                    }
                else:
                    return {
                        "error": f"Error retrieving error log: {response.status_code} {response.reason_phrase}",
                        "details": response.text,
                        "log_text": "",
                        "error_count": 0,
                        "warning_count": 0,
                        "integration_mentions": {}
                    }
        except Exception as e:
            logger.error(f"Error retrieving Home Assistant error log: {str(e)}")
            return {
                "error": f"Error retrieving error log: {str(e)}",
                "log_text": "",
                "error_count": 0,
                "warning_count": 0,
                "integration_mentions": {}
            }
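The integration-mention extraction in the helper can be exercised in isolation. The following sketch applies the same bracket pattern to a fabricated log snippet (the log lines and integration names are invented for the example):

```python
import re
from collections import Counter

sample_log = (
    "2024-01-01 12:00:00 ERROR (MainThread) [mqtt] Disconnected from broker\n"
    "2024-01-01 12:00:05 WARNING (MainThread) [zwave_js] Node 12 is asleep\n"
    "2024-01-01 12:00:09 ERROR (MainThread) [mqtt] Reconnect failed\n"
)

# Same pattern the helper uses: bracketed integration names like [mqtt].
mentions = Counter(m.group(1).lower()
                   for m in re.finditer(r'\[([a-zA-Z0-9_]+)\]', sample_log))

print(mentions["mqtt"])       # 2
print(mentions["zwave_js"])   # 1
# Naive substring counts, as in the helper's error/warning tallies.
print(sample_log.count("ERROR"), sample_log.count("WARNING"))  # 2 1
```

Note that `str.count("ERROR")` is a substring match, so any log line containing the word ERROR (even inside a message body) is tallied; treat the counts as a rough signal rather than an exact total.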
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes what the tool returns (a dictionary with specific fields), mentions potential failure modes ('Error message if retrieval failed'), and provides practical behavioral context through the best practices section. It doesn't cover aspects like rate limits or authentication requirements, but for a read-only diagnostic tool, this is reasonably comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and efficiently organized. It begins with a clear purpose statement, then details the return format, provides examples, and concludes with actionable best practices. Every section adds value without redundancy, and the information is appropriately front-loaded with the most critical details first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 0-parameter diagnostic tool with no annotations and no output schema, the description provides excellent context. It explains what the tool does, what it returns, when to use it, and how to interpret results. The only minor gap is that it doesn't explicitly state this is a read-only operation (though this is implied by 'Get'), but given the tool's simplicity and the comprehensive guidance provided, this is a strong description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline would be 4 even with no parameter information in the description. The description correctly doesn't waste space discussing nonexistent parameters, maintaining appropriate focus on the tool's output and usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the Home Assistant error log for troubleshooting.' It specifies the verb ('Get') and resource ('Home Assistant error log'), and the context ('for troubleshooting') distinguishes it from general log retrieval tools. However, it doesn't explicitly differentiate from potential sibling tools like system_overview or domain_summary_tool that might also provide diagnostic information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance in the 'Best Practices' section: 'Use this tool when troubleshooting specific Home Assistant errors.' It also offers detailed context on how to interpret results ('Look for patterns...', 'Pay attention to timestamps...', 'Focus on integrations...'), giving clear when-to-use instructions without needing to mention specific alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
