
New Relic MCP Server

by piekstra

generate_log_parsing_rule

Create log parsing rules from queries or sample logs to extract structured data from New Relic log messages for monitoring and analysis.

Instructions

Generate a log parsing rule from either a query or provided samples.

Args:
    log_query: Optional NRQL WHERE clause to fetch logs (e.g., "service = 'api'")
    log_samples: Optional list of log message samples
    time_range: Time range for log query (default: "1 hour ago")
    field_hints: Optional hints for field types (e.g., {"user_id": "UUID"})
    account_id: Optional account ID (uses default if not provided)

Returns:
    Generated GROK pattern, NRQL pattern, and analysis
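To make "extract structured data" concrete: a GROK pattern such as `%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} ...` compiles down to a regex with named capture groups. The sketch below applies the regex equivalent of such a pattern to a single sample line; the log line and field names are invented for illustration and are not part of this server's output.

```python
import re

# Hypothetical sample log line of the kind you might pass via log_samples
line = "2024-05-01T12:00:00Z INFO user=alice request completed in 42ms"

# Regex equivalent of a GROK pattern with named fields
pattern = re.compile(
    r"(?P<timestamp>\S+) (?P<level>[A-Z]+) user=(?P<user>\w+) "
    r"request completed in (?P<duration_ms>\d+)ms"
)

# Extract the named fields as a dict of structured data
fields = pattern.match(line).groupdict()
print(fields)
# {'timestamp': '2024-05-01T12:00:00Z', 'level': 'INFO', 'user': 'alice', 'duration_ms': '42'}
```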

Input Schema

Name          Required   Description   Default
log_query     No         —             —
log_samples   No         —             —
time_range    No         —             1 hour ago
field_hints   No         —             —
account_id    No         —             —

Output Schema

Name     Required   Description   Default
result   Yes        —             —
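The directory's JSON Schema view was not captured above; the following is a rough reconstruction of the input schema implied by the table and the handler's type hints. The exact types (notably for `field_hints`) are an assumption, not taken from the server's actual schema.

```python
# Hypothetical reconstruction of the tool's input schema, inferred from
# the parameter table and the Python signature; may differ from the real one.
input_schema = {
    "type": "object",
    "properties": {
        "log_query":   {"type": "string"},
        "log_samples": {"type": "array", "items": {"type": "string"}},
        "time_range":  {"type": "string", "default": "1 hour ago"},
        "field_hints": {"type": "object", "additionalProperties": {"type": "string"}},
        "account_id":  {"type": "string"},
    },
    "required": [],  # all five parameters are optional per the table
}
```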

Implementation Reference

  • MCP tool handler and registration for 'generate_log_parsing_rule'. Validates inputs, checks client, calls core helper in log_parsing.py, and formats JSON response.
    @mcp.tool()
    async def generate_log_parsing_rule(
        log_query: Optional[str] = None,
        log_samples: Optional[List[str]] = None,
        time_range: str = "1 hour ago",
        field_hints: Optional[Dict[str, str]] = None,
        account_id: Optional[str] = None,
    ) -> str:
        """
        Generate a log parsing rule from either a query or provided samples.
    
        Args:
            log_query: Optional NRQL WHERE clause to fetch logs (e.g., "service = 'api'")
            log_samples: Optional list of log message samples
            time_range: Time range for log query (default: "1 hour ago")
            field_hints: Optional hints for field types (e.g., {"user_id": "UUID"})
            account_id: Optional account ID (uses default if not provided)
    
        Returns:
            Generated GROK pattern, NRQL pattern, and analysis
        """
        if not client:
            return json.dumps({"error": "New Relic client not initialized"})
    
        acct_id = account_id or client.account_id
        if not acct_id:
            return json.dumps({"error": "Account ID required but not provided"})
    
        try:
            result = await log_parsing.generate_parsing_rule_from_logs(
                client, acct_id, log_query, log_samples, time_range, field_hints
            )
            return json.dumps(result, indent=2)
        except Exception as e:
            return json.dumps({"error": str(e)}, indent=2)
  • Core implementation of log parsing rule generation. Fetches logs via NRQL if needed, analyzes samples, generates GROK/NRQL patterns using GrokPatternGenerator class or single-log function.
    async def generate_parsing_rule_from_logs(
        client,
        account_id: str,
        log_query: Optional[str] = None,
        log_samples: Optional[List[str]] = None,
        time_range: str = "1 hour ago",
        field_hints: Optional[Dict[str, str]] = None,
    ) -> Dict[str, Any]:
        """
        Generate a log parsing rule from either a query or provided samples
    
        Args:
            client: New Relic client
            account_id: Account ID
            log_query: Optional NRQL query to fetch logs
            log_samples: Optional list of log message samples
            time_range: Time range for log query (default: "1 hour ago")
            field_hints: Optional hints for field types
    
        Returns:
            Dict containing the generated GROK pattern, NRQL pattern, and analysis
        """
        samples = log_samples or []
    
        # If no samples provided, fetch from New Relic
        if not samples and log_query:
            query = f"""
            SELECT message
            FROM Log
            WHERE {log_query}
            SINCE {time_range}
            LIMIT 10
            """
    
            result = await client.query_nrql(account_id, query)
    
            if result and "results" in result:
                samples = [
                    r.get("message", "") for r in result["results"] if r.get("message")
                ]
    
        if not samples:
            raise ValueError("No log samples available to generate pattern")
    
        # Use improved pattern generation for single samples
        if len(samples) == 1:
            grok_pattern, nrql_pattern = generate_grok_pattern_for_log(samples[0])
            # Create a simple analysis for single sample
            analysis = {"patterns_found": [], "samples_analyzed": 1}
            suggested_desc = "Auto-generated parsing rule for single log sample"
        else:
            generator = GrokPatternGenerator()
            analysis = generator.analyze_log_samples(samples)
            grok_pattern, nrql_pattern = generator.generate_grok_pattern(
                samples, field_hints
            )
            suggested_desc = (
                f"Auto-generated parsing rule for {analysis['patterns_found']}"
                if analysis["patterns_found"]
                else "Auto-generated parsing rule"
            )
    
        return {
            "grok_pattern": grok_pattern,
            "nrql_pattern": f"SELECT * FROM Log WHERE message LIKE '{nrql_pattern}'",
            "analysis": analysis,
            "samples_used": len(samples),
            "suggested_description": suggested_desc,
        }
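The helper `generate_grok_pattern_for_log` used in the single-sample branch is referenced but not shown. A minimal sketch of the kind of token-classification heuristic such a helper might use is below; the rules, field names, and output format are assumptions, and the real implementation in `log_parsing.py` may differ substantially.

```python
import re

# Hypothetical sketch: classify whitespace-separated tokens into GROK types.
# Rule order matters: more specific patterns should be tried first.
TOKEN_RULES = [
    (re.compile(r"^\d{4}-\d{2}-\d{2}T[\d:.]+Z?$"), "TIMESTAMP_ISO8601"),
    (re.compile(r"^(DEBUG|INFO|WARN|WARNING|ERROR|FATAL)$"), "LOGLEVEL"),
    (re.compile(r"^\d+(\.\d+)?$"), "NUMBER"),
    (re.compile(r"^(\d{1,3}\.){3}\d{1,3}$"), "IP"),
]

def sketch_grok_pattern(sample: str) -> str:
    """Build a GROK-style pattern by typing each token of one log line."""
    parts = []
    for i, token in enumerate(sample.split()):
        for regex, grok_type in TOKEN_RULES:
            if regex.match(token):
                parts.append(f"%{{{grok_type}:field{i}}}")
                break
        else:
            # Unrecognized tokens are kept as literal (regex-escaped) text
            parts.append(re.escape(token))
    return " ".join(parts)

print(sketch_grok_pattern("2024-05-01T12:00:00Z ERROR 500 from 10.0.0.7"))
# %{TIMESTAMP_ISO8601:field0} %{LOGLEVEL:field1} %{NUMBER:field2} from %{IP:field4}
```

A production generator would also merge patterns across multiple samples and honor `field_hints`, which is what the `GrokPatternGenerator` branch above is responsible for.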
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool generates patterns and analysis (output behavior) and mentions default values, but it doesn't cover critical behavioral aspects such as whether this is a read-only or mutating operation, authentication needs, rate limits, or error handling. The description adds some context but is incomplete for a tool with 5 parameters and no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by clear 'Args' and 'Returns' sections. Each sentence adds value without redundancy. It could be slightly more concise by integrating the default notes into the parameter descriptions, but overall it's efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, no annotations, but with an output schema), the description is reasonably complete. It covers the purpose, all parameters with semantics, and the return value structure. The output schema exists, so the description doesn't need to detail return values. However, it lacks behavioral context like side effects or error conditions, which holds it back from a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides meaningful semantics for all 5 parameters: it explains what each parameter is for (e.g., 'log_query: Optional NRQL WHERE clause to fetch logs'), gives examples (e.g., "service = 'api'"), and notes defaults (e.g., 'default: "1 hour ago"'). This adds significant value beyond the bare schema, though it could be more detailed on constraints or interactions between parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate a log parsing rule from either a query or provided samples.' It specifies the verb ('Generate') and resource ('log parsing rule'), and distinguishes the two input methods (query vs. samples). However, it doesn't explicitly differentiate from sibling tools like 'create_log_parsing_rule' or 'test_log_parsing_rule' in terms of when to use each, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning two alternative input methods (query or samples) and providing default values (e.g., 'default: "1 hour ago"'), but it lacks explicit guidance on when to choose one method over the other, prerequisites, or comparisons to sibling tools like 'create_log_parsing_rule'. This leaves some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/piekstra/newrelic-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.