Uptrace MCP Server

by dimonb

uptrace_search_logs

Search and filter application logs by text, severity level, service name, or custom UQL queries to identify issues and analyze system behavior.

Instructions

Search logs by text, severity, service name, or custom UQL query. Logs are represented as spans with _system = 'log:all'.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| hours | No | Number of hours to look back | 3 |
| search_text | No | Text to search for in log messages (case-insensitive) | |
| severity | No | Filter by log severity (e.g. 'ERROR', 'WARN', 'INFO', 'DEBUG') | |
| service_name | No | Filter by service name | |
| query | No | Additional UQL query string for advanced filtering | |
| limit | No | Maximum number of logs to return | 100 |
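As a concrete illustration, a call to this tool might pass arguments like the following. The parameter names and defaults come from the schema above; the values themselves are hypothetical:

```python
# Hypothetical arguments for uptrace_search_logs; names and defaults match
# the schema above, the values themselves are illustrative.
arguments = {
    "search_text": "timeout",    # case-insensitive substring match
    "severity": "ERROR",         # one of DEBUG, INFO, WARN, ERROR, FATAL
    "service_name": "checkout",
}

# Omitted optional parameters fall back to their documented defaults:
hours = arguments.get("hours", 3)    # look back 3 hours
limit = arguments.get("limit", 100)  # return at most 100 logs
print(hours, limit)  # prints: 3 100
```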

Implementation Reference

  • The handler function logic that processes arguments and fetches logs from the Uptrace client:

```python
elif name == "uptrace_search_logs":
    hours = arguments.get("hours", 3)
    search_text = arguments.get("search_text")
    severity = arguments.get("severity")
    service_name = arguments.get("service_name")
    query = arguments.get("query")
    limit = arguments.get("limit", 100)

    time_lt = datetime.now(timezone.utc)
    time_gte = time_lt - timedelta(hours=hours)

    logger.info(
        "Searching logs: text=%s, severity=%s, service=%s (limit: %s)",
        search_text,
        severity,
        service_name,
        limit,
    )

    # Build UQL query for log search
    log_query_parts = []
    if search_text:
        # Search in event attribute (which contains the log message)
        log_query_parts.append(f'where event contains "{search_text}"')
    if severity:
        log_query_parts.append(f'where log_severity = "{severity}"')
    if service_name:
        log_query_parts.append(f'where service_name = "{service_name}"')
    if query:
        log_query_parts.append(query)

    # Use the spans API to query logs (logs are represented as spans)
    full_query = 'where _system = "log:all"'
    if log_query_parts:
        full_query += " | " + " | ".join(log_query_parts)

    response = client.get_spans(
        time_gte=time_gte,
        time_lt=time_lt,
        query=full_query,
        limit=limit,
    )

    lines = [
        "# Logs Search Results",
        f"**Time Range**: {time_gte.isoformat()} - {time_lt.isoformat()}",
        f"**Total Logs**: {response.count}",
        f"**Returned**: {len(response.spans)}",
        "",
    ]

    if search_text:
        lines.append(f"**Search Text**: `{search_text}`")
    if severity:
        lines.append(f"**Severity**: {severity}")
    if service_name:
        lines.append(f"**Service**: {service_name}")
    lines.append("")

    if response.spans:
        # Group results by severity and by service for the summary
        by_severity: Dict[str, int] = {}
        by_service: Dict[str, int] = {}
        for span in response.spans:
            sev = span.attrs.get("log_severity", "UNKNOWN")
            service = span.attrs.get("service_name", "unknown")
            by_severity[sev] = by_severity.get(sev, 0) + 1
            by_service[service] = by_service.get(service, 0) + 1
```
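The query-assembly step above can be sketched as a standalone function. This is a simplified sketch of the same composition logic, not the server's exact code:

```python
def build_log_query(search_text=None, severity=None, service_name=None, query=None):
    """Compose a UQL filter string the way the handler above does."""
    parts = []
    if search_text:
        parts.append(f'where event contains "{search_text}"')
    if severity:
        parts.append(f'where log_severity = "{severity}"')
    if service_name:
        parts.append(f'where service_name = "{service_name}"')
    if query:
        parts.append(query)  # raw UQL is appended as-is
    full_query = 'where _system = "log:all"'  # restrict results to log spans
    if parts:
        full_query += " | " + " | ".join(parts)
    return full_query

print(build_log_query(severity="ERROR", service_name="api"))
# where _system = "log:all" | where log_severity = "ERROR" | where service_name = "api"
```

Note that filter values are interpolated directly into the query string, so quotes inside search_text would need escaping in a hardened implementation.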
  • The tool definition, including the name, description, and input schema:

```python
Tool(
    name="uptrace_search_logs",
    description="Search logs by text, severity, service name, or custom UQL query. Logs are represented as spans with _system = 'log:all'.",
    inputSchema={
        "type": "object",
        "properties": {
            "hours": {
                "type": "integer",
                "description": "Number of hours to look back (default: 3)",
                "default": 3,
            },
            "search_text": {
                "type": "string",
                "description": "Text to search for in log messages (case-insensitive)",
            },
            "severity": {
                "type": "string",
                "description": "Filter by log severity (e.g., 'ERROR', 'WARN', 'INFO', 'DEBUG')",
                "enum": ["DEBUG", "INFO", "WARN", "ERROR", "FATAL"],
            },
            "service_name": {
                "type": "string",
                "description": "Filter by service name",
            },
            "query": {
                "type": "string",
                "description": "Additional UQL query string for advanced filtering",
            },
            "limit": {
                "type": "integer",
                "description": "Maximum number of logs to return (default: 100)",
                "default": 100,
            },
        },
    },
)
```
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It adds the crucial behavioral context that logs are implemented as spans (_system = 'log:all'), but omits other behavioral details such as rate limits, pagination behavior, and return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences total with zero waste. First sentence establishes purpose and search capabilities; second sentence provides essential implementation context. Information is front-loaded and appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With six parameters (all optional) and no output schema, the description adequately covers the conceptual model but lacks return-value documentation. Given 100% schema coverage for inputs, the omission of the output structure prevents a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description provides a high-level grouping of parameters ('by text, severity, service name, or custom UQL query') but does not add detailed semantics, constraints, or examples beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Search') and resource ('logs') clearly stated. Explicitly distinguishes from sibling tool 'uptrace_search_spans' by clarifying that logs are represented as spans with _system='log:all', helping the agent understand the domain model and tool scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through its explanation of the search dimensions (text, severity, service, UQL) and the log representation, but it lacks explicit guidance on when to use this tool versus 'uptrace_search_spans', and on prerequisites for the UQL query parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
