Glama

geored / Lumino
stream_analyze_pod_logs

Stream and analyze Kubernetes pod logs in chunks to detect patterns, identify errors and warnings, and provide real-time insights for troubleshooting.

Instructions

Stream and analyze pod logs in chunks with progressive pattern detection.

Processes logs in manageable chunks for memory efficiency and real-time insights.

Args:
    namespace: Kubernetes namespace.
    pod_name: Pod name to stream logs from.
    container_name: Specific container (if multiple).
    chunk_size: Lines per chunk (default: 5000).
    analysis_mode: "errors_only", "errors_and_warnings" (default), "full_analysis", or "custom_patterns".
    time_window: Time window for historical logs (e.g., "1h", "6h", "24h").
    follow: Stream logs in real-time (default: False).
    max_chunks: Max chunks to process (default: 50).
    since_seconds: Logs from last N seconds.
    tail_lines: Limit to last N lines.
    time_period: Time period (e.g., "1h", "30m").
    start_time: Start time (ISO format).
    end_time: End time (ISO format).
    max_context_tokens: Maximum tokens for output (default: 50000).

Returns:
    Dict[str, Any]: Keys: chunks, overall_summary, trending_patterns, recommendations, metadata.
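For illustration, a minimal sketch of an arguments payload for this tool, using parameter names and defaults from the Args list above; the namespace and pod name are hypothetical values, not part of the tool's documentation:

```python
import json

# Hypothetical arguments payload for stream_analyze_pod_logs.
# Parameter names come from the Args list above; the namespace
# and pod_name values are made up for the example.
args = {
    "namespace": "production",               # required
    "pod_name": "api-server-7d4f9",          # required (hypothetical)
    "analysis_mode": "errors_and_warnings",  # documented default
    "chunk_size": 5000,                      # documented default
    "time_window": "6h",                     # historical window
    "max_chunks": 50,                        # documented default
}

# An MCP tool call carries arguments as JSON, so the payload
# must be JSON-serializable.
payload = json.dumps(args)
print(payload)
```

A real client would pass `args` through its MCP tool-call API rather than serializing by hand; the point here is only the shape of the arguments.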

Input Schema

Name                Required  Description  Default
namespace           Yes
pod_name            Yes
container_name      No
chunk_size          No
analysis_mode       No                     errors_and_warnings
time_window         No
follow              No
max_chunks          No
since_seconds       No
tail_lines          No
time_period         No
start_time          No
end_time            No
max_context_tokens  No

Output Schema

Name    Required  Description  Default
result  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'processes logs in manageable chunks for memory efficiency and real-time insights,' which hints at performance characteristics, but does not cover critical aspects like error handling, rate limits, authentication needs, or what happens if parameters conflict (e.g., 'time_window' vs. 'since_seconds'). It adds some context but leaves significant gaps for a tool with 14 parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
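The chunk-based processing the description alludes to can be sketched as follows. This is an illustrative approximation of "progressive pattern detection", not the tool's actual implementation; the error/warning patterns and summary keys are assumptions:

```python
import re
from typing import Iterable, Iterator

# Assumed patterns for the example; the real tool's matching rules
# are not documented in the description.
ERROR_RE = re.compile(r"\b(ERROR|FATAL)\b")
WARN_RE = re.compile(r"\bWARN(ING)?\b")

def _summarize(chunk: list, index: int) -> dict:
    """Count error/warning lines in one chunk."""
    return {
        "chunk": index,
        "lines": len(chunk),
        "errors": sum(1 for line in chunk if ERROR_RE.search(line)),
        "warnings": sum(1 for line in chunk if WARN_RE.search(line)),
    }

def analyze_in_chunks(lines: Iterable[str], chunk_size: int = 5000) -> Iterator[dict]:
    """Yield per-chunk summaries progressively, so results appear
    before the whole log has been read (memory-efficient streaming)."""
    chunk, index = [], 0
    for line in lines:
        chunk.append(line)
        if len(chunk) == chunk_size:
            yield _summarize(chunk, index)
            chunk, index = [], index + 1
    if chunk:  # final partial chunk
        yield _summarize(chunk, index)
```

Because the function is a generator, an agent could surface the first chunk's findings while later chunks are still being fetched, which is the behavioral property the description hints at but does not spell out.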

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by parameter details and return values. It is appropriately sized for a complex tool, though the parameter list is lengthy; every sentence adds value, and it avoids unnecessary repetition, making it efficient and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (14 parameters, no annotations, but with an output schema), the description is fairly complete. It explains the purpose, parameters, and return structure, though it could benefit from more behavioral context (e.g., error cases). The output schema reduces the need to detail return values, but the description still provides a high-level overview of the return keys, enhancing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description includes an 'Args' section that lists all 14 parameters with brief explanations, such as default values and allowed values for 'analysis_mode.' Since schema description coverage is 0%, this compensates well by providing essential semantics beyond the schema's titles, though it lacks deeper details like format examples or interdependencies between parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
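Since the schema itself carries no value constraints, an agent (or a wrapper around the tool) could validate the documented `analysis_mode` values before calling. The allowed values below are taken directly from the Args list; the helper function is a hypothetical convenience, not part of the tool:

```python
# Allowed values as documented in the Args section above.
ALLOWED_MODES = {
    "errors_only",
    "errors_and_warnings",
    "full_analysis",
    "custom_patterns",
}

def validate_mode(mode: str) -> str:
    """Reject undocumented analysis_mode values before the tool call."""
    if mode not in ALLOWED_MODES:
        raise ValueError(
            f"analysis_mode must be one of {sorted(ALLOWED_MODES)}, got {mode!r}"
        )
    return mode
```

Failing fast like this is cheaper than a round trip to the server with an invalid argument.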

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('stream and analyze pod logs in chunks') and distinguishes it from siblings like 'analyze_pod_logs_hybrid' and 'smart_summarize_pod_logs' by emphasizing chunk-based processing and progressive pattern detection. It effectively communicates the core functionality without being tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through terms like 'real-time insights' and 'memory efficiency,' suggesting when this tool might be preferred (e.g., for large logs or streaming). However, it lacks explicit guidance on when to use this tool versus alternatives like 'analyze_pod_logs_hybrid' or 'smart_summarize_pod_logs,' leaving the agent to infer based on context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/geored/Lumino'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.