geored/Lumino

smart_get_namespace_events

Analyzes Kubernetes namespace events with adaptive filtering, automatically managing volume and prioritizing errors/warnings for diagnostic insights.

Instructions

Adaptive event analysis for a namespace with automatic volume management.

When no constraints specified, automatically: estimates volume, applies smart time windows,
prioritizes errors/warnings, samples within token limits.

Args:
    namespace: Kubernetes namespace to analyze.
    last_n_events: Exact event count (only if user specifies).
    time_period: Exact time window (only if user specifies).
    strategy: "auto" for adaptive behavior (default).
    focus_areas: Areas to emphasize (default: ["errors", "warnings", "failures"]).
    max_context_tokens: Max output tokens (default: 8000).
    include_summary: Include summary and insights (default: True).
    severity_filter: Filter by severity levels.
    resource_filter: Filter by resource type.

Returns:
    Dict: Events with adaptive filtering, insights, and recommendations.
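
A call to this tool can be sketched as an MCP `tools/call` request. The `tools/call` method and the `name`/`arguments` envelope come from the MCP specification; the argument values below are illustrative only and are not taken from the source.

```python
import json

# Sketch of a JSON-RPC "tools/call" request for this tool, as an MCP client
# might send it. Argument values here are illustrative, not from the source.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "smart_get_namespace_events",
        "arguments": {
            "namespace": "production",  # required
            "strategy": "auto",         # default adaptive behavior
            "focus_areas": ["errors", "warnings", "failures"],
            "max_context_tokens": 8000,
        },
    },
}

payload = json.dumps(request)  # serialized request body
```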

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| namespace | Yes | | |
| last_n_events | No | | |
| time_period | No | | |
| strategy | No | | auto |
| focus_areas | No | | |
| max_context_tokens | No | | |
| include_summary | No | | |
| severity_filter | No | | |
| resource_filter | No | | |
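
Because the schema leaves descriptions and most defaults blank, a client may want to check required keys and fill the documented defaults itself before calling. A minimal sketch, assuming the defaults quoted in the docstring above; `prepare_arguments` is a hypothetical helper, not part of the server:

```python
REQUIRED = {"namespace"}
# Defaults as documented in the tool's docstring (not in the schema itself).
OPTIONAL_DEFAULTS = {
    "strategy": "auto",
    "focus_areas": ["errors", "warnings", "failures"],
    "max_context_tokens": 8000,
    "include_summary": True,
}

def prepare_arguments(user_args: dict) -> dict:
    """Raise on missing required keys, then layer user args over defaults."""
    missing = REQUIRED - user_args.keys()
    if missing:
        raise ValueError(f"missing required argument(s): {sorted(missing)}")
    return {**OPTIONAL_DEFAULTS, **user_args}
```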

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | | |
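
The output schema wraps everything in a single required `result` field. A minimal unwrapping sketch; the inner keys (`events`, `insights`, `recommendations`) are assumptions based on the description's wording, not on the schema:

```python
def extract_events(response: dict) -> dict:
    """Pull the required 'result' payload out of a tool response."""
    if "result" not in response:
        raise KeyError("tool response missing required 'result' field")
    return response["result"]

# Illustrative response shape; the inner keys are hypothetical.
sample = {"result": {"events": [], "insights": [], "recommendations": []}}
result = extract_events(sample)
```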
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does well. It discloses key behavioral traits: adaptive volume estimation, smart time-window application, prioritization of errors and warnings, and token-limited sampling. It also notes that the return format includes 'insights and recommendations.' However, it does not cover potential side effects, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise. It begins with a clear purpose statement, follows with usage guidelines, then provides parameter semantics in a clean format, and ends with return value information. Every sentence earns its place with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (adaptive analysis with nine parameters), the lack of annotations, and the presence of an output schema, the description is mostly complete. It covers purpose, usage, parameters, and behavioral traits well, and the output schema handles return values, so the description need not detail them. For a tool with no annotations, however, it could say more about error handling and constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With no schema descriptions for any of the nine parameters, the description compensates excellently. It provides semantic meaning for all of them: it explains when to use last_n_events/time_period versus automatic behavior, and it clarifies the defaults and purposes of strategy, focus_areas, max_context_tokens, include_summary, severity_filter, and resource_filter. This adds substantial value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
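
The guidance above, pass exact constraints only when the user gives them, can be sketched as a hypothetical argument builder (`build_args` is not part of the tool):

```python
def build_args(namespace: str, last_n_events=None, time_period=None) -> dict:
    """Include an exact constraint only when the user specified one;
    otherwise omit both and rely on the tool's adaptive defaults."""
    args = {"namespace": namespace}
    if last_n_events is not None:
        args["last_n_events"] = last_n_events
    elif time_period is not None:
        args["time_period"] = time_period
    return args
```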

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool performs 'adaptive event analysis for a namespace with automatic volume management,' specifying both the action (analysis) and the resource (namespace events). It distinguishes itself from sibling tools by emphasizing adaptive behavior and automatic volume management, unlike the simpler event-listing tools in the sibling list.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'When no constraints specified, automatically: estimates volume, applies smart time windows, prioritizes errors/warnings, samples within token limits.' This explains the adaptive behavior scenario. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/geored/Lumino'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.