
Server Configuration

Describes the environment variables used to configure the server.

Name | Required | Description | Default
KUBECONFIG | No | Path to the kubeconfig file for Kubernetes cluster access | ~/.kube/config
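The fallback behavior above can be sketched in shell: if KUBECONFIG is unset, the default path is used. This is a minimal illustration of the lookup order described in the table, not the server's actual code.

```shell
# Resolve the kubeconfig path the way the table above describes:
# use $KUBECONFIG when set, otherwise fall back to ~/.kube/config.
KUBECONFIG="${KUBECONFIG:-$HOME/.kube/config}"
echo "Using kubeconfig: $KUBECONFIG"
```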

Tools

Functions exposed to the LLM to take actions

Name | Description
diagnose-pod | Analyzes pod status, logs, and events to identify root causes and suggest solutions
debug-crashloop | Analyzes pods stuck in CrashLoopBackOff by examining exit codes, logs, and events to find the root cause
analyze-logs | Detects error patterns in logs (connection refused, OOM, database errors, etc.) and suggests causes and solutions
check-resources | Compares pod CPU/memory usage against limits to flag threshold violations
full-diagnosis | Comprehensively analyzes cluster nodes, pods, and resources to evaluate overall health
check-events | Queries events for specific resources or namespaces and analyzes Warning events
list-namespaces | Lists all namespaces in the cluster
list-pods | Lists all pods in a specific namespace
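Each tool above is invoked through MCP's standard JSON-RPC "tools/call" method. The sketch below builds such a request for diagnose-pod; the argument names ("namespace", "pod") are assumptions for illustration, so query the server's "tools/list" method for the actual input schema.

```shell
# Build a JSON-RPC 2.0 tools/call request for the diagnose-pod tool.
# An MCP client would write this line to the server's stdin (stdio transport).
# NOTE: the "arguments" keys here are illustrative, not taken from the real schema.
request='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"diagnose-pod","arguments":{"namespace":"default","pod":"my-app-7d4b9c"}}}'
echo "$request"
```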

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ongjin/k8s-doctor-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.