Retrieve all pods running on a specific Kubernetes node by providing the context and node name. Outputs a JSON string with the pod details for efficient cluster management.
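A minimal sketch of how such a tool could be built by shelling out to kubectl; the function name, parameter names, and the summarized output fields are illustrative assumptions rather than the tool's actual interface:

```python
import json
import subprocess

def get_pods_on_node(context: str, node_name: str) -> str:
    """Return a JSON string describing all pods scheduled on the given node.

    Uses kubectl's field selector to filter pods by spec.nodeName across
    all namespaces; the named context must exist in the caller's kubeconfig.
    """
    result = subprocess.run(
        [
            "kubectl", "get", "pods",
            "--context", context,
            "--all-namespaces",
            "--field-selector", f"spec.nodeName={node_name}",
            "-o", "json",
        ],
        capture_output=True, text=True, check=True,
    )
    pods = json.loads(result.stdout)
    # Reduce the full pod objects to the fields an assistant typically needs.
    summary = [
        {
            "namespace": p["metadata"]["namespace"],
            "name": p["metadata"]["name"],
            "phase": p["status"].get("phase"),
        }
        for p in pods.get("items", [])
    ]
    return json.dumps(summary, indent=2)
```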
Verify the responsiveness and operational status of llama-server to ensure local LLM instances are running correctly and ready for use with Claude Desktop.
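A minimal sketch of such a health probe, assuming llama-server's default port (8080) and its /health endpoint; adjust the base URL for your own setup:

```python
import json
import urllib.error
import urllib.request

def check_llama_server(base_url: str = "http://127.0.0.1:8080") -> dict:
    """Probe a local llama-server instance and report whether it is ready.

    Assumes the /health endpoint exposed by llama.cpp's server; a 200
    response indicates the model is loaded and serving requests.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            body = json.loads(resp.read().decode() or "{}")
            return {"reachable": True, "status_code": resp.status, "body": body}
    except urllib.error.HTTPError as exc:
        # e.g. a 503 while the model is still loading
        return {"reachable": True, "status_code": exc.code, "body": None}
    except (urllib.error.URLError, TimeoutError):
        return {"reachable": False, "status_code": None, "body": None}
```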
Provides Kubernetes cluster management capabilities through natural language via the MCP protocol over HTTP/SSE. Supports Pod, Service, and Deployment operations, log retrieval, and resource management with JWT authentication and RBAC permissions.
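To illustrate the transport side, a rough client sketch that opens such a server's SSE stream with a JWT bearer token; the /sse path and header layout are assumptions about a typical deployment, not this server's documented API, and the stream itself carries MCP's JSON-RPC messages:

```python
import urllib.request

def stream_mcp_events(base_url: str, jwt_token: str) -> None:
    """Open the server's SSE stream with a JWT bearer token and print events.

    The /sse path is a placeholder for this sketch; authentication is done
    with a standard Authorization: Bearer header.
    """
    req = urllib.request.Request(
        f"{base_url}/sse",
        headers={
            "Authorization": f"Bearer {jwt_token}",
            "Accept": "text/event-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:  # read the event stream line by line
            line = raw.decode().rstrip("\n")
            if line.startswith("data:"):
                print(line[len("data:"):].strip())
```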
Provides comprehensive running performance calculations including VDOT, training paces, race time predictions, velocity markers, and heart rate zones using Jack Daniels, Greg McMillan, and Riegel methodologies.
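As a concrete example of one of these methodologies, a small sketch of Riegel's race-time prediction (the VDOT and McMillan calculations rely on additional published formulas and tables not reproduced here):

```python
def riegel_prediction(known_time_s: float, known_dist_m: float,
                      target_dist_m: float, exponent: float = 1.06) -> float:
    """Predict a race time for a new distance from a known performance.

    Implements Riegel's endurance model T2 = T1 * (D2 / D1) ** 1.06; the
    1.06 exponent is the commonly cited default and can be tuned per runner.
    """
    return known_time_s * (target_dist_m / known_dist_m) ** exponent

# Example: project a 22:30 5 km onto 10 km.
five_k = 22 * 60 + 30                          # 1350 s
ten_k = riegel_prediction(five_k, 5000, 10000)
print(f"Predicted 10 km: {ten_k / 60:.1f} min")  # ~46.9 min
```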
An MCP server that enables AI assistants to interact with Kubernetes clusters by translating natural language into kubectl and Helm operations. It allows users to query, manage, and diagnose Kubernetes resources and cluster state through conversational requests.
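A highly simplified sketch of the translation idea, mapping a parsed intent to a kubectl or helm invocation; the intent names, templates, and lookup-table approach are illustrative assumptions, since the real server derives commands from the model's interpretation rather than a fixed mapping:

```python
import subprocess

# Hypothetical intent-to-command templates for illustration only.
COMMAND_TEMPLATES = {
    "list_pods": ["kubectl", "get", "pods", "-n", "{namespace}"],
    "pod_logs": ["kubectl", "logs", "{pod}", "-n", "{namespace}"],
    "helm_releases": ["helm", "list", "-n", "{namespace}"],
}

def run_intent(intent: str, **params: str) -> str:
    """Render the command template for a parsed intent and execute it."""
    template = COMMAND_TEMPLATES[intent]
    cmd = [part.format(**params) for part in template]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Example: "show me the pods in the staging namespace"
# -> intent "list_pods" with namespace="staging"
print(run_intent("list_pods", namespace="staging"))
```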