Kubernetes is an open-source container orchestration system for automating the deployment, scaling, and management of containerized applications.
Why this server?
Enables interaction with Kubernetes resources managed by Argo CD, including viewing managed resources, workload logs, and resource events, and running resource actions on Kubernetes objects deployed through Argo CD.
Why this server?
Provides version-aware Kubernetes documentation assistance, connecting to trusted, real-time Kubernetes docs and ensuring accurate, version-specific responses about kubectl behavior, API schemas, and feature gates across all Kubernetes versions.
Why this server?
Enables running operational tasks against Kubernetes clusters, such as listing pods across namespaces and identifying pods that are not in a ready state.
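As an illustration of that kind of task, here is a minimal sketch using the official Kubernetes Python client (assuming kubeconfig access to the cluster); it is not this server's implementation, just the underlying idea:

```python
from kubernetes import client, config

# Load credentials from the default kubeconfig (~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# List pods across all namespaces and report those whose Ready condition is not True.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    conditions = pod.status.conditions or []
    ready = any(c.type == "Ready" and c.status == "True" for c in conditions)
    if not ready:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: phase={pod.status.phase}")
```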
Why this server?
Enables management of multiple Kubernetes clusters simultaneously, including CRUD operations on common resources like Deployments, Services, Pods, ConfigMaps, Secrets, Ingresses, StatefulSets, DaemonSets, Roles, and PersistentVolumes, as well as namespace and node management operations.
Why this server?
Allows connecting to a Kubernetes cluster to manage it, including listing, creating, deleting, and describing pods, services, deployments, and namespaces.
Why this server?
Provides tools for listing and managing Google Kubernetes Engine (GKE) clusters across different regions.
Why this server?
Includes Kubernetes tools (kubectl, k9s) for managing Kubernetes clusters and resources.
Why this server?
Provides security insights for Kubernetes environments, including cluster inventory, container details, Kubernetes resource monitoring, and identifying security vulnerabilities in Kubernetes objects.
Why this server?
Supports proxying any Kubernetes API requests, enabling management of Kubernetes clusters, deployments, pods, services, and other resources.
Why this server?
Provides tools for monitoring Kubernetes clusters, retrieving problem details from services, and analyzing cluster events to troubleshoot deployment issues.
Why this server?
Enables access to Kubernetes Gateway API documentation and resources, providing source information and sample prompts for working with VPC Lattice in Kubernetes environments.
Why this server?
Enables querying and interacting with Kubernetes clusters through the Metoro observability platform's APIs exposed to Claude.
Why this server?
Allows running kubectl commands to interact with Kubernetes clusters using a specified kubeconfig path, with support for command-line piping and automatic interpretation of command results.
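A rough sketch of that pattern, assuming kubectl is installed and the kubeconfig path is supplied explicitly; the path and helper name are placeholders, and the server's actual tool interface will differ:

```python
import json
import subprocess

def run_kubectl(args, kubeconfig="/path/to/kubeconfig"):
    """Run a kubectl command against the cluster named by the kubeconfig and return stdout."""
    cmd = ["kubectl", "--kubeconfig", kubeconfig, *args]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

# Example: fetch pods in all namespaces as JSON so the result can be interpreted programmatically.
pods = json.loads(run_kubectl(["get", "pods", "-A", "-o", "json"]))
print(len(pods["items"]), "pods found")
```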
Why this server?
Enables monitoring and analysis of Kubernetes metrics, including the ability to query container, pod, and other Kubernetes-specific metrics collected by VictoriaMetrics.
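For example, VictoriaMetrics exposes a Prometheus-compatible query endpoint, so pod-level metrics can be fetched with a plain HTTP request; the address and metric name below are assumptions for illustration and depend on your scrape configuration:

```python
import requests

# Assumed single-node VictoriaMetrics address; adjust to your deployment.
VM_URL = "http://victoriametrics:8428/api/v1/query"

# Per-pod CPU usage over the last 5 minutes.
query = 'sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)'
resp = requests.get(VM_URL, params={"query": query}, timeout=10)
for series in resp.json()["data"]["result"]:
    print(series["metric"].get("pod", "<none>"), series["value"][1])
```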
Why this server?
Allows viewing and managing Kubernetes resources including configuration, generic resources, and pods. Supports operations like CRUD on any Kubernetes resource, listing/getting/deleting pods, showing pod logs, and running container images.
Why this server?
Provides access to Kubernetes clusters, allowing AI assistants to analyze resources, compare configurations between clusters, and troubleshoot deployments in the context of GitOps workflows.
Why this server?
Provides extensive Kubernetes cluster management capabilities through 49 different tools, including deployment scaling, node management, pod operations, YAML management, and storage administration across multiple clusters.
Why this server?
Provides comprehensive access to Kubernetes functionality including resource management, deployment scaling, pod operations, security configuration, diagnostics, and monitoring through natural language.
Why this server?
Supports deployment on Kubernetes using the provided Helm chart for scalable and managed API deployments.
Why this server?
Provides a comprehensive interface for managing Kubernetes clusters, including resource discovery, listing, detailed inspection, log retrieval, metrics collection, event tracking, and resource creation through a standardized MCP protocol.
Why this server?
Connects to Kubernetes clusters to list contexts, namespaces, nodes, and resources (pods, services, deployments), fetch resource details, list events, and get pod logs.
Why this server?
Provides comprehensive Kubernetes cluster management capabilities including resource creation, retrieval, updating, and deletion across multiple clusters. Supports operations on all Kubernetes resources (built-in and CRD), pod file operations, resource scaling, deployment management, and advanced querying using SQL-like syntax.
Why this server?
Provides tools for interacting with Kubernetes resources, including getting resources by kind in a namespace, retrieving specific resources by name, and patching resources. Offers access to Kubernetes configuration and context information.
Why this server?
Offers deployment templates for Kubernetes environments including EKS and GKE, with examples for deployment manifests and configuration.
Why this server?
Kubernetes appears in usage examples for content editing and as a topic for technical content that can be enhanced using the OSP editing tools.
Why this server?
Enables interaction with Kubernetes clusters, allowing for management and orchestration of containerized applications through the Kubernetes API using the Python client.
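A minimal example of what that looks like with the official kubernetes Python package (assuming kubeconfig access; the deployment name is illustrative, not part of this server):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
apps = client.AppsV1Api()

# List deployments in a namespace and print their desired replica counts.
for dep in apps.list_namespaced_deployment("default").items:
    print(dep.metadata.name, dep.spec.replicas)

# Scale a deployment (name is a placeholder for illustration).
apps.patch_namespaced_deployment_scale(
    name="example-app", namespace="default", body={"spec": {"replicas": 3}}
)
```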
Why this server?
Provides a read-only interface to Kubernetes clusters for retrieving comprehensive cluster information and diagnosing issues, including namespace management, pod status monitoring, node capacity checking, and resource management across deployments and services.
Why this server?
Allows AI agents to manage Kubernetes applications by creating and updating applications safely through Cyclops Modules instead of directly manipulating Kubernetes resources.
Why this server?
Provides full support for kubectl operations to interact with Kubernetes clusters, allowing creation, updating, and listing of resources such as Deployments, Pods, and Services across multiple clusters.
Why this server?
Converts natural language requests into valid kubectl commands for Kubernetes cluster management, supporting operations like viewing pods, services, and other resources across namespaces, with built-in security validation.
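The security-validation idea can be sketched as a simple allowlist check applied before a generated command is executed; this is a hypothetical illustration, not the server's actual policy:

```python
import shlex

# Hypothetical allowlist: only read-only kubectl verbs are permitted.
ALLOWED_VERBS = {"get", "describe", "logs", "top", "explain"}

def validate_kubectl_command(command: str) -> bool:
    """Reject anything that is not a read-only kubectl invocation."""
    tokens = shlex.split(command)
    return len(tokens) >= 2 and tokens[0] == "kubectl" and tokens[1] in ALLOWED_VERBS

print(validate_kubectl_command("kubectl get pods -n kube-system"))  # True
print(validate_kubectl_command("kubectl delete deployment web"))    # False
```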
Why this server?
Enables extraction and formatting of error logs from Kubernetes clusters through the Datadog API.
Why this server?
Supports production-ready deployment with multi-environment support, horizontal scaling, and high-availability features.
Why this server?
Allows querying logs from Kubernetes clusters through natural language, enabling filtering and retrieval of Kubernetes-specific log data.
Why this server?
Supports deployment as a scalable application with multiple replicas for high availability and load balancing.
Why this server?
Allows management of Kubernetes clusters through kubectl commands, providing tools for creating and managing deployments, pods, services, namespaces, and other resources, as well as performing operations like scaling, port forwarding, and viewing logs and events.
Why this server?
Enables deployment and management of the MCP server on Kubernetes clusters, with support for scaling, health checks, and configuration management through Kubernetes resources.
Why this server?
Provides management capabilities for Google Kubernetes Engine (GKE) clusters, including monitoring cluster status and likely supporting deployment and configuration operations.
Why this server?
Reports on local Kubernetes configuration and kubectl installation details.