# MCP Server Kubernetes
MCP Server that can connect to a Kubernetes cluster and manage it. Supports loading kubeconfig from multiple sources in priority order.
https://github.com/user-attachments/assets/f25f8f4e-4d04-479b-9ae0-5dac452dd2ed
## Installation & Usage

### Prerequisites

Before using this MCP server with any tool, make sure you have:

- kubectl installed and in your PATH
- A valid kubeconfig file with contexts configured
- Access to a Kubernetes cluster configured for kubectl (e.g. minikube, Rancher Desktop, GKE, etc.)
- Helm v3 installed and in your PATH (no Tiller required). Optional if you don't plan to use Helm.

You can verify your connection by running `kubectl get pods` in a terminal to ensure you can connect to your cluster without credential issues.

By default, the server loads kubeconfig from `~/.kube/config`. For additional authentication options (environment variables, custom paths, etc.), see ADVANCED_README.md.
### Claude Code
Add the MCP server to Claude Code using the built-in command:
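A typical invocation looks like the following; the npm package name `mcp-server-kubernetes` is an assumption here, so substitute the published package name if it differs:

```shell
claude mcp add kubernetes -- npx mcp-server-kubernetes
```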
This will automatically configure the server in your Claude Code MCP settings.
### Claude Desktop
Add the following configuration to your Claude Desktop config file:
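A minimal configuration sketch follows; the package name `mcp-server-kubernetes` is an assumption:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"]
    }
  }
}
```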
### VS Code

For VS Code integration, you can use the MCP server with extensions that support the Model Context Protocol:

1. Install a compatible MCP extension (such as Claude Dev or similar MCP clients)
2. Configure the extension to use this server:
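The exact fields depend on the MCP client extension you use, but a typical server entry mirrors the Claude Desktop shape (package name `mcp-server-kubernetes` is an assumption):

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"]
    }
  }
}
```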
### Cursor

Cursor supports MCP servers through its AI integration. Add the server to your Cursor MCP configuration:
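Cursor reads MCP servers from its `mcp.json` configuration (the exact path may vary by version); a minimal entry, assuming the same npx launcher as above:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"]
    }
  }
}
```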
The server will automatically connect to your current kubectl context. You can verify the connection by asking the AI assistant to list your pods or create a test deployment.
### Usage with mcp-chat

mcp-chat is a CLI chat client for MCP servers. You can use it to interact with the Kubernetes server.
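A sketch of launching it directly (the `--server` flag is an assumption about mcp-chat's CLI):

```shell
npx mcp-chat --server "npx mcp-server-kubernetes"
```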
Alternatively, pass it your existing Claude Desktop configuration file from above (on Linux, pass the correct path to the config):
Mac:
Windows:
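A sketch using the standard Claude Desktop config locations (the `--config` flag is an assumption about mcp-chat's CLI):

```shell
# macOS
npx mcp-chat --config "~/Library/Application Support/Claude/claude_desktop_config.json"

# Windows
npx mcp-chat --config "%APPDATA%\Claude\claude_desktop_config.json"
```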
## Features

- Connect to a Kubernetes cluster
- Unified kubectl API for managing resources
  - Get or list resources with `kubectl_get`
  - Describe resources with `kubectl_describe`
  - Create resources with `kubectl_create`
  - Apply YAML manifests with `kubectl_apply`
  - Delete resources with `kubectl_delete`
  - Get logs with `kubectl_logs`
  - Manage kubectl contexts with `kubectl_context`
  - Explain Kubernetes resources with `explain_resource`
  - List API resources with `list_api_resources`
  - Scale resources with `kubectl_scale`
  - Update field(s) of a resource with `kubectl_patch`
  - Manage deployment rollouts with `kubectl_rollout`
  - Execute any kubectl command with `kubectl_generic`
  - Verify connection with `ping`
- Advanced operations
  - Scale deployments with `kubectl_scale` (replaces legacy `scale_deployment`)
  - Port forward to pods and services with `port_forward`
- Run Helm operations
  - Install, upgrade, and uninstall charts
  - Support for custom values, repositories, and versions
  - Template-based installation (`helm_template_apply`) to bypass authentication issues
  - Template-based uninstallation (`helm_template_uninstall`) to bypass authentication issues
- Pod cleanup operations
  - Clean up problematic pods (`cleanup_pods`) in states: Evicted, ContainerStatusUnknown, Completed, Error, ImagePullBackOff, CrashLoopBackOff
- Node management operations
  - Cordoning, draining, and uncordoning nodes (`node_management`) for maintenance and scaling operations
- Troubleshooting prompt (`k8s-diagnose`): guides through a systematic Kubernetes troubleshooting flow for pods based on a keyword and an optional namespace
- Non-destructive mode for read and create/update-only access to clusters
- Secrets masking for security (masks sensitive data in `kubectl get secrets` output; does not affect logs)
## Prompts

The MCP Kubernetes server includes specialized prompts to assist with common diagnostic operations.

### `/k8s-diagnose` Prompt

This prompt provides a systematic troubleshooting flow for Kubernetes pods. It accepts a `keyword` to identify relevant pods and an optional `namespace` to narrow the search.
The prompt's output will guide you through an autonomous troubleshooting flow, providing instructions for identifying issues, collecting evidence, and suggesting remediation steps.
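How a prompt is invoked depends on your MCP client; a hypothetical invocation, with argument names taken from the description above:

```
/k8s-diagnose keyword=checkout namespace=staging
```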
## Local Development
Make sure that you have bun installed. Clone the repo & install dependencies:
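A sketch of the setup steps (the repository URL is an assumption; adjust if you cloned a fork):

```shell
git clone https://github.com/Flux159/mcp-server-kubernetes.git
cd mcp-server-kubernetes
bun install
```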
### Development Workflow
- Start the server in development mode (watches for file changes)
- Run unit tests
- Build the project
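The steps above map onto package scripts; the script names here are assumptions based on a typical bun project setup:

```shell
bun run dev     # start the server and watch for file changes
bun run test    # run unit tests
bun run build   # build the project
```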
### Local Testing with Inspector
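The MCP Inspector can exercise the server over stdio; this sketch assumes the build output lands in `dist/index.js`:

```shell
npx @modelcontextprotocol/inspector node dist/index.js
```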
### Local Testing with Claude Desktop
### Local Testing with mcp-chat
## Contributing
See the CONTRIBUTING.md file for details.
## Advanced
### Non-Destructive Mode
You can run the server in a non-destructive mode that disables all destructive operations (delete pods, delete deployments, delete namespaces, etc.):
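A sketch of enabling this mode from the command line; the environment variable name is an assumption based on this project's docs:

```shell
ALLOW_ONLY_NON_DESTRUCTIVE_TOOLS=true npx mcp-server-kubernetes
```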
For Claude Desktop configuration with non-destructive mode:
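A sketch of the same setting in the Claude Desktop config (package and variable names as assumed above):

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["mcp-server-kubernetes"],
      "env": {
        "ALLOW_ONLY_NON_DESTRUCTIVE_TOOLS": "true"
      }
    }
  }
}
```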
#### Commands Available in Non-Destructive Mode

All read-only and resource creation/update operations remain available:

- Resource Information: `kubectl_get`, `kubectl_describe`, `kubectl_logs`, `explain_resource`, `list_api_resources`
- Resource Creation/Modification: `kubectl_apply`, `kubectl_create`, `kubectl_scale`, `kubectl_patch`, `kubectl_rollout`
- Helm Operations: `install_helm_chart`, `upgrade_helm_chart`, `helm_template_apply`, `helm_template_uninstall`
- Connectivity: `port_forward`, `stop_port_forward`
- Context Management: `kubectl_context`
#### Commands Disabled in Non-Destructive Mode

The following destructive operations are disabled:

- `kubectl_delete`: Deleting any Kubernetes resources
- `uninstall_helm_chart`: Uninstalling Helm charts
- `cleanup`: Cleanup of managed resources
- `cleanup_pods`: Cleaning up problematic pods
- `node_management`: Node management operations (can drain nodes)
- `kubectl_generic`: General kubectl command access (may include destructive operations)
### Helm Template Apply Tool

The `helm_template_apply` tool provides an alternative way to install Helm charts that bypasses authentication issues commonly encountered with certain Kubernetes configurations. It is particularly useful when Helm itself cannot authenticate to the cluster during `helm install`.
Instead of using `helm install` directly, this tool:

1. Uses `helm template` to generate YAML manifests from the Helm chart
2. Applies the generated YAML using `kubectl apply`
3. Handles namespace creation and cleanup automatically
#### Usage Example

This is equivalent to running:
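A sketch of the manual equivalent (release name, chart, and namespace below are hypothetical):

```shell
# Render the chart to plain YAML; helm never talks to the cluster here
helm template my-nginx nginx --repo https://charts.bitnami.com/bitnami \
  --namespace web --values values.yaml > rendered.yaml

# Create the namespace and apply the rendered manifests with kubectl
kubectl create namespace web
kubectl apply -f rendered.yaml --namespace web
```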
#### Parameters

- `name`: Release name for the Helm chart
- `chart`: Chart name or path to chart directory
- `repo`: Chart repository URL (optional if using a local chart path)
- `namespace`: Kubernetes namespace to deploy to
- `values`: Chart values as an object (optional)
- `valuesFile`: Path to a values.yaml file (optional alternative to the values object)
- `createNamespace`: Whether to create the namespace if it doesn't exist (default: true)
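Putting the parameters together, a sketch of a tool call's arguments (release name, chart, repo, and values are hypothetical):

```json
{
  "name": "my-nginx",
  "chart": "nginx",
  "repo": "https://charts.bitnami.com/bitnami",
  "namespace": "web",
  "values": { "replicaCount": 2 },
  "createNamespace": true
}
```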
### Pod Cleanup with Existing Tools

Pod cleanup can be achieved using the existing `kubectl_get` and `kubectl_delete` tools with field selectors. This approach leverages standard Kubernetes functionality without requiring dedicated cleanup tools.
#### Identifying Problematic Pods

Use `kubectl_get` with field selectors to identify pods in problematic states:

- Get failed pods
- Get completed pods
- Get pods with specific conditions
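The underlying kubectl commands look like this (namespace name is hypothetical):

```shell
# Failed pods across the cluster
kubectl get pods --field-selector=status.phase=Failed --all-namespaces

# Completed pods in one namespace
kubectl get pods --field-selector=status.phase=Succeeded -n my-namespace
```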
#### Deleting Problematic Pods

Use `kubectl_delete` with field selectors to delete pods in problematic states:

- Delete failed pods
- Delete completed pods
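The underlying kubectl commands mirror the get examples (namespace name is hypothetical):

```shell
# Delete failed pods across the cluster
kubectl delete pods --field-selector=status.phase=Failed --all-namespaces

# Delete completed pods in one namespace
kubectl delete pods --field-selector=status.phase=Succeeded -n my-namespace
```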
#### Workflow

1. Identify problematic pods using `kubectl_get` with appropriate field selectors
2. Review the list of pods in the response
3. Delete the pods using `kubectl_delete` with the same field selectors
#### Available Field Selectors

- `status.phase=Failed` - Pods that have failed
- `status.phase=Succeeded` - Pods that have completed successfully
- `status.phase=Pending` - Pods that are pending
- `status.conditions[?(@.type=='Ready')].status=False` - Pods that are not ready
#### Safety Features

- Field selectors: Target specific pod states precisely
- Force deletion: Use `force=true` and `gracePeriodSeconds=0` for immediate deletion
- Namespace isolation: Target specific namespaces or use `allNamespaces=true`
- Standard kubectl: Uses well-established Kubernetes patterns
### Node Management Tool

The `node_management` tool provides comprehensive node management capabilities for Kubernetes clusters, including cordoning, draining, and uncordoning operations. This is essential for cluster maintenance, scaling, and troubleshooting.
#### Operations Available

- `list`: List all nodes with their status and schedulability
- `cordon`: Mark a node as unschedulable (no new pods will be scheduled)
- `drain`: Safely evict all pods from a node and mark it as unschedulable
- `uncordon`: Mark a node as schedulable again
#### Usage Examples

1. List all nodes
2. Cordon a node (mark as unschedulable)
3. Drain a node (dry run first)
4. Cordon a node (with confirmation)
5. Uncordon a node
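These operations map onto standard kubectl node commands; a sketch of the equivalents (the node name `worker-1` is hypothetical):

```shell
kubectl get nodes                                                          # 1. list
kubectl cordon worker-1                                                    # 2. cordon
kubectl drain worker-1 --ignore-daemonsets --dry-run=client                # 3. drain (dry run)
kubectl drain worker-1 --ignore-daemonsets --grace-period=30 --timeout=5m  # 4. drain
kubectl uncordon worker-1                                                  # 5. uncordon
```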
#### Drain Operation Parameters

- `force`: Force the operation even if there are pods not managed by controllers
- `gracePeriod`: Time in seconds given to each pod to terminate gracefully
- `deleteLocalData`: Delete local data even if emptyDir volumes are used
- `ignoreDaemonsets`: Ignore DaemonSet-managed pods (default: true)
- `timeout`: The length of time to wait before giving up (e.g., '5m', '1h')
- `dryRun`: Show what would be done without actually doing it
- `confirmDrain`: Explicit confirmation to drain the node (required for actual draining)
#### Safety Features

- Dry run by default: Drain operations default to a dry run that shows what would be done
- Explicit confirmation: Drain operations require `confirmDrain=true` to proceed
- Status tracking: Shows node status before and after operations
- Timeout protection: Configurable timeouts to prevent hanging operations
- Graceful termination: Configurable grace periods for pod termination
#### Common Use Cases

- Cluster Maintenance: Cordon nodes before maintenance, drain them, perform maintenance, then uncordon
- Node Scaling: Drain nodes before removing them from the cluster
- Troubleshooting: Isolate problematic nodes by cordoning them
- Resource Management: Drain nodes to redistribute workload
For additional advanced features, see the ADVANCED_README.md.
## Architecture

See this DeepWiki link for a more in-depth architecture overview created by Devin.

This section describes the high-level architecture of the MCP Kubernetes server.

### Request Flow

The sequence diagram below illustrates how requests flow through the system:
## Publishing new release
Go to the releases page, click on "Draft New Release", click "Choose a tag" and create a new tag by typing out a new version number using "v{major}.{minor}.{patch}" semver format. Then, write a release title "Release v{major}.{minor}.{patch}" and description / changelog if necessary and click "Publish Release".
This will create a new tag which will trigger a new release build via the cd.yml workflow. Once successful, the new release will be published to npm. Note that there is no need to update the package.json version manually, as the workflow will automatically update the version number in the package.json file & push a commit to main.
## Not planned
Adding clusters to kubectx.
## Star History
## 🖊️ Cite
If you find this repo useful, please cite: