Kubectl MCP Server
A Model Context Protocol (MCP) server for Kubernetes that enables AI assistants like Claude, Cursor, and others to interact with Kubernetes clusters through natural language.
🎥 Live Demo - Watch kubectl-mcp-tool in Action with Claude!

🎥 Live Demo - Watch kubectl-mcp-tool in Action with Cursor!

🎥 Live Demo - Watch kubectl-mcp-tool in Action with Windsurf!

Features
Core Kubernetes Operations
Connect to a Kubernetes cluster
List and manage pods, services, deployments, and nodes
Create, delete, and describe pods and other resources
Get pod logs and Kubernetes events
Support for Helm v3 operations (installation, upgrades, uninstallation)
kubectl explain and api-resources support
Choose namespace for next commands (memory persistence)
Port forward to pods
Scale deployments and statefulsets
Execute commands in containers
Manage ConfigMaps and Secrets
Rollback deployments to previous versions
Ingress and NetworkPolicy management
Context switching between clusters
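Many of these operations map onto standard kubectl commands. For illustration (resource names are placeholders, and a running cluster is assumed):

```shell
kubectl port-forward pod/my-pod 8080:80 -n default   # port forward to a pod
kubectl scale deployment/my-app --replicas=3         # scale a deployment
kubectl rollout undo deployment/my-app               # roll back to the previous revision
kubectl exec -it my-pod -- /bin/sh                   # execute commands in a container
```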
Natural Language Processing
Process natural language queries for kubectl operations
Context-aware commands with memory of previous operations
Human-friendly explanations of Kubernetes concepts
Intelligent command construction from intent
Fallback to kubectl when specialized tools aren't available
Mock data support for offline/testing scenarios
Namespace-aware query handling
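As a toy sketch of "intelligent command construction from intent" (this is illustrative only, not the tool's actual parser), a natural-language verb can be mapped to a kubectl verb with a safe read-only fallback:

```shell
# Map a natural-language verb to a kubectl verb and build the command.
# Unknown intents fall back to the read-only 'get'.
build_kubectl_command() {
  intent="$1"; resource="$2"; namespace="${3:-default}"
  case "$intent" in
    list|show) verb="get" ;;
    describe)  verb="describe" ;;
    delete)    verb="delete" ;;
    *)         verb="get" ;;   # safe read-only fallback
  esac
  echo "kubectl $verb $resource -n $namespace"
}
```

For example, `build_kubectl_command list pods` yields `kubectl get pods -n default`.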
Monitoring
Cluster health monitoring
Resource utilization tracking
Pod status and health checks
Event monitoring and alerting
Node capacity and allocation analysis
Historical performance tracking
Resource usage statistics via kubectl top
Container readiness and liveness tracking
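The resource-usage features build on kubectl top, which requires metrics-server in the cluster. For example:

```shell
kubectl top nodes                 # node CPU/memory usage vs. capacity
kubectl top pods -n kube-system   # per-pod resource usage in a namespace
```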
Security
RBAC validation and verification
Security context auditing
Secure connections to Kubernetes API
Credentials management
Network policy assessment
Container security scanning
Security best practices enforcement
Role and ClusterRole management
ServiceAccount creation and binding
PodSecurityPolicy analysis
RBAC permissions auditing
Security context validation
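RBAC auditing corresponds to checks such as the following (the service account name is illustrative):

```shell
# List the permissions a service account actually holds
kubectl auth can-i --list --as=system:serviceaccount:default:my-sa

# Verify one specific permission
kubectl auth can-i delete pods --as=system:serviceaccount:default:my-sa -n default
```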
Diagnostics
Cluster diagnostics and troubleshooting
Configuration validation
Error analysis and recovery suggestions
Connection status monitoring
Log analysis and pattern detection
Resource constraint identification
Pod health check diagnostics
Common error pattern identification
Resource validation for misconfigurations
Detailed liveness and readiness probe validation
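Typical diagnostic commands behind these features include (pod name is illustrative):

```shell
kubectl describe pod my-pod -n default                   # probe config and recent events
kubectl get events -n default --sort-by=.lastTimestamp   # chronological event stream
kubectl logs my-pod -n default --previous                # logs from a crashed container
```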
Advanced Features
Multiple transport protocols support (stdio, SSE)
Integration with multiple AI assistants
Extensible tool framework
Custom resource definition support
Cross-namespace operations
Batch operations on multiple resources
Intelligent resource relationship mapping
Error explanation with recovery suggestions
Volume management and identification
Architecture
Model Context Protocol (MCP) Integration
The Kubectl MCP Tool implements the Model Context Protocol (MCP), enabling AI assistants to interact with Kubernetes clusters through a standardized interface. The architecture consists of:
MCP Server: A compliant server that handles requests from MCP clients (AI assistants)
Tools Registry: Registers Kubernetes operations as MCP tools with schemas
Transport Layer: Supports stdio, SSE, and HTTP transport methods
Core Operations: Translates tool calls to Kubernetes API operations
Response Formatter: Converts Kubernetes responses to MCP-compliant responses
Request Flow

Dual Mode Operation
The tool operates in two modes:
CLI Mode: Direct command-line interface for executing Kubernetes operations
Server Mode: Running as an MCP server to handle requests from AI assistants
Installation
For detailed installation instructions, please see the Installation Guide.
You can install kubectl-mcp-tool directly from PyPI:
For a specific version:
The package is available on PyPI: https://pypi.org/project/kubectl-mcp-tool/1.1.1/
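For example, with pip (version 1.1.1 shown, matching the PyPI link above):

```shell
pip install kubectl-mcp-tool          # latest release
pip install kubectl-mcp-tool==1.1.1   # a specific version
```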
Prerequisites
Python 3.9+
kubectl CLI installed and configured
Access to a Kubernetes cluster
pip (Python package manager)
Global Installation
Local Development Installation
Verifying Installation
After installation, verify the tool is working correctly:
Note: This tool is designed to work as an MCP server that AI assistants connect to, not as a direct kubectl replacement. The primary command available is kubectl-mcp serve which starts the MCP server.
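For example, assuming the `kubectl-mcp` entrypoint is on your PATH after installation:

```shell
# Confirm the CLI is installed and see available options
kubectl-mcp --help

# Start the MCP server (the primary mode of operation)
kubectl-mcp serve
```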
Docker Image
If you prefer using Docker, a pre-built image is available on Docker Hub:
Running the image
The server inside the container listens on port 8000. Bind any free host port to 8000 and mount your kubeconfig:
-p 8081:8000 maps host port 8081 → container port 8000.
-v $HOME/.kube:/root/.kube mounts your kubeconfig so the server can reach the cluster.
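A full invocation might look like this (the image reference is a placeholder; substitute the actual Docker Hub repository):

```shell
docker run -p 8081:8000 -v "$HOME/.kube:/root/.kube" your-dockerhub-user/kubectl-mcp-server
```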
Building a multi-architecture image (AMD64 & ARM64)
If you want to build and push a multi-arch image (so it runs on both x86_64 and Apple Silicon), use Docker Buildx:
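A typical Buildx invocation looks like this (the image tag is a placeholder):

```shell
# Create and activate a builder, then build and push for both architectures
docker buildx create --use --name multiarch
docker buildx build --platform linux/amd64,linux/arm64 \
  -t your-dockerhub-user/kubectl-mcp-server:latest --push .
```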
The published image will contain a manifest list with both architectures, and Docker will automatically pull the correct variant on each machine.
Configuration
The MCP server is allowed to access these paths to read your Kubernetes configuration:
This configuration allows users to add their kubeconfig directory to the container, enabling the MCP server to authenticate with their Kubernetes cluster.
Usage with AI Assistants
Using the MCP Server
The MCP Server (kubectl_mcp_tool.mcp_server) is a robust implementation built on the FastMCP SDK that provides enhanced compatibility across different AI assistants:
Note: If you encounter any errors with the MCP Server implementation, you can fall back to the minimal wrapper by replacing kubectl_mcp_tool.mcp_server with kubectl_mcp_tool.minimal_wrapper in your configuration. The minimal wrapper provides basic capabilities with a simpler implementation.
Direct Configuration
    {
      "mcpServers": {
        "kubernetes": {
          "command": "python",
          "args": ["-m", "kubectl_mcp_tool.mcp_server"],
          "env": {
            "KUBECONFIG": "/path/to/your/.kube/config",
            "PATH": "/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin",
            "MCP_LOG_FILE": "/path/to/logs/debug.log",
            "MCP_DEBUG": "1"
          }
        }
      }
    }

Key Environment Variables
MCP_LOG_FILE: Path to the log file (recommended, to avoid stdout pollution)
MCP_DEBUG: Set to "1" for verbose logging
MCP_TEST_MOCK_MODE: Set to "1" to use mock data instead of a real cluster
KUBECONFIG: Path to your Kubernetes config file
KUBECTL_MCP_LOG_LEVEL: Set to "DEBUG", "INFO", "WARNING", or "ERROR"
Testing the MCP Server
You can test whether the server is working correctly with:

python -m kubectl_mcp_tool.simple_ping

This will attempt to connect to the server and execute a ping command.
Alternatively, you can directly run the server with:
python -m kubectl_mcp_tool
Claude Desktop
Add the following to your Claude Desktop configuration at ~/Library/Application\ Support/Claude/claude_desktop_config.json (Windows: %APPDATA%\Claude\claude_desktop_config.json):
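A minimal entry mirroring the Direct Configuration above might look like this (replace the KUBECONFIG path with your own):

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "python",
      "args": ["-m", "kubectl_mcp_tool.mcp_server"],
      "env": {
        "KUBECONFIG": "/path/to/your/.kube/config"
      }
    }
  }
}
```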
Cursor AI
Add the following to your Cursor AI settings under MCP by adding a new global MCP server:
Save this configuration to ~/.cursor/mcp.json for global settings.
Note: Replace /path/to/your/.kube/config with the actual path to your kubeconfig file. On most systems, this is ~/.kube/config.
Windsurf
Add the following to your Windsurf configuration at ~/.config/windsurf/mcp.json (Windows: %APPDATA%\WindSurf\mcp.json):
Automatic Configuration
For automatic configuration of all supported AI assistants, run the provided installation script: