Enables management of PostgreSQL clusters through the CloudNativePG operator, providing tools for cluster creation, scaling, status monitoring, and health checking within Kubernetes environments
CloudNativePG MCP Server
An MCP (Model Context Protocol) server for managing PostgreSQL clusters using the CloudNativePG operator in Kubernetes.
Overview
This MCP server enables LLMs to interact with PostgreSQL clusters managed by the CloudNativePG operator. It provides high-level workflow tools for:
- Listing and discovering PostgreSQL clusters
- Getting detailed cluster status and health information
- Creating new PostgreSQL clusters with best practices
- Scaling clusters up or down
- Deleting PostgreSQL clusters with safety confirmations
- Managing PostgreSQL roles/users (list, create, update, delete)
- Managing PostgreSQL databases (list, create, delete)
- Managing backups and restores (TODO)
- Monitoring cluster health and logs (TODO)
Prerequisites
- Kubernetes cluster with the CloudNativePG operator installed:

  ```bash
  kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.22/releases/cnpg-1.22.0.yaml
  ```

- Python 3.11+ installed
- Kubernetes config file (kubeconfig) with cluster access at `~/.kube/config`, or set via the `KUBECONFIG` environment variable. The server uses the Kubernetes Python client library (no kubectl CLI required).
- Appropriate RBAC permissions for the service account (see RBAC Setup below)
Installation
Option 1: Install via Smithery.ai (Recommended)
The easiest way to install and configure this MCP server is through Smithery.ai:
This automatically:
- Installs the required Python dependencies
- Configures the MCP server in your Claude Desktop config
- Sets up the appropriate environment variables
Option 2: Manual Installation
1. Clone this repository:

   ```bash
   git clone https://github.com/helxplatform/cnpg-mcp.git
   cd cnpg-mcp
   ```

2. Install Python dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Verify Kubernetes connectivity (optional):

   ```bash
   python -c "from kubernetes import config; config.load_kube_config(); print('Kubernetes config loaded successfully')"
   ```

   Or, if you have kubectl installed:

   ```bash
   kubectl get nodes
   ```

4. Configure for Claude Desktop (optional): add the server to your Claude Desktop config (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):

   ```json
   {
     "mcpServers": {
       "cnpg": {
         "command": "python",
         "args": ["/absolute/path/to/cnpg_mcp_server.py"],
         "env": {
           "KUBECONFIG": "/path/to/.kube/config"
         }
       }
     }
   }
   ```
Option 3: Install as Python Package
Install directly from source:
Then run:
RBAC Setup
The MCP server needs permissions to interact with CloudNativePG resources. The CloudNativePG helm chart automatically creates ClusterRoles (`cnpg-cloudnative-pg-edit`, `cnpg-cloudnative-pg-view`), so you only need to create a ServiceAccount and bind it to these existing roles:
This creates:

- A `cnpg-mcp-server` ServiceAccount
- ClusterRoleBinding to `cnpg-cloudnative-pg-edit` (for managing clusters)
- ClusterRoleBinding to `view` (for reading pods, events, logs)
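A minimal manifest matching this setup might look like the following sketch (the binding names and the `default` namespace are assumptions; adjust to your `rbac.yaml`):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cnpg-mcp-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cnpg-mcp-server-edit
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cnpg-cloudnative-pg-edit
subjects:
  - kind: ServiceAccount
    name: cnpg-mcp-server
    namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cnpg-mcp-server-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: cnpg-mcp-server
    namespace: default
```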
Verify the setup:
For read-only access: change `cnpg-cloudnative-pg-edit` to `cnpg-cloudnative-pg-view` in `rbac.yaml`.
Configuration
Transport Modes
The server supports two transport modes (currently only stdio is implemented):
1. stdio Transport (Default)
Communication over stdin/stdout. Best for local development and Claude Desktop integration.
Characteristics:
- ✅ Simple setup, no network configuration
- ✅ Automatic process management
- ✅ Secure (no network exposure)
- ❌ Single client per server instance
- ❌ Client and server must be on the same machine
Use cases: Claude Desktop, local CLI tools, personal development
2. HTTP/SSE Transport (Future)
HTTP server with Server-Sent Events for remote access. Best for team environments and production deployments.
When implemented, will provide:
- ✅ Multiple clients can connect
- ✅ Remote access capability
- ✅ Independent server lifecycle
- ✅ Better for team/production use
- ⚠️ Requires authentication/TLS setup
Use cases: Team-shared server, production deployments, Kubernetes services
The codebase is structured to easily add HTTP transport when needed. See the `run_http_transport()` function for implementation guidelines.
Kubernetes Configuration
The server uses your kubeconfig for authentication:
- Local development: uses `~/.kube/config`
- In-cluster: automatically uses service account tokens
You can also set the KUBECONFIG environment variable:
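For example (the path here is illustrative):

```shell
export KUBECONFIG=/path/to/.kube/config
```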
Namespace Handling:
- Most tools accept an optional `namespace` parameter
- If not specified, the server automatically uses the current namespace from your Kubernetes context
- This makes it easier to work with a default namespace without specifying it every time
You can check your current namespace with:
```bash
kubectl config view --minify -o jsonpath='{..namespace}'
```
Running the Server
Command-Line Options
Standalone Mode (for testing)
Note: The server runs as a long-running process waiting for MCP requests. In stdio mode, it won't exit until interrupted. This is expected behavior.
With Claude Desktop
Add to your Claude Desktop configuration (~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
With Docker/Kubernetes Deployment
For production deployments, you can containerize the server:
Deploy as a Kubernetes service that can be accessed by your LLM application.
Available Tools
Enhanced output formats: four tools support an optional JSON format for programmatic consumption:

- `list_postgres_clusters(format="json")` - structured cluster list
- `get_cluster_status(format="json")` - structured cluster details
- `list_postgres_roles(format="json")` - structured role list
- `list_postgres_databases(format="json")` - structured database list
All other tools return human-readable text optimized for LLM consumption.
Cluster Management
1. list_postgres_clusters
List all PostgreSQL clusters in the Kubernetes cluster.
Parameters:
- `namespace` (optional): Filter by namespace. If not provided, uses the current namespace from your Kubernetes context
- `detail_level`: "concise" (default) or "detailed"
- `format`: "text" (default) or "json" - output format for programmatic consumption
Example:
JSON Output:
When format="json", returns structured data like:
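For illustration only (the field names below are assumptions, not the server's actual schema):

```json
{
  "clusters": [
    {
      "name": "my-cluster",
      "namespace": "default",
      "instances": 3,
      "status": "Cluster in healthy state"
    }
  ]
}
```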
2. get_cluster_status
Get detailed status for a specific cluster.
Parameters:
- `name` (required): Name of the cluster
- `namespace` (optional): Namespace of the cluster. If not specified, uses the current namespace from your Kubernetes context
- `detail_level`: "concise" (default) or "detailed"
- `format`: "text" (default) or "json" - output format for programmatic consumption
Example:
Note: Supports JSON format for structured output.
3. create_postgres_cluster
Create a new PostgreSQL cluster with high availability.
Parameters:
- `name` (required): Cluster name
- `instances` (default: 3): Number of PostgreSQL instances
- `storage_size` (default: "10Gi"): Storage per instance
- `postgres_version` (default: "16"): PostgreSQL version
- `storage_class` (optional): Kubernetes storage class
- `wait` (default: false): Wait for the cluster to become operational before returning
- `timeout` (optional): Maximum time in seconds to wait (30-600 seconds). Defaults to 60 seconds per instance
- `namespace` (optional): Target namespace. If not specified, uses the current namespace from your Kubernetes context
- `dry_run` (default: false): Preview the cluster configuration without creating it
Example:
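A tool invocation might carry arguments like these (values are illustrative; parameter names come from the list above):

```json
{
  "name": "my-cluster",
  "instances": 3,
  "storage_size": "20Gi",
  "postgres_version": "16",
  "wait": true,
  "timeout": 300
}
```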
4. scale_postgres_cluster
Scale a cluster by changing the number of instances.
Parameters:
- `name` (required): Cluster name
- `instances` (required): New number of instances (1-10)
- `namespace` (optional): Namespace of the cluster. If not specified, uses the current namespace from your Kubernetes context
Example:
5. delete_postgres_cluster
Delete a PostgreSQL cluster and its associated resources.
Automatically cleans up:

- The cluster resource itself
- All associated role password secrets (using label selector `cnpg.io/cluster={name}`)
Parameters:
- `name` (required): Name of the cluster to delete
- `confirm_deletion` (default: false): Must be explicitly set to true to confirm deletion
- `namespace` (optional): Namespace where the cluster exists. If not specified, uses the current namespace from your Kubernetes context
Example:
Warning: This is a DESTRUCTIVE operation that permanently removes the cluster and all its data. The tool will report how many secrets were cleaned up.
Role/User Management
6. list_postgres_roles
List all PostgreSQL roles/users managed in a cluster.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `namespace` (optional): Namespace where the cluster exists. If not specified, uses the current namespace from your Kubernetes context
- `format`: "text" (default) or "json" - output format for programmatic consumption
Example:
Note: Supports JSON format for structured output with role attributes.
7. create_postgres_role
Create a new PostgreSQL role/user in a cluster. Automatically generates a secure password and stores it in a Kubernetes secret.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `role_name` (required): Name of the role to create
- `login` (default: true): Allow role to log in
- `superuser` (default: false): Grant superuser privileges
- `inherit` (default: true): Inherit privileges from parent roles
- `createdb` (default: false): Allow creating databases
- `createrole` (default: false): Allow creating roles
- `replication` (default: false): Allow streaming replication
- `namespace` (optional): Namespace where the cluster exists
Example:
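A tool invocation might carry arguments like these (values are illustrative; parameter names come from the list above):

```json
{
  "cluster_name": "my-cluster",
  "role_name": "app_user",
  "login": true,
  "createdb": true
}
```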
8. update_postgres_role
Update attributes of an existing PostgreSQL role/user.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `role_name` (required): Name of the role to update
- `login`, `superuser`, `inherit`, `createdb`, `createrole`, `replication` (all optional): Attributes to update
- `password` (optional): New password for the role
- `namespace` (optional): Namespace where the cluster exists
Example:
9. delete_postgres_role
Delete a PostgreSQL role/user from a cluster. Also deletes the associated Kubernetes secret.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `role_name` (required): Name of the role to delete
- `namespace` (optional): Namespace where the cluster exists
Example:
Database Management
10. list_postgres_databases
List all PostgreSQL databases managed by Database CRDs for a cluster.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `namespace` (optional): Namespace where the cluster exists
- `format`: "text" (default) or "json" - output format for programmatic consumption
Example:
Note: Supports JSON format for structured output with database details.
11. create_postgres_database
Create a new PostgreSQL database using CloudNativePG's Database CRD.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `database_name` (required): Name of the database to create
- `owner` (required): Name of the role that will own the database
- `reclaim_policy` (default: "retain"): Policy for database deletion ("retain" or "delete")
- `namespace` (optional): Namespace where the cluster exists
Example:
12. delete_postgres_database
Delete a PostgreSQL database by removing its Database CRD.
Parameters:
- `cluster_name` (required): Name of the PostgreSQL cluster
- `database_name` (required): Name of the database to delete
- `namespace` (optional): Namespace where the cluster exists
Example:
Note: Whether the database is actually dropped from PostgreSQL depends on the `databaseReclaimPolicy` set when the database was created.
Architecture
Design Principles
This MCP server follows agent-centric design principles:
- Workflow-based tools: each tool completes a meaningful workflow, not just a single API call
- Optimized for context: responses are concise by default, with a detailed mode available
- Actionable errors: error messages suggest next steps
- Natural naming: tool names reflect user intent, not just API endpoints
Transport Layer Architecture
The server is designed with transport-agnostic core logic, making it easy to add new transport modes without rewriting tool implementations:
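The shape of that separation can be sketched as follows (a stdlib-only sketch; the function and flag names are assumptions, not the server's actual API):

```python
import argparse
import asyncio

def parse_transport(argv=None):
    """Parse the transport flag; core tool logic never sees this choice."""
    parser = argparse.ArgumentParser(description="CloudNativePG MCP server")
    parser.add_argument("--transport", choices=["stdio", "http"], default="stdio")
    return parser.parse_args(argv).transport

async def run_stdio_transport():
    # Serve MCP requests over stdin/stdout (the implemented mode).
    ...

async def run_http_transport():
    # Skeleton: serve MCP over HTTP/SSE once implemented.
    raise NotImplementedError("HTTP transport not yet implemented")

def main(argv=None):
    # The tool implementations are untouched; only the runner changes.
    transport = parse_transport(argv)
    runner = {"stdio": run_stdio_transport, "http": run_http_transport}[transport]
    asyncio.run(runner())
```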
Why this matters:
- All tool functions (decorated with `@mcp.tool()`) work with any transport
- Adding HTTP transport only requires implementing `run_http_transport()`
- No changes needed to business logic when switching transports
- Can run both transports simultaneously if needed
To add HTTP/SSE transport later:
1. Uncomment the HTTP dependencies in `requirements.txt`
2. Install: `pip install "mcp[sse]" starlette uvicorn`
3. Implement the `run_http_transport()` function (skeleton already provided)
4. Add authentication/authorization middleware
5. Configure TLS for production
Components
- Kubernetes client: uses the `kubernetes` Python client for API access
- CloudNativePG CRDs: interacts with Custom Resource Definitions:
  - `Cluster`: primary resource for PostgreSQL cluster management
  - `Database`: declarative database creation and management (CNPG v1.23+)
- Declarative role management: manages PostgreSQL roles through the Cluster CRD's `.spec.managed.roles` field
- Secret management: automatically creates and manages Kubernetes secrets for role passwords
- Async operations: all I/O is async for better performance
- Lazy initialization: Kubernetes clients are initialized on first use, allowing graceful startup
- Error handling: comprehensive error formatting with suggestions
Development
Adding New Tools
To add a new tool:
1. Create a Pydantic model for input validation
2. Implement the tool function with the `@mcp.tool()` decorator
3. Add a comprehensive docstring following the format in existing tools
4. Implement error handling with actionable messages
5. Test thoroughly
Example skeleton:
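Since the real skeleton is not reproduced here, a stdlib-only sketch of the pattern (a dataclass stands in for the Pydantic model, and a plain async function for one decorated with `@mcp.tool()`; all names are illustrative):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class MyToolInput:
    """Stand-in for a Pydantic input model: validate in __post_init__."""
    cluster_name: str
    namespace: str = "default"

    def __post_init__(self):
        if not self.cluster_name:
            raise ValueError("cluster_name must not be empty")

# In the real server this function would be decorated with @mcp.tool()
async def my_tool(cluster_name: str, namespace: str = "default") -> str:
    """One-line summary of what the tool does.

    A longer description, parameter docs, and examples go here,
    following the format used by the existing tools.
    """
    params = MyToolInput(cluster_name=cluster_name, namespace=namespace)
    try:
        # ... call the Kubernetes API here ...
        return f"OK: acted on {params.cluster_name} in {params.namespace}"
    except Exception as exc:
        # Actionable error: say what failed and what to try next.
        return f"Error: {exc}. Check RBAC permissions and the cluster name."
```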
Testing
Run syntax check:
Test with a real Kubernetes cluster:
Implemented Features
- Delete cluster tool with safety confirmations
- PostgreSQL role/user management (list, create, update, delete)
- PostgreSQL database management (list, create, delete)
- Dry-run mode for cluster creation
- Wait for cluster readiness with configurable timeout
- Automatic namespace inference from Kubernetes context
- Lazy Kubernetes client initialization
TODO: Upcoming Features
- Backup management (list, create, restore)
- Log retrieval from pods
- SQL query execution (with safety guardrails)
- Connection information retrieval (automatic secret decoding)
- Monitoring and metrics integration
- Certificate and secret management
- Cluster configuration updates
- Pooler management
Troubleshooting
"Permission denied" errors
Ensure your service account has the necessary RBAC permissions. Check:
"Connection refused" or "Cluster unreachable"
Verify kubectl connectivity:
"No module named 'mcp'"
Install dependencies:
Server hangs
This is expected behavior - the server waits for MCP requests over stdio. Run in background or use process manager.
Security Considerations
RBAC: Apply the principle of least privilege - only grant necessary permissions

- Use `cnpg-cloudnative-pg-view` for read-only access
- Use `cnpg-cloudnative-pg-edit` for cluster management
- Grant additional permissions for secrets if using role management:
  - `list` secrets with a label selector (for cleanup during cluster deletion)
  - `create` and `delete` secrets (for role management)
Secrets: Never log or expose database credentials

- Role passwords are automatically generated and stored in Kubernetes secrets
- Secrets are labeled with cluster and role information for easy management
- Secrets are named `cnpg-{cluster}-user-{role}` to avoid conflicts
- Automatic cleanup: secrets are automatically deleted when their cluster is deleted
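The naming convention can be expressed as a tiny helper (illustrative only; the server's internal function may differ):

```python
def role_secret_name(cluster: str, role: str) -> str:
    """Build the secret name for a role's password: cnpg-{cluster}-user-{role}."""
    return f"cnpg-{cluster}-user-{role}"
```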
Input validation: All inputs are validated with Pydantic models
Namespace isolation: Consider restricting to specific namespaces
Audit logging: Enable Kubernetes audit logs for compliance
Destructive operations: Cluster and database deletion require explicit confirmation
Role privileges: Be cautious when granting superuser or replication privileges
Database reclaim policy: Choose "retain" for production databases to prevent accidental data loss
Resources
License
[Your License Here]
Contributing
Contributions are welcome! Please:
- Follow the existing code style
- Add comprehensive docstrings
- Include error handling
- Test with real Kubernetes clusters
- Update the README with new features