Enables comprehensive security analysis of Kubernetes RBAC Roles, generating hardened configurations, validating security postures, and correlating runtime logs with granted permissions.
Leverages OpenAI's language models for intelligent Kubernetes Role security analysis, providing automated risk assessment and security recommendations through structured prompt chains.
1. Click on "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@KubeGuard MCP Server analyze this Role manifest for security risks and give me a score`.

That's it! The server will respond to your query, and you can continue using it as needed.
# KubeGuard MCP Server

A Model Context Protocol (MCP) server for Kubernetes Role security analysis using LLM-assisted prompt chaining, based on the KubeGuard research paper: "LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis."
## Features
### Security Analysis

- **Static Analysis**: Rule-based security assessment of Kubernetes Roles
- **LLM Prompt Chaining**: 5-step modular analysis workflow using OpenAI/Anthropic
- **Runtime Correlation**: Analyze actual permission usage vs. granted permissions
- **Security Scoring**: 0-100 scale risk assessment with detailed breakdown
### Analysis Tools

- `analyze_role_security`: Comprehensive Role security analysis
- `generate_hardened_role`: Create least-privilege Role configurations
- `validate_role_security`: Validate against security thresholds
- `get_server_status`: Server configuration and capabilities
### Analysis Methods

- **Rule-Based**: Fast, reliable analysis using security pattern matching
- **LLM Chain**: Deep analysis using the 5-step prompt-chaining methodology
- **Hybrid**: Automatic fallback between methods
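The hybrid fallback strategy can be sketched as a simple wrapper: try the deep LLM chain first and fall back to fast pattern matching if the LLM is unavailable. Function names and the scoring penalties below are illustrative, not the actual KubeGuard API:

```python
# Sketch of the hybrid analysis strategy (hypothetical helper names).
def rule_based_score(role: dict) -> int:
    """Fast pattern-matching pass: penalize wildcard grants."""
    score = 100
    for rule in role.get("rules", []):
        if "*" in rule.get("verbs", []):
            score -= 40
        if "*" in rule.get("resources", []):
            score -= 40
    return max(score, 0)

def hybrid_score(role: dict, llm_available: bool = False) -> int:
    """Prefer the LLM chain, fall back to rule-based analysis on failure."""
    if llm_available:
        try:
            return llm_chain_score(role)  # deep 5-step analysis (not shown)
        except Exception:
            pass  # automatic fallback to the rule-based path
    return rule_based_score(role)

role = {"rules": [{"apiGroups": [""], "resources": ["*"], "verbs": ["*"]}]}
print(hybrid_score(role))  # wildcard verbs + resources -> 20
```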
## Quick Start

### 1. Installation

```bash
git clone https://github.com/your-org/kubeguard-mcp
cd kubeguard-mcp
pip install -r requirements.txt
```

### 2. Configuration

```bash
cp .env.example .env
# Edit .env with your LLM API keys
```

### 3. Run Server

```bash
python -m kubeguard.main
```

### 4. Test Analysis
```python
import asyncio

from kubeguard import KubeGuardRoleAnalyzer

# Example Role manifest with wildcard permissions
role_manifest = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "test-role", "namespace": "default"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["*"],
        "verbs": ["*"]
    }]
}

async def main():
    # Analyze security
    analyzer = KubeGuardRoleAnalyzer()
    analysis = await analyzer.analyze_role(role_manifest)
    print(f"Security Score: {analysis.security_score}/100")
    print(f"Risk Level: {analysis.risk_level.value}")

asyncio.run(main())
```

## Configuration
### Environment Variables
| Variable | Description | Default |
| --- | --- | --- |
| | LLM provider (`openai`/`anthropic`/`none`) | |
| | OpenAI API key | - |
| | Anthropic API key | - |
| | Model to use | |
| | Security validation threshold | |
| | Enable usage simulation | |
### Analysis Configuration

```python
from kubeguard.config import config

# Check if an LLM provider is configured
if config.has_llm_configured:
    print(f"LLM Provider: {config.llm.provider}")
    print(f"Model: {config.llm.model}")
```

## KubeGuard Methodology
### 5-Step LLM Prompt Chain

1. **Role Understanding**: Analyze structure and infer purpose
2. **Permission Analysis**: Deep security assessment
3. **Runtime Correlation**: Usage pattern analysis
4. **Risk Assessment**: Comprehensive risk scoring
5. **Recommendation Generation**: Actionable improvements
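The five steps above can be sketched as a sequential chain, where each step's output is appended to the context passed to the next. The step identifiers and the `ask_llm` callable below are illustrative; the actual prompts live in `kubeguard/prompts.py`:

```python
# Illustrative 5-step prompt chain: each step sees the Role plus all
# earlier findings (hypothetical names, not the real KubeGuard internals).
import json

CHAIN_STEPS = [
    "role_understanding",
    "permission_analysis",
    "runtime_correlation",
    "risk_assessment",
    "recommendation_generation",
]

def run_chain(role: dict, ask_llm) -> dict:
    """ask_llm(step_name, context_json) -> str is any LLM completion callable."""
    context = {"role": role}
    for step in CHAIN_STEPS:
        context[step] = ask_llm(step, json.dumps(context))
    return context

# Stub LLM for demonstration; a real provider call goes here.
findings = run_chain({"kind": "Role"}, lambda step, ctx: f"findings for {step}")
print(findings["risk_assessment"])  # -> "findings for risk_assessment"
```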
### Security Scoring

- **90-100**: Excellent security posture
- **70-89**: Good security, minor improvements
- **50-69**: Moderate risk, review required
- **30-49**: High risk, immediate action needed
- **0-29**: Critical risk, urgent remediation
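The score bands above map directly onto risk levels; a minimal sketch, with band labels assumed from the table:

```python
def risk_level(score: int) -> str:
    """Map a 0-100 security score to a risk band (bands from the table above)."""
    if score >= 90:
        return "excellent"
    if score >= 70:
        return "good"
    if score >= 50:
        return "moderate"
    if score >= 30:
        return "high"
    return "critical"

print(risk_level(85))  # -> "good"
```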
### Common Issues Detected

- Wildcard permissions (`*`)
- Excessive privileges beyond actual usage
- Access to sensitive resources (secrets, configmaps)
- Dangerous subresources (`pods/exec`, `pods/portforward`)
- Privilege escalation vectors
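A rule-based detector for the first few issues above can be sketched as follows; the function name and issue labels are illustrative, while the resource names come from the list:

```python
# Hypothetical rule-based detector for common RBAC issues.
SENSITIVE_RESOURCES = {"secrets", "configmaps"}
DANGEROUS_SUBRESOURCES = {"pods/exec", "pods/portforward"}

def detect_issues(role: dict) -> list[str]:
    """Scan each RBAC rule for wildcard, sensitive, and dangerous grants."""
    issues = []
    for rule in role.get("rules", []):
        resources = set(rule.get("resources", []))
        verbs = set(rule.get("verbs", []))
        if "*" in resources or "*" in verbs:
            issues.append("wildcard permission")
        if resources & SENSITIVE_RESOURCES:
            issues.append("sensitive resource access")
        if resources & DANGEROUS_SUBRESOURCES:
            issues.append("dangerous subresource")
    return issues

role = {"rules": [{"apiGroups": [""],
                   "resources": ["secrets", "pods/exec"],
                   "verbs": ["get"]}]}
print(detect_issues(role))  # -> ['sensitive resource access', 'dangerous subresource']
```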
## Usage Examples

### Basic Analysis

```python
# Analyze a Role for security issues
analysis = await analyzer.analyze_role(role_manifest)
print(json.dumps(analysis.to_dict(), indent=2))
```

### Generate Hardened Role
```python
# Create a hardened version
hardened = analyzer.generate_hardened_role(analysis)
print(json.dumps(hardened.hardened_role_manifest, indent=2))
```

### Runtime Log Integration
```python
# Include runtime logs for usage correlation
runtime_logs = [
    '{"verb":"get","resource":"pods","user":"system:serviceaccount:default:app"}',
    '{"verb":"list","resource":"pods","user":"system:serviceaccount:default:app"}'
]
analysis = await analyzer.analyze_role(role_manifest, runtime_logs)
```

## MCP Integration
### Available Tools

#### `analyze_role_security`

- **Input**: Role manifest, optional runtime logs
- **Output**: Comprehensive security analysis

#### `generate_hardened_role`

- **Input**: Role manifest
- **Output**: Hardened Role with improvements

#### `validate_role_security`

- **Input**: Role manifest, security threshold
- **Output**: Pass/fail validation with recommendations
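The pass/fail behaviour of `validate_role_security` can be sketched as a simple threshold check. The return shape and the default threshold of 70 are assumptions for illustration, not the server's actual contract:

```python
def validate(score: int, threshold: int = 70) -> dict:
    """Illustrative pass/fail validation against a security threshold."""
    passed = score >= threshold
    return {
        "passed": passed,
        "score": score,
        "threshold": threshold,
        "recommendation": None if passed else "harden the Role before deployment",
    }

print(validate(55)["passed"])  # -> False
```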
### Available Resources

- `kubeguard://security-guidelines`: Security best practices
- `kubeguard://example-roles`: Example secure/insecure configurations
- `kubeguard://prompt-chain-info`: LLM methodology details
- `kubeguard://configuration`: Server configuration
## Development

### Project Structure
```
kubeguard-mcp/
├── kubeguard/
│   ├── main.py      # MCP server entry point
│   ├── analyzer.py  # Core analysis engine
│   ├── prompts.py   # LLM prompt chains
│   ├── models.py    # Data models
│   └── config.py    # Configuration
├── tests/
├── examples/
└── requirements.txt
```

### Testing
```bash
# Run tests
python -m pytest tests/

# Test specific functionality
python -m pytest tests/test_analyzer.py -v
```

## Contributing
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Citation
If you use KubeGuard in your research, please cite the original paper:
```bibtex
@article{kubeguard2025,
  title={KubeGuard: LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis},
  author={[Authors]},
  journal={arXiv preprint arXiv:2509.04191},
  year={2025}
}
```

## Support
- **Email**: support@kubeguard.io
- **Issues**: GitHub Issues
- **Documentation**: Wiki