KubeGuard MCP Server

A Model Context Protocol (MCP) server for Kubernetes Role security analysis using LLM-assisted prompt chaining, based on the KubeGuard research paper: "LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis."

Features

🛡️ Security Analysis

  • Static Analysis: Rule-based security assessment of Kubernetes Roles

  • LLM Prompt Chaining: 5-step modular analysis workflow using OpenAI/Anthropic

  • Runtime Correlation: Analyze actual permission usage vs granted permissions

  • Security Scoring: 0-100 scale risk assessment with detailed breakdown

🔧 Analysis Tools

  • analyze_role_security: Comprehensive Role security analysis

  • generate_hardened_role: Create least-privilege Role configurations

  • validate_role_security: Validate against security thresholds

  • get_server_status: Server configuration and capabilities

📊 Analysis Methods

  • Rule-Based: Fast, reliable analysis using security pattern matching

  • LLM Chain: Deep analysis using 5-step prompt chaining methodology

  • Hybrid: Automatic fallback between methods (sketched below)
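
A minimal sketch of that fallback logic, under the assumption that the analyzer exposes separate LLM-chain and rule-based entry points (llm_chain and rule_based here are illustrative callables, not the project's actual API):

import logging

async def analyze_hybrid(role_manifest, llm_chain, rule_based, runtime_logs=None):
    """Prefer the deep LLM chain; degrade gracefully to rule-based analysis."""
    try:
        # Deep 5-step prompt-chain analysis (requires a configured LLM provider)
        return await llm_chain(role_manifest, runtime_logs)
    except Exception as exc:
        # Missing keys, rate limits, or timeouts fall back to the fast rules
        logging.getLogger("kubeguard").warning("LLM chain failed: %s", exc)
        return rule_based(role_manifest, runtime_logs)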

Quick Start

1. Installation

git clone https://github.com/your-org/kubeguard-mcp
cd kubeguard-mcp
pip install -r requirements.txt

2. Configuration

cp .env.example .env
# Edit .env with your LLM API keys

3. Run Server

python -m kubeguard.main

4. Test Analysis

import asyncio
import json

from kubeguard import KubeGuardRoleAnalyzer

# Example Role manifest with wildcard permissions
role_manifest = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "test-role", "namespace": "default"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["*"],
        "verbs": ["*"]
    }]
}

async def main():
    # Analyze security
    analyzer = KubeGuardRoleAnalyzer()
    analysis = await analyzer.analyze_role(role_manifest)
    print(f"Security Score: {analysis.security_score}/100")
    print(f"Risk Level: {analysis.risk_level.value}")

asyncio.run(main())

Configuration

Environment Variables

Variable                     Description                             Default
LLM_PROVIDER                 LLM provider (openai/anthropic/none)    none
OPENAI_API_KEY               OpenAI API key                          -
ANTHROPIC_API_KEY            Anthropic API key                       -
LLM_MODEL                    Model to use                            gpt-4o-mini
SECURITY_SCORE_THRESHOLD     Security validation threshold           70
ENABLE_RUNTIME_SIMULATION    Enable usage simulation                 true
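
For example, a minimal .env for an OpenAI-backed setup might look like this (a sketch using the variables above; values are placeholders):

# Example .env (placeholder values)
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
LLM_MODEL=gpt-4o-mini
SECURITY_SCORE_THRESHOLD=70
ENABLE_RUNTIME_SIMULATION=true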

Analysis Configuration

from kubeguard.config import config

# Check if LLM is configured
if config.has_llm_configured:
    print(f"LLM Provider: {config.llm.provider}")
    print(f"Model: {config.llm.model}")

KubeGuard Methodology

5-Step LLM Prompt Chain

  1. Role Understanding: Analyze structure and infer purpose

  2. Permission Analysis: Deep security assessment

  3. Runtime Correlation: Usage pattern analysis

  4. Risk Assessment: Comprehensive risk scoring

  5. Recommendation Generation: Actionable improvements (see the sketch below)
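
A minimal sketch of the chaining pattern, where each step's output is fed into the next prompt's context (call_llm is a hypothetical helper; the real prompts live in kubeguard/prompts.py):

async def run_prompt_chain(call_llm, role_manifest: dict, runtime_logs=None) -> dict:
    """Run the 5-step chain; each step enriches the shared context."""
    context = {"role": role_manifest, "logs": runtime_logs or []}
    # 1. Role Understanding
    context["purpose"] = await call_llm("Describe this Role's purpose.", context)
    # 2. Permission Analysis
    context["permissions"] = await call_llm("Assess each granted permission.", context)
    # 3. Runtime Correlation
    context["usage"] = await call_llm("Compare grants to observed usage.", context)
    # 4. Risk Assessment
    context["risk"] = await call_llm("Score the overall risk 0-100.", context)
    # 5. Recommendation Generation
    context["recommendations"] = await call_llm("Propose least-privilege changes.", context)
    return context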

Security Scoring

  • 90-100: Excellent security posture

  • 70-89: Good security, minor improvements

  • 50-69: Moderate risk, review required

  • 30-49: High risk, immediate action needed

  • 0-29: Critical risk, urgent remediation
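
For illustration, the bands above map to labels roughly like this (a sketch; the project's RiskLevel enum may use different names):

def risk_band(score: int) -> str:
    """Map a 0-100 security score to the bands documented above."""
    if score >= 90:
        return "excellent"
    if score >= 70:
        return "good"
    if score >= 50:
        return "moderate"
    if score >= 30:
        return "high"
    return "critical"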

Common Issues Detected

  • Wildcard permissions (*)

  • Excessive privileges beyond actual usage

  • Access to sensitive resources (secrets, configmaps)

  • Dangerous subresources (pods/exec, pods/portforward)

  • Privilege escalation vectors
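
A minimal sketch of the kind of rule-based check involved, scanning a Role's RBAC rules for the patterns above (illustrative only; the shipped rules in analyzer.py are more complete):

SENSITIVE_RESOURCES = {"secrets", "configmaps"}
DANGEROUS_SUBRESOURCES = {"pods/exec", "pods/portforward"}

def find_issues(role_manifest: dict) -> list[str]:
    """Flag common over-permissioning patterns in a Role manifest."""
    issues = []
    for rule in role_manifest.get("rules", []):
        resources = set(rule.get("resources", []))
        verbs = set(rule.get("verbs", []))
        if "*" in resources or "*" in verbs:
            issues.append("wildcard permission")
        if resources & SENSITIVE_RESOURCES:
            issues.append("access to sensitive resources")
        if resources & DANGEROUS_SUBRESOURCES:
            issues.append("dangerous subresource access")
    return issues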

Usage Examples

Basic Analysis

# Analyze a Role for security issues
analysis = await analyzer.analyze_role(role_manifest)
print(json.dumps(analysis.to_dict(), indent=2))

Generate Hardened Role

# Create a hardened version
hardened = analyzer.generate_hardened_role(analysis)
print(json.dumps(hardened.hardened_role_manifest, indent=2))

Runtime Log Integration

# Include runtime logs for usage correlation
runtime_logs = [
    '{"verb":"get","resource":"pods","user":"system:serviceaccount:default:app"}',
    '{"verb":"list","resource":"pods","user":"system:serviceaccount:default:app"}'
]
analysis = await analyzer.analyze_role(role_manifest, runtime_logs)
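
Conceptually, runtime correlation reduces to a set difference between granted and observed (verb, resource) pairs; a sketch under that assumption, using the log format shown above:

import json

def unused_permissions(role_manifest: dict, runtime_logs: list[str]) -> set[tuple[str, str]]:
    """Return granted (verb, resource) pairs never observed in the logs."""
    granted = {
        (verb, resource)
        for rule in role_manifest.get("rules", [])
        for verb in rule.get("verbs", [])
        for resource in rule.get("resources", [])
    }
    used = {(e["verb"], e["resource"]) for e in map(json.loads, runtime_logs)}
    # Wildcard grants are caught by the static wildcard checks, not set difference
    return {g for g in granted if "*" not in g and g not in used}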

MCP Integration

Available Tools

  1. analyze_role_security

    • Input: Role manifest, optional runtime logs

    • Output: Comprehensive security analysis

  2. generate_hardened_role

    • Input: Role manifest

    • Output: Hardened Role with improvements

  3. validate_role_security

    • Input: Role manifest, security threshold

    • Output: Pass/fail validation with recommendations
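
For example, calling analyze_role_security from a client built on the official MCP Python SDK might look like this (a sketch; the argument key role_manifest is an assumption about this server's tool schema):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the KubeGuard server over stdio and open a client session
    params = StdioServerParameters(command="python", args=["-m", "kubeguard.main"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "analyze_role_security",
                {"role_manifest": {"kind": "Role", "rules": []}},  # key name assumed
            )
            print(result)

asyncio.run(main())

Resources can be fetched over the same session, e.g. via session.read_resource with one of the kubeguard:// URIs listed below.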

Available Resources

  • kubeguard://security-guidelines: Security best practices

  • kubeguard://example-roles: Example secure/insecure configurations

  • kubeguard://prompt-chain-info: LLM methodology details

  • kubeguard://configuration: Server configuration

Development

Project Structure

kubeguard-mcp/
├── kubeguard/
│   ├── main.py          # MCP server entry point
│   ├── analyzer.py      # Core analysis engine
│   ├── prompts.py       # LLM prompt chains
│   ├── models.py        # Data models
│   └── config.py        # Configuration
├── tests/
├── examples/
└── requirements.txt

Testing

# Run tests
python -m pytest tests/

# Test specific functionality
python -m pytest tests/test_analyzer.py -v

Contributing

  1. Fork the repository

  2. Create feature branch (git checkout -b feature/amazing-feature)

  3. Commit changes (git commit -m 'Add amazing feature')

  4. Push branch (git push origin feature/amazing-feature)

  5. Open Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Citation

If you use KubeGuard in your research, please cite the original paper:

@article{kubeguard2025,
  title={KubeGuard: LLM-Assisted Kubernetes Hardening via Configuration Files and Runtime Logs Analysis},
  author={[Authors]},
  journal={arXiv preprint arXiv:2509.04191},
  year={2025}
}

