# Podman MCP Server

**Container management made accessible through the Model Context Protocol.**

## Overview

The Podman MCP Server exposes container management capabilities through MCP, allowing AI tools and applications to:

- List and inspect running containers
- Start, stop, and restart containers
- Execute commands inside containers
- View container logs
- Manage container images
- Monitor container resource usage

Designed for seamless integration with the MCP Discovery Hub for automatic network discovery.

## Features

### Container Management

- **List containers**: View all running or stopped containers
- **Container info**: Inspect detailed container information
- **Start/Stop/Restart**: Control container lifecycle
- **Execute commands**: Run commands inside containers
- **View logs**: Access container logs with configurable line count
- **Resource stats**: Monitor CPU, memory, and I/O usage

### Image Management

- **List images**: View all available container images
- **Pull images**: Download images from registries

### Network Discovery

- **Automatic broadcasting**: Announces itself on the network via multicast
- **Zero-configuration**: No manual registration needed
- **Multi-transport support**: Works with HTTP and streamable-http

## Installation

### Prerequisites

- Python 3.10+
- Podman installed and running
- `uv` package manager (or `pip`)

### Setup

```bash
# Clone or navigate to project
cd podman-mcp-server

# Install dependencies
uv sync

# Or with pip:
pip install -r requirements.txt
```

## Configuration

### Environment Variables

```env
# Transport mode
MCP_TRANSPORT=http                  # http, streamable-http, or stdio (default)

# Server settings
MCP_HOST=0.0.0.0                    # Binding host
MCP_PORT=3001                       # Server port
MCP_SERVER_NAME=Podman MCP Server   # Display name

# Broadcasting (for MCP Discovery Hub)
MCP_ENABLE_BROADCAST=true           # Enable/disable broadcasting
MCP_BROADCAST_INTERVAL=30           # Seconds between announcements
```

### .env File

Create a `.env` file in the project root:

```env
MCP_TRANSPORT=http
MCP_PORT=3001
MCP_SERVER_NAME=Podman MCP Server
MCP_ENABLE_BROADCAST=true
MCP_BROADCAST_INTERVAL=30
```

## Usage

### Start in HTTP Mode (with broadcasting)

```bash
# Using environment variables
MCP_TRANSPORT=http MCP_PORT=3001 uv run main.py

# Or with .env file
uv run main.py
```

### Start in Streamable-HTTP Mode

```bash
MCP_TRANSPORT=streamable-http MCP_PORT=3001 uv run main.py
```

### Start in Stdio Mode (for Claude)

```bash
# Default mode, works with Claude Desktop
uv run main.py
```

## Available Tools

### Containers

#### List Containers

```
list_containers(all: bool = False)
```

List running containers (or all if `all=true`).

**Example:**

```json
{
  "method": "tools/call",
  "params": {
    "name": "list_containers",
    "arguments": { "all": true }
  }
}
```

#### Container Info

```
container_info(container: str)
```

Get detailed information about a specific container.

#### Start Container

```
start_container(container: str)
```

Start a stopped container.

#### Stop Container

```
stop_container(container: str, timeout: int = 10)
```

Stop a running container gracefully (timeout in seconds).

#### Restart Container

```
restart_container(container: str)
```

Restart a container.

#### Container Logs

```
container_logs(container: str, tail: int = 100)
```

Get logs from a container (last N lines).

#### Run Container

```
run_container(
    image: str,
    name: str = None,
    detach: bool = True,
    ports: List[str] = [],
    env: List[str] = [],
    volumes: List[str] = []
)
```

Run a new container.

**Example:**

```json
{
  "method": "tools/call",
  "params": {
    "name": "run_container",
    "arguments": {
      "image": "nginx:latest",
      "name": "my-webserver",
      "ports": ["8080:80"],
      "detach": true
    }
  }
}
```

#### Remove Container

```
remove_container(container: str, force: bool = False)
```

Remove a container (force if running).

#### Exec in Container

```
exec_container(container: str, command: List[str])
```

Execute a command inside a container.

#### Container Stats

```
container_stats(container: str = None, no_stream: bool = True)
```

Get resource usage statistics for containers.

### Images

#### List Images

```
list_images(all: bool = False)
```

List available container images.

#### Pull Image

```
pull_image(image: str)
```

Pull/download an image from a registry.

## Integration with MCP Discovery Hub

### Automatic Discovery

When broadcasting is enabled, this server automatically registers with the MCP Discovery Hub:

1. **Server broadcasts**: Every 30 seconds, announces itself on `239.255.255.250:5353`
2. **Hub discovers**: The Discovery Hub receives the announcement and probes the server
3. **Tools registered**: All 12 container management tools become available network-wide
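The exact announcement payload format is defined by the server and hub implementations and is not documented here. The sketch below only illustrates the transport mechanism described above (plain UDP multicast to `239.255.255.250:5353` every `MCP_BROADCAST_INTERVAL` seconds) with a hypothetical JSON payload; the field names `name`, `port`, and `transport` are illustrative, not the documented format.

```python
import json
import socket
import time

# Hypothetical announcement payload; field names are illustrative only.
# The real format is defined by the server and the MCP Discovery Hub.
ANNOUNCEMENT = json.dumps({
    "name": "Podman MCP Server",
    "port": 3001,
    "transport": "http",
}).encode()

MULTICAST_GROUP = ("239.255.255.250", 5353)  # group/port the hub listens on
INTERVAL = 30                                # matches MCP_BROADCAST_INTERVAL

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

while True:
    # Send the announcement to the multicast group; any Discovery Hub
    # listening on this group/port can pick it up and probe the server.
    sock.sendto(ANNOUNCEMENT, MULTICAST_GROUP)
    time.sleep(INTERVAL)
```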
"params": { "name": "run_container", "arguments": { "image": "nginx:latest", "name": "my-webserver", "ports": ["8080:80"], "detach": true } } } ``` #### Remove Container ``` remove_container(container: str, force: bool = False) ``` Remove a container (force if running) #### Exec in Container ``` exec_container(container: str, command: List[str]) ``` Execute a command inside a container #### Container Stats ``` container_stats(container: str = None, no_stream: bool = True) ``` Get resource usage statistics for containers ### Images #### List Images ``` list_images(all: bool = False) ``` List available container images #### Pull Image ``` pull_image(image: str) ``` Pull/download an image from a registry ## Integration with MCP Discovery Hub ### Automatic Discovery When broadcasting is enabled, this server automatically registers with the MCP Discovery Hub: 1. **Server broadcasts**: Every 30 seconds, announces itself on `239.255.255.250:5353` 2. **Hub discovers**: Discovery hub receives announcement and probes the server 3. **Tools registered**: All 12 container management tools become available network-wide ### Manual Registration If running without broadcasting: ```bash # Scan for the server manually curl -X POST http://localhost:8000/scan \ -H "Content-Type: application/json" \ -d '{"ports": [3001]}' ``` ## API Endpoints (When in HTTP Mode) ### GET / Server info endpoint ```bash curl http://localhost:3001/ ``` Response: ```json { "name": "Podman MCP Server", "version": "1.0.0", "protocol": "MCP Streamable HTTP", "endpoint": "/mcp" } ``` ### POST /mcp MCP protocol endpoint All MCP communication happens here (initialize, tools/list, tools/call) ## Use Cases ### 1. Container Orchestration Use with AI tools to manage containerized applications: ``` "User: Start a new web server and configure it" AI: I'll start an nginx container for you... → calls run_container(image="nginx", name="webserver", ports=["8080:80"]) ``` ### 2. Monitoring and Debugging Check container status and logs: ``` "User: What's the status of my database container?" AI: Let me check the logs and stats... → calls container_logs(container="postgres", tail=50) → calls container_stats(container="postgres") ``` ### 3. Multi-Server Management Deploy and manage containers across multiple hosts: ``` Host 1: Podman MCP Server (port 3001) Host 2: Podman MCP Server (port 3001) Host 3: MCP Discovery Hub (port 8000) ↓ All containers managed from single AI interface ``` ### 4. Development Workflows Quickly spin up development environments: ``` "User: Set up a development database for testing" AI: I'll create a PostgreSQL container for you... 
## Use Cases

### 1. Container Orchestration

Use with AI tools to manage containerized applications:

```
User: Start a new web server and configure it
AI:   I'll start an nginx container for you...
      → calls run_container(image="nginx", name="webserver", ports=["8080:80"])
```

### 2. Monitoring and Debugging

Check container status and logs:

```
User: What's the status of my database container?
AI:   Let me check the logs and stats...
      → calls container_logs(container="postgres", tail=50)
      → calls container_stats(container="postgres")
```

### 3. Multi-Server Management

Deploy and manage containers across multiple hosts:

```
Host 1: Podman MCP Server (port 3001)
Host 2: Podman MCP Server (port 3001)
Host 3: MCP Discovery Hub (port 8000)
        ↓
All containers managed from a single AI interface
```

### 4. Development Workflows

Quickly spin up development environments:

```
User: Set up a development database for testing
AI:   I'll create a PostgreSQL container for you...
      → calls run_container(
            image="postgres:15",
            name="dev-db",
            env=["POSTGRES_PASSWORD=devpass"]
        )
```

## Logs

Server logs are written to `podman_mcp.log`:

```bash
# View logs
tail -f podman_mcp.log

# Check for errors
grep ERROR podman_mcp.log
```

## Troubleshooting

### Port Already in Use

```bash
# Use a different port
MCP_PORT=3002 uv run main.py
```

### Broadcasting Not Working

Check multicast connectivity:

```bash
# Verify multicast is enabled
ip route show

# Check firewall
sudo firewall-cmd --add-service=mdns --permanent
```

### Podman Connection Error

Ensure Podman is running:

```bash
# Start Podman service
systemctl start podman

# Verify connection
podman ps
```

## Performance Considerations

- **Container operations**: Most operations complete within 100-500ms
- **Log retrieval**: Depends on log size and network speed
- **Broadcasting overhead**: Minimal (30-byte UDP packets every 30 seconds)
- **Connection pooling**: Configured with `pool_size=5` for efficiency

## Security

### Best Practices

1. **Run in isolated networks**: Deploy in trusted network environments
2. **Use firewall rules**: Restrict access to the MCP port
3. **Disable broadcasting in untrusted networks**: Set `MCP_ENABLE_BROADCAST=false`
4. **Monitor logs**: Regularly check for unauthorized access attempts

### Limitations

- No built-in authentication (relies on network security)
- No resource quotas (an AI client can run unlimited containers)
- Commands run with the same privileges as the Podman daemon

Consider adding a reverse proxy with authentication for production use.

## Requirements

- Python 3.10+
- FastAPI
- SQLAlchemy
- FastMCP
- python-dotenv

## Contributing

Improvements welcome! Areas for enhancement:

- Container networking configuration
- Image building and pushing
- Volume management
- Container health monitoring
- Network performance metrics

## License

MIT License - see the LICENSE file for details.

## Support

- Issues: Report on GitHub
- Documentation: See the MCP Discovery Hub wiki
- Examples: Check the `examples/` directory
