docker-compose.yaml
version: '3.8'

services:
  cortex-resource-manager:
    build:
      context: .
      dockerfile: Dockerfile
    image: ghcr.io/ry-ops/cortex-resource-manager:latest
    container_name: cortex-resource-manager
    restart: unless-stopped
    env_file:
      - .env
    volumes:
      # Mount your kubeconfig file
      - ${KUBECONFIG:-~/.kube/config}:/config/kubeconfig:ro
    environment:
      - KUBECONFIG=/config/kubeconfig
      - K8S_NAMESPACE=${K8S_NAMESPACE:-cortex}
      - LOG_LEVEL=${LOG_LEVEL:-INFO}
    # MCP servers communicate via stdio, not network ports
    # Use docker exec or attach for MCP communication
    stdin_open: true
    tty: true
    healthcheck:
      test: ["CMD", "python", "-c", "import sys; sys.exit(0)"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    labels:
      - "com.cortex.mcp-server=resource-manager"
      - "com.cortex.version=1.0.0"
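Because the compose file keeps stdin and a TTY open rather than exposing network ports, a client talks to the running server by piping newline-delimited JSON-RPC 2.0 messages into the container (e.g. via `docker attach cortex-resource-manager`). A minimal sketch of building the MCP `initialize` request — the protocol version string and `clientInfo` values below are assumptions for illustration, not taken from this project:

```python
import json

def initialize_request(request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 initialize message for an MCP server.

    MCP transports messages as one JSON object per line over the
    server's stdin/stdout, which is why the compose file sets
    stdin_open and tty instead of publishing ports.
    """
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "initialize",
        "params": {
            "protocolVersion": "2024-11-05",  # assumed spec revision
            "capabilities": {},
            "clientInfo": {"name": "example-client", "version": "0.1.0"},
        },
    }
    return json.dumps(msg)

# One message per line; pipe into the container's stdin, e.g.:
#   python build_init.py | docker attach cortex-resource-manager
print(initialize_request())
```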


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ry-ops/cortex-resource-manager'
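The same endpoint can be queried from Python using only the standard library. A minimal sketch — the URL path is taken from the curl command above, but the shape of the JSON response is not assumed:

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1"

def server_endpoint(owner: str, name: str) -> str:
    """Build the MCP directory API URL for a given server."""
    return f"{API_BASE}/servers/{owner}/{name}"

def fetch_server(owner: str, name: str) -> dict:
    """GET the server record and decode the JSON body."""
    with urllib.request.urlopen(server_endpoint(owner, name)) as resp:
        return json.load(resp)

# Usage (performs a live HTTP request):
#   info = fetch_server("ry-ops", "cortex-resource-manager")
print(server_endpoint("ry-ops", "cortex-resource-manager"))
```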

If you have feedback or need assistance with the MCP directory API, please join our Discord server.