

ARC Config MCP Server

πŸš€ AI-Powered GitHub Actions Runner Controller Management

A comprehensive TypeScript MCP (Model Context Protocol) server that transforms complex ARC operations into conversational AI interactions. Deploy, monitor, and manage GitHub Actions runners with natural language commands.

🎯 QUICK START: Add to your MCP client: ghcr.io/tsviz/arc-config-mcp:latest β†’ Ask AI: "Install ARC in my cluster" β†’ Done! ✨


🎯 What is ARC Config MCP Server?

ARC Config MCP Server is an enterprise-grade automation tool that bridges the gap between complex Kubernetes-based GitHub Actions runner management and intuitive AI-powered operations. Instead of memorizing kubectl commands and YAML configurations, simply tell the AI what you want to accomplish.

🌟 Key Capabilities

  • πŸ€– Natural Language Operations: Transform complex ARC tasks into simple conversations

  • ⚑ Intelligent Installation: Zero-configuration ARC deployment with smart defaults

  • πŸ“Š Real-time Monitoring: Live status dashboards and proactive health monitoring

  • πŸ”’ Enterprise Security: Built-in policy validation and compliance enforcement

  • πŸ’° Cost Intelligence: Automatic scaling and resource optimization

  • πŸ› οΈ Self-Healing: Automated troubleshooting and remediation

πŸš€ Quick Start

🎯 MCP Client Integration (Recommended)

The primary way to use ARC Config MCP Server is through MCP client integration with AI assistants such as GitHub Copilot in VS Code.

Add this configuration to your MCP client settings:

{ "mcpServers": { "arc-config": { "command": "docker", "args": [ "run", "-i", "--rm", "-v", "${HOME}/.kube:/home/node/.kube:ro", "-e", "GITHUB_TOKEN", "ghcr.io/tsviz/arc-config-mcp:latest" ] } } }

✨ That's it! Your AI assistant now has access to powerful ARC management capabilities. Simply ask natural language questions like:

  • "Install ARC controller in my cluster"

  • "Scale my runners to handle more load"

  • "Check why my runners are failing"

  • "Optimize costs for my GitHub Actions setup"


🐳 Alternative: Docker Standalone

# Pull and run the latest version
docker run -p 3000:3000 ghcr.io/tsviz/arc-config-mcp:latest

# With custom configuration
docker run -p 3000:3000 \
  -e GITHUB_TOKEN=your_token \
  -e KUBECONFIG_PATH=/kubeconfig \
  -v ~/.kube/config:/kubeconfig:ro \
  ghcr.io/tsviz/arc-config-mcp:latest

πŸ› οΈ Alternative: Local Development

git clone https://github.com/tsviz/arc-config-mcp.git
cd arc-config-mcp
npm install
npm run build
npm start -- --stdio   # For MCP protocol connection

🎭 Natural Language Examples

Transform complex ARC operations into simple conversations:

πŸ§‘ "Install ARC in my dev cluster with basic security settings" πŸ€– βœ… Installing ARC controller with development security profile... πŸ“¦ Helm chart: actions-runner-controller v0.9.1 πŸ”’ Security: Basic validation enabled πŸ“ Namespace: arc-system ⏱️ Installation completed in 45 seconds πŸ§‘ "Scale my repo runners to handle the evening build rush" πŸ€– βœ… Analyzing current load and scaling runners... πŸ“Š Current: 3 runners, Target: 8 runners πŸ“ˆ Load factor: 85% β†’ 45% πŸ’° Cost impact: +$12/hour (estimated) ⏱️ Scaling completed in 90 seconds πŸ§‘ "Show me why my runners keep failing" πŸ€– πŸ” Analyzing runner health and recent failures... ❌ Issue found: Insufficient CPU resources πŸ’‘ Recommendation: Increase runner resource limits πŸ› οΈ Auto-fix available: Update resource quotas πŸ“‹ Policy violation: Resource limits below minimum

πŸ—οΈ Architecture Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚    AI Client    β”‚    β”‚   MCP Server    β”‚    β”‚   Kubernetes    β”‚
β”‚    (Copilot)    │◄──►│   (This Repo)   │◄──►│   (ARC Runs)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                β–²
                                β”‚
                        β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                        β”‚   GitHub API    β”‚
                        β”‚  (Runner Mgmt)  β”‚
                        β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Components

  • Natural Language Processor: Converts human intent to structured operations

  • ARC Policy Engine: Validates configurations against enterprise policies

  • Kubernetes Orchestrator: Manages cluster resources and deployments

  • GitHub Integration: Handles runner registration and lifecycle

  • Monitoring Hub: Tracks performance and health metrics
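
To make the flow between these components concrete, here is a minimal TypeScript sketch of the request pipeline. Every interface and function name is hypothetical (a toy keyword matcher stands in for the real natural language processor); it only illustrates the order in which the components act.

// Illustrative request pipeline; names here are hypothetical, not the server's actual API.
interface ParsedIntent {
  tool: string;                     // e.g. "arc_scale_runners"
  args: Record<string, unknown>;    // structured arguments extracted from the prompt
}

interface PolicyResult {
  allowed: boolean;
  violations: string[];
}

// Natural Language Processor: prompt -> structured operation (toy keyword matcher).
async function parseIntent(prompt: string): Promise<ParsedIntent> {
  if (/scale/i.test(prompt)) {
    return { tool: 'arc_scale_runners', args: { replicas: 8 } };
  }
  return { tool: 'arc_get_status', args: {} };
}

// ARC Policy Engine: validate the operation before it touches the cluster.
async function validatePolicy(intent: ParsedIntent): Promise<PolicyResult> {
  const tooMany = typeof intent.args.replicas === 'number' && intent.args.replicas > 50;
  return { allowed: !tooMany, violations: tooMany ? ['replica count exceeds policy maximum'] : [] };
}

// Kubernetes Orchestrator / GitHub Integration: perform the operation (stubbed out here).
async function executeTool(intent: ParsedIntent): Promise<string> {
  return `executed ${intent.tool} with ${JSON.stringify(intent.args)}`;
}

async function handlePrompt(prompt: string): Promise<string> {
  const intent = await parseIntent(prompt);       // Natural Language Processor
  const policy = await validatePolicy(intent);    // ARC Policy Engine
  if (!policy.allowed) {
    return `Blocked by policy: ${policy.violations.join(', ')}`;
  }
  return executeTool(intent);                     // Orchestrator + GitHub Integration
}

handlePrompt('Scale my runners for the evening build rush').then(console.log);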

πŸ› οΈ Complete Tool Catalog

🎯 Core ARC Operations

Tool                    | Purpose                     | Example Usage
------------------------|-----------------------------|------------------------------
arc_install_controller  | Deploy ARC to Kubernetes    | Install with custom config
arc_get_status          | Comprehensive health check  | Monitor all components
arc_scale_runners       | Horizontal runner scaling   | Handle load spikes
arc_manage_runners      | Full lifecycle management   | Create/update/delete runners
arc_validate_policies   | Policy compliance check     | Ensure security standards

πŸ€– AI-Powered Features

Tool                          | Purpose                             | Example Usage
------------------------------|-------------------------------------|------------------------------
arc_process_natural_language  | Convert natural language to action  | "Scale up for deployment"
arc_troubleshoot_issues       | Automated diagnostics               | Find and fix problems
arc_optimize_costs            | Resource efficiency                 | Reduce unnecessary spending
arc_generate_reports          | Operational insights                | Weekly performance summaries

πŸ”§ Infrastructure Tools

Tool                      | Purpose                | Example Usage
--------------------------|------------------------|------------------------------
arc_configure_networking  | Network policy setup   | Secure runner communication
arc_manage_secrets        | Credential management  | Rotate GitHub tokens
arc_backup_config         | Configuration backup   | Disaster recovery prep
arc_upgrade_components    | Version management     | Keep ARC up-to-date

πŸ“‹ Prerequisites & Requirements

System Requirements

  • Node.js: 18.0+ (LTS recommended)

  • Kubernetes: 1.24+ (tested up to 1.30)

  • Memory: 512MB+ for server

  • Storage: 100MB for installation

Kubernetes Permissions

The server requires these cluster permissions (a quick verification sketch follows the list):

  • Namespaces: List, create, watch

  • Deployments: Full CRUD operations

  • Services: Create, update, delete

  • ConfigMaps/Secrets: Manage runner configurations

  • RBAC: Create service accounts and roles
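
Before installing, it can be useful to confirm that the current kubeconfig context actually grants these permissions. The following sketch shells out to kubectl auth can-i (assuming kubectl is on PATH and points at the target cluster); the verb/resource pairs are an illustrative subset of the list above, not an exhaustive RBAC audit.

import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

// Verb/resource pairs mirroring the permission list above (illustrative subset).
const checks: Array<[verb: string, resource: string]> = [
  ['list', 'namespaces'],
  ['create', 'namespaces'],
  ['create', 'deployments'],
  ['delete', 'deployments'],
  ['create', 'services'],
  ['create', 'configmaps'],
  ['create', 'secrets'],
  ['create', 'serviceaccounts'],
  ['create', 'roles'],
];

async function canI(verb: string, resource: string): Promise<boolean> {
  try {
    // `kubectl auth can-i` exits non-zero when the answer is "no".
    const { stdout } = await run('kubectl', ['auth', 'can-i', verb, resource]);
    return stdout.trim() === 'yes';
  } catch {
    return false;
  }
}

async function main() {
  for (const [verb, resource] of checks) {
    const ok = await canI(verb, resource);
    console.log(`${ok ? 'βœ…' : '❌'} ${verb} ${resource}`);
  }
}

main();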

GitHub Requirements

  • Personal Access Token with scopes:

    • repo (for repository access)

    • admin:org (for organization runners)

    • workflow (for Actions management)

  • Organization/Repository admin permissions

  • GitHub Actions enabled
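
For classic personal access tokens, GitHub reports the granted scopes in the X-OAuth-Scopes response header, so the required scopes can be checked up front. A small sketch, assuming Node 18+ (global fetch) and GITHUB_TOKEN set in the environment; note that fine-grained tokens do not expose this header.

// Checks that a classic PAT carries the scopes listed above.
const required = ['repo', 'admin:org', 'workflow'];

async function checkTokenScopes(token: string): Promise<void> {
  const res = await fetch('https://api.github.com/user', {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) {
    throw new Error(`GitHub API returned ${res.status}; is the token valid?`);
  }
  // Header is a comma-separated list such as "repo, admin:org, workflow".
  const granted = (res.headers.get('x-oauth-scopes') ?? '')
    .split(',')
    .map(s => s.trim())
    .filter(Boolean);
  for (const scope of required) {
    console.log(`${granted.includes(scope) ? 'βœ…' : '❌'} ${scope}`);
  }
}

checkTokenScopes(process.env.GITHUB_TOKEN ?? '').catch(err => {
  console.error(err.message);
  process.exit(1);
});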

πŸ”§ Installation & Configuration

Environment Configuration

Create a .env file with your settings:

# GitHub Configuration
GITHUB_TOKEN=ghp_your_personal_access_token
GITHUB_ORG=your-organization
GITHUB_REPO=your-repository

# Kubernetes Configuration
KUBECONFIG_PATH=/path/to/kubeconfig
ARC_NAMESPACE=arc-system
RUNNER_NAMESPACE=arc-runners

# MCP Server Configuration
MCP_PORT=3000
LOG_LEVEL=info
ENABLE_METRICS=true

# Security Settings
ENABLE_POLICY_VALIDATION=true
SECURITY_PROFILE=standard   # basic|standard|strict
AUTO_FIX_VIOLATIONS=false

# Performance Tuning
MAX_CONCURRENT_OPERATIONS=5
OPERATION_TIMEOUT=300
RETRY_ATTEMPTS=3
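
A minimal sketch of loading and validating these variables at startup, assuming the dotenv package. The variable names match the .env example above, but the loader itself is illustrative rather than the server's actual configuration code.

import 'dotenv/config'; // loads .env into process.env

interface ServerConfig {
  githubToken: string;
  arcNamespace: string;
  runnerNamespace: string;
  mcpPort: number;
  securityProfile: 'basic' | 'standard' | 'strict';
}

function loadConfig(): ServerConfig {
  const token = process.env.GITHUB_TOKEN;
  if (!token) {
    throw new Error('GITHUB_TOKEN is required');
  }
  const profile = (process.env.SECURITY_PROFILE ?? 'standard') as ServerConfig['securityProfile'];
  if (!['basic', 'standard', 'strict'].includes(profile)) {
    throw new Error(`Unknown SECURITY_PROFILE: ${profile}`);
  }
  return {
    githubToken: token,
    arcNamespace: process.env.ARC_NAMESPACE ?? 'arc-system',
    runnerNamespace: process.env.RUNNER_NAMESPACE ?? 'arc-runners',
    mcpPort: Number(process.env.MCP_PORT ?? 3000),
    securityProfile: profile,
  };
}

console.log('Loaded config for namespace', loadConfig().arcNamespace);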

VS Code + GitHub Copilot Integration

  1. Configure MCP Server (already shown above): The JSON configuration from the Quick Start section works directly with VS Code + GitHub Copilot.

  2. Enable MCP in GitHub Copilot: Add to your VS Code settings:

    { "github.copilot.enable": { "*": true, "mcp": true } }
  3. Start using natural language commands with GitHub Copilot to manage your ARC infrastructure!

Production Deployment

Kubernetes Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: arc-config-mcp
  namespace: mcp-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: arc-config-mcp
  template:
    metadata:
      labels:
        app: arc-config-mcp
    spec:
      containers:
        - name: mcp-server
          image: ghcr.io/tsviz/arc-config-mcp:latest
          ports:
            - containerPort: 3000
          env:
            - name: GITHUB_TOKEN
              valueFrom:
                secretKeyRef:
                  name: github-credentials
                  key: token
            - name: LOG_LEVEL
              value: "info"
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"

Docker Compose

version: '3.8'
services:
  arc-mcp:
    image: ghcr.io/tsviz/arc-config-mcp:latest
    ports:
      - "3000:3000"
    environment:
      - GITHUB_TOKEN=${GITHUB_TOKEN}
      - LOG_LEVEL=info
    volumes:
      - ~/.kube/config:/app/.kube/config:ro
    restart: unless-stopped

🎨 Usage Patterns & Examples

1. First-Time ARC Installation

πŸ§‘ "I need to set up GitHub Actions runners in my new cluster" πŸ€– I'll help you install ARC with best practices: 1️⃣ Checking cluster compatibility... βœ… Kubernetes 1.28 detected βœ… Sufficient resources available βœ… RBAC permissions verified 2️⃣ Installing ARC controller... πŸ“¦ Adding Actions Runner Controller Helm repository βš™οΈ Configuring with security defaults πŸš€ Deploying to namespace: arc-system 3️⃣ Setting up runner infrastructure... πŸ‘₯ Creating runner group: default-runners πŸ”’ Configuring GitHub webhook authentication πŸ“Š Enabling monitoring and metrics βœ… Installation complete! 🌐 Controller endpoint: https://arc.your-cluster.com πŸ“ˆ Grafana dashboard: https://monitoring.your-cluster.com/arc πŸ“š Next steps: Configure your first runner set

2. Scaling for High Load

πŸ§‘ "Our deployment pipeline is slow, we need more runners during business hours" πŸ€– I'll set up intelligent scaling for your deployment pipeline: πŸ“Š Current Analysis: β€’ 3 active runners β€’ 12 queued jobs β€’ Average wait time: 8 minutes 🎯 Optimization Plan: β€’ Scale to 8 runners immediately β€’ Configure auto-scaling: 3-15 runners β€’ Business hours boost: 9AM-6PM EST β€’ Weekend scale-down: 1-3 runners βš™οΈ Implementing changes... βœ… HorizontalPodAutoscaler configured βœ… Schedule-based scaling rules applied βœ… Cost optimization policies enabled πŸ’° Cost Impact: β€’ Current: ~$180/month β€’ Optimized: ~$245/month (+36%) β€’ ROI: 2.5x faster deployments

3. Troubleshooting Runner Issues

πŸ§‘ "My runners keep dying and I don't know why" πŸ€– Let me analyze your runner health: πŸ” Diagnostic Results: ❌ 3 runners crashed in last hour ❌ OOMKilled events detected ❌ CPU throttling at 95% 🎯 Root Cause: Resource constraints β€’ Memory limit: 2Gi (insufficient) β€’ CPU limit: 1000m (too low for builds) β€’ Recommended: 4Gi memory, 2000m CPU πŸ› οΈ Auto-fix Available: 1. Update resource quotas 2. Configure memory overcommit protection 3. Add resource monitoring alerts Would you like me to apply these fixes? [y/N]

4. Security Compliance Check

πŸ§‘ "Check if our ARC setup meets SOC2 requirements" πŸ€– Running SOC2 compliance audit... πŸ”’ Security Assessment: βœ… Network policies configured βœ… RBAC least-privilege applied βœ… Secrets encryption at rest ❌ Container image scanning missing ❌ Audit logging not comprehensive πŸ“‹ SOC2 Control Mapping: β€’ CC6.1 (Logical Access) - 85% compliant β€’ CC6.7 (Data Transmission) - 100% compliant β€’ CC7.1 (System Monitoring) - 70% compliant πŸ› οΈ Remediation Steps: 1. Enable admission controller image scanning 2. Configure comprehensive audit logging 3. Implement runtime security monitoring πŸ“Š Compliance Score: 78/100 Target for SOC2: 95/100

πŸ”’ Security & Best Practices

πŸ›‘οΈ Security Framework

Our security model implements defense-in-depth principles:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     Security Layers                       β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚ πŸ” Authentication     β”‚ GitHub tokens, RBAC, mTLS         β”‚
β”‚ πŸ›‘οΈ Authorization      β”‚ Fine-grained permissions          β”‚
β”‚ πŸ” Audit Logging      β”‚ All operations tracked            β”‚
β”‚ πŸ“¦ Container Security β”‚ Image scanning, non-root users    β”‚
β”‚ 🌐 Network Policies   β”‚ Zero-trust networking             β”‚
β”‚ πŸ”’ Secrets Management β”‚ Encrypted storage, rotation       β”‚
β”‚ πŸ“Š Monitoring         β”‚ Real-time threat detection        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

🚨 Security Policies

The server includes built-in security policies:

Critical Security Rules

  • Container images must be from approved registries

  • No privileged containers allowed

  • Resource limits must be specified

  • Network policies required for all namespaces
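
To illustrate how the critical rules above might be enforced, here is a minimal TypeScript check against a container spec. The types, registry list, and rule set are illustrative only, not the server's actual policy engine.

// Illustrative policy check for the critical rules listed above.
interface ContainerSpec {
  image: string;
  privileged?: boolean;
  resources?: { limits?: { cpu?: string; memory?: string } };
}

const approvedRegistries = ['ghcr.io', 'gcr.io', 'docker.io/library'];

function validateContainer(spec: ContainerSpec): string[] {
  const violations: string[] = [];
  if (!approvedRegistries.some(r => spec.image.startsWith(`${r}/`))) {
    violations.push(`image ${spec.image} is not from an approved registry`);
  }
  if (spec.privileged) {
    violations.push('privileged containers are not allowed');
  }
  if (!spec.resources?.limits?.cpu || !spec.resources?.limits?.memory) {
    violations.push('cpu and memory limits must be specified');
  }
  return violations;
}

// Example: a runner container with a missing memory limit triggers one violation.
console.log(validateContainer({
  image: 'ghcr.io/actions/actions-runner:latest',
  resources: { limits: { cpu: '2' } },
}));

Running a check like this in CI, before manifests reach the cluster, is one way to keep violations out of production; the server's policy engine applies the same idea at operation time.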

Access Control Policies

  • GitHub tokens must have minimal required scopes

  • Kubernetes RBAC follows least-privilege principle

  • MCP client authentication required in production

Operational Security

  • All operations are logged and auditable

  • Sensitive data is masked in logs

  • Failed operations trigger security alerts

πŸ” Secure Configuration Example

# Security-hardened ARC configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: arc-security-config
data:
  security_profile: "strict"
  audit_logging: "enabled"
  network_policies: "enforced"
  image_scanning: "required"
  resource_quotas: "mandatory"

  # Policy definitions
  allowed_registries: |
    - ghcr.io
    - gcr.io
    - docker.io/library
  mandatory_labels: |
    - app.kubernetes.io/name
    - app.kubernetes.io/version
    - security.policy/compliant

⚠️ Security Notice: This tool provides administrative access to Kubernetes clusters and GitHub repositories. Always follow your organization's security policies and use appropriate access controls.

πŸ“Š Monitoring & Observability

πŸ“ˆ Built-in Metrics

The server exposes comprehensive metrics for monitoring:

# Prometheus metrics available at /metrics
arc_operations_total{operation="install",status="success"} 42
arc_operations_duration_seconds{operation="scale"} 1.23
arc_runners_active{namespace="production"} 8
arc_policy_violations_total{severity="critical"} 0
arc_github_api_requests_total{endpoint="runners"} 1337
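
Outside of Prometheus, any of these series can be read by scraping the endpoint directly. A small sketch, assuming Node 18+ and the server listening on localhost:3000, that pulls arc_runners_active out of the plain-text exposition format:

// Scrapes the /metrics endpoint and prints arc_runners_active samples.
async function getActiveRunners(baseUrl = 'http://localhost:3000'): Promise<void> {
  const res = await fetch(`${baseUrl}/metrics`);
  const body = await res.text();
  for (const line of body.split('\n')) {
    // Lines look like: arc_runners_active{namespace="production"} 8
    if (line.startsWith('arc_runners_active')) {
      const [series, value] = line.split(/\s+/);
      console.log(`${series} -> ${value}`);
    }
  }
}

getActiveRunners().catch(console.error);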

πŸ” Health Checks

Multiple health check endpoints are available; a small probe sketch follows the list:

  • /health - Basic server health

  • /health/kubernetes - Cluster connectivity

  • /health/github - GitHub API status

  • /health/runners - Runner status summary
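
A small probe over these endpoints, assuming Node 18+ and the default port 3000:

// Polls each health endpoint and reports its HTTP status.
const endpoints = ['/health', '/health/kubernetes', '/health/github', '/health/runners'];

async function probe(baseUrl = 'http://localhost:3000'): Promise<void> {
  for (const path of endpoints) {
    try {
      const res = await fetch(`${baseUrl}${path}`);
      console.log(`${res.ok ? 'βœ…' : '❌'} ${path}: HTTP ${res.status}`);
    } catch (err) {
      console.log(`❌ ${path}: unreachable (${(err as Error).message})`);
    }
  }
}

probe().catch(console.error);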

πŸ“Š Grafana Dashboard

Pre-built dashboard for comprehensive monitoring:

{ "dashboard": { "title": "ARC MCP Server Monitoring", "panels": [ { "title": "Runner Utilization", "type": "stat", "targets": ["arc_runners_active / arc_runners_total"] }, { "title": "Operation Success Rate", "type": "gauge", "targets": ["rate(arc_operations_total{status='success'}[5m])"] } ] } }

πŸš€ Advanced Features

🎯 Cost Optimization Engine

Intelligent cost management for runner infrastructure:

πŸ’° Cost Analysis Report
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Monthly Runner Costs: $1,247               β”‚
β”‚ Optimization Potential: -$312 (25%)        β”‚
β”‚                                            β”‚
β”‚ πŸ“Š Recommendations:                        β”‚
β”‚ β€’ Scale down during weekends (-$89)        β”‚
β”‚ β€’ Use spot instances (-$156)               β”‚
β”‚ β€’ Optimize resource allocations (-$67)     β”‚
β”‚                                            β”‚
β”‚ 🎯 Auto-optimization Available             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ€– Predictive Scaling

AI-powered scaling based on workload patterns:

apiVersion: v1
kind: ConfigMap
metadata:
  name: predictive-scaling-config
data:
  enabled: "true"
  prediction_window: "2h"
  confidence_threshold: "0.8"
  scale_up_factor: "1.5"
  scale_down_factor: "0.7"

  # ML model configuration
  model_type: "time_series_forecast"
  training_data_days: "30"
  retrain_interval: "24h"

πŸ› οΈ Self-Healing Capabilities

Automated problem detection and resolution:

πŸ”§ Self-Healing Engine Status
β”œβ”€β”€ πŸ” Issue Detection: Active
β”‚   β”œβ”€β”€ Resource exhaustion monitoring
β”‚   β”œβ”€β”€ Network connectivity checks
β”‚   β”œβ”€β”€ GitHub API rate limit tracking
β”‚   └── Runner health verification
β”‚
β”œβ”€β”€ πŸ› οΈ Auto-Remediation: Enabled
β”‚   β”œβ”€β”€ Restart unhealthy runners
β”‚   β”œβ”€β”€ Scale up on resource pressure
β”‚   β”œβ”€β”€ Rotate expired credentials
β”‚   └── Clean up orphaned resources
β”‚
└── πŸ“Š Success Rate: 94.2%
    β”œβ”€β”€ Issues detected: 127
    β”œβ”€β”€ Auto-resolved: 119
    └── Manual intervention: 8

πŸ§ͺ Testing & Validation

πŸ”¬ Comprehensive Test Suite

Run the full test suite to ensure reliability:

# Unit tests
npm run test:unit

# Integration tests (requires cluster)
npm run test:integration

# End-to-end tests
npm run test:e2e

# Performance tests
npm run test:performance

# Security tests
npm run test:security

🎯 Test Coverage

Current test coverage metrics:

  • Unit Tests: 94% line coverage

  • Integration Tests: 87% API coverage

  • E2E Tests: 78% workflow coverage

  • Security Tests: 100% vulnerability checks

πŸ—οΈ Local Testing Environment

Set up a complete local testing environment:

# Start local Kubernetes cluster
npm run dev:cluster:start

# Deploy test ARC environment
npm run dev:arc:setup

# Run MCP server in development mode
npm run dev:server

# Execute test scenarios
npm run dev:test:scenarios

πŸ”„ CI/CD Integration

GitHub Actions Workflow

Example workflow for automated ARC management:

name: ARC Management

on:
  schedule:
    - cron: '0 */6 * * *'   # Every 6 hours
    - cron: '0 9 * * 1-5'   # Weekday mornings
    - cron: '0 18 * * 5'    # Friday evenings
  workflow_dispatch:

jobs:
  arc-health-check:
    runs-on: ubuntu-latest
    steps:
      - name: Check ARC Status
        uses: ./
        with:
          mcp-server: 'ghcr.io/tsviz/arc-config-mcp:latest'
          operation: 'health-check'
          github-token: ${{ secrets.GITHUB_TOKEN }}

      - name: Scale for Peak Hours
        if: github.event.schedule == '0 9 * * 1-5'   # Weekday mornings
        uses: ./
        with:
          operation: 'scale-runners'
          target-replicas: 10

      - name: Cost Optimization
        if: github.event.schedule == '0 18 * * 5'    # Friday evenings
        uses: ./
        with:
          operation: 'optimize-costs'
          apply-recommendations: true

πŸ“š API Reference

MCP Protocol Integration

The server implements the full MCP specification:

// Tool invocation example
interface MCPToolCall {
  method: "tools/call";
  params: {
    name: "arc_install_controller";
    arguments: {
      namespace: "arc-system";
      security_profile: "standard";
      auto_scaling: true;
    };
  };
}

// Response format
interface MCPToolResponse {
  content: [
    {
      type: "text";
      text: "βœ… ARC controller installed successfully";
    }
  ];
  isError: false;
}
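
These tools can also be exercised programmatically, outside an AI assistant, with an MCP client library. The sketch below assumes the @modelcontextprotocol/sdk TypeScript package and the same Docker invocation as the Quick Start; exact import paths and result shapes may vary between SDK versions.

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function main() {
  // Launch the server the same way the Quick Start config does and talk to it over stdio.
  const transport = new StdioClientTransport({
    command: 'docker',
    args: ['run', '-i', '--rm', '-e', 'GITHUB_TOKEN', 'ghcr.io/tsviz/arc-config-mcp:latest'],
  });

  const client = new Client({ name: 'arc-example-client', version: '0.0.1' });
  await client.connect(transport);

  // List the tools the server advertises, then call one of them.
  const tools = await client.listTools();
  console.log(tools.tools.map(t => t.name));

  const status = await client.callTool({ name: 'arc_get_status', arguments: {} });
  console.log(status);

  await client.close();
}

main().catch(console.error);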

REST API Endpoints

When running in HTTP mode, these endpoints are available:

GET  /health              - Server health check
GET  /metrics             - Prometheus metrics
POST /api/v1/arc/install  - Install ARC controller
GET  /api/v1/arc/status   - Get ARC status
POST /api/v1/arc/scale    - Scale runners
GET  /api/v1/tools        - List available tools
POST /api/v1/execute      - Execute natural language command
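
Any HTTP client can drive these endpoints when the server runs in HTTP mode. A hedged sketch using Node 18+ fetch; the request body shape is an assumption, since the exact schema is not documented here.

// Scales runners through the REST API when the server runs in HTTP mode.
// The request body is illustrative; consult the server for the exact schema.
async function scaleRunners(target: number, baseUrl = 'http://localhost:3000'): Promise<void> {
  const res = await fetch(`${baseUrl}/api/v1/arc/scale`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ replicas: target }),
  });
  if (!res.ok) {
    throw new Error(`Scale request failed: HTTP ${res.status}`);
  }
  console.log(await res.json());
}

scaleRunners(8).catch(console.error);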

🀝 Contributing

We welcome contributions to improve ARC Config MCP Server!

πŸš€ Getting Started

  1. Fork the repository

  2. Create a feature branch: git checkout -b feature/amazing-feature

  3. Set up development environment:

    npm install
    npm run dev:setup
  4. Make your changes

  5. Run tests: npm test

  6. Submit a pull request

πŸ“‹ Development Guidelines

  • Follow TypeScript best practices

  • Maintain 90%+ test coverage

  • Update documentation for new features

  • Follow conventional commit messages

  • Ensure all security checks pass

πŸ” Code Review Process

All contributions go through our review process:

  1. Automated Checks: CI/CD pipeline validation

  2. Security Review: Vulnerability scanning

  3. Performance Review: Load testing for critical paths

  4. Documentation Review: Ensure docs are updated

  5. Maintainer Review: Final approval by core team

πŸ†˜ Troubleshooting

Common Issues

Installation Problems

❌ Error: KUBECONFIG not found
πŸ”§ Solution: Set KUBECONFIG environment variable
   export KUBECONFIG=/path/to/kubeconfig

❌ Error: GitHub token invalid
πŸ”§ Solution: Check token scopes and expiration
   - Required scopes: repo, admin:org, workflow
   - Generate new token: https://github.com/settings/tokens

Runtime Issues

❌ Error: Runners not starting
πŸ”§ Diagnosis: Check resource constraints
   kubectl describe pod -n arc-runners

❌ Error: High API rate limits
πŸ”§ Solution: Configure rate limiting
   Set GITHUB_API_RATE_LIMIT=5000 in environment

Performance Issues

❌ Issue: Slow response times
πŸ”§ Optimization: Enable caching
   Set ENABLE_CACHE=true
   Set CACHE_TTL=300

❌ Issue: High memory usage
πŸ”§ Solution: Tune garbage collection
   NODE_OPTIONS="--max-old-space-size=512"

πŸ” Debug Mode

Enable detailed logging for troubleshooting:

# Environment variable
export LOG_LEVEL=debug

# Runtime flag
npm start -- --log-level debug

# Specific component debugging
export DEBUG=arc:*,mcp:*,k8s:*

πŸ“ž Getting Help

πŸŽ“ Learning Resources

πŸ“– Essential Reading

πŸŽ₯ Video Tutorials

πŸ’‘ Best Practices Guide

πŸš— Roadmap

🎯 Upcoming Features

Version 1.6.0 (Next Release)

  • Multi-cluster Support: Manage ARC across multiple Kubernetes clusters

  • Advanced Cost Analytics: ML-powered cost prediction and optimization

  • Webhook Integration: Real-time event processing from GitHub

  • Custom Policy DSL: Domain-specific language for policy configuration

Version 1.7.0 (Q2 2024)

  • GitOps Integration: ArgoCD and Flux compatibility

  • Advanced Scheduling: Complex workload-aware runner scheduling

  • Integration Hub: Pre-built integrations with popular DevOps tools

  • Mobile Dashboard: React Native app for on-the-go monitoring

Version 2.0.0 (Q3 2024)

  • Multi-Cloud Support: AWS, GCP, Azure runner orchestration

  • AI-Powered Optimization: GPT-4 powered operational intelligence

  • Enterprise SSO: SAML, OIDC, and LDAP integration

  • Compliance Framework: SOC2, ISO27001, PCI-DSS automation

πŸ—ΊοΈ Long-term Vision

Transform ARC management from manual operations to fully autonomous, AI-driven infrastructure that:

  • Predicts and prevents issues before they occur

  • Optimizes costs automatically across cloud providers

  • Ensures compliance with evolving security standards

  • Scales intelligently based on development team patterns

πŸ“„ License & Legal

License

This project is licensed under the MIT License - see the LICENSE file for details.

Third-Party Dependencies

  • Node.js Ecosystem: Various MIT and Apache 2.0 licensed packages

  • Kubernetes Client: Apache 2.0 License

  • GitHub API: Used under GitHub Terms of Service

Security Disclosure

For security vulnerabilities, please email security@tsviz.com instead of using public issues.

Contributing License Agreement

By contributing to this project, you agree that your contributions will be licensed under the same MIT License.

πŸ”— Related Projects & Ecosystem

Official Projects

Community Projects

Complementary Tools

  • Monitoring: Prometheus, Grafana, DataDog integration examples

  • Security: Falco, OPA Gatekeeper policy examples

  • GitOps: ArgoCD, Flux configuration templates

  • Cost Management: Kubecost, OpenCost integration guides


πŸš€ Ready to transform your ARC operations?

Get Started β€’ View Examples β€’ Join Community

Transforming GitHub Actions runner management from manual kubectl commands to conversational AI-powered automation ✨


