How to Deploy GitHub ARC on Kubernetes
Learn how to deploy self-hosted GitHub Actions runners on Kubernetes using ARC. Reduce costs, improve performance, and gain full control over your CI/CD infrastructure.
TL;DR
Deploy self-hosted GitHub Actions runners on your Kubernetes cluster using Actions Runner Controller (ARC) to reduce costs, improve performance, and gain full control over your CI/CD infrastructure. This guide covers installing ARC on GKE with ARM64 support, Docker-in-Docker configuration, and multi-architecture builds. Perfect for teams looking to optimize their GitHub Actions workflows and reduce dependency on GitHub-hosted runners.
Key Benefits:
- 🚀 Up to 90% reduction in CI/CD costs by eliminating GitHub Actions minutes
- ⚡ 3x faster builds with dedicated resources and caching
- 🔒 Enhanced security with runners in your private network
- 🎯 Full control over runner specifications and configurations
Introduction: Why Self-Hosted GitHub Actions Runners?
GitHub Actions has revolutionized CI/CD workflows, but relying solely on GitHub-hosted runners can be expensive and limiting. For organizations running hundreds of workflows daily, the costs can quickly escalate to thousands of dollars per month.
That's where GitHub Actions Runner Controller (ARC) comes in – it enables you to run GitHub Actions workflows on your own Kubernetes infrastructure, giving you complete control while dramatically reducing costs.
The Challenge We Solved
Our client was facing several challenges with GitHub-hosted runners:
- High costs: Over $2,000/month in GitHub Actions minutes
- Architecture mismatches: ARM64 production environment vs AMD64 runners
- Build times: 15+ minute builds due to no persistent caching
- Security concerns: Builds running outside their private network
By implementing ARC on their existing GKE cluster, we achieved:
- 90% cost reduction in CI/CD expenses
- Native ARM64 builds matching production architecture
- 5-minute average build times with Docker layer caching
- Enhanced security with runners in private subnets
Prerequisites
Before we begin, ensure you have:
- ✅ Kubernetes cluster (1.23+) with ARM64 or AMD64 nodes
- ✅ kubectl configured with cluster access
- ✅ Helm 3.x installed
- ✅ GitHub Personal Access Token or GitHub App credentials
- ✅ Docker registry access for storing container images
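Before proceeding, it's worth confirming the CLI tooling is actually on your PATH. A minimal sanity-check loop (purely local; it doesn't contact your cluster):

```shell
# Print one status line per required tool instead of failing silently mid-install
for tool in kubectl helm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT FOUND - install it before continuing"
  fi
done
```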
Architecture Overview
ARC consists of two main components:
- Controller: Manages the lifecycle of runner pods
- Runner Scale Sets: Auto-scaling groups of runners that execute workflows
┌──────────────┐ ┌─────────────────┐
│ GitHub │ ◄────► │ ARC Controller │
│ Actions │ │ (arc-systems) │
└──────────────┘ └─────────────────┘
│
▼
┌───────────────────────┐
│ Runner Scale Set │
│ (arc-runners) │
├───────────────────────┤
│ ┌─────────────────┐ │
│ │ Runner Pod 1 │ │
│ ├─────────────────┤ │
│ │ Runner Pod 2 │ │
│ ├─────────────────┤ │
│ │ Runner Pod N │ │
│ └─────────────────┘ │
└───────────────────────┘
Step 1: Install the ARC Controller
The controller is the brain of ARC, managing runner lifecycle and communicating with GitHub.
Create the Controller Namespace
kubectl create namespace arc-systems
Install the Controller with Helm
helm install arc \
--namespace arc-systems \
oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set-controller \
--version 0.9.3
Verify Controller Installation
kubectl get pods -n arc-systems
# Expected output:
NAME READY STATUS RESTARTS AGE
arc-gha-rs-controller-6775c7cdcf-xxxxx 1/1 Running 0 2m
Step 2: Configure Runner Scale Set
Now we'll configure the runners that will execute your GitHub Actions workflows.
Create Configuration Directory
mkdir -p k8s/arc
cd k8s/arc
Create Runner Values File
Create runner-scale-set-values.yaml with the following configuration:
# GitHub repository configuration
githubConfigUrl: "https://github.com/YOUR_ORG/YOUR_REPO"

# Auto-scaling configuration
minRunners: 1  # Always have 1 runner ready
maxRunners: 5  # Scale up to 5 runners based on demand

# Runner pod template configuration
template:
  spec:
    # Node selection for ARM64 architecture
    nodeSelector:
      kubernetes.io/arch: arm64  # Change to amd64 for x86_64
    # Tolerations for dedicated nodes (optional)
    tolerations:
      - key: "ci-runners"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
    containers:
      # Docker-in-Docker sidecar for an isolated Docker daemon
      - name: dind
        image: docker:24-dind
        securityContext:
          privileged: true
        env:
          - name: DOCKER_TLS_CERTDIR
            value: ""  # Disable TLS for simplicity
        volumeMounts:
          - name: docker-storage
            mountPath: /var/lib/docker
        resources:
          requests:
            cpu: "1"
            memory: "2Gi"
          limits:
            cpu: "2"
            memory: "4Gi"
      # Main runner container
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        resources:
          requests:
            cpu: "2"
            memory: "4Gi"
          limits:
            cpu: "4"
            memory: "8Gi"
        env:
          - name: ACTIONS_RUNNER_REQUIRE_JOB_CONTAINER
            value: "false"
          - name: DOCKER_HOST
            value: "tcp://localhost:2375"  # Connect to the DinD sidecar
          - name: DOCKER_BUILDKIT
            value: "1"  # Enable BuildKit for better performance
        volumeMounts:
          - name: work
            mountPath: /home/runner/_work
    volumes:
      - name: work
        emptyDir: {}
      - name: docker-storage
        emptyDir: {}
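The hand-rolled DinD sidecar above gives you full control over images and resources, but note that the gha-runner-scale-set chart also ships a built-in Docker-in-Docker mode. If you don't need a custom sidecar, a minimal values file can delegate the wiring to the chart instead (a sketch; the containerMode key follows the chart's documented values schema):

```yaml
githubConfigUrl: "https://github.com/YOUR_ORG/YOUR_REPO"
minRunners: 1
maxRunners: 5

# Let the chart inject and wire the DinD sidecar automatically
containerMode:
  type: "dind"
```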
Step 3: Create GitHub Token
You'll need a GitHub token for the runners to authenticate with GitHub.
For Repository-level Runners
- Go to GitHub Settings > Developer Settings > Personal Access Tokens
- Create a token with these scopes:
  - repo (full control of private repositories)
  - workflow (update GitHub Actions workflows)
- Save the token securely
For Organization-level Runners
Add the admin:org scope for organization-wide runner management.
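As an alternative to a personal access token, ARC also accepts GitHub App credentials, which avoids tying the runners to an individual account. A hedged sketch of the corresponding values (the github_app_* key names follow the gha-runner-scale-set chart; the IDs below are placeholders):

```yaml
githubConfigSecret:
  github_app_id: "123456"               # placeholder App ID
  github_app_installation_id: "654321"  # placeholder installation ID
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    (paste the App's private key here)
    -----END RSA PRIVATE KEY-----
```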
Step 4: Deploy Runner Scale Set
Create Deployment Script
Create deploy-runner-scale-set.sh:
#!/bin/bash
set -e

# Check that a GitHub token was provided
if [ -z "$1" ]; then
  echo "Error: GitHub token is required"
  echo "Usage: $0 <GITHUB_TOKEN>"
  exit 1
fi

GITHUB_TOKEN=$1
NAMESPACE="arc-runners"
RELEASE_NAME="arc-runner-set"

# --create-namespace makes helm create ${NAMESPACE} if it does not exist,
# so no separate kubectl create namespace step is needed
echo "Deploying runner scale set..."
helm install ${RELEASE_NAME} \
  --namespace ${NAMESPACE} \
  --create-namespace \
  --set githubConfigSecret.github_token="${GITHUB_TOKEN}" \
  -f runner-scale-set-values.yaml \
  oci://ghcr.io/actions/actions-runner-controller-charts/gha-runner-scale-set \
  --version 0.9.3

echo "Runner scale set deployed successfully!"
Deploy the Runners
chmod +x deploy-runner-scale-set.sh
./deploy-runner-scale-set.sh <YOUR_GITHUB_TOKEN>
Step 5: Verify Deployment
Check Runner Status
# Check pods
kubectl get pods -n arc-runners
# Expected output:
NAME READY STATUS RESTARTS AGE
arc-runner-set-xxxxx-runner-xxxxx 2/2 Running 0 2m
Verify in GitHub
- Navigate to your repository settings
- Go to Actions > Runners
- You should see your self-hosted runners listed
Test Docker Functionality
# Test Docker in the runner
kubectl exec -n arc-runners <pod-name> -c runner -- docker version
# Test Docker buildx
kubectl exec -n arc-runners <pod-name> -c runner -- docker buildx version
Step 6: Update Your GitHub Workflows
Modify your workflows to use the self-hosted runners:
name: Build and Deploy

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: arc-runner-set  # Use your runner scale set name
    steps:
      - uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build multi-platform image
        run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --push \
            -t myregistry/myapp:latest \
            .
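Because each runner pod's Docker daemon starts from an empty emptyDir, layer caching is lost between pods by default. BuildKit's registry-backed cache can recover layer reuse across pods; a sketch of the same build step with --cache-from/--cache-to (the myregistry/myapp:buildcache ref is a placeholder for a cache tag in your registry):

```yaml
- name: Build with a registry-backed layer cache
  run: |
    docker buildx build \
      --platform linux/amd64,linux/arm64 \
      --cache-from type=registry,ref=myregistry/myapp:buildcache \
      --cache-to type=registry,ref=myregistry/myapp:buildcache,mode=max \
      --push \
      -t myregistry/myapp:latest \
      .
```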
Troubleshooting Common Issues
Runner Not Appearing in GitHub
Problem: Runners don't show up in GitHub settings.
Solution:
# Check controller logs
kubectl logs -n arc-systems -l app.kubernetes.io/name=gha-runner-scale-set-controller
# Verify token permissions
# Ensure token has 'repo' and 'workflow' scopes
Docker Permission Denied
Problem: "permission denied" errors when accessing the Docker socket.
Solution: Use Docker-in-Docker (DinD) configuration as shown above, which provides an isolated Docker daemon for each runner.
Architecture Mismatch
Problem: "exec format error" when running containers.
Solution:
# Ensure nodeSelector matches your cluster architecture
nodeSelector:
  kubernetes.io/arch: arm64  # or amd64
Runner Pods Crashing
Problem: Runner pods in CrashLoopBackOff.
Solution:
# Check pod logs
kubectl logs -n arc-runners <pod-name> -c runner
# Check events
kubectl describe pod -n arc-runners <pod-name>
Advanced Configuration
Persistent Cache for Faster Builds
Add persistent volumes for Docker cache:
volumes:
  - name: docker-storage
    persistentVolumeClaim:
      claimName: runner-docker-cache
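The claim referenced above has to exist before the runner pods start. A minimal sketch (the storage size and default storage class are assumptions; also note that a ReadWriteOnce volume can back only one runner pod at a time, so this pattern fits minRunners: 1 setups best):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: runner-docker-cache
  namespace: arc-runners
spec:
  accessModes:
    - ReadWriteOnce   # attaches to a single runner pod at a time
  resources:
    requests:
      storage: 50Gi   # assumed size; tune to your image layers
```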
Custom Runner Images
Build your own runner image with pre-installed tools:
FROM ghcr.io/actions/actions-runner:latest

USER root
# yarn is not in the default apt repositories, so install it via npm
RUN apt-get update && apt-get install -y \
    python3 \
    nodejs \
    npm \
    && npm install -g yarn \
    && rm -rf /var/lib/apt/lists/*
USER runner
Multi-Architecture Builds
Configure buildx for multi-platform builds:
env:
  - name: DOCKER_CLI_EXPERIMENTAL
    value: "enabled"
  - name: DOCKER_BUILDKIT
    value: "1"
Performance Optimization
1. Resource Allocation
Optimize resource requests based on your workload:
resources:
  requests:
    cpu: "4"       # Increase for CPU-intensive builds
    memory: "8Gi"  # Increase for memory-heavy operations
  limits:
    cpu: "8"
    memory: "16Gi"
2. Node Affinity
Dedicate specific nodes for runners:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: workload-type
              operator: In
              values:
                - ci-runners
3. Caching Strategies
Implement build caching for faster builds:
- Docker layer caching with persistent volumes
- Dependency caching with cache actions
- Artifact caching between jobs
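For the dependency-caching bullet, the standard actions/cache action works on self-hosted runners as well, since cache blobs are stored via GitHub's cache service rather than on the runner. A sketch for an npm project (the path and key are assumptions to adapt to your package manager):

```yaml
- name: Cache npm dependencies
  uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
    restore-keys: |
      ${{ runner.os }}-npm-
```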
Security Best Practices
1. Network Isolation
Run runners in isolated subnets:
# Network policy to restrict runner traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: runner-isolation
spec:
  podSelector:
    matchLabels:
      app: github-runner
  policyTypes:
    - Ingress
    - Egress
  egress:
    # Allow HTTPS to external endpoints (GitHub API, registries)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 443
    # Allow DNS so github.com can still be resolved
    - ports:
        - protocol: UDP
          port: 53
2. Secret Management
Use Kubernetes secrets for sensitive data:
kubectl create secret generic github-token \
--from-literal=token=<YOUR_TOKEN> \
-n arc-runners
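The chart can consume that pre-created secret by name instead of receiving the token on the helm command line, which keeps it out of shell history. A sketch (per the chart's docs, the referenced Secret must contain a github_token key, as created above):

```yaml
# in runner-scale-set-values.yaml
githubConfigSecret: github-token  # name of the Secret created above
```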
3. RBAC Configuration
Limit runner permissions:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: runner-role
  namespace: arc-runners
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list"]
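A Role has no effect until it is bound to the runners' service account. A hedged sketch of the binding (the service account name below is an assumption based on ARC's naming; verify yours with kubectl get sa -n arc-runners):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: runner-role-binding
  namespace: arc-runners
subjects:
  - kind: ServiceAccount
    name: arc-runner-set-gha-rs-no-permission  # assumed name; check your cluster
    namespace: arc-runners
roleRef:
  kind: Role
  name: runner-role
  apiGroup: rbac.authorization.k8s.io
```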
Cost Analysis
GitHub-Hosted Runners vs Self-Hosted
| Metric | GitHub-Hosted | Self-Hosted (ARC) | Savings |
|---|---|---|---|
| Monthly Cost | $2,000 | $200 | 90% |
| Build Time | 15 min | 5 min | 66% |
| Concurrent Jobs | 5 | Unlimited* | - |
| Architecture | AMD64 only | Any | - |
| Network | Public | Private | - |
*Limited by cluster resources
ROI Calculation
Initial Setup Cost: 8 hours × $150/hour = $1,200
Monthly Savings: $2,000 - $200 = $1,800
ROI Period: Less than 1 month
Annual Savings: $21,600
Monitoring and Observability
Metrics Collection
Monitor runner performance with Prometheus:
apiVersion: v1
kind: Service
metadata:
  name: runner-metrics
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
spec:
  selector:
    app: github-runner
  ports:
    - port: 8080
      name: metrics
Key Metrics to Track
- Runner utilization: Jobs per runner per hour
- Queue time: Time jobs wait for available runners
- Build duration: Average job completion time
- Success rate: Percentage of successful builds
- Resource usage: CPU/Memory consumption
Grafana Dashboard
Create dashboards to visualize:
- Active runners count
- Job queue length
- Build success/failure rates
- Resource utilization trends
Scale Your CI/CD Infrastructure with EaseCloud
Running GitHub Actions on Kubernetes is just the beginning. At EaseCloud, we specialize in building robust, scalable CI/CD infrastructure that grows with your business.
Our Docker & Kubernetes Services Include:
- 🚀 Kubernetes Migration: Seamlessly migrate your workloads to Kubernetes
- 🔧 CI/CD Pipeline Optimization: Reduce build times by up to 80%
- 📊 Cost Optimization: Cut cloud costs by 40-60% with proper resource management
- 🔒 Security Hardening: Implement zero-trust security for your clusters
- 📈 Auto-scaling Solutions: Handle 10x traffic spikes without manual intervention
- 🛠️ 24/7 Monitoring: Proactive monitoring and incident response
Our team has helped 100+ companies optimize their Kubernetes infrastructure, resulting in:
- Average 65% reduction in infrastructure costs
- 3x improvement in deployment frequency
- 99.99% uptime for critical services
Explore Our Docker & Kubernetes Services
Conclusion
Implementing GitHub Actions Runner Controller on Kubernetes transforms your CI/CD pipeline from a cost center into a competitive advantage. With self-hosted runners, you gain:
- Complete control over your build environment
- Significant cost savings on GitHub Actions minutes
- Better performance with dedicated resources
- Enhanced security with private network isolation
- Flexibility to run any architecture or configuration
The initial setup investment pays for itself within the first month, and the long-term benefits compound as your team grows.
Ready to Transform Your DevOps?
Don't let CI/CD costs and limitations hold your team back. Whether you're looking to implement ARC, optimize your existing Kubernetes infrastructure, or build a complete DevOps transformation strategy, our experts are here to help.
Why Choose EaseCloud?
- ✅ 10+ years of Kubernetes expertise
- ✅ 500+ successful deployments
- ✅ 24/7 support and monitoring
- ✅ Proven ROI within 3 months
Take the first step towards DevOps excellence:
📞 Schedule a Free Consultation →
Our DevOps architects will:
- Analyze your current infrastructure
- Identify optimization opportunities
- Create a custom transformation roadmap
- Provide a detailed ROI analysis
Limited Time Offer: Get a FREE Infrastructure Audit ($2,000 value) when you schedule a consultation this month.
Have questions about implementing ARC or need help with your Kubernetes infrastructure? Our team is ready to help you build a world-class DevOps pipeline. Get in Touch for personalized guidance.