Building a GitHub Actions Pipeline That Deploys to Kubernetes
Build a CI/CD pipeline from scratch: test on every pull request, build and push a Docker image on merge to main, then deploy to Kubernetes automatically. No third-party deployment tools required.
Before you begin
- A GitHub repository with your application
- A Kubernetes cluster (local or cloud)
- Docker Hub or GitHub Container Registry account
- kubectl configured for your cluster
You need two pipelines: one that validates pull requests (tests must pass before merge), and one that deploys after merge. This tutorial builds both using GitHub Actions and deploys to Kubernetes without any additional tooling.
What You'll Build
Push to feature branch → Run tests (PR check)
Merge to main → Build image → Push to registry → Update Kubernetes deployment
Step 1: Store Secrets in GitHub
Go to your repository → Settings → Secrets and variables → Actions → New repository secret.
Add:
- `DOCKERHUB_USERNAME` — your Docker Hub username
- `DOCKERHUB_TOKEN` — a Docker Hub access token (not your password — create one at hub.docker.com → Account Settings → Security)
- `KUBE_CONFIG` — base64-encoded kubeconfig for your cluster
Generate the kubeconfig secret:
```shell
cat ~/.kube/config | base64 | tr -d '\n'
```

Copy the output into the `KUBE_CONFIG` secret.
For production, use a restricted kubeconfig that only has access to the namespace you're deploying to. Don't paste your admin kubeconfig into GitHub secrets.
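One way to build such a restricted kubeconfig is by hand, from a namespace-scoped service account token. The sketch below is illustrative (the `deployer` account name, namespace, and placeholder values are assumptions, and `kubectl create token` requires Kubernetes 1.24+); the helper itself just assembles the kubeconfig YAML:

```shell
# The service-account setup (not run here) would be roughly:
#   kubectl create serviceaccount deployer -n production
#   kubectl create rolebinding deployer-edit --clusterrole=edit \
#     --serviceaccount=production:deployer -n production
#   TOKEN=$(kubectl create token deployer -n production)   # k8s 1.24+

# write_kubeconfig SERVER CA_BASE64 TOKEN OUTFILE
write_kubeconfig() {
  server="$1"; ca_b64="$2"; token="$3"; out="$4"
  cat > "$out" <<EOF
apiVersion: v1
kind: Config
clusters:
- name: restricted
  cluster:
    server: ${server}
    certificate-authority-data: ${ca_b64}
contexts:
- name: restricted
  context:
    cluster: restricted
    namespace: production
    user: deployer
current-context: restricted
users:
- name: deployer
  user:
    token: ${token}
EOF
}

# Example with placeholder values:
write_kubeconfig "https://203.0.113.10:6443" "Q0FEQVRB" "sa-token" restricted.kubeconfig
grep "current-context" restricted.kubeconfig
```

Base64-encode the resulting file the same way as above before storing it in the `KUBE_CONFIG` secret.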
Step 2: Create the Test Workflow
First create the workflows directory:

```shell
mkdir -p .github/workflows
```

```yaml
# .github/workflows/test.yml
name: Test

on:
  pull_request:
    branches: [main]
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Run linter
        run: npm run lint
```

Adapt the language steps to your stack (Python: setup-python + pip install -r requirements.txt + pytest; Go: setup-go + go test ./...).
Step 3: Create the Deploy Workflow
```yaml
# .github/workflows/deploy.yml
name: Deploy

on:
  push:
    branches: [main]

env:
  IMAGE: ${{ secrets.DOCKERHUB_USERNAME }}/my-app
  DEPLOYMENT_NAME: my-app
  NAMESPACE: production

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Note: `needs` can only reference jobs in the same workflow file.
    # To gate deploys on tests, add a test job to this file and set
    # `needs: [test]`, or trigger this workflow via `workflow_run`.

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set image tag
        id: tag
        run: echo "TAG=${GITHUB_SHA::8}" >> "$GITHUB_OUTPUT"

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ env.IMAGE }}:${{ steps.tag.outputs.TAG }}
            ${{ env.IMAGE }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Configure kubectl
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -d > ~/.kube/config
          chmod 600 ~/.kube/config

      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/${{ env.DEPLOYMENT_NAME }} \
            app=${{ env.IMAGE }}:${{ steps.tag.outputs.TAG }} \
            -n ${{ env.NAMESPACE }}

          kubectl rollout status deployment/${{ env.DEPLOYMENT_NAME }} \
            -n ${{ env.NAMESPACE }} \
            --timeout=5m

      - name: Verify deployment
        run: |
          kubectl get deployment ${{ env.DEPLOYMENT_NAME }} \
            -n ${{ env.NAMESPACE }} \
            -o jsonpath='{.spec.template.spec.containers[0].image}'
```

The image tag uses the first 8 characters of the Git commit SHA — unique per commit and traceable back to the source.
Step 4: Create the Kubernetes Deployment
Make sure your Kubernetes deployment exists before the pipeline runs. The workflow uses `kubectl set image`, which updates an existing deployment — it doesn't create one.
```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: myusername/my-app:latest
        ports:
        - containerPort: 3000
        readinessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
```

Step 5: Add a Rollback on Failure
If `kubectl rollout status` fails (the new pods never become ready), roll back automatically:
```yaml
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/${{ env.DEPLOYMENT_NAME }} \
            app=${{ env.IMAGE }}:${{ steps.tag.outputs.TAG }} \
            -n ${{ env.NAMESPACE }}

          if ! kubectl rollout status deployment/${{ env.DEPLOYMENT_NAME }} \
            -n ${{ env.NAMESPACE }} --timeout=5m; then
            echo "Rollout failed, rolling back..."
            kubectl rollout undo deployment/${{ env.DEPLOYMENT_NAME }} \
              -n ${{ env.NAMESPACE }}
            exit 1
          fi
```

Step 6: Use GitHub Container Registry Instead of Docker Hub
GitHub Container Registry (ghcr.io) doesn't require a separate account and uses your GitHub token for auth:
```yaml
      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ steps.tag.outputs.TAG }}
```

`GITHUB_TOKEN` is automatically available in every workflow — no secret configuration needed. Note that the token's default permissions may be read-only: grant the job `packages: write` permission (and keep `contents: read` for checkout) so it can push images.
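One gotcha worth knowing: ghcr.io image names must be all lowercase, while `${{ github.repository }}` preserves the case of your org and repo names. If yours contain capitals, normalize the slug first — for example with Bash's `,,` lowercasing operator (Bash 4+; the repository value below is a placeholder):

```shell
# Lowercase the repository slug before using it as an image name.
GITHUB_REPOSITORY="MyOrg/My-App"            # placeholder value
IMAGE="ghcr.io/${GITHUB_REPOSITORY,,}"
echo "$IMAGE"   # ghcr.io/myorg/my-app
```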
Step 7: Validate the Pipeline
Push a commit to main and watch the Actions tab:
```shell
git add .github/workflows/
git commit -m "ci: add test and deploy workflows"
git push origin main
```

Check GitHub → Actions → the running workflow. When it completes:
```shell
# Confirm the new image is running
kubectl get deployment my-app -n production \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# myusername/my-app:a1b2c3d4
```

Production Improvements
- Environment protection rules — in GitHub Settings → Environments, require a manual approval before deploying to production.
- Separate staging and production workflows — trigger staging on merge to main, and production on a tagged release (`on: push: tags: ['v*']`).
- Store the image tag in Git — instead of `kubectl set image`, commit the new tag to a values file and let ArgoCD or Flux detect the change. This gives you a Git audit trail of every deployment.
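The tag bump in that last pattern is a small step on its own. A minimal sketch, assuming a Helm-style values.yaml with a `tag:` field (the file layout, tag value, and commit message are illustrative; the `git` commands would run in CI, not here):

```shell
# Create an example values file as ArgoCD/Flux would read it.
cat > values.yaml <<EOF
image:
  repository: myusername/my-app
  tag: old123
EOF

# Rewrite the tag line in place with the new image tag.
NEW_TAG="a1b2c3d4"
sed -i "s/^\(  tag: \).*/\1${NEW_TAG}/" values.yaml

grep "tag:" values.yaml
# In CI you would then:
#   git add values.yaml && git commit -m "deploy: ${NEW_TAG}" && git push
```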
Official References
- GitHub Actions Documentation — Official docs covering workflows, jobs, steps, secrets, and environments
- GitHub Actions: Deploying to Kubernetes — GitHub's own guide for deploying to EKS, GKE, and AKS from Actions
- azure/setup-kubectl Action — Install kubectl in a GitHub Actions runner
- docker/build-push-action — The standard action for building and pushing Docker images with cache support
- GitHub Environments and Deployment Protection Rules — How to add required reviewers and wait timers before production deploys
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.