Setting Up Kubernetes RBAC from Scratch
A step-by-step guide to configuring Role-Based Access Control in Kubernetes. You'll create users, define roles with least-privilege permissions, bind them, and verify access — all with real kubectl commands.
Before you begin
- kubectl installed and configured
- Access to a running Kubernetes cluster
- Basic familiarity with Kubernetes namespaces and pods
Kubernetes RBAC is one of those things that's easy to ignore — until you're running in production and realise that half your engineers have cluster-admin because that was the path of least resistance.
This tutorial walks you through setting up RBAC properly: creating users and service accounts, writing roles with least-privilege permissions, binding them, and verifying the access is exactly what you intended.
By the end you'll have a repeatable pattern you can apply to every namespace in your cluster.
What You'll Build
A three-tier access model for a production namespace:
- Reader — can view pods, services, and deployments. Cannot modify anything.
- Developer — can view everything + exec into pods + manage ConfigMaps and Secrets.
- CI Bot — a ServiceAccount that can update deployments (for rolling releases) but nothing else.
Step 1: Create the Namespace
Start with a clean namespace to test in:
```shell
kubectl create namespace production
```

Verify it exists:

```shell
kubectl get namespace production
# NAME         STATUS   AGE
# production   Active   5s
```

Step 2: Create the Reader Role
A Role grants permissions within a single namespace. This one allows viewing pods, services, and deployments but nothing else:
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: reader
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets"]
    verbs: ["get", "list", "watch"]
EOF
```

A few things worth noting:

- `apiGroups: [""]` means the core API group — pods, services, and configmaps all live here.
- `apiGroups: ["apps"]` covers deployments and replicasets, which live in the apps API group.
- The verbs `get`, `list`, and `watch` are read-only. Adding `create`, `update`, `delete`, or `patch` would grant write access.

Verify the role was created:

```shell
kubectl get role reader -n production -o yaml
```

Step 3: Create the Developer Role
The developer role extends the reader role with exec access and the ability to manage ConfigMaps and Secrets:
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer
  namespace: production
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments", "replicasets", "statefulsets"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods/exec", "pods/log", "pods/portforward"]
    verbs: ["create", "get"]
EOF
```

The `pods/exec` sub-resource is what controls `kubectl exec`. Without it, even if a user can get pods, they can't exec into them. The `pods/log` sub-resource similarly controls `kubectl logs`.
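For instance, a role that grants shell and log access but no other write permissions might look like this (a sketch; the role name is hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-shell-only   # hypothetical name
  namespace: production
rules:
  # Read access to pods is still needed so the user can target them
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  # An exec session is modeled as "create" on the pods/exec sub-resource
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
```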
Step 4: Create a ServiceAccount for the CI Bot
ServiceAccounts are for machines, not humans. Create one for your CI pipeline:
```shell
kubectl create serviceaccount ci-bot -n production
```

Verify:

```shell
kubectl get serviceaccount ci-bot -n production
# NAME     SECRETS   AGE
# ci-bot   0         3s
```

Now create its role — it only needs to update deployments:
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: production
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
EOF
```

This lets the CI bot update a deployment image tag but nothing else. It can't create new deployments, touch secrets, or exec into pods.
Step 5: Create User Identities
Kubernetes doesn't manage users directly — it delegates to your identity provider. For this tutorial, we'll create client certificates, which work with any cluster.
Create a private key and certificate signing request for a user named alice:
```shell
# Generate private key
openssl genrsa -out alice.key 2048

# Generate CSR
openssl req -new -key alice.key -out alice.csr -subj "/CN=alice/O=developers"
```

The `/CN=alice` becomes the username in Kubernetes. The `/O=developers` sets the group — you can use groups in RoleBindings to manage multiple users at once.
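Because the certificate places alice in the `developers` group, you can later bind a role to the whole group rather than to each user individually. As a sketch (the binding name is illustrative), a group-scoped binding for the developer role would look like:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-developer   # illustrative name
  namespace: production
subjects:
  - kind: Group                # matches the /O= field of the client certificate
    name: developers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

Any future certificate issued with `/O=developers` then picks up the same permissions with no extra bindings.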
Submit the CSR to Kubernetes for signing:
```shell
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: alice
spec:
  request: $(cat alice.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
    - client auth
EOF
```

Approve it:

```shell
kubectl certificate approve alice
```

Extract the signed certificate:

```shell
kubectl get csr alice -o jsonpath='{.status.certificate}' | base64 -d > alice.crt
```

Step 6: Bind Roles to Users and ServiceAccounts
A RoleBinding connects a Role to a user, group, or ServiceAccount within a namespace.
Bind the reader role to alice:
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alice-reader
  namespace: production
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: reader
  apiGroup: rbac.authorization.k8s.io
EOF
```

Bind the ci-deployer role to the ci-bot ServiceAccount:
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-bot-deployer
  namespace: production
subjects:
  - kind: ServiceAccount
    name: ci-bot
    namespace: production
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
EOF
```

Step 7: Configure kubectl for Alice
Add alice's credentials to your kubeconfig:
```shell
kubectl config set-credentials alice \
  --client-certificate=alice.crt \
  --client-key=alice.key

kubectl config set-context alice-production \
  --cluster=$(kubectl config current-context | cut -d@ -f2) \
  --user=alice \
  --namespace=production
```

The `--cluster` value above assumes a `user@cluster` context name, as kubeadm generates; if your context is named differently, pass a cluster name from `kubectl config get-clusters` instead.

Switch to alice's context:

```shell
kubectl config use-context alice-production
```

Step 8: Verify Access
Test what alice can do — and what she can't:
```shell
# Should succeed — alice has read access to pods
kubectl get pods -n production

# Should succeed
kubectl get deployments -n production

# Should fail — alice only has read access, not exec
kubectl exec -it some-pod -n production -- /bin/sh
# Error from server (Forbidden): pods "some-pod" is forbidden:
# User "alice" cannot create resource "pods/exec" in API group ""

# Should fail — alice has no access outside production
kubectl get pods -n default
# Error from server (Forbidden)
```

Switch back to your admin context:

```shell
kubectl config use-context <your-admin-context>
```

Use `kubectl auth can-i` to check permissions without switching contexts — faster for bulk verification:
```shell
# Check what alice can do
kubectl auth can-i get pods --as alice -n production            # yes
kubectl auth can-i delete pods --as alice -n production         # no
kubectl auth can-i create deployments --as alice -n production  # no

# Check the ci-bot ServiceAccount
kubectl auth can-i update deployments \
  --as system:serviceaccount:production:ci-bot \
  -n production  # yes

kubectl auth can-i delete secrets \
  --as system:serviceaccount:production:ci-bot \
  -n production  # no
```

The `--as` flag impersonates any user or service account without needing their credentials (impersonation itself requires the `impersonate` verb, which admin roles typically include). It's the fastest way to audit RBAC in bulk.
Step 9: Get the CI Bot Token
To use the ServiceAccount from your CI pipeline, create a long-lived token (or use the short-lived projected token that's auto-mounted):
```shell
kubectl create token ci-bot -n production --duration=8760h
```

This outputs a JWT. Note that some clusters cap the maximum token duration, in which case you'll get a shorter-lived token than requested. Store it in your CI system's secret manager (GitHub Actions secrets, GitLab CI variables, etc.) and use it with:

```shell
kubectl config set-credentials ci-bot \
  --token=<token-from-above>
```

For production, prefer short-lived tokens via the TokenRequest API or use Workload Identity (AWS IRSA, GCP Workload Identity Federation) instead of static tokens.
Common Mistakes to Avoid
Granting cluster-admin "just to get it working" — this bypasses RBAC entirely and grants unrestricted access to the entire cluster. There's almost no legitimate reason for a non-admin human or service account to have this role in production.
Using ClusterRole when you need Role — a ClusterRoleBinding that binds a ClusterRole grants permissions across all namespaces, not just the target namespace. Use a RoleBinding (even for a ClusterRole) to scope it to one namespace.
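As a sketch, this is what scoping the built-in read-only `view` ClusterRole to a single namespace looks like; the kind of the binding, not the kind of the role, determines the scope (the binding name is illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding               # namespaced, so the grant stays inside "production"
metadata:
  name: alice-view              # illustrative name
  namespace: production
subjects:
  - kind: User
    name: alice
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole             # a reusable, cluster-wide role definition...
  name: view                    # ...Kubernetes' built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```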
Wildcards in rules — `resources: ["*"]` and `verbs: ["*"]` are rarely appropriate. Define only what the subject actually needs.
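For contrast, this is the shape to avoid; a rule like this makes the subject an effective admin of the namespace:

```yaml
# Anti-pattern: every verb on every resource in every API group
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
```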
Not auditing regularly — roles accumulate. Run this periodically to list all RoleBindings and their subjects:
```shell
kubectl get rolebindings -A -o wide
kubectl get clusterrolebindings -o wide
```

Cleanup
```shell
kubectl delete namespace production
kubectl delete csr alice
kubectl config delete-context alice-production
kubectl config delete-user alice
rm alice.key alice.crt alice.csr
```

What's Next
- Set up an OPA Gatekeeper policy that prevents any RoleBinding from granting `cluster-admin`
- Integrate with your identity provider (OIDC) so users authenticate with SSO instead of client certificates
- Use `kube-rbac-proxy` to add RBAC enforcement to custom metrics endpoints
- Automate RBAC auditing with `rbac-lookup` or `kubectl-who-can`
Official References
- Using RBAC Authorization — Complete Kubernetes RBAC reference: Roles, ClusterRoles, RoleBindings, ClusterRoleBindings, and aggregation
- Controlling Access to the Kubernetes API — How authentication, authorization, and admission control layer together
- kubectl auth can-i — Reference for the `kubectl auth can-i` command used to verify RBAC permissions
- Service Accounts — How Kubernetes ServiceAccounts work, token projection, and when to use them
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.