11 min read · March 3, 2026

Kubernetes RBAC in Practice: Least Privilege Without the Headache

Kubernetes RBAC is powerful but easy to misconfigure. Most teams either over-grant (cluster-admin everywhere) or under-understand (cryptic 403s at 2 AM). Here's how to do it right.

Ajeet Yadav
Platform & Cloud Engineer

The first time a developer hits a `forbidden: User "system:serviceaccount:default:default" cannot list resource "pods"` error, their instinct is to bind cluster-admin and move on.

That instinct is wrong. Even if you're already managing cluster security with advanced tools like Tetragon or Cilium, RBAC remains your first and most fundamental layer of defense, and it punishes misunderstanding in both directions. Too permissive and you have a blast radius problem. Too restrictive and you have a debugging nightmare. Getting it right means understanding the model, not just the commands.

This post is the practical guide I wish I had when I first started running production clusters.


The Model in Two Minutes

RBAC in Kubernetes has four primitives:

  • Role — a set of permissions scoped to a single namespace
  • ClusterRole — a set of permissions scoped to the entire cluster (or non-namespaced resources)
  • RoleBinding — grants a Role to a subject within a namespace
  • ClusterRoleBinding — grants a ClusterRole to a subject cluster-wide

A subject is a User, Group, or ServiceAccount. In practice, most automated workloads use ServiceAccounts. Human users typically authenticate via OIDC (AWS SSO, Google, Okta) and map to Groups.

The key insight is this: permissions are additive and there is no deny rule. If a subject has any binding that grants a permission, they have it. You cannot take a permission away by adding another rule. This is the most important thing to internalize before you start writing RBAC manifests.
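To make the additive model concrete: if the same group appears in two bindings, its effective permissions are the union of both grants, and no third binding can subtract anything. A sketch with hypothetical names (`shop`, `shop-team`, and `deployment-editor` are illustrative, not from this post's examples):

```yaml
# Binding 1: read-only access via the built-in "view" ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-view
  namespace: shop
subjects:
- kind: Group
  name: shop-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
# Binding 2: a narrow write grant. The group now holds view PLUS this;
# there is no rule you can add anywhere that takes either grant away.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-deploy
  namespace: shop
subjects:
- kind: Group
  name: shop-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: deployment-editor   # hypothetical namespace-local Role
  apiGroup: rbac.authorization.k8s.io
```

If you need to revoke access, you remove or narrow a binding; you never add one.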


ServiceAccounts: The Part Most Teams Get Wrong

Every Pod in Kubernetes runs as a ServiceAccount. If you don't specify one, it uses the default ServiceAccount in its namespace. Under RBAC, the default ServiceAccount carries no permissions of its own, but its token is still automounted into Pods by default, and older clusters running legacy authorization modes may have given it surprising read access.

The first thing to do in any namespace is understand what the default ServiceAccount is allowed to do:

```bash
kubectl auth can-i --list --as=system:serviceaccount:my-namespace:default -n my-namespace
```

In most cases, the answer should be: almost nothing. If it's not, you have ambient permissions you didn't intend to grant.

Create a dedicated ServiceAccount per workload

Don't share ServiceAccounts across workloads. The cost of creating one is zero; the cost of a shared ServiceAccount with broad permissions being exploited is not.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payment-service
  namespace: payments
  annotations:
    # For EKS IRSA — grant specific AWS permissions to this SA only
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789:role/payment-service-role
automountServiceAccountToken: false  # opt-in, not opt-out
```

Note automountServiceAccountToken: false. The default is true, which mounts a token into every Pod that can be used to authenticate to the Kubernetes API. If your application doesn't talk to the API, it doesn't need this token. Disable it and reduce the attack surface.
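The setting can also be overridden per Pod, and the Pod-level field takes precedence over the ServiceAccount's. So you can default the SA to false and opt back in for the one workload that genuinely talks to the API. A sketch (the Pod name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api-client            # hypothetical Pod that does need API access
  namespace: payments
spec:
  serviceAccountName: payment-service
  automountServiceAccountToken: true   # overrides the SA-level false
  containers:
  - name: app
    image: registry.example.com/api-client:latest   # hypothetical image
```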


Writing Roles That Actually Make Sense

A Role defines what operations are permitted on which resources. The verbs are: get, list, watch, create, update, patch, delete, deletecollection, and * (all).

The mistake I see constantly is using * for resources and verbs:

```yaml
# Don't do this
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
```

That's cluster-admin spelled differently. Instead, be explicit:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payment-service-role
  namespace: payments
rules:
# Read its own ConfigMap and Secret
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  resourceNames: ["payment-config", "payment-creds"]
  verbs: ["get"]
# Read service endpoints for health checks
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch"]
```

If you store credentials in an external provider instead, see [secrets management with Vault vs ESO](/blog/secrets-management-kubernetes-vault-vs-eso).

The resourceNames field is underused. It limits a rule to specific named objects rather than all objects of that type. If your app only needs to read one ConfigMap, grant access to only that ConfigMap. One caveat: resourceNames cannot restrict create or deletecollection requests, because the object's name isn't known at authorization time.


ClusterRoles vs Roles: When to Use Which

Use a ClusterRole when you need access to:

  • Cluster-scoped resources: nodes, persistentvolumes, storageclasses, clusterroles
  • Non-resource URLs: /healthz, /metrics
  • Resources across all namespaces

Use a Role for everything else. Even if you end up creating the same Role in five namespaces, namespace-scoped roles are safer because a bug in your RoleBinding can't accidentally grant permissions outside the intended scope.

One useful pattern: define a ClusterRole for read-only access to common resources, then bind it at the namespace level with a RoleBinding. This lets you reuse the permission definition without granting cluster-wide access:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
# Bind the ClusterRole only within the 'payments' namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payment-team-pod-reader
  namespace: payments
subjects:
- kind: Group
  name: payment-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Common Mistakes

1. Using cluster-admin for CI/CD pipelines

Your CI/CD pipeline does not need to read secrets across all namespaces or modify RBAC policies. It needs to update Deployments, maybe read ConfigMaps, and possibly manage Ingress objects in specific namespaces.

Create a dedicated ServiceAccount for your pipeline and grant it the minimum permissions needed to deploy your application. Audit it once a quarter.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: payments
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "patch", "update"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "update", "patch"]
```
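The deployer Role still needs a binding to the pipeline's identity before it does anything. A sketch, assuming a hypothetical `ci-deployer` ServiceAccount for the pipeline:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer-binding
  namespace: payments
subjects:
- kind: ServiceAccount
  name: ci-deployer          # hypothetical ServiceAccount your pipeline runs as
  namespace: payments
roleRef:
  kind: Role
  name: deployer
  apiGroup: rbac.authorization.k8s.io
```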

2. Not understanding aggregated ClusterRoles

Kubernetes ships with four built-in ClusterRoles: cluster-admin, admin, edit, and view. The last three are aggregated ClusterRoles, assembled by the control plane from smaller roles via label selectors. Most teams know about cluster-admin and use it everywhere, but edit is what most developer workflows actually need: it grants read/write access to most resources in a namespace without allowing RBAC modifications.

Use the built-in roles as a starting point. Don't reinvent them.

| ClusterRole | What it grants | Typical use |
|---|---|---|
| cluster-admin | Full access to everything | Break-glass only |
| admin | Full namespace access including RBAC | Namespace owners |
| edit | Read/write most resources, no RBAC | Developers, CI/CD |
| view | Read-only most resources | On-call, monitoring tools |
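Aggregation also means you can extend the built-in roles rather than fork them. If you ship a CRD, label a small ClusterRole and the controller manager folds it into edit automatically. A sketch, assuming a hypothetical `widgets` CRD in the `example.com` API group:

```yaml
# ClusterRoles carrying the aggregate-to-edit label are merged into the
# built-in "edit" ClusterRole by the controller manager.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: widget-editor
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["example.com"]   # hypothetical CRD group
  resources: ["widgets"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```

Anyone already bound to edit picks up widget access with no new bindings.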

3. Ignoring the `system:` groups

Kubernetes uses `system:`-prefixed groups internally. `system:masters` maps to cluster-admin. `system:authenticated` means any authenticated user. Be careful when a binding's subjects include these groups: you may be granting access far more broadly than you intend.
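As a concrete anti-pattern, a binding like the following hands read access to every identity that can authenticate to the API server at all. This is a sketch of what not to do; the binding name is hypothetical:

```yaml
# DANGEROUS: system:authenticated includes every user and every
# ServiceAccount token that can reach the API server.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: everyone-can-view   # hypothetical name
subjects:
- kind: Group
  name: system:authenticated
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```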

4. Forgetting about subresources

pods and pods/exec are different resources. A user who can get pods cannot exec into them unless they also have permission for pods/exec. Similarly, pods/log is a separate subresource.

```yaml
# Granting pod access without allowing exec (safer for production)
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
# pods/exec intentionally omitted
```

This is a source of confusion in debugging. Always check subresources when something doesn't work despite appearing to have the right permissions.

5. Default ServiceAccount abuse via operator permissions

Many operators (Argo CD, cert-manager, external-dns) create ServiceAccounts with significant ClusterRole bindings. It's easy to overlook these when auditing. Always check what operators have installed:

```bash
kubectl get clusterrolebindings -o json | \
  jq -r '.items[] | select(.subjects[]? | .kind == "ServiceAccount") | "\(.metadata.name): \(.subjects[].name)@\(.subjects[].namespace // "cluster")"'
```

Auditing What You've Built

You cannot secure what you cannot see. These commands are the foundation of an RBAC audit.

What can a specific ServiceAccount do?

```bash
kubectl auth can-i --list \
  --as=system:serviceaccount:payments:payment-service \
  -n payments
```

Who can create Pods in a namespace? (requires kubectl-who-can plugin)

```bash
kubectl who-can create pods -n payments
```

Install it with `kubectl krew install who-can`.

Find all ClusterRoleBindings that grant cluster-admin:

```bash
kubectl get clusterrolebindings -o json | \
  jq '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name + ": " + (.subjects[]? | .kind + "/" + .name)'
```

Run this on a production cluster for the first time and you will almost certainly find ServiceAccounts with cluster-admin that were added "temporarily" months ago.

Detect overly-broad wildcard rules:

```bash
kubectl get clusterroles,roles --all-namespaces -o json | \
  jq '[.items[] | select(any(.rules[]?; (.verbs | index("*")) or (.resources | index("*")))) | .metadata.name] | unique'
```
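If jq isn't available, the same wildcard check is easy to script. A minimal Python sketch, assuming you feed it the parsed `items` from kubectl's JSON output (here shown on inline sample data; the role names are illustrative):

```python
# Flag roles whose rules use "*" for verbs or resources.
# In real use: json.load(sys.stdin)["items"] from
#   kubectl get clusterroles,roles -A -o json
def wildcard_roles(items):
    """Return names of roles with a '*' in any rule's verbs or resources."""
    flagged = []
    for item in items:
        for rule in item.get("rules") or []:
            if "*" in rule.get("verbs", []) or "*" in rule.get("resources", []):
                flagged.append(item["metadata"]["name"])
                break  # one wildcard rule is enough to flag the role
    return flagged

sample = [
    {"metadata": {"name": "too-broad"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"metadata": {"name": "pod-reader"},
     "rules": [{"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list"]}]},
]

print(wildcard_roles(sample))  # → ['too-broad']
```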

rbac-lookup is another useful tool that shows you what roles a subject has:

```bash
# Install
kubectl krew install rbac-lookup

# What roles does the 'payment-team' group have?
kubectl rbac-lookup payment-team -k group
```

IRSA and Workload Identity: AWS-Native Least Privilege

If you're running on EKS, IRSA (IAM Roles for Service Accounts) is the right way to give Pods access to AWS services. The alternative — instance profile permissions on the node — means every Pod on a node gets every AWS permission that node has. That's a significant blast radius problem.

IRSA works by annotating a Kubernetes ServiceAccount with an IAM role ARN. The Pod's projected service account token is exchanged for AWS credentials scoped to that role. The IAM role has a trust policy that limits which cluster and namespace/ServiceAccount can assume it.

```yaml
# The Kubernetes side
apiVersion: v1
kind: ServiceAccount
metadata:
  name: s3-reader
  namespace: data-pipeline
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/data-pipeline-s3-reader
```

The IAM trust policy (terraform-managed):

```json
{
  "Effect": "Allow",
  "Principal": {
    "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
  },
  "Action": "sts:AssumeRoleWithWebIdentity",
  "Condition": {
    "StringEquals": {
      "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:sub": "system:serviceaccount:data-pipeline:s3-reader",
      "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE:aud": "sts.amazonaws.com"
    }
  }
}
```

The StringEquals condition on sub is critical. Without it, any ServiceAccount in any namespace on your cluster can assume this role. Always lock it to the specific namespace and ServiceAccount.

For GKE, the equivalent is Workload Identity. The pattern is nearly identical: annotate a Kubernetes ServiceAccount to bind it to a GCP service account, and only that Kubernetes SA can assume those GCP permissions.
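On the Kubernetes side, the GKE annotation looks like this. A sketch: the project ID, namespace, and service-account names are hypothetical placeholders, not values from this post:

```yaml
# Kubernetes SA bound to a GCP service account via Workload Identity
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gcs-reader              # hypothetical
  namespace: data-pipeline
  annotations:
    iam.gke.io/gcp-service-account: gcs-reader@my-project.iam.gserviceaccount.com
# The GCP side needs a matching IAM binding granting
# roles/iam.workloadIdentityUser to the member
# "serviceAccount:my-project.svc.id.goog[data-pipeline/gcs-reader]".
```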


A Practical Namespace Template

When creating a new namespace, I use this baseline RBAC pattern:

```yaml
# 1. Dedicated SA for the workload
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: my-app
automountServiceAccountToken: false

---
# 2. Developer access (edit for deployers)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-edit
  namespace: my-app
subjects:
- kind: Group
  name: my-app-developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io

---
# 3. Read-only for on-call
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oncall-view
  namespace: my-app
subjects:
- kind: Group
  name: platform-oncall
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```

Three YAML documents, clear intent, no surprises.


RBAC Is a Practice, Not a Configuration

RBAC is not a one-time configuration. It's a practice. Every time you add a new workload, ask: what does this actually need to talk to? Every time someone asks for more permissions, ask: what's the minimum that unblocks them?

The teams I've seen run the cleanest clusters treat RBAC like code — it lives in Git, it goes through review, and it's audited on a schedule. The teams that struggle treat it like a firewall rule they'll fix later.

Later never comes. And the blast radius of "I'll just use cluster-admin for now" is always bigger than you expect. Run the cluster-admin binding audit script above on your cluster right now. You will be surprised by what you find.


Frequently Asked Questions

What is the difference between a Role and a ClusterRole?

A Role is scoped to a specific namespace, while a ClusterRole is cluster-wide. Use a Role for permissions that should only exist within a single namespace (like managing local deployments) and a ClusterRole for cluster-scoped resources (like Nodes or PVs) or for permissions that apply across all namespaces.

Is cluster-admin ever the right choice?

Yes, for break-glass scenarios or for a very small set of cluster administrators who genuinely need full control over every resource. It should never be granted to a standard application-level ServiceAccount or to developers for day-to-day work.

How do I troubleshoot RBAC errors efficiently?

The kubectl auth can-i command is your best friend. It allows you to check permissions for any subject without actually running the operation. For complex issues, look for open-source tools like rbac-lookup and kubectl-who-can which make the relationships between roles and bindings much clearer than kubectl get.

Does RBAC apply to kubectl exec?

Yes. Access to exec is controlled through the pods/exec subresource. Granting someone the ability to get or list pods does not automatically give them the permission to exec into them. This is a common security best practice: only allow exec access when strictly necessary for debugging.


Need help auditing or restructuring RBAC across your Kubernetes clusters? Talk to us at Coding Protocols. We help platform teams build security postures they can actually maintain without slowing down their developers.

Related Topics

Kubernetes
RBAC
Security
Platform Engineering
DevOps
IRSA
AWS
