Secrets Management in Kubernetes: Native Secrets, ESO, Vault, and SOPS Compared
Base64 is not encryption. If you're storing Kubernetes Secrets in Git or relying on etcd's default config, your secrets are not secret. Here's a practical guide to every credible option.

Let me say the thing that often gets buried in Kubernetes documentation footnotes: Kubernetes Secrets are not encrypted by default.
Whether you are using standard Kubernetes RBAC or more advanced eBPF-based enforcement, your secrets remain the highest-value target in your cluster. They are base64-encoded, which is encoding — not encryption. Anyone with etcd read access or kubectl get secret permissions can read every secret in your cluster in plain text.
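A quick way to make this concrete: everything base64 does, base64 -d undoes, with no key involved. The value below is illustrative.

```shell
# "Protect" a value the way a Kubernetes Secret does: base64-encode it.
encoded=$(printf 'sup3r-s3cret' | base64)
echo "$encoded"

# Anyone holding the encoded form recovers the plaintext instantly.
printf '%s' "$encoded" | base64 -d
echo
```

This is exactly what you see when you run kubectl get secret -o yaml: an encoding, not a cipher.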
I've worked with teams who genuinely believed their secrets were protected because they were stored as Kubernetes Secrets. They were operating on a false assumption that had real security implications. This post exists to fix that and to walk through every credible option for secrets management in Kubernetes, with honest tradeoffs.
The Problem With Native Kubernetes Secrets
Before dismissing native Secrets entirely, it's worth being precise about what the actual risks are.
The base64 encoding is not the core issue — it's that etcd, where Secrets are stored, is not encrypted at rest by default. In a managed Kubernetes service (EKS, GKE, AKS), you can enable envelope encryption using a KMS key, which encrypts Secrets in etcd. This is a significant improvement. If you're on a managed cluster, enable it.
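On EKS, enabling envelope encryption on an existing cluster is a single API call. A sketch, with an illustrative cluster name and key ARN; note that EKS does not support removing an encryption config once associated, so pick the key deliberately.

```shell
# Hypothetical: associate a KMS key for envelope encryption of Secrets.
# Cluster name and key ARN are placeholders.
aws eks associate-encryption-config \
  --cluster-name my-cluster \
  --encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"arn:aws:kms:us-east-1:111122223333:key/example"}}]'
```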
# EKS: check if secrets encryption is enabled
aws eks describe-cluster --name my-cluster \
    --query 'cluster.encryptionConfig'

But etcd encryption only addresses the storage-at-rest problem. The other problems remain:
- Secrets in Git. If you apply a Secret manifest from a file that lives in a repository, that plaintext YAML is in your Git history forever. You cannot rotate your way out of a secret that's been in Git.
- RBAC access is coarse. get secrets in a namespace gives you every secret in it; the only native way to narrow access to a specific secret is RBAC's resourceNames scoping.
- No audit trail. Native Secrets have no built-in audit log for who read a secret and when, beyond what Kubernetes API audit logging captures.
- No automatic rotation. Secrets don't rotate themselves. Manual rotation is easy to forget.
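For completeness, the resourceNames scoping mentioned above looks like this. Names are illustrative; note that this narrows get, but it does not narrow list, so a subject with list access still sees every Secret's name in the namespace.

```yaml
# Hypothetical Role granting read access to a single named Secret.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-db-credentials
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["db-credentials"]  # only this Secret can be fetched
    verbs: ["get"]
```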
These limitations aren't theoretical — they're the failure modes that have caused real incidents. The question is which solution best addresses them for your situation.
Option 1: AWS Secrets Manager + External Secrets Operator
External Secrets Operator (ESO) is my default recommendation for teams running on AWS. It solves the Git problem cleanly: you never store secret values in your repository. Instead, you store a reference to where the secret lives (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, etc.), and ESO syncs the actual value into a Kubernetes Secret at runtime.
The architecture is simple:
- Secret values live in AWS Secrets Manager
- ESO runs as a controller in your cluster
- You create an ExternalSecret resource that says "fetch this secret and put it here"
- ESO creates and maintains a native Kubernetes Secret with the actual value
- When the secret rotates in Secrets Manager, ESO picks up the new value on the next sync cycle
Installation
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
-n external-secrets \
--create-namespace \
--set installCRDs=true

The SecretStore
First, tell ESO how to authenticate to your secrets backend. For AWS with IRSA:
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secretsmanager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets

The external-secrets ServiceAccount needs an IRSA annotation pointing to an IAM role with secretsmanager:GetSecretValue and secretsmanager:DescribeSecret permissions.
The ExternalSecret
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payment-service-db-creds
  namespace: payments
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secretsmanager
    kind: ClusterSecretStore
  target:
    name: db-credentials  # the K8s Secret that gets created
    creationPolicy: Owner
    deletionPolicy: Retain
    template:
      type: kubernetes.io/basic-auth
      data:
        username: "{{ .username }}"
        password: "{{ .password }}"
  data:
    - secretKey: username
      remoteRef:
        key: payments/db/credentials
        property: username
    - secretKey: password
      remoteRef:
        key: payments/db/credentials
        property: password

The refreshInterval controls how often ESO polls for updates. For secrets that rotate automatically (Aurora credential rotation, for example), set this to something reasonable; 1 hour is sensible. For static secrets, longer is fine.
deletionPolicy: Retain is important: if you delete the ExternalSecret resource, the underlying K8s Secret is kept rather than deleted. This prevents an accidental deletion from cascading into application downtime.
What This Solves
ESO addresses the Git problem (references only, not values), adds a natural audit trail (Secrets Manager has CloudTrail), and supports automatic rotation via the refreshInterval. The RBAC story is also cleaner — developers can see ExternalSecret resources without seeing the actual values.
What it doesn't solve: the K8s Secret that ESO creates is still a native Kubernetes Secret. If someone has kubectl get secret access in the namespace, they can read the value. ESO is about source-of-truth management, not access control at the consumption layer.
Option 2: HashiCorp Vault
Vault is the most capable secrets management solution in this space. It also has the highest operational overhead. I'm not going to pretend otherwise.
Vault gives you:
- Fine-grained access control per secret path (not just per secret type)
- Dynamic secrets: database credentials generated on-demand with automatic expiry
- A full audit log with every read, write, and renewal
- Multiple authentication methods: Kubernetes auth, AWS IAM, LDAP, OIDC
- Secret leasing and automatic renewal
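Dynamic secrets are the standout capability and are easiest to see from the CLI. A configuration sketch for per-application Postgres credentials, run against a live Vault; the names, connection string, and SQL statements are illustrative.

```shell
# Hypothetical: configure Vault's database secrets engine for Postgres.
vault secrets enable database

vault write database/config/payments-db \
    plugin_name=postgresql-database-plugin \
    allowed_roles="payments-readonly" \
    connection_url="postgresql://{{username}}:{{password}}@db.example.internal:5432/payments" \
    username="vault-admin" password="example-only"

# Define how credentials are minted and how long they live.
vault write database/roles/payments-readonly \
    db_name=payments-db \
    creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
    default_ttl=1h max_ttl=24h

# Each read mints a brand-new credential pair with its own lease;
# Vault revokes the database role automatically when the lease expires.
vault read database/creds/payments-readonly
```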
The Vault Agent Sidecar Injector pattern is the most common way to integrate with Kubernetes. A mutating webhook automatically injects a Vault Agent sidecar into annotated Pods. The agent authenticates to Vault, fetches the specified secrets, and writes them to a shared in-memory volume.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  namespace: payments
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "payment-service"
        vault.hashicorp.com/agent-inject-secret-db-creds: "payments/data/db/credentials"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "payments/data/db/credentials" -}}
          DB_USERNAME={{ .Data.data.username }}
          DB_PASSWORD={{ .Data.data.password }}
          {{- end -}}
    spec:
      containers:
        - name: payment-service
          image: payment-service:latest
          env:
            - name: VAULT_SECRETS_FILE
              value: /vault/secrets/db-creds

The secret is written to /vault/secrets/db-creds as an env-file-formatted string. Your application reads from that file path rather than from environment variables directly. This keeps secrets out of the process environment (visible in /proc).
When Vault Is Worth It
Vault's overhead is real: you need to operate the Vault cluster itself (HA setup, storage backend, unseal process), manage Vault policies, handle upgrades, and train your team on the Vault mental model. That's a non-trivial operational investment.
It's worth it when:
- You need dynamic secrets (per-application, time-limited database credentials are a significant security improvement)
- You have compliance requirements that demand a full audit trail of secret access
- You're already operating Vault for non-Kubernetes workloads and want a unified secrets plane
- You need fine-grained path-based access control (RBAC on specific secret paths, not just K8s resources)
It's probably not worth it when:
- Your team is small and you need to move fast
- You're running only on one cloud provider and that provider's native solution (Secrets Manager, Secret Manager) meets your needs
- You don't have dedicated platform engineers to own the Vault operations
Option 3: SOPS as a Lightweight Alternative
SOPS (Secrets OPerationS), originally from Mozilla and now a CNCF project, takes a different approach: it lets you store encrypted secrets directly in Git. Instead of keeping secret values out of the repository, SOPS encrypts them so that the ciphertext is safe to commit.
The typical pattern with SOPS uses AWS KMS as the encryption key provider. You encrypt a YAML file locally; the encrypted version goes into Git; your CD pipeline decrypts it at deploy time using KMS.
# Encrypt a secret file using AWS KMS
sops --kms arn:aws:kms:us-east-1:123456789:key/mrk-1234abcd \
  --encrypt secrets.yaml > secrets.enc.yaml

# The encrypted file (safe to commit)
# secrets.enc.yaml looks like this:
# db_password: ENC[AES256_GCM,data:abc123...,iv:...,tag:...,type:str]
# sops:
#   kms:
#     - arn: arn:aws:kms:us-east-1:...
#       ...

# Decrypt at deploy time (requires KMS decrypt permissions)
sops --decrypt secrets.enc.yaml | kubectl apply -f -

SOPS integrates with Argo CD via ksops or the argocd-vault-plugin, and with Flux via the SOPS decryption provider.
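In practice, teams rarely pass --kms by hand. A .sops.yaml at the repository root selects keys by file path, so sops picks them up automatically at encrypt time. A sketch with an illustrative key ARN:

```yaml
# .sops.yaml -- matched top to bottom against the file being encrypted.
creation_rules:
  - path_regex: .*\.enc\.yaml$
    kms: arn:aws:kms:us-east-1:123456789012:key/example-key-id
    # Encrypt only the values under keys matching this regex, leaving
    # the rest of the YAML in plaintext for reviewable diffs.
    encrypted_regex: ^(data|stringData)$
```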
SOPS Tradeoffs
SOPS is lightweight and Git-native. There's no server to operate. If your team is comfortable with KMS and Git, it's a remarkably low-friction solution.
The downsides: secrets do live in Git (encrypted, but still there). Key rotation requires re-encrypting all your secrets files. There's no automatic rotation, no audit trail beyond Git history, and no dynamic secrets. If a KMS key is compromised, you need to re-encrypt everything.
For small teams or early-stage companies where "no server to operate" is a meaningful advantage, SOPS is a perfectly reasonable choice. For mature platform engineering teams with compliance requirements, it's probably a stepping stone.
Comparison Table
| Capability | Native K8s Secrets | ESO + Secrets Manager | HashiCorp Vault | SOPS + KMS |
|---|---|---|---|---|
| Encrypted at rest | Optional (KMS) | Yes (Secrets Manager) | Yes | Yes (KMS) |
| Secrets in Git | Yes (if manifests committed) | No (reference only) | No | Yes (encrypted) |
| Automatic rotation | No | Via Secrets Manager | Yes (leases) | No |
| Audit trail | API audit logs only | CloudTrail | Full (per-access) | Git history |
| Dynamic secrets | No | No | Yes | No |
| Operational overhead | None | Low | High | Very low |
| Fine-grained access | K8s RBAC only | K8s RBAC + IAM | Vault policy engine | IAM |
| Best for | Non-sensitive config | AWS-native teams | Compliance/dynamic | Small teams, GitOps |
My Recommendation
For most teams running on AWS EKS, start with ESO + AWS Secrets Manager. It solves the most common real-world problems — secrets out of Git, centralized management, rotation support — with low operational overhead. Enable KMS encryption for etcd at the same time. That combination covers the majority of the attack surface.
Add HashiCorp Vault when you genuinely need dynamic secrets or a unified multi-cloud secrets plane. Not before.
Use SOPS for teams that are just starting out, want a Git-native workflow, and don't yet have the operational capacity for ESO or Vault. It's a legitimate choice with clear constraints you can grow out of later.
What you shouldn't do is commit plaintext Secret manifests to Git and call it done. I've seen that choice survive for years in production environments before it became a problem. It always becomes a problem eventually.
Frequently Asked Questions
Is External Secrets Operator (ESO) safe to use?
Yes, ESO is widely used in production. Its primary risk is that it still creates native Kubernetes Secrets in your cluster to make the values available to pods. If you have sensitive compliance requirements, ensure that you have also enabled KMS encryption for your cluster's etcd and restricted RBAC access to secrets.
Can I use Vault without the sidecar injector?
Absolutely. Applications can use the Vault SDK directly to fetch secrets via the API, or you can use the Vault CSI Driver to mount secrets as files. The sidecar injector is popular because it requires zero application code changes, but it does add a proxy to every pod.
Does SOPS support multi-cloud environments?
Yes. SOPS can be configured with multiple "Master Keys" across different providers (e.g., one AWS KMS key and one Google Cloud KMS key). This allows the same file to be decrypted by either provider, providing a useful safety net for multi-cloud or migration scenarios.
Why should I use Secrets Manager instead of just native K8s Secrets?
The main reason is Source of Truth. AWS Secrets Manager provides built-in rotation, fine-grained IAM access control, and a full CloudTrail audit log. Managing secrets in a dedicated service rather than in Kubernetes manifests prevents them from being committed to Git and makes them accessible to non-Kubernetes workloads as well.
Trying to figure out the right secrets management approach for your Kubernetes environment? Talk to us at Coding Protocols. We help platform teams design security architectures that are both rigorous and operationally sustainable.


