HashiCorp Vault: Kubernetes Auth and Dynamic Secrets
Configure Vault's Kubernetes auth method so pods authenticate using their ServiceAccount token, then generate short-lived database credentials on demand instead of storing static passwords in Kubernetes Secrets.
Before you begin
- A running Kubernetes cluster
- kubectl and Helm installed
- Basic understanding of Vault concepts (policies, roles, paths)
- A PostgreSQL database (or any Vault-supported database)
Static database passwords in Kubernetes Secrets have three problems: they never rotate, they're visible to anyone with RBAC read access to the namespace, and when they leak you have to update every service that uses them.
Vault's Kubernetes auth method and dynamic secrets engine solve all three. A pod authenticates using its ServiceAccount JWT (which Kubernetes already provides), gets a short-lived database password that's only valid for its session, and Vault revokes it automatically when the lease expires.
Architecture
Pod starts
→ Pod has a ServiceAccount JWT token
→ Pod calls Vault: "here's my JWT, I want the role db-app-role"
→ Vault validates JWT with Kubernetes API
→ Vault checks: does the ServiceAccount match db-app-role's binding?
→ Vault generates a new PostgreSQL user with 1-hour TTL
→ Pod receives username + password
→ 1 hour later, Vault drops the PostgreSQL user automatically
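Under the hood, the flow above is two HTTP calls against Vault's API: a POST to `/v1/auth/kubernetes/login` with the JWT, then a GET to `/v1/database/creds/<role>` with the returned client token. A minimal sketch of the two requests (the `vaultAddr`, role, and token values are placeholders for your environment):

```typescript
// Sketch of the login request a pod sends in the flow above. Path and payload
// shape follow Vault's Kubernetes auth API.
function kubernetesLoginRequest(vaultAddr: string, role: string, jwt: string) {
  return {
    method: 'POST',
    url: `${vaultAddr}/v1/auth/kubernetes/login`,
    body: JSON.stringify({ role, jwt }),
  };
}

// After login, the returned client token authorizes the credentials read.
function dbCredsRequest(vaultAddr: string, roleName: string, clientToken: string) {
  return {
    method: 'GET',
    url: `${vaultAddr}/v1/database/creds/${roleName}`,
    headers: { 'X-Vault-Token': clientToken },
  };
}
```

The Vault Agent sidecar (Step 6) and the SDK (Step 8) both wrap exactly these two calls.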
Step 1: Deploy Vault with Helm
```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

helm install vault hashicorp/vault \
  --namespace vault \
  --create-namespace \
  --set "server.ha.enabled=false" \
  --set "server.dev.enabled=true"  # Dev mode: unsealed, in-memory, root token = "root"
```
Dev mode is not for production; it resets on restart. For production, use the HA setup with Raft storage.
Wait for Vault to start:
```shell
kubectl wait --for=condition=Ready pod/vault-0 -n vault --timeout=60s
```
Initialize the Vault CLI:
```shell
# Port-forward for local CLI access
kubectl port-forward vault-0 8200:8200 -n vault &

export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='root'  # Dev mode token

vault status
```
Step 2: Enable the Kubernetes Auth Method
```shell
vault auth enable kubernetes
```
Configure it to talk to the Kubernetes API:
```shell
# Get the Kubernetes API server address and CA from your local kubeconfig
KUBE_CA=$(kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
KUBE_HOST=$(kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[].cluster.server}')

vault write auth/kubernetes/config \
  kubernetes_host="$KUBE_HOST" \
  kubernetes_ca_cert="$(echo "$KUBE_CA" | base64 -d)"
```
When running inside the cluster (the Vault pod itself), Vault can discover these automatically:
```shell
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc.cluster.local:443"
```
Step 3: Enable the Database Secrets Engine
```shell
vault secrets enable database
```
Configure a connection to PostgreSQL:
```shell
vault write database/config/my-postgres \
  plugin_name=postgresql-database-plugin \
  allowed_roles="app-role,readonly-role" \
  connection_url="postgresql://{{username}}:{{password}}@postgres.production.svc.cluster.local:5432/appdb" \
  username="vault_admin" \
  password="vault_admin_password"
```
vault_admin must have CREATEROLE and LOGIN permissions in PostgreSQL:
```sql
CREATE USER vault_admin WITH CREATEROLE LOGIN PASSWORD 'vault_admin_password';
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO vault_admin WITH GRANT OPTION;
```
Create a role that Vault uses to generate credentials:
```shell
vault write database/roles/app-role \
  db_name=my-postgres \
  creation_statements="
    CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";
    GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";
  " \
  default_ttl="1h" \
  max_ttl="24h"
```
Test credential generation manually:
```shell
vault read database/creds/app-role
# Key               Value
# lease_duration    1h
# username          v-root-app-role-AbCdEf123456
# password          A1b2C3d4E5f6G7h8I9j0
```
The generated user exists in PostgreSQL and disappears when the lease expires.
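A client consuming these credentials has to track the lease: renew before lease_duration elapses, and fetch fresh credentials once the role's max_ttl (24h above) is exhausted, because Vault refuses renewal past that point. A hedged sketch of that bookkeeping, assuming your app records when the credentials were first issued:

```typescript
// Decide whether the current lease can still be renewed or whether the app
// must read database/creds/app-role again. Leases cannot be renewed past the
// role's max_ttl, measured from when the credentials were first issued.
function shouldReRead(issuedAtMs: number, nowMs: number, maxTtlSec: number): boolean {
  return (nowMs - issuedAtMs) / 1000 >= maxTtlSec;
}
```

Step 8 shows the renewal call itself; this check just tells you when renewal is no longer an option.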
Step 4: Create a Vault Policy
The policy defines what a pod can access:
```shell
vault policy write my-app - <<EOF
# Read dynamic database credentials
path "database/creds/app-role" {
  capabilities = ["read"]
}

# Renew leases
path "sys/leases/renew" {
  capabilities = ["update"]
}

# Revoke own leases
path "sys/leases/revoke" {
  capabilities = ["update"]
}
EOF
```
Step 5: Create a Kubernetes Auth Role
This role says: "pods in namespace production with ServiceAccount my-app can use the my-app policy":
```shell
vault write auth/kubernetes/role/my-app \
  bound_service_account_names=my-app \
  bound_service_account_namespaces=production \
  policies=my-app \
  ttl=1h
```
Create the ServiceAccount in Kubernetes:
```shell
kubectl create serviceaccount my-app -n production
```
Step 6: Use Vault Agent Sidecar for Secret Injection
The Vault Agent sidecar runs alongside your pod, authenticates to Vault, and writes secrets to a shared volume. Your application reads files instead of calling Vault directly.
Enable the sidecar injector (installed with the Helm chart, but needs the mutating webhook):
```shell
helm upgrade vault hashicorp/vault \
  --namespace vault \
  --set "injector.enabled=true" \
  --reuse-values
```
Annotate your deployment to inject secrets:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app-role"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/app-role" -}}
          export DB_USERNAME="{{ .Data.username }}"
          export DB_PASSWORD="{{ .Data.password }}"
          {{- end }}
    spec:
      serviceAccountName: my-app
      containers:
        - name: app
          image: my-app:latest
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/db-creds && exec /app/server
```
The sidecar writes to /vault/secrets/db-creds. Your container sources it to get DB_USERNAME and DB_PASSWORD as environment variables.
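If your app can't wrap its entrypoint in a shell, it can read the rendered file directly instead of sourcing it. A small sketch; the `export KEY="value"` format matches the agent-inject template above, and /vault/secrets/db-creds is the injector's default mount path:

```typescript
// Parse a file of `export KEY="value"` lines (as rendered by the template
// above) into a plain map, for apps that read the secret file directly.
function parseExports(fileText: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const line of fileText.split('\n')) {
    const m = line.match(/^export\s+(\w+)="(.*)"$/);
    if (m) out[m[1]] = m[2];
  }
  return out;
}

// Usage: parseExports(fs.readFileSync('/vault/secrets/db-creds', 'utf8'))
```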
Step 7: Verify Injection
```shell
kubectl get pod -n production -l app=my-app

# Check the init container ran successfully
kubectl describe pod <pod-name> -n production | grep -A 10 "vault-agent-init"

# Check the secret file exists
kubectl exec -n production <pod-name> -c app -- cat /vault/secrets/db-creds
# export DB_USERNAME="v-kubernet-app-role-AbCdEf"
# export DB_PASSWORD="A1b2C3d4"
```
Step 8: Use the Vault SDK Instead of Files (Alternative)
For more control, authenticate directly in your application:
```typescript
import * as fs from 'fs';
import * as vault from 'node-vault';

async function getDatabaseCredentials() {
  const client = vault.default({ endpoint: process.env.VAULT_ADDR });

  // Read the ServiceAccount JWT from the mounted volume
  const jwt = fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token', 'utf8');

  // Authenticate with the Kubernetes auth method
  const auth = await client.kubernetesLogin({
    role: 'my-app',
    jwt,
  });

  client.token = auth.auth.client_token;

  // Get dynamic database credentials
  const creds = await client.read('database/creds/app-role');
  return {
    username: creds.data.username,
    password: creds.data.password,
    leaseId: creds.lease_id,
    leaseDuration: creds.lease_duration,
  };
}
```
Schedule credential renewal before the TTL expires:
```typescript
function renewCredentials(client: ReturnType<typeof vault.default>, leaseId: string, leaseDuration: number) {
  // Renew at 80% of the TTL to leave headroom for retries.
  // Note: leases cannot be renewed past the role's max_ttl (24h above);
  // when renewal fails, read database/creds/app-role again for fresh credentials.
  setTimeout(async () => {
    await client.write('sys/leases/renew', { lease_id: leaseId, increment: 3600 });
  }, leaseDuration * 0.8 * 1000);
}
```
Production Considerations
Vault HA with Raft: For production, use the integrated Raft storage (3-node minimum):
```shell
helm install vault hashicorp/vault \
  --set "server.ha.enabled=true" \
  --set "server.ha.raft.enabled=true" \
  --set "server.ha.replicas=3"
```
Auto-unseal: Production Vault requires unsealing after restart. Use AWS KMS, GCP KMS, or Azure Key Vault for auto-unseal so Vault recovers automatically without manual key entry.
Audit logging: Enable before going to production:
```shell
vault audit enable file file_path=/vault/logs/audit.log
```
Least-privilege vault_admin: The Vault database admin user should only have CREATEROLE on the schemas your app uses, not superuser.
Official References
- Vault Kubernetes Auth Method — Official HashiCorp docs for the Kubernetes auth method: configuration, roles, and JWT validation
- Vault Database Secrets Engine — Dynamic credential generation for PostgreSQL, MySQL, MongoDB, and other databases
- Vault Agent Injector — How the mutating webhook injects Vault Agent sidecars and the full annotation reference
- Vault Helm Chart — Official Helm chart docs for deploying Vault in HA mode with Raft storage
- Vault Production Hardening — HashiCorp's checklist for production Vault deployments: auto-unseal, audit logging, and TLS
We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.
Struggling with this in production?
We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.