HashiCorp Vault: Kubernetes Auth and Dynamic Secrets

Advanced · 60 min to complete · 14 min read

Configure Vault's Kubernetes auth method so pods authenticate using their ServiceAccount token, then generate short-lived database credentials on demand instead of storing static passwords in Kubernetes Secrets.

Before you begin

  • A running Kubernetes cluster
  • kubectl and Helm installed
  • Basic understanding of Vault concepts (policies, roles, paths)
  • A PostgreSQL database (or any Vault-supported database)

Static database passwords in Kubernetes Secrets have three problems: they never rotate, they're visible to anyone with RBAC read access to the namespace, and when they leak you have to update every service that uses them.

Vault's Kubernetes auth method and dynamic secrets engine solve all three. A pod authenticates using its ServiceAccount JWT (which Kubernetes already provides), gets a short-lived database password that's only valid for its session, and Vault revokes it automatically when the lease expires.

Architecture

Pod starts
  → Pod has a ServiceAccount JWT token
  → Pod calls Vault: "here's my JWT, I want the role my-app"
  → Vault validates the JWT with the Kubernetes TokenReview API
  → Vault checks: does the ServiceAccount match my-app's binding?
  → Vault generates a new PostgreSQL user with a 1-hour TTL
  → Pod receives a username + password
  → 1 hour later, Vault drops the PostgreSQL user automatically
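
The authentication step in this flow is a single HTTP call to Vault's Kubernetes auth login endpoint. As a sketch (the Vault address and role name below are this guide's examples, not fixed values), the request a pod sends looks like:

```typescript
// Sketch of the single HTTP call behind "Pod calls Vault" above.
// The Vault address and role name are this guide's examples, not fixed values.
function buildLoginRequest(vaultAddr: string, role: string, jwt: string) {
  return {
    url: `${vaultAddr}/v1/auth/kubernetes/login`,
    method: 'POST' as const,
    body: JSON.stringify({ role, jwt }),
  };
}

// Example: a pod posts its ServiceAccount JWT and asks for a role.
const example = buildLoginRequest('http://vault.vault.svc:8200', 'my-app', '<sa-jwt>');
console.log(example.url); // http://vault.vault.svc:8200/v1/auth/kubernetes/login
```

On success, Vault returns a client token scoped to the policies bound to that role; everything after this point uses that token, not the JWT.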

Step 1: Deploy Vault with Helm

bash
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update

helm install vault hashicorp/vault \
  --namespace vault \
  --create-namespace \
  --set "server.ha.enabled=false" \
  --set "server.dev.enabled=true"   # Dev mode: unsealed, in-memory, root token = "root"

Dev mode is not for production — it resets on restart. For production, use the HA setup with Raft storage.

Wait for Vault to start:

bash
kubectl wait --for=condition=Ready pod/vault-0 -n vault --timeout=60s

Initialize the Vault CLI:

bash
# Port-forward for local CLI access
kubectl port-forward vault-0 8200:8200 -n vault &

export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN='root'   # Dev mode token

vault status

Step 2: Enable the Kubernetes Auth Method

bash
vault auth enable kubernetes

Configure it to talk to the Kubernetes API:

bash
# Get the Kubernetes API server address and CA from your local kubeconfig
KUBE_CA=$(kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
KUBE_HOST=$(kubectl config view --raw --minify --flatten \
  -o jsonpath='{.clusters[].cluster.server}')

vault write auth/kubernetes/config \
  kubernetes_host="$KUBE_HOST" \
  kubernetes_ca_cert="$(echo "$KUBE_CA" | base64 -d)"

When Vault runs inside the cluster (as it does here), it can use its own ServiceAccount token and the in-cluster CA automatically — you only need to point it at the cluster-internal API endpoint:

bash
vault write auth/kubernetes/config \
  kubernetes_host="https://kubernetes.default.svc.cluster.local:443"

Step 3: Enable the Database Secrets Engine

bash
vault secrets enable database

Configure a connection to PostgreSQL:

bash
vault write database/config/my-postgres \
  plugin_name=postgresql-database-plugin \
  allowed_roles="app-role,readonly-role" \
  connection_url="postgresql://{{username}}:{{password}}@postgres.production.svc.cluster.local:5432/appdb" \
  username="vault_admin" \
  password="vault_admin_password"

vault_admin must have CREATEROLE and LOGIN permissions in PostgreSQL:

sql
CREATE USER vault_admin WITH CREATEROLE LOGIN PASSWORD 'vault_admin_password';
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO vault_admin WITH GRANT OPTION;

Create a role that Vault uses to generate credentials:

bash
vault write database/roles/app-role \
  db_name=my-postgres \
  creation_statements="
    CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";
    GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO \"{{name}}\";
  " \
  default_ttl="1h" \
  max_ttl="24h"
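
To make the templating concrete, here is a sketch of how the `{{name}}`, `{{password}}`, and `{{expiration}}` placeholders in `creation_statements` get filled in when Vault issues credentials. This is my own illustrative helper, not Vault's actual templating engine, and the sample values are invented:

```typescript
// Illustration of Vault's placeholder substitution in creation_statements.
// NOT Vault's real implementation; sample values below are invented.
function renderStatements(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? `{{${key}}}`);
}

const sql = renderStatements(
  `CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';`,
  { name: 'v-root-app-role-AbCdEf', password: 's3cret', expiration: '2025-01-01 00:00:00+00' }
);
// → CREATE ROLE "v-root-app-role-AbCdEf" WITH LOGIN PASSWORD 's3cret' VALID UNTIL '2025-01-01 00:00:00+00';
```

Vault runs the rendered statements against PostgreSQL as `vault_admin` each time a client reads `database/creds/app-role`, which is why that admin user needs `CREATEROLE` and grant privileges.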

Test credential generation manually:

bash
vault read database/creds/app-role
# Key                Value
# lease_duration     1h
# username           v-root-app-role-AbCdEf123456
# password           A1b2C3d4E5f6G7h8I9j0

The generated user exists in PostgreSQL and disappears when the lease expires.

Step 4: Create a Vault Policy

The policy defines what a pod can access:

bash
vault policy write my-app - <<EOF
# Read dynamic database credentials
path "database/creds/app-role" {
  capabilities = ["read"]
}

# Renew leases
path "sys/leases/renew" {
  capabilities = ["update"]
}

# Revoke own leases
path "sys/leases/revoke" {
  capabilities = ["update"]
}
EOF

Step 5: Create a Kubernetes Auth Role

This role says: "pods in namespace production with ServiceAccount my-app can use the my-app policy":

bash
vault write auth/kubernetes/role/my-app \
  bound_service_account_names=my-app \
  bound_service_account_namespaces=production \
  policies=my-app \
  ttl=1h

Create the ServiceAccount in Kubernetes:

bash
kubectl create serviceaccount my-app -n production

Step 6: Use Vault Agent Sidecar for Secret Injection

The Vault Agent sidecar runs alongside your pod, authenticates to Vault, and writes secrets to a shared volume. Your application reads files instead of calling Vault directly.

Enable the sidecar injector (installed with the Helm chart, but needs the mutating webhook):

bash
helm upgrade vault hashicorp/vault \
  --namespace vault \
  --set "injector.enabled=true" \
  --reuse-values

Annotate your deployment to inject secrets:

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  namespace: production
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "my-app"
        vault.hashicorp.com/agent-inject-secret-db-creds: "database/creds/app-role"
        vault.hashicorp.com/agent-inject-template-db-creds: |
          {{- with secret "database/creds/app-role" -}}
          export DB_USERNAME="{{ .Data.username }}"
          export DB_PASSWORD="{{ .Data.password }}"
          {{- end }}
    spec:
      serviceAccountName: my-app
      containers:
        - name: app
          image: my-app:latest
          command: ["/bin/sh", "-c"]
          args:
            - source /vault/secrets/db-creds && exec /app/server

The sidecar writes to /vault/secrets/db-creds. Your container sources it to get DB_USERNAME and DB_PASSWORD as environment variables.
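
If your process isn't started through a shell and can't `source` the file, your app can parse it directly. A sketch, assuming the same `/vault/secrets/db-creds` path and the `export KEY="value"` format produced by the template above:

```typescript
import * as fs from 'fs';

// Parse lines like: export DB_USERNAME="v-kubernet-app-role-AbCdEf"
// Assumes the export-style template rendered by the Vault Agent sidecar.
function parseEnvFile(contents: string): Record<string, string> {
  const vars: Record<string, string> = {};
  for (const line of contents.split('\n')) {
    const m = line.match(/^export\s+(\w+)="(.*)"$/);
    if (m) vars[m[1]] = m[2];
  }
  return vars;
}

// In the pod, read the file the sidecar rendered (path from the annotation above):
function readDbCreds(path = '/vault/secrets/db-creds'): Record<string, string> {
  return parseEnvFile(fs.readFileSync(path, 'utf8'));
}
```

Because the sidecar re-renders the file when the lease rotates, re-read it (or watch it with `fs.watch`) rather than caching the credentials for the process lifetime.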

Step 7: Verify Injection

bash
kubectl get pod -n production -l app=my-app

# Check the init container ran successfully
kubectl describe pod <pod-name> -n production | grep -A 10 "vault-agent-init"

# Check the secret file exists
kubectl exec -n production <pod-name> -c app -- cat /vault/secrets/db-creds
# export DB_USERNAME="v-kubernet-app-role-AbCdEf"
# export DB_PASSWORD="A1b2C3d4"

Step 8: Use the Vault SDK Instead of Files (Alternative)

For more control, authenticate directly in your application:

typescript
import * as vault from 'node-vault';
import * as fs from 'fs';

async function getDatabaseCredentials() {
  const client = vault.default({ endpoint: process.env.VAULT_ADDR });

  // Read the ServiceAccount JWT from the mounted volume
  const jwt = fs.readFileSync('/var/run/secrets/kubernetes.io/serviceaccount/token', 'utf8');

  // Authenticate with the Kubernetes auth method
  const auth = await client.kubernetesLogin({
    role: 'my-app',
    jwt,
  });

  client.token = auth.auth.client_token;

  // Get dynamic database credentials
  const creds = await client.read('database/creds/app-role');
  return {
    username: creds.data.username,
    password: creds.data.password,
    leaseId: creds.lease_id,
    leaseDuration: creds.lease_duration,
  };
}

Schedule credential renewal before the TTL expires:

typescript
// Pass in the authenticated client from getDatabaseCredentials()
async function renewCredentials(client: any, leaseId: string, leaseDuration: number) {
  // Renew at 80% of the TTL so credentials never expire mid-request
  setTimeout(async () => {
    await client.write('sys/leases/renew', { lease_id: leaseId, increment: 3600 });
  }, leaseDuration * 0.8 * 1000);
}
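
The 80% factor is worth pulling into a small helper so the timing is consistent and testable. The factor is my own convention (a common safety margin), not a Vault requirement:

```typescript
// Compute when to renew a lease: at a fraction of its TTL, in milliseconds.
// The default 0.8 is a conventional safety margin, not mandated by Vault.
function renewalDelayMs(leaseDurationSeconds: number, factor = 0.8): number {
  return Math.floor(leaseDurationSeconds * factor * 1000);
}

// A 1h lease (3600s) would be renewed after 48 minutes:
// renewalDelayMs(3600) === 2880000
```

If a renewal fails (for example, the lease hit `max_ttl`), fall back to requesting fresh credentials from `database/creds/app-role` rather than retrying the renew indefinitely.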

Production Considerations

Vault HA with Raft: For production, use the integrated Raft storage (3-node minimum):

bash
helm install vault hashicorp/vault \
  --set "server.ha.enabled=true" \
  --set "server.ha.raft.enabled=true" \
  --set "server.ha.replicas=3"

Auto-unseal: Production Vault requires unsealing after restart. Use AWS KMS, GCP KMS, or Azure Key Vault for auto-unseal so Vault recovers automatically without manual key entry.

Audit logging: Enable before going to production:

bash
vault audit enable file file_path=/vault/logs/audit.log

Least-privilege vault_admin: The Vault database admin user needs only CREATEROLE, LOGIN, and grant privileges on the schemas your app uses; it should not be a superuser.
