
Configuring NetworkPolicies to Isolate Namespaces

Intermediate · 35 min to complete · 10 min read

By default, every pod in a Kubernetes cluster can talk to every other pod. NetworkPolicies let you enforce zero-trust networking — allowing only the traffic you explicitly permit. This tutorial shows you how.

Before you begin

  • kubectl configured with cluster access
  • A CNI that supports NetworkPolicy (Calico, Cilium, or Weave — not Flannel by default)
  • Basic understanding of Kubernetes namespaces and pods

Kubernetes networking is flat by default. Every pod can reach every other pod on any port, regardless of namespace. In a multi-tenant cluster, that means a compromised pod in the dev namespace can reach your production database.

NetworkPolicies fix this. They're declarative firewall rules at the pod level, enforced by your CNI plugin.
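Every NetworkPolicy has the same basic shape: a pod selector choosing which pods the policy applies to, the traffic directions it governs, and the allow rules. A minimal annotated skeleton (names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy        # illustrative name
  namespace: my-namespace     # policies are namespace-scoped
spec:
  podSelector: {}             # which pods this applies to ({} = all pods in the namespace)
  policyTypes:                # which directions this policy governs
    - Ingress
    - Egress
  ingress: []                 # allow rules for inbound traffic (empty = deny all ingress)
  egress: []                  # allow rules for outbound traffic (empty = deny all egress)
```

Once any policy selects a pod for a given direction, that pod switches from allow-all to deny-by-default for that direction; only traffic matching some policy's allow rules gets through.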

Verify Your CNI Supports NetworkPolicy

NetworkPolicies require a CNI that enforces them. Check which CNI you're running:

bash
kubectl get pods -n kube-system | grep -E "calico|cilium|weave|flannel"

Flannel does not enforce NetworkPolicies. Calico, Cilium, and Weave do. If you're on a managed cluster (EKS, GKE, AKS), NetworkPolicy support is available but may need enabling.

The Default: No Policies = Allow All

Without any NetworkPolicy, all pods can communicate freely:

bash
# Create two test namespaces
kubectl create namespace frontend
kubectl create namespace backend

# Deploy pods in each
kubectl run web --image=nginx -n frontend
kubectl run db --image=nginx -n backend

# Verify the db pod IP
DB_IP=$(kubectl get pod db -n backend -o jsonpath='{.status.podIP}')

# web can reach db — this is what we'll prevent
kubectl exec -n frontend web -- curl -s --max-time 2 http://$DB_IP
# Returns HTML — unrestricted access

Step 1: Default Deny All Ingress

Start by denying all ingress to the backend namespace. Any pod that doesn't match a subsequent allow rule gets dropped:

bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {}      # Applies to all pods in namespace
  policyTypes:
    - Ingress
EOF

Test that the frontend can no longer reach backend:

bash
kubectl exec -n frontend web -- curl -s --max-time 2 http://$DB_IP
# curl: (28) Connection timed out

Step 2: Allow Specific Ingress from Frontend

Now allow only the frontend namespace to access the backend database on port 5432:

bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-db
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: db          # Only applies to pods labelled app=db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: frontend
          podSelector:
            matchLabels:
              app: web  # Only from pods labelled app=web
      ports:
        - protocol: TCP
          port: 5432
EOF

Label the pods so the selectors match. (Our test db pod is plain nginx listening on port 80, so to watch this allow rule pass a curl test, change port 5432 to 80 in the policy; a real Postgres pod would listen on 5432.)

bash
kubectl label pod web -n frontend app=web
kubectl label pod db -n backend app=db

The namespaceSelector and podSelector within the same from list item are ANDed — traffic must come from a pod that is both labelled app=web AND in the frontend namespace. If they were separate list items, they'd be ORed.
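For contrast, here is what the ORed form looks like — note the extra dash turning podSelector into its own list item. This (hypothetical) rule admits traffic from any pod in the frontend namespace OR any pod labelled app=web in the policy's own namespace (a bare podSelector in from matches only pods in the namespace the policy lives in):

```yaml
  ingress:
    - from:
        - namespaceSelector:          # item 1: any pod in frontend
            matchLabels:
              kubernetes.io/metadata.name: frontend
        - podSelector:                # item 2 (separate "-"): app=web in this namespace
            matchLabels:
              app: web
```

A one-character indentation or dash mistake here silently widens your allow rule, so it's worth diffing carefully.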

Step 3: Default Deny All Egress

Deny all outbound traffic from the backend namespace, then explicitly allow what's needed:

bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
    - Egress
EOF

This blocks everything outbound — including DNS. Your pods can't resolve hostnames now. Always allow DNS when denying egress:

bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF

Step 4: Allow Egress to a Specific External Service

If your backend needs to reach an external database or API:

bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-db-egress
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.1.50/32   # RDS endpoint IP
      ports:
        - protocol: TCP
          port: 5432
EOF

For managed cloud databases, resolve the endpoint's IP via nslookup or your cloud console and use ipBlock. Be aware that managed endpoints can change IPs over time, so prefer a stable CIDR range where possible.
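ipBlock also supports an except list, which is useful when you want to allow a broad range while carving out sensitive addresses. A sketch — both CIDRs here are placeholders for your own network layout:

```yaml
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/16        # allow the whole VPC range... (placeholder)
            except:
              - 10.0.99.0/24         # ...except this sensitive subnet (placeholder)
      ports:
        - protocol: TCP
          port: 5432
```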

Step 5: Complete Namespace Isolation Pattern

This is the pattern I apply to every production namespace:

bash
# 1. Deny all ingress and egress
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
EOF

# 2. Allow DNS
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
EOF

# 3. Allow intra-namespace communication
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector: {}
  egress:
    - to:
        - podSelector: {}
EOF

After these three, pods within production can talk to each other, but nothing from other namespaces can get in, and pods can't reach outside the namespace (except DNS).
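One rule you'll usually need on top of this pattern: letting your ingress controller reach pods in production. Assuming the controller runs in a namespace labelled kubernetes.io/metadata.name=ingress-nginx and your app listens on port 8080 (both assumptions — adjust for your setup), a sketch:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: production
spec:
  podSelector: {}                   # or narrow to your web-facing pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumed namespace
      ports:
        - protocol: TCP
          port: 8080                # assumed app container port
```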

Step 6: Verify Your Policies

bash
# List all policies in a namespace
kubectl get networkpolicy -n backend

# Describe a policy
kubectl describe networkpolicy allow-frontend-to-db -n backend

# Test connectivity — should be blocked
kubectl exec -n frontend web -- curl -s --max-time 2 http://$DB_IP:80
# curl: (28) Connection timed out

# Test DNS still works (if you applied allow-dns)
kubectl exec -n backend db -- nslookup kubernetes.default.svc.cluster.local

With Cilium, you can get a network policy verdict in real time (run this inside a Cilium agent pod):

bash
cilium monitor --type drop

Common Mistakes

Forgetting DNS egress — the most common mistake when adding a default-deny-egress policy. Your pods immediately stop resolving hostnames. Always add the DNS egress policy in the same apply.

OR vs AND in from selectors — multiple items in the from list are ORed. Items within a single from entry (namespaceSelector + podSelector together) are ANDed.

Policies don't apply to host-networked pods — pods with hostNetwork: true bypass NetworkPolicies. kube-proxy and CNI pods typically use host networking.

Missing policyTypes — when policyTypes is omitted, it defaults to Ingress, plus Egress only if the policy contains egress rules. So a rule-less policy meant as default-deny-egress silently becomes default-deny-ingress; to restrict egress you must list Egress in policyTypes explicitly.
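To make the policyTypes pitfall concrete: this policy looks like it might restrict egress, but with policyTypes omitted and no egress rules present, it defaults to Ingress only, so outbound traffic is untouched:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: not-a-deny-egress          # illustrative name
  namespace: backend
spec:
  podSelector: {}
  # policyTypes omitted: defaults to [Ingress] here, making this
  # a default-deny-INGRESS policy. To deny egress, state it:
  # policyTypes:
  #   - Egress
```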

Official References

  • Kubernetes documentation: Network Policies — https://kubernetes.io/docs/concepts/services-networking/network-policies/

We built Podscape to simplify Kubernetes workflows like this — logs, events, and cluster state in one interface, without switching tools.

Struggling with this in production?

We help teams fix these exact issues. Our engineers have deployed these patterns across production environments at scale.