In this lesson we’ll demonstrate how to create and test L3 (IP/label-based) Cilium network policies. Examples cover common scenarios: namespace scoping, allowing/denying ingress and egress, matching multiple namespaces, and CIDR-based egress.
A presentation slide shows the word "Demo" next to the lesson title "Cilium Network Policy Part 1."
Overview
  • Goal: Apply an L3 ingress policy to the app1 pod in the dev namespace so only pods with label app=app2 (in specific namespaces) can talk to app1.
  • Test image: nicolaka/netshoot (contains telnet, curl, and other troubleshooting tools).
  • Environment: three namespaces (dev, prod, staging), each with app1, app2, app3.
Environment: example deployment
Use one Deployment per app (adjust name/namespace/labels accordingly). Example manifest for app1 in dev:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: dev
  labels:
    app: app1-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: nicolaka/netshoot
        command: ["sleep", "999999"]
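Writing nine near-identical manifests by hand is tedious; a small shell loop can generate all of them (a sketch — it assumes the dev, prod, and staging namespaces already exist, and on a real cluster you would pipe its output to `kubectl apply -f -`):

```shell
# Sketch: emit one netshoot Deployment per app per namespace.
# Assumes namespaces dev/prod/staging already exist.
# On a real cluster: bash generate.sh | kubectl apply -f -
for ns in dev prod staging; do
  for app in app1 app2 app3; do
    cat <<EOF
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${app}
  namespace: ${ns}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${app}
  template:
    metadata:
      labels:
        app: ${app}
    spec:
      containers:
      - name: ${app}
        image: nicolaka/netshoot
        command: ["sleep", "999999"]
EOF
  done
done
```

The heredoc emits a multi-document YAML stream, so a single `kubectl apply` creates all nine Deployments.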
Repeat the same pattern for app2 and app3 in dev, and similarly for prod and staging.
Verify cluster namespaces and pods
  • Check namespaces:
kubectl get ns
NAME                   STATUS    AGE
cilium-secrets         Active    34m
default                Active    36m
dev                    Active    31m
kube-node-lease        Active    36m
kube-public            Active    36m
kube-system            Active    36m
local-path-storage     Active    36m
prod                   Active    31m
staging                Active    32m
  • List all pods (excluding kube-system) to view IPs and nodes:
kubectl get pod -A -o wide | grep -iv kube-system
NAMESPACE   NAME                                 READY   STATUS    RESTARTS   AGE   IP           NODE
dev         app1-75c78488c4-rzf48                1/1     Running   0          31m   10.0.2.88    my-cluster-worker
dev         app2-5957957d5b-kgkt6                1/1     Running   0          31m   10.0.2.89    my-cluster-worker
dev         app3-87d98dbbb-k6xzk                 1/1     Running   0          31m   10.0.2.26    my-cluster-worker
prod        app1-75c78488c4-2v5jp                1/1     Running   0          31m   10.0.0.76    my-cluster-worker2
prod        app2-5957957d5b-lkwt6                1/1     Running   0          31m   10.0.2.70    my-cluster-worker
prod        app3-87d98dbbb-ldkfl                 1/1     Running   0          31m   10.0.0.50    my-cluster-worker2
staging     app1-75c78488c4-6kptd                1/1     Running   0          31m   10.0.2.122   my-cluster-worker
staging     app2-5957957d5b-lzn4j                1/1     Running   0          31m   10.0.0.93    my-cluster-worker2
staging     app3-87d98dbbb-5fwjw                 1/1     Running   0          31m   10.0.2.136   my-cluster-worker
  • Check labels in the dev namespace:
kubectl get pod -n dev --show-labels
NAME                        READY   STATUS    RESTARTS   AGE   LABELS
app1-75c78488c4-rzf48       1/1     Running   0          39m   app=app1,pod-template-hash=75c78488c4
app2-5957957d5b-kgkt6       1/1     Running   0          39m   app=app2,pod-template-hash=5957957d5b
app3-87d98dbbb-k6xzk        1/1     Running   0          39m   app=app3,pod-template-hash=87d98dbbb
Connectivity testing with telnet
  • Exec into a pod and use telnet to test TCP connectivity.
  • Two possible outcomes to interpret:
    • “Connection refused” — the TCP SYN reached the destination pod (no process listening on that port). This confirms network reachability.
    • Telnet hangs without response — the packet was likely dropped before reaching the pod (network reachability blocked).
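The refused-vs-hang distinction can be reproduced locally, without a cluster, using bash's built-in /dev/tcp (a sketch — it assumes nothing is listening on localhost port 1, which is almost always true):

```shell
# A closed port on a reachable host fails fast: the kernel answers the
# SYN with a RST, which telnet reports as "Connection refused".
# Dropped traffic gets no answer at all, so the connect simply hangs
# until a timeout fires.
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>/dev/null; then
  result="connected"   # unexpected: something is listening on port 1
else
  result="refused"     # fast failure: host reachable, port closed
fi
echo "$result"
```

The key signal is timing: a refused connection fails well inside the 2-second budget, whereas a silently dropped packet would consume the whole timeout.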
Example telnet test
# Exec into app1 in dev
kubectl exec -it app1-75c78488c4-rzf48 -n dev -- bash

# From inside the pod:
telnet 10.0.2.89 3000
# "Connection refused" -> the SYN reached app2's pod; nothing is listening on 3000

# Telnet to an IP that does not exist -> will hang (CTRL+C to cancel)
telnet 10.0.2.112 3000
# (hangs, showing that traffic did not reach any pod)
Basic L3 ingress policy: allow only app2 (same namespace)
  • Important: selectors in a CiliumNetworkPolicy are namespace-scoped by default (selectors without explicit namespace qualifiers match endpoints in the same namespace as the policy).
Example CiliumNetworkPolicy (allow ingress to app1 from app2 in dev):
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-policy
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: app2
Apply and verify:
kubectl apply -f l3-policy.yaml
# ciliumnetworkpolicy.cilium.io/l3-policy created

kubectl get ciliumnetworkpolicy -n dev
# NAME       AGE   VALID
# l3-policy  10s   True
Testing ingress behavior
  • From app2 in dev -> should be allowed (telnet returns “Connection refused”).
  • From app3 in dev -> should be blocked (telnet will hang).
Example:
# From app2 in dev (allowed)
kubectl exec -it app2-5957957d5b-kgkt6 -n dev -- bash
telnet 10.0.2.88 80
# From app3 in dev (blocked)
kubectl exec -it app3-87d98dbbb-k6xzk -n dev -- bash
telnet 10.0.2.88 80
# (hangs; CTRL+C to cancel)
Namespace scoping and cross-namespace rules
  • By default, label selectors reference the policy namespace.
  • To allow endpoints from another namespace, match on the namespace-qualified label key: "k8s:io.kubernetes.pod.namespace": "<namespace>".
  • YAML tip: keys containing colons must be quoted.
Example — allow app=app2 from the prod namespace to reach app1 in dev:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-policy
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - fromEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": "prod"
        app: app2
Apply and test:
  • app2 in prod -> now allowed.
  • app2 in dev -> blocked by this version of the policy: the single fromEndpoints entry requires both app=app2 and the prod namespace label. Keep the original matchLabels entry as an additional fromEndpoints item if app2 in dev should stay allowed.
  • app3 in dev -> still blocked.
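Entries in the fromEndpoints list are ORed, so allowing app2 in both dev and prod is just a matter of listing both selectors (a sketch based on the policy above):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-policy
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - fromEndpoints:
    # app2 in the policy's own namespace (dev)
    - matchLabels:
        app: app2
    # app2 in prod, via the namespace-qualified label
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": "prod"
        app: app2
```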
Match multiple namespaces (matchExpressions)
  • Use matchExpressions when you need to match a label key against multiple values.
Example — allow app2 from prod OR staging:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-policy
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: app2
      matchExpressions:
      - key: "k8s:io.kubernetes.pod.namespace"
        operator: In
        values:
        - prod
        - staging
  • After applying, app2 in prod and app2 in staging will be allowed.
  • app2 in dev will be allowed only if you include dev in the values or keep a matchLabels that matches dev.
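To cover the same-namespace case as well, dev can simply be added to the values list (a fragment of the same policy's ingress section):

```yaml
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: app2
      matchExpressions:
      - key: "k8s:io.kubernetes.pod.namespace"
        operator: In
        values:
        - dev
        - prod
        - staging
```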
Allow all endpoints in the policy namespace
  • An empty endpoint selector ({}) in fromEndpoints or toEndpoints matches all endpoints in the same namespace as the CiliumNetworkPolicy.
Example — allow all pods in dev to talk to app1:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-allow-all-dev
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - fromEndpoints:
    - {}
Deny all ingress (default deny for selected endpoints)
  • Selecting endpoints with an ingress section that contains only an empty rule ({}) allows no peers, so all ingress to those endpoints is denied. This is the deny-all form used in the Cilium documentation (note the difference from fromEndpoints: - {}, which allows all endpoints in the namespace).
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-deny-all
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - {}
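Both directions can be combined into a single policy that fully isolates the selected pods (a sketch following the empty-rule deny-all form from the Cilium documentation; name is illustrative):

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-deny-all-both
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  ingress:
  - {}
  egress:
  - {}
```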
Egress rules (toEndpoints, toCIDR, toCIDRSet)
  • Egress uses toEndpoints and follows the same matching patterns as ingress (namespace-qualified labels, matchExpressions, empty selector).
  • Examples below show common egress controls.
Example — allow app1 in dev to talk only to app2 in dev:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-egress-to-app2
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  egress:
  - toEndpoints:
    - matchLabels:
        app: app2
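Caution: once any egress policy selects a pod, everything not explicitly allowed is dropped, including DNS lookups. If app1 needs to resolve names (the demos above use raw IPs, so they are unaffected), add a companion L3 rule toward the cluster DNS pods (a sketch assuming the standard kube-dns labels in kube-system; adjust for your cluster's DNS deployment):

```yaml
  egress:
  - toEndpoints:
    - matchLabels:
        app: app2
  # Also allow the cluster DNS pods (standard kube-dns labels assumed)
  - toEndpoints:
    - matchLabels:
        "k8s:io.kubernetes.pod.namespace": kube-system
        k8s-app: kube-dns
```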
Example — allow egress to app2 in prod namespace:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-egress-to-app2-prod
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  egress:
  - toEndpoints:
    - matchLabels:
        app: app2
        "k8s:io.kubernetes.pod.namespace": "prod"
Allow egress to all endpoints in same namespace:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-egress-allow-all-dev
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  egress:
  - toEndpoints:
    - {}
Deny all egress:
  • An egress section containing only an empty rule ({}) denies all egress from the selected endpoints:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-egress-deny-all
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  egress:
  - {}
Egress to CIDR ranges
  • Use toCIDR to specify CIDR entries directly.
  • Use toCIDRSet to specify CIDRs with an except list.
Example — allow egress to specific CIDRs:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-egress-to-cidrs
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  egress:
  - toCIDR:
    - 172.16.1.0/24
    - 10.12.1.0/24
Example — allow egress to a large CIDR but exclude specific subranges:
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: l3-egress-cidrset-with-except
  namespace: dev
spec:
  endpointSelector:
    matchLabels:
      app: app1
  egress:
  - toCIDRSet:
    - cidr: 10.0.0.0/8
      except:
      - 10.100.0.0/24
      - 10.200.200.11/32
Quick reference table — common policy patterns
  • Allow ingress from same namespace (limit ingress to a specific app): fromEndpoints: - matchLabels: { app: app2 }
  • Allow ingress from another namespace (cross-namespace allow via the namespace label): fromEndpoints: - matchLabels: { "k8s:io.kubernetes.pod.namespace": prod, app: app2 }
  • Allow all pods in the policy namespace: fromEndpoints: - {}
  • Deny all ingress (default deny for selected endpoints): ingress: - {}
  • Egress to CIDR (allow specific external ranges): egress: - toCIDR: [172.16.1.0/24]
  • Egress CIDR with exceptions (exclude subranges): toCIDRSet: - cidr: 10.0.0.0/8 except: [10.100.0.0/24]
Key reminders and best practices
When a CiliumNetworkPolicy selects endpoints, it restricts traffic for those endpoints according to the policy rules. If you intend to keep open communication, do not create overly broad selectors without the desired allow rules (use empty from/to endpoints carefully).
Quick YAML tip: Keys containing special characters (such as colons) must be quoted in YAML. Example: "k8s:io.kubernetes.pod.namespace": "prod"
  • Default behavior: without any network policy selecting an endpoint, pods can communicate freely.
  • Once a CiliumNetworkPolicy selects an endpoint, traffic is restricted according to the policy rules you define.
  • Use endpoint selectors, matchLabels, and matchExpressions for precise targeting.
  • To match endpoints in another namespace, use the namespace-qualified label key (quoted).
  • fromEndpoints / toEndpoints with an empty selector ({}) matches all endpoints in the policy namespace.
  • To explicitly deny all ingress/egress for selected endpoints, use an ingress/egress section containing only an empty rule ({}).
  • Extend these L3 examples with L4/L7 controls and service-aware policies as needed. For advanced use cases, consult the official Cilium policy documentation: https://docs.cilium.io/en/stable/policy/
