In this lesson you will learn how to configure Cilium to either run alongside kube-proxy (default) or act as a complete replacement using Cilium’s eBPF-based datapath. The walkthrough uses a kind cluster, but the steps apply to other Kubernetes variants (Minikube, kubeadm) with minor adjustments.

Prerequisites

Create a kind cluster

Save the following as kind.config and create a cluster named my-cluster. This configuration disables the default CNI so we can install Cilium:
# kind.config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: my-cluster
networking:
  ipFamily: dual
  disableDefaultCNI: true
nodes:
- role: control-plane
- role: worker
- role: worker
Create the cluster:
kind create cluster --config kind.config
After creation, check node status (nodes will be NotReady until a CNI is installed):
kubectl get nodes
Example output:
NAME                         STATUS     ROLES          AGE   VERSION
my-cluster-control-plane     NotReady   control-plane  77s   v1.32.2
my-cluster-worker            NotReady   <none>         66s   v1.32.2
my-cluster-worker2           NotReady   <none>         66s   v1.32.2
Verify kube-system pods — kube-proxy is present by default (one pod per node):
kubectl get pod -n kube-system
Example output highlighting kube-proxy:
NAME                                                   READY   STATUS    RESTARTS   AGE
coredns-668d6bf9bc-5j6sz                               0/1     Pending   0          80s
coredns-668d6bf9bc-mhq8g                               0/1     Pending   0          80s
etcd-my-cluster-control-plane                          1/1     Running   0          87s
kube-apiserver-my-cluster-control-plane                1/1     Running   0          85s
kube-controller-manager-my-cluster-control-plane       1/1     Running   0          85s
kube-proxy-ckwhs                                       1/1     Running   0          77s
kube-proxy-d7bhz                                       1/1     Running   0          80s
kube-proxy-kd267                                       1/1     Running   0          77s
kube-scheduler-my-cluster-control-plane                1/1     Running   0          85s

Install Cilium (run alongside kube-proxy)

Install Cilium via the official Helm chart. By default, Cilium’s Helm chart sets kubeProxyReplacement to “false”, which means Cilium runs alongside kube-proxy and does not change service handling. An example snippet from values.yaml:
readinessProbe:
  # failure threshold of readiness probe
  failureThreshold: 3
  # interval between checks of the readiness probe
  periodSeconds: 30

# Configure the kube-proxy replacement in Cilium BPF datapath
# Valid options: "false" or "true" (check your chart's values; some Cilium versions/methods also expose modes like "partial"/"strict")
# ref: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/
kubeProxyReplacement: "false"
Install Cilium with Helm (adjust file paths as needed):
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium -n kube-system -f values.yaml
Verify Cilium pods are running:
kubectl get pod -n kube-system | grep -i cilium
Example output:
cilium-envoy-5s5hl               1/1     Running   0    3m1s
cilium-envoy-k26kd               1/1     Running   0    3m1s
cilium-envoy-m99v9               1/1     Running   0    3m1s
cilium-gb5nk                     1/1     Running   0    3m1s
cilium-glbl6                     1/1     Running   0    3m1s
cilium-nbqld                     1/1     Running   0    3m1s
cilium-operator-59944f4b8f-bmr98 1/1     Running   0    3m1s
cilium-operator-59944f4b8f-mhq7s 1/1     Running   0    3m1s
Verify the kube-proxy replacement setting from inside a Cilium agent pod using the Cilium debug tool. First, get a Cilium agent pod name:
kubectl -n kube-system get pods -l k8s-app=cilium
Exec into a Cilium agent pod and check status:
kubectl -n kube-system exec -it <cilium-pod> -- cilium-dbg status | grep -i kubeproxyreplacement
Expected output when running alongside kube-proxy:
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
KubeProxyReplacement:  False
This confirms Cilium is not replacing kube-proxy and leaves kube-proxy active.
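As an optional extra sanity check, you can confirm that kube-proxy is still programming iptables on the nodes. On kind, each node is a Docker container, so you can inspect a node’s NAT table from the host (the node name below comes from the kind config above):

```shell
# Inspect the KUBE-SERVICES chain that kube-proxy maintains.
# "my-cluster-worker" is the node/container name from the kind config above.
docker exec my-cluster-worker iptables -t nat -S KUBE-SERVICES | head -n 5
```

If the chain exists and contains rules, kube-proxy is still the one translating Service traffic.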

Switching Cilium to replace kube-proxy

Cilium’s kube-proxy replacement (kubeProxyReplacement: “true”) hands over Kubernetes service handling from kube-proxy (iptables/ipvs) to Cilium’s eBPF-based datapath. High-level steps:
  1. Remove kube-proxy components (daemonset and configmap).
  2. Clean up kube-proxy-created iptables chains (environment-dependent).
  3. Update Cilium configuration to enable kube-proxy replacement and configure direct API server connectivity (k8sServiceHost/k8sServicePort).
  4. Upgrade the Cilium Helm release with new values.
  5. Verify replacement is active and confirm service connectivity.
Deleting kube-proxy and flushing iptables can disrupt cluster networking. Ensure you have console access to nodes and a recovery plan before making these changes on production clusters.
Important notes:
  • On kind clusters (Docker-in-Docker), iptables changes from inside containers may not affect the host. Proceed with caution and skip iptables cleanup on kind unless you understand the host context.
  • Ensure Cilium agents can reach the API server directly when kube-proxy is removed (set k8sServiceHost and k8sServicePort appropriately).

Remove kube-proxy daemonset and configmap

Delete the kube-proxy daemonset:
kubectl -n kube-system delete ds kube-proxy
Delete the kube-proxy configmap:
kubectl -n kube-system delete cm kube-proxy
You should see confirmation that resources were deleted.
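A quick follow-up check that no kube-proxy pods are left running (the grep should match nothing):

```shell
# Expect no kube-proxy rows: the daemonset and its pods are gone.
kubectl -n kube-system get pods | grep kube-proxy || echo "kube-proxy removed"
```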

iptables cleanup (environment-dependent)

kube-proxy typically creates Kubernetes-specific iptables chains. In production you would remove those so that Cilium’s eBPF datapath becomes authoritative for services. Do not run these commands on kind unless you know the correct host context. Example inspection and (cautious) commands:
# Inspect kube-proxy chains (example)
sudo iptables -t nat -S | grep KUBE-

# Example flush (only if you know what you're doing):
# sudo iptables -t nat -F KUBE-SERVICES
# sudo iptables -t nat -F KUBE-NODEPORTS
# sudo iptables -t nat -F KUBE-EXTERNAL-SERVICES

Enable kube-proxy replacement in values.yaml

Edit your values.yaml to enable kube-proxy replacement and add API server connectivity settings. Example modifications:
readinessProbe:
  failureThreshold: 10
  periodSeconds: 30

# Enable Cilium's kube-proxy replacement
kubeProxyReplacement: "true"

# Kubernetes connection settings for kube-proxy replacement
kubeConfigPath: ""
k8sServiceHost: "my-cluster-control-plane"
k8sServicePort: "6443"
Notes:
  • k8sServiceHost should be a hostname or IP reachable from worker nodes.
  • k8sServicePort is typically 6443 (API server port).
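If you are unsure what values to use, you can read the API server endpoint from your kubeconfig (standard kubectl; the jsonpath below assumes your current context points at this cluster):

```shell
# Print the API server URL for the current context, e.g. https://127.0.0.1:6443
# Note: on kind this is the address mapped to the *host*; from inside the
# cluster, use the control-plane node name (my-cluster-control-plane) instead.
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
```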
Table — kubeProxyReplacement options:

  Value                  Meaning
  "false"                Cilium runs alongside kube-proxy; kube-proxy handles services.
  "true"                 Cilium replaces kube-proxy; Cilium's eBPF datapath handles services.
  "partial" / "strict"   Additional modes exposed by some Cilium versions/methods; consult the Cilium docs.
Reference: Cilium kube-proxy replacement docs

Push updated configuration with Helm

Upgrade the Cilium release with the modified values:
helm upgrade cilium cilium/cilium -f values.yaml -n kube-system
After the upgrade, verify that Cilium reports kube-proxy replacement enabled. Get a Cilium agent pod and inspect status:
kubectl -n kube-system get pods -l k8s-app=cilium
kubectl -n kube-system exec -it <cilium-pod> -- cilium-dbg status --verbose | grep -A 20 "KubeProxyReplacement"
You should see a section similar to:
KubeProxyReplacement Details:
  Status:                   True
  Socket LB:                Enabled
  Socket LB Tracing:        Enabled
  Socket LB Coverage:       Full
  Devices:                  eth0  172.19.0.3 ...
  Mode:                     SNAT
  Session Affinity:         Enabled
  Graceful Termination:     Enabled
  ...
Services:
 - ClusterIP:               Enabled
 - NodePort:                Enabled (Range: 30000-32767)
 - LoadBalancer:            Enabled
 - externalIPs:             Enabled
 - HostPort:                Enabled
This confirms Cilium is now handling kube-proxy responsibilities.
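You can also list the Service translations Cilium has programmed into its eBPF maps. The command below runs the debug tool in the cilium-agent container (output format varies slightly between Cilium versions):

```shell
# Show Cilium's eBPF service table; each Kubernetes Service should appear
# with its frontend (ClusterIP or NodePort) and its backend endpoints.
kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium-dbg service list
```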

Test service connectivity (NodePort example)

Create a simple nginx deployment and a NodePort service to verify service handling through Cilium’s replacement. deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30007
Apply the manifests:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Confirm the service:
kubectl get svc
Example output:
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        19m
my-service   NodePort    10.96.101.131   <none>        80:30007/TCP   4s
Find node internal IPs:
kubectl get node -o wide
Example excerpt:
NAME                       STATUS   ROLES          AGE   VERSION    INTERNAL-IP
my-cluster-control-plane   Ready    control-plane  20m   v1.32.2    172.19.0.4
my-cluster-worker          Ready    <none>         20m   v1.32.2    172.19.0.3
my-cluster-worker2         Ready    <none>         20m   v1.32.2    172.19.0.2
From a host that can reach the node IP (here using worker node IP and nodePort 30007), confirm HTTP response:
curl 172.19.0.3:30007
Expected nginx response (truncated):
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and working.</p>
...
</html>
If you receive the nginx page, Cilium’s kube-proxy replacement is successfully servicing NodePort traffic via the eBPF datapath.
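Since a NodePort service is exposed on every node, a quick loop over all node InternalIPs (derived with a standard jsonpath query) can confirm the eBPF datapath answers on each of them:

```shell
# Curl the NodePort on every node's InternalIP; each should return the
# nginx welcome page if Cilium's replacement is handling NodePort traffic.
for ip in $(kubectl get nodes \
    -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  echo "--- $ip ---"
  curl -s "http://$ip:30007" | grep -o '<title>.*</title>'
done
```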

Summary

  • By default Cilium runs alongside kube-proxy (kubeProxyReplacement: “false”).
  • To let Cilium replace kube-proxy:
    • Remove kube-proxy components.
    • Clean iptables chains where required (environment-dependent).
    • Set kubeProxyReplacement: “true” and configure k8sServiceHost/k8sServicePort.
    • Upgrade the Cilium Helm release and verify with cilium-dbg status.
    • Validate service traffic with a test Deployment + Service.
  • Always test carefully and have recovery access when changing core networking components.