Guide to configuring Cilium to run alongside or replace kube-proxy using eBPF on a kind cluster, including installation, kube-proxy removal, iptables cleanup and service connectivity tests.
In this lesson you will learn how to configure Cilium to either run alongside kube-proxy (default) or act as a complete replacement using Cilium’s eBPF-based datapath. The walkthrough uses a kind cluster, but the steps apply to other Kubernetes variants (Minikube, kubeadm) with minor adjustments.
Install Cilium via the official Helm chart. By default, Cilium's Helm chart sets kubeProxyReplacement to "false", which means Cilium runs alongside kube-proxy and does not change service handling. Snippet example from values.yaml:
```yaml
readinessProbe:
  # failure threshold of readiness probe
  failureThreshold: 3
  # interval between checks of the readiness probe
  periodSeconds: 30

# Configure the kube-proxy replacement in Cilium BPF datapath
# Valid options: "false" or "true" (check your chart's values; some Cilium
# versions/methods also expose modes like "partial"/"strict")
# ref: https://docs.cilium.io/en/stable/network/kubernetes/kubeproxy-free/
kubeProxyReplacement: "false"
```
Install Cilium with Helm (adjust file paths as needed):
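A typical invocation might look like the following sketch, assuming the values snippet above is saved locally as values.yaml (the file name and release name are illustrative):

```shell
# Add the official Cilium chart repository (one-time)
helm repo add cilium https://helm.cilium.io/
helm repo update

# Install Cilium into kube-system with the custom values file
helm install cilium cilium/cilium \
  --namespace kube-system \
  -f values.yaml
```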
Cilium's kube-proxy replacement (kubeProxyReplacement: "true") hands over Kubernetes service handling from kube-proxy (iptables/IPVS) to Cilium's eBPF-based datapath. High-level steps:
Remove kube-proxy components (daemonset and configmap).
Clean up kube-proxy-created iptables chains (environment-dependent).
Update Cilium configuration to enable kube-proxy replacement and configure direct API server connectivity (k8sServiceHost/k8sServicePort).
Upgrade the Cilium Helm release with new values.
Verify replacement is active and confirm service connectivity.
Deleting kube-proxy and flushing iptables can disrupt cluster networking. Ensure you have console access to nodes and a recovery plan before making these changes on production clusters.
Important notes:
On kind clusters (Docker-in-Docker), iptables changes from inside containers may not affect the host. Proceed with caution and skip iptables cleanup on kind unless you understand the host context.
Ensure Cilium agents can reach the API server directly when kube-proxy is removed (set k8sServiceHost and k8sServicePort appropriately).
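Putting these notes together, the removal and upgrade might look like the following sketch (the release name cilium, the API server address 172.19.0.4, and port 6443 are assumptions based on this walkthrough's cluster; substitute your own control-plane address):

```shell
# 1. Remove kube-proxy (DaemonSet and ConfigMap)
kubectl -n kube-system delete daemonset kube-proxy
kubectl -n kube-system delete configmap kube-proxy

# 2. Enable the replacement and point agents directly at the API server
#    (host/port below are assumptions for this kind cluster; adjust for yours)
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=172.19.0.4 \
  --set k8sServicePort=6443

# 3. Restart the agents so they pick up the new configuration
kubectl -n kube-system rollout restart daemonset/cilium
```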
kube-proxy typically creates Kubernetes-specific iptables chains. In production you would remove those so that Cilium's eBPF datapath becomes authoritative for services. Do not run these commands on kind unless you know the correct host context. Example inspection and (cautious) commands:
```shell
# Inspect kube-proxy chains (example)
sudo iptables -t nat -S | grep KUBE-

# Example flush (only if you know what you're doing):
# sudo iptables -t nat -F KUBE-SERVICES
# sudo iptables -t nat -F KUBE-NODEPORTS
# sudo iptables -t nat -F KUBE-EXTERNAL-SERVICES
```
On kind clusters running inside Docker containers, modifying host iptables from the container may not have the intended effect. Skip iptables cleanup on kind unless you know the correct host context.
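Before testing a service, you can check that the replacement is actually active. A minimal sketch, assuming the chart's default DaemonSet name cilium (note that newer Cilium images ship the agent CLI as cilium-dbg):

```shell
# Ask one agent whether eBPF kube-proxy replacement is enabled
kubectl -n kube-system exec ds/cilium -- cilium status | grep -i kubeproxyreplacement

# Confirm no kube-proxy pods remain
kubectl -n kube-system get pods -l k8s-app=kube-proxy
```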
To test service connectivity, deploy an nginx workload and expose it with a NodePort Service (here my-service on nodePort 30007). Listing services with kubectl get svc should show something like:

```
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        19m
my-service   NodePort    10.96.101.131   <none>        80:30007/TCP   4s
```
Find node internal IPs:
```shell
kubectl get node -o wide
```
Example excerpt:
```
NAME                       STATUS   ROLES           AGE   VERSION   INTERNAL-IP
my-cluster-control-plane   Ready    control-plane   20m   v1.32.2   172.19.0.4
my-cluster-worker          Ready    <none>          20m   v1.32.2   172.19.0.3
my-cluster-worker2         Ready    <none>          20m   v1.32.2   172.19.0.2
```
From a host that can reach the node IP (here using worker node IP and nodePort 30007), confirm HTTP response:
```shell
curl 172.19.0.3:30007
```
Expected nginx response (truncated):
```html
<!DOCTYPE html>
<html>
<head><title>Welcome to nginx!</title>...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working.</p>
...
</html>
```
If you receive the nginx page, Cilium’s kube-proxy replacement is successfully servicing NodePort traffic via the eBPF datapath.