In this lesson you’ll learn how to replace kube-proxy with Cilium so that service routing and load balancing run via eBPF. Cilium can be deployed in two main modes:
  • Coexist with kube-proxy: Cilium provides CNI, ingress (e.g., Gateway API) and observability, while kube-proxy continues to perform service routing/load balancing using iptables or ipvs.
  • Replace kube-proxy: Cilium implements service routing and load balancing directly with eBPF, removing the need for kube-proxy.
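With a Helm-based install, the two modes correspond to a single chart value. A sketch of the relevant Helm values, assuming a recent Cilium chart where `kubeProxyReplacement` takes a boolean (older chart releases used `strict`/`partial`/`disabled` instead):

```yaml
# Mode 1 -- coexist with kube-proxy (the chart default):
kubeProxyReplacement: "false"

# Mode 2 -- replace kube-proxy entirely; this additionally requires
# k8sServiceHost/k8sServicePort, covered later in this lesson:
# kubeProxyReplacement: "true"
```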
Replacing kube-proxy with an eBPF-based data plane (Cilium) yields measurable performance and scalability benefits over the traditional iptables approach. The fundamental difference is how packet lookup is performed. With kube-proxy in iptables mode, each packet is evaluated against a sequential list of rules. As the number of services and endpoints grows, packet processing may require scanning many rules, and kube-proxy frequently rebuilds the iptables rules whenever services or endpoints change. This sequential scanning and frequent rebuilding scale poorly in highly dynamic Kubernetes clusters.
A slide titled "Why Replace iptables With eBPF?" showing that kube-proxy (iptables) requires a full rebuild when rules change, illustrated by a vertical arrow and five services each paired with lists of backend IP addresses.
Cilium replaces sequential rule scanning with eBPF maps backed by per-CPU hash tables. Hash lookups are O(1) on average, so increasing the number of services and endpoints does not proportionally increase packet-processing latency as it does with iptables. This architecture delivers lower latency, improved scalability, more efficient load balancing, and richer observability and debugging.
A slide diagram titled "Why Replace iptables With eBPF?" showing an eBPF-based per-CPU hash table that maps names (John Smith, Lisa Smith, Sandra Dee) to bucket indices and phone-number-like values (e.g., 521-8976, 521-9655). It illustrates keys hashing to the same bucket slots across per-CPU tables.
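The lookup difference can be illustrated with a toy bash sketch (illustrative only: the IPs and service names are made up, and this is not Cilium code):

```shell
#!/usr/bin/env bash
# iptables-style: rules are checked one by one until a match is found -- O(n).
rules=("10.0.0.1=svc-a" "10.0.0.2=svc-b" "10.0.0.3=svc-c")
lookup_sequential() {
  local r
  for r in "${rules[@]}"; do
    [[ "${r%%=*}" == "$1" ]] && { echo "${r#*=}"; return; }
  done
}

# eBPF-map-style: a single hash lookup, cost independent of table size -- O(1) average.
declare -A svc_map=([10.0.0.1]=svc-a [10.0.0.2]=svc-b [10.0.0.3]=svc-c)
lookup_hash() { echo "${svc_map[$1]}"; }

lookup_sequential 10.0.0.3   # scans all three rules before matching
lookup_hash 10.0.0.3         # one bucket lookup, regardless of map size
```

Adding a thousand more entries makes `lookup_sequential` proportionally slower, while `lookup_hash` stays flat; that is the scaling property eBPF maps bring to service routing.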
Key benefits of replacing kube-proxy with Cilium (eBPF)
  • Lower packet-processing latency
  • Better scalability for clusters with many services and endpoints
  • More efficient and flexible load balancing (including session affinity and L4/L7 options)
  • Enhanced observability, tracing, and debugging via Cilium tooling
Summary comparison
| Aspect | kube-proxy (iptables/ipvs) | Cilium (eBPF, kube-proxy replacement) |
| --- | --- | --- |
| Packet lookup model | Sequential rule matching (iptables) | Hash-based eBPF maps (per-CPU) |
| Scalability | Degrades with many services/endpoints | Scales well due to O(1) lookups |
| Rule rebuilds | Frequent, on service/endpoint change | Minimal; maps updated efficiently |
| Observability | Limited | Rich telemetry integrated in Cilium |
| Load balancing | iptables/ipvs | eBPF-based, more efficient |
Default Cilium behavior

By default, Cilium does not replace kube-proxy; it coexists with it and delegates service routing to kube-proxy:
kubeProxyReplacement: "false"
You can confirm the current kube-proxy replacement setting from a Cilium agent:
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
If the output shows KubeProxyReplacement: false, kube-proxy remains active and continues handling service routing.

When to replace kube-proxy

Consider replacing kube-proxy with Cilium when you need:
  • Lower L4 packet latency for high-throughput workloads
  • A more scalable service datapath for large clusters
  • Integrated observability and easier troubleshooting
  • Reduced operational complexity from iptables/ipvs rule churn
Before you proceed, validate control-plane connectivity and cluster-specific networking requirements.

Step-by-step: Replacing kube-proxy with Cilium
  1. Remove kube-proxy artifacts

First, delete the kube-proxy DaemonSet and the associated ConfigMap. Removing the ConfigMap prevents kubeadm from automatically re-installing kube-proxy during control-plane upgrades on supported versions:
kubectl -n kube-system delete ds kube-proxy
# Delete the ConfigMap as well to avoid kube-proxy being reinstalled during a kubeadm upgrade
kubectl -n kube-system delete cm kube-proxy
Also ensure you clean up any leftover iptables or ipvs rules created by kube-proxy to avoid conflicts with Cilium.
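One way to do that cleanup is to filter kube-proxy's rules out of the saved ruleset and restore the remainder. A sketch using a hypothetical two-rule sample (the real command, run as root on each node, is shown in the trailing comment; `KUBE` matches the KUBE-* chains kube-proxy creates):

```shell
#!/usr/bin/env bash
# Hypothetical saved ruleset; on a real node this comes from `iptables-save`.
sample_rules='-A KUBE-SERVICES -d 10.96.0.1/32 -j KUBE-SVC-EXAMPLE
-A CILIUM_FORWARD -j ACCEPT'

# Drop every kube-proxy (KUBE-*) rule, keep everything else:
echo "$sample_rules" | grep -v KUBE

# On each node (requires root): iptables-save | grep -v KUBE | iptables-restore
```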
Before removing kube-proxy, ensure you understand how control-plane and API-server access will be handled. Removing kube-proxy without configuring Cilium to communicate with the API server can lead to control-plane connectivity issues.
  2. Configure Cilium to replace kube-proxy

When Cilium replaces kube-proxy, it must be able to contact the Kubernetes API server directly: with kube-proxy removed, nothing implements the in-cluster kubernetes Service IP until Cilium itself is up. Provide an explicit API server host and port in Cilium's configuration and enable kubeProxyReplacement:
kubeProxyReplacement: "true"
k8sServiceHost: "host-ip-control-plane"
k8sServicePort: "6443"
Setting k8sServiceHost and k8sServicePort avoids a bootstrap problem: with kube-proxy removed, Cilium needs explicit connectivity information to the API server to initialize service discovery and control-plane interactions.
Apply the Cilium manifest or Helm chart with these settings and ensure Cilium DaemonSet/Pods restart with the new configuration.
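With Helm, applying these settings might look like the following sketch (assumes Cilium was installed from the official `cilium/cilium` chart; substitute your control-plane address for the placeholder):

```shell
helm upgrade cilium cilium/cilium -n kube-system --reuse-values \
  --set kubeProxyReplacement=true \
  --set k8sServiceHost=host-ip-control-plane \
  --set k8sServicePort=6443

# Restart the agents so they pick up the new configuration:
kubectl -n kube-system rollout restart ds/cilium
```

These commands require a live cluster with Helm access, so adapt them to your environment before running.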
  3. Verify kube-proxy replacement is active

After deploying Cilium with kubeProxyReplacement enabled, validate the agent status:
kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
For more verbose diagnostics:
kubectl -n kube-system exec ds/cilium -- cilium-dbg status --verbose | grep KubeProxyReplacement
A successful replacement shows KubeProxyReplacement: true, indicating Cilium is handling service routing and load balancing via eBPF.

Operational notes and caveats
  • Remove all kube-proxy artifacts (DaemonSet, ConfigMap) and clear iptables/ipvs rules to avoid conflicts.
  • Ensure k8sServiceHost and k8sServicePort are correct so Cilium can reach the API server.
  • Test pod-to-pod, pod-to-service, and control-plane connectivity after enabling replacement.
  • Monitor Cilium logs and use Cilium diagnostics (cilium-dbg, Hubble) to validate traffic flows and troubleshoot.
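For the connectivity checks above, the `cilium` CLI ships a built-in end-to-end test suite. A sketch, assuming the CLI is installed on a machine with access to the cluster:

```shell
# End-to-end datapath checks (pod-to-pod, pod-to-service, and more):
cilium connectivity test

# Overall agent and datapath health, waiting until Cilium is ready:
cilium status --wait
```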
Helpful links and references

For more on configuring Cilium via Helm or manifests, see the official Cilium docs and Helm chart examples.
