This guide demonstrates Cilium’s L2 announcement feature and how it enables nodes to respond to ARP requests for LoadBalancer external IPs in a local Kubernetes cluster (Kind is used in this demo). The goal is to show how to assign external IPs to LoadBalancer services and have nodes announce those IPs on the host subnet so clients can reach services via ARP.
[Slide: "Demo" / "L2 Announcement"]

Overview

  • Technologies: Cilium (L2 announcements), Kind (local cluster), Helm, kubectl.
  • Goal: Assign external IPs from a node subnet to LoadBalancer services and have selected nodes respond to ARP for those IPs using Cilium L2 announcements.
  • Key CRDs used:
    • CiliumLoadBalancerIPPool: allocate external IPs for LoadBalancer services.
    • CiliumL2AnnouncementPolicy: control which nodes/interfaces respond to ARP for which services.

Prerequisites

  • A running Kind cluster and kubectl configured to that cluster.
  • Helm installed and access to the Cilium Helm chart (cilium/cilium).
  • Docker available if using Kind node containers for debugging.

1. Check cluster state and prepare Cilium values

Before installing Cilium, the cluster nodes may appear NotReady because the CNI is missing:
kubectl get node
NAME                             STATUS     ROLES           AGE   VERSION
my-cluster-control-plane         NotReady   control-plane   30s   v1.32.2
my-cluster-worker                NotReady   <none>          19s   v1.32.2
my-cluster-worker2               NotReady   <none>          18s   v1.32.2
L2 announcements require both the l2announcements feature and kube-proxy replacement to be enabled in Cilium. A typical Helm upgrade/install invocation looks like this:
helm upgrade cilium ./cilium \
  --namespace kube-system \
  --reuse-values \
  --set l2announcements.enabled=true \
  --set kubeProxyReplacement=true \
  --set k8sClientRateLimit.qps={QPS} \
  --set k8sClientRateLimit.burst={BURST} \
  --set k8sServiceHost=${API_SERVER_IP} \
  --set k8sServicePort=${API_SERVER_PORT}
When enabling kube-proxy replacement, provide the API server host and port (k8sServiceHost and k8sServicePort) so Cilium can reach the Kubernetes API without relying on kube-proxy. The lease renewals behind L2 announcements also add API traffic, which is why the client rate limit (k8sClientRateLimit.qps/burst) is raised above its defaults. Alternatively, you can edit the chart values file and set these keys before installing via Helm.
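For a Kind cluster, the API server address can be read from the control-plane node container. A sketch, assuming the cluster is named my-cluster (Kind attaches node containers to the "kind" Docker network, and kube-apiserver listens on 6443 inside the node):

```shell
# Derive k8sServiceHost/k8sServicePort for a Kind cluster named "my-cluster".
# The "|| true" guard keeps the snippet harmless if Docker is unavailable.
API_SERVER_IP=$(docker inspect my-cluster-control-plane \
  --format '{{ .NetworkSettings.Networks.kind.IPAddress }}' 2>/dev/null || true)
API_SERVER_PORT=6443   # kube-apiserver's in-container port on Kind nodes
echo "k8sServiceHost=$API_SERVER_IP k8sServicePort=$API_SERVER_PORT"
```

These values can then be passed to the --set flags shown above.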
A good workflow is to fetch the default chart values, edit them, and then install:
helm show values cilium/cilium > values.yaml
# Edit values.yaml to set:
# kubeProxyReplacement: true
# l2announcements.enabled: true
# k8sServiceHost: <API_SERVER_IP>
# k8sServicePort: <API_SERVER_PORT>
helm install cilium cilium/cilium -n kube-system -f values.yaml
Example installation output (abbreviated):
LAST DEPLOYED: Wed Jun  4 23:58:52 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
NOTES:
You have successfully installed Cilium with Hubble.
Your release version is 1.17.2.
Verify Cilium pods are running:
kubectl get pods -n kube-system

2. Deploy two simple applications with LoadBalancer services

Create a manifest named apps-and-svcs.yaml which deploys two HTTP echo applications and exposes each with a Service of type LoadBalancer. apps-and-svcs.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-deployment
  labels:
    app: app1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
      - name: app1
        image: hashicorp/http-echo
        args:
        - -listen=:80
        - -text="This is app1"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1-service
  labels:
    app: myapp
spec:
  type: LoadBalancer
  selector:
    app: app1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2-deployment
  labels:
    app: app2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
      - name: app2
        image: hashicorp/http-echo
        args:
        - -listen=:80
        - -text="This is app2"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app2-service
  labels:
    app: myapp
spec:
  type: LoadBalancer
  selector:
    app: app2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Apply the manifest:
kubectl apply -f apps-and-svcs.yaml
In a local Kind cluster, LoadBalancer services initially show EXTERNAL-IP as <pending>:
kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
app1-service   LoadBalancer   10.96.207.254   <pending>     80:31514/TCP   5s
app2-service   LoadBalancer   10.96.157.80    <pending>     80:32546/TCP   5s
kubernetes     ClusterIP      10.96.0.1       <none>        443/TCP        9m

3. Provide external IPs to LoadBalancer services with Cilium IPAM

Create a CiliumLoadBalancerIPPool so Cilium can allocate external IPs for your services from an address block on your node subnet. In this demo the Kind node subnet is 172.19.0.0/16, so we pick a small range from it. ipam.yaml:
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "default-pool"
spec:
  blocks:
  - start: "172.19.0.240"
    stop: "172.19.0.250"
Apply the pool:
kubectl apply -f ipam.yaml
# ciliumloadbalancerippool.cilium.io/default-pool created
After creating the pool, services receive external IPs from that pool:
kubectl get svc
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
app1-service   LoadBalancer   10.96.207.254   172.19.0.240     80:31514/TCP     4m52s
app2-service   LoadBalancer   10.96.157.80    172.19.0.241     80:32546/TCP     4m52s
kubernetes     ClusterIP      10.96.0.1       <none>           443/TCP          14m
Because these external IPs are on the same subnet as the host nodes, clients on that subnet resolve them via ARP. Until a node answers ARP for those IPs, traffic never reaches your services and curl simply times out. Example failing request before the L2 announcement policy is configured:
curl 172.19.0.240
# (no response / times out)

4. Configure the Cilium L2 Announcement policy

Create a CiliumL2AnnouncementPolicy to control which nodes and interfaces respond to ARP for which services. The example below:
  • Matches services labeled app: myapp
  • Excludes the control-plane node so only worker nodes respond
  • Restricts interfaces with the regex "^eth[0-9]+" (matches eth0, eth1, …)
  • Enables both externalIPs and loadBalancerIPs
l2announce.yaml:
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: l2announce-policy
spec:
  serviceSelector:
    matchLabels:
      app: myapp
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  interfaces:
    - "^eth[0-9]+"
  externalIPs: true
  loadBalancerIPs: true
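The interfaces entries are regular expressions, not interface names or globs. A quick local sketch of how the pattern above filters candidate interfaces:

```shell
# "^eth[0-9]+" is a regex: eth0/eth1 match, but lo, ens3, and veth pairs do not
# (the ^ anchor requires the name to begin with "eth").
pattern='^eth[0-9]+'
for ifname in eth0 eth1 lo ens3 veth1a2b; do
  if printf '%s\n' "$ifname" | grep -Eq "$pattern"; then
    echo "$ifname: eligible for announcements"
  else
    echo "$ifname: ignored by the policy"
  fi
done
```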
Apply the policy:
kubectl apply -f l2announce.yaml
# ciliuml2announcementpolicy.cilium.io/l2announce-policy created
Describe the policy to confirm settings:
kubectl describe ciliuml2announcementpolicy l2announce-policy
# Name: l2announce-policy
# Spec:
#   External IPs: true
#   Interfaces: ^eth[0-9]+
#   Load Balancer IPs: true
#   Node Selector: matchExpressions: node-role.kubernetes.io/control-plane DoesNotExist
#   Service Selector: matchLabels: app=myapp

5. Which node responds for each service IP? (Leases)

Cilium coordinates which node will answer ARP for a given external IP using Lease objects. You can list the leases in the kube-system namespace:
kubectl get lease -n kube-system
NAME                                      HOLDER                 AGE
cilium-l2announce-default-app1-service    my-cluster-worker2     65s
cilium-l2announce-default-app2-service    my-cluster-worker      65s
...
The HOLDER field shows which node currently holds the lease and will therefore respond to ARP for the service IP.
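As the listing suggests, lease names follow the pattern cilium-l2announce-<namespace>-<service>, so a specific holder can be queried directly. A sketch (spec.holderIdentity is the standard Lease field that kubectl's HOLDER column displays):

```shell
# Build the lease name for app1-service in the default namespace and read its holder.
# The guard keeps the snippet harmless when no cluster is reachable.
svc_ns=default
svc_name=app1-service
lease="cilium-l2announce-${svc_ns}-${svc_name}"
kubectl -n kube-system get lease "$lease" \
  -o jsonpath='{.spec.holderIdentity}{"\n"}' 2>/dev/null || true
```

If the holding node goes down, another eligible node takes over the lease and begins answering ARP for that IP.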

6. Test connectivity and validate ARP

After the L2 announcement policy is active, clients on the same subnet can reach the LoadBalancer external IPs:
curl 172.19.0.240
# "This is app1"
curl 172.19.0.241
# "This is app2"
Verify the local ARP table shows the service IP entries mapped to the node MAC addresses:
arp -a | egrep "172.19.0.24[01]"
# ? (172.19.0.240) at 02:42:ac:13:00:04 [ether] on br-2c6b6be1a367
# ? (172.19.0.241) at 02:42:ac:13:00:03 [ether] on br-2c6b6be1a367
In Kind-based setups the MAC addresses correspond to the Docker/Kind bridge interfaces for node containers. You can inspect the node container interfaces to confirm which node IP/MAC answered ARP. Example:
docker ps
# CONTAINER ID   IMAGE                NAMES
# ... my-cluster-worker2
# ... my-cluster-worker
docker exec my-cluster-worker2 ip addr show eth0
# ... inet 172.19.0.4/16 ...
# link/ether 02:42:ac:13:00:04
The link/ether value should match the ARP mapping for the external IP owned by that node.
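In fact, Docker's default bridge networking derives each container MAC from its IPv4 address (the 02:42 prefix followed by the address in hex), so the ARP entry alone identifies the answering node. A small bash sketch decoding the MAC seen above:

```shell
# Decode a Docker-style MAC (02:42 + IPv4 in hex) back to the container IP.
mac="02:42:ac:13:00:04"          # from the ARP entry for 172.19.0.240
hex=${mac#02:42:}                # -> ac:13:00:04
ip=$(printf '%d.%d.%d.%d' 0x${hex//:/ 0x})
echo "$ip"                       # 172.19.0.4, i.e. my-cluster-worker2's eth0
```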

7. Summary and quick references

  • Enable l2announcements and kube-proxy replacement in Cilium (via Helm values or values.yaml).
  • Create a CiliumLoadBalancerIPPool to allocate external IPs on the node subnet for LoadBalancer services.
  • Create a CiliumL2AnnouncementPolicy to specify which services, nodes, and interfaces should respond to ARP (externalIPs and/or loadBalancerIPs).
  • Inspect Leases to determine which node is announcing each external IP, and validate connectivity with curl and arp.
Key resources and commands:
  • Cilium Helm values: enable kube-proxy replacement and l2announcements (helm install cilium cilium/cilium -n kube-system -f values.yaml)
  • CiliumLoadBalancerIPPool: provide external IPs for LoadBalancer services (kubectl apply -f ipam.yaml)
  • CiliumL2AnnouncementPolicy: control ARP announcements per service/node/interface (kubectl apply -f l2announce.yaml)
  • Lease objects: show which node currently announces a service IP (kubectl get lease -n kube-system)
This completes the demonstration of Cilium's L2 announcement feature for local clusters.
