In this lesson we cover Cilium’s LoadBalancer IPAM: how Cilium can allocate external IP addresses for Kubernetes LoadBalancer Services and how to make those IPs reachable from your network.

Why this matters:
- In cloud providers (for example, EKS on AWS) creating a Service of type LoadBalancer usually triggers the cloud provider to provision a load balancer and return an external IP or DNS name (for example, an AWS ELB DNS entry).
- On-premises clusters typically use components like MetalLB to provide external IPs.
- Cilium’s LoadBalancer IPAM eliminates the need for separate tooling by allocating external IPs directly from configured pools and exposing them as the Service EXTERNAL-IP.
A pool is defined with a CiliumLoadBalancerIPPool custom resource. Key fields:
- apiVersion / kind / metadata.name: standard Kubernetes CR metadata.
- spec.blocks: one or more blocks from which external IPs are allocated.
- spec.serviceSelector: (optional) restricts allocation to Services matching the provided labels.
| Block type | Description | Example |
|---|---|---|
| CIDR | A CIDR range (IPv4 or IPv6) from which Cilium will allocate IPs | cidr: "10.0.10.0/24" |
| Start/Stop range | An explicit sequential range between two addresses | start: "20.0.20.100", stop: "20.0.20.200" |
| Single IP | A start-only entry to represent a single IP | start: "1.2.3.4" |
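Putting these fields together, a minimal pool manifest might look like the following sketch (the blue-pool name and color: red selector are illustrative; check the apiVersion against your Cilium release):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: blue-pool
spec:
  blocks:
    # Allocate from an entire CIDR...
    - cidr: "10.0.10.0/24"
    # ...and from an explicit start/stop range.
    - start: "20.0.20.100"
      stop: "20.0.20.200"
  # Optional: only Services with this label get IPs from this pool.
  serviceSelector:
    matchLabels:
      color: red
```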
- If you include spec.serviceSelector, only Services that match those labels will be eligible for external IP allocation from this pool.
- In the example above, only Services labeled color: red will be assigned IPs from blue-pool.
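For illustration, a Service eligible for allocation from such a pool only needs the matching label and type LoadBalancer (names here are assumed, not from the lesson):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  labels:
    color: red   # matches the pool's serviceSelector
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

Once Cilium allocates an address from the pool, it appears in the Service’s EXTERNAL-IP column.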
| Mechanism | When to use | Notes |
|---|---|---|
| BGP advertisements | Data center or network fabric supports BGP peering | Use a BGP speaker to advertise pool prefixes to upstream routers |
| L2 announcements (ARP/NDP) | Simple L2 networks or when BGP is not available | Cluster nodes respond to neighbor discovery for allocated IPs |
| Static routing (network config) | Small deployments or lab environments | Add static routes pointing to cluster nodes or load balancer devices |
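For the L2 option, Cilium provides a CiliumL2AnnouncementPolicy CR. A sketch of such a policy follows (field names as in recent Cilium releases; verify against your version’s documentation, and note that the L2 announcements feature must also be enabled in the Cilium agent configuration):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-blue-pool
spec:
  # Announce only LoadBalancer IPs of Services matching these labels.
  serviceSelector:
    matchLabels:
      color: red
  loadBalancerIPs: true
  # Restrict which node interfaces answer ARP/NDP for the announced IPs.
  interfaces:
    - eth0
```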
- Verify the pool CR exists:
- kubectl get ciliumloadbalancerippool (the resource is cluster-scoped, so no namespace flag is needed)
- kubectl describe ciliumloadbalancerippool blue-pool
- Check Service allocation:
- kubectl get svc myapp-service -o wide
- kubectl describe svc myapp-service
- Confirm network reachability:
- From outside the cluster, ping or curl the EXTERNAL-IP.
- Use network tools on upstream routers to verify routes or BGP advertisements.
- Logs and Cilium status:
- Check Cilium agent logs on nodes for IPAM-related messages.
- Inspect Cilium control plane status (see Cilium documentation for cluster-specific commands).
- Only include pool ranges that your network fabric can route to the cluster.
- Use selectors to separate pools by environment (prod/staging) or by tenant.
- Prefer advertising the entire pool prefix via BGP when possible — it simplifies routing and scales better than per-IP static routes.
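As a sketch of the selector-based separation suggested above, a dedicated pool per environment could look like this (labels, name, and range are illustrative):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: prod-pool
spec:
  blocks:
    - cidr: "10.0.20.0/24"
  serviceSelector:
    matchLabels:
      environment: prod
```

A parallel staging-pool with a different CIDR and environment: staging selector would keep the two environments’ external IPs in separate, independently routable ranges.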
- Cilium LoadBalancer IPAM documentation
- Kubernetes Service types
- MetalLB project
- AWS Elastic Load Balancing (ELB)
- BGP overview
To make allocated IPs reachable from outside the cluster, two mechanisms are commonly used:
- BGP advertisements from the cluster (peering with a router or BGP speaker).
- L2 (ARP/NDP) announcements so the cluster responds to neighbor discovery for the IPs.
When creating pools, include only ranges that your network fabric can actually route to the cluster. Otherwise external traffic won’t reach the assigned IPs even though Services will show the EXTERNAL-IP.