In this lesson we configure Cilium to act as a BGP speaker, enable load-balancer IPAM so Kubernetes LoadBalancer services receive IPs managed by Cilium, and validate end-to-end traffic flow for those services. We will cover three related topics, step by step:
  • Configuring Cilium BGP to advertise service and pod routes into your network.
  • Configuring Cilium load-balancer IPAM so LoadBalancer services get addresses from a Cilium-managed pool.
  • Validating the configuration and demonstrating end-to-end traffic flow for those services.
Each section contains configuration examples, verification commands, and recommended checks to confirm correct behavior.
Ensure you have cluster admin access and a working Cilium installation. If you haven’t installed Cilium yet, see the Cilium docs: https://docs.cilium.io/.

At-a-glance: What you’ll accomplish

  • Cilium BGP: advertise Kubernetes pod and service routes into the external network via BGP. Quick check: kubectl get pods -n kube-system -l k8s-app=cilium
  • LoadBalancer IPAM: allocate external IPs from a Cilium-managed pool to Services. Quick check: kubectl get svc -o wide
  • Validation & traffic flow: confirm BGP neighbors, service external IPs, and end-to-end connectivity. Quick check: kubectl describe svc <name>; test with curl or telnet

Step-by-step overview

  1. Configure Cilium to run a BGP speaker and announce the desired prefixes to your network routers.
  2. Configure the load-balancer IPAM pool and enable the Cilium LoadBalancer IP assignment for Kubernetes services.
  3. Deploy a sample LoadBalancer service and confirm it receives an external IP from the pool.
  4. Verify BGP neighbor status on your routers and on any Cilium BGP-ready components.
  5. Test connectivity from outside your cluster to the LoadBalancer IP and trace the path (verify traffic reaches the service endpoints).

Example: high-level values file (illustrative)

Below is an illustrative values snippet showing how you might enable BGP and load-balancer IPAM in Cilium’s Helm/values configuration. It is schematic rather than literal: the exact keys differ across Cilium versions, so replace the placeholders and verify key names against the documentation for your release.
# values.yaml (illustrative)
bgp:
  enabled: true
  # Add your BGP neighbors/peers here
  neighbors:
    - peer-address: 10.0.0.1
      peer-as: 65001
      my-as: 65000
loadBalancer:
  ipam:
    # Enable Cilium-managed IP pools for Service.loadBalancerIP
    enabled: true
    # An example pool of IPs to assign to LoadBalancer services
    pools:
      - name: lb-pool
        cidr: 192.0.2.0/28
        # optional: range to restrict the assigned subset
        range: 192.0.2.2-192.0.2.14
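In recent Cilium versions (1.13 and later), BGP and load-balancer IPAM are typically configured through dedicated CRDs rather than inline Helm values. The following sketch shows the equivalent objects; the field names are taken from the v2alpha1 API and vary between releases, so treat this as illustrative, not a drop-in manifest.
```yaml
# Illustrative CRDs for Cilium >= 1.13 (API group cilium.io/v2alpha1).
# Field names are version-dependent; check the docs for your release.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering
spec:
  nodeSelector:
    matchLabels:
      bgp: enabled            # only nodes with this label speak BGP
  virtualRouters:
    - localASN: 65000
      exportPodCIDR: true
      neighbors:
        - peerAddress: 10.0.0.1/32
          peerASN: 65001
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  cidrs:                      # renamed in some newer releases
    - cidr: 192.0.2.0/28
```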
Warning: BGP advertises routes into your network, and a misconfiguration can cause traffic blackholes or route leaks. Coordinate prefix announcements and AS numbers with your network team before enabling BGP.

Typical configuration flow

  • Install or update Cilium with the desired Helm values (or update a Cilium ConfigMap); examples above are illustrative.
  • Ensure the network/router side is configured to accept BGP sessions from your cluster nodes (peer IPs, AS numbers, route filters).
  • Configure the load-balancer IP pool in Cilium so that Service objects of type LoadBalancer can be assigned IPs from the pool.
  • Deploy a sample app and create a Service of type LoadBalancer to test the allocation and BGP announcements.
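For the last step, a minimal Service of type LoadBalancer might look like the following; the name, selector, and ports are placeholders to adjust for your own sample deployment.
```yaml
# Illustrative LoadBalancer Service; adapt selector/ports to match
# the sample app you deploy.
apiVersion: v1
kind: Service
metadata:
  name: sample-lb
spec:
  type: LoadBalancer
  selector:
    app: sample          # must match your deployment's pod labels
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```
Once applied, Cilium's LB-IPAM should populate the Service's external IP from the configured pool, and BGP should announce it to your peers.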

Verification checklist and commands

Use these checks in sequence to validate the setup:
  1. Verify Cilium agents are running
kubectl get pods -n kube-system -l k8s-app=cilium
kubectl logs -n kube-system -l k8s-app=cilium --tail=100
  2. Inspect the Service and the assigned external IP
kubectl apply -f samples/sample-loadbalancer-svc.yaml
kubectl get svc -o wide
kubectl describe svc <service-name>
  3. Confirm the external IP came from your configured pool
  • Check the Service’s external IP matches an address inside the pool CIDR.
  4. Check BGP advertisements and neighbor states
  • On your external router(s): verify BGP neighbor is established and that the prefix for the Service IP (or the service/pod CIDR) is being learned.
  • Example: Router CLI (platform dependent)
    • Cisco-like: show ip bgp summary / show ip route <prefix>
    • BIRD: birdc show status / birdc show route for 192.0.2.2
  • On the cluster: review Cilium logs and status for BGP-related messages
kubectl -n kube-system logs -l k8s-app=cilium | grep -i bgp || true
kubectl -n kube-system exec -it <cilium-pod> -- cilium status
  5. End-to-end connectivity test
  • From an external host in the same network, confirm TCP/HTTP reachability to the LoadBalancer IP:
curl -v http://192.0.2.2/    # Replace with actual external IP
# or
telnet 192.0.2.2 80
  6. Trace traffic flow to confirm it reaches the service endpoint
  • Use tcpdump on the node or Cilium endpoints to observe the incoming packets and their forwarding.
# Example: run on a node (requires appropriate access)
sudo tcpdump -i any host 192.0.2.2
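The pool-membership check above (confirming the assigned external IP falls inside the configured CIDR) can be scripted rather than eyeballed; this sketch shells out to python3 for the address arithmetic:
```shell
# Succeed if $1 (an IP address) lies inside $2 (a CIDR).
ip_in_pool() {
  python3 -c 'import ipaddress, sys
ip, net = sys.argv[1], sys.argv[2]
sys.exit(0 if ipaddress.ip_address(ip) in ipaddress.ip_network(net) else 1)' "$1" "$2"
}

# Example: 192.0.2.5 is inside 192.0.2.0/28 (.0 - .15)
ip_in_pool 192.0.2.5 192.0.2.0/28 && echo "in pool" || echo "NOT in pool"
```
In practice you would feed it the live value, e.g. `ip_in_pool "$(kubectl get svc <service-name> -o jsonpath='{.status.loadBalancer.ingress[0].ip}')" 192.0.2.0/28`.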

References and further reading

  • Cilium documentation: https://docs.cilium.io/

This lesson expands each topic above into concrete configuration and verification steps, with example manifests and commands to run. Follow the sections in order to ensure a safe rollout and predictable behavior.
