In this lesson you’ll enable and configure BGP on a Cilium-based Kubernetes cluster so the cluster can advertise Pod CIDRs and Service IPs to an external router. This allows your physical network to route directly to nodes and enables ECMP when multiple nodes advertise the same service IP. Topology overview:
A network diagram showing a control-plane and two worker nodes, each with an ens33 interface and pod CIDRs (10.0.1.0/24 and 10.0.2.0/24) connected to a central router (5.5.5.5). The worker nodes establish BGP peering with the router to advertise their networks.
  • 3-node cluster: control-plane, worker1, worker2.
  • Each node sits on a different L2 network; all are reachable via a central router (the “outside world”).
  • Router loopback: 5.5.5.5 — worker1 and worker2 will peer to that IP and advertise their Pod CIDRs and service IPs.
Overview of steps
  1. Enable Cilium’s BGP control plane via Helm values.
  2. Restart Cilium operator and agents to pick up the change.
  3. Label nodes to select where BGP instances should run.
  4. Create three Cilium CRDs in order:
    • CiliumBGPAdvertisement — defines what to advertise (PodCIDR, Service types, attributes).
    • CiliumBGPPeerConfig — peer-level settings (timers, multihop, address families).
    • CiliumBGPClusterConfig — cluster-level config (which nodes, BGP instances & peers).
  5. Apply resources and validate peers, sessions and routes.
  6. Inspect Cilium logs on the agent if troubleshooting is required.
Prerequisites
  • A running Kubernetes cluster with Cilium installed.
  • Helm access to upgrade Cilium (if needed).
  • Network reachability between nodes and the external router (ensure any firewall/NAT allows BGP TCP/179 or multihop TTL as configured).
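Before touching any configuration, it can help to probe the router's BGP port from a node. A minimal sketch using bash's built-in /dev/tcp (5.5.5.5 is this lab's router loopback; substitute your own address — note that "not reachable" can also mean the router only accepts sessions from already-configured peers):

```shell
# Probe the router's BGP port (TCP/179) from a node. 5.5.5.5 is this
# lab's router loopback; substitute your own. Uses bash's /dev/tcp, so
# no extra tools are needed.
ROUTER=5.5.5.5
if timeout 3 bash -c "exec 3<>/dev/tcp/${ROUTER}/179" 2>/dev/null; then
  echo "BGP port reachable"
else
  echo "BGP port not reachable (check firewall/NAT, TTL, or peer filtering)"
fi
```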
Enable the BGP control plane in Cilium
Edit your Cilium Helm values (values.yaml) and enable the BGP control plane:
# excerpt from values.yaml
bgpControlPlane:
  # -- Enables the BGP control plane.
  enabled: true
  # -- SecretsNamespace is the namespace which BGP support will retrieve secrets from.
Upgrade the Cilium release and restart relevant components so the changes take effect:
# (example; use your release name/context as appropriate)
helm upgrade cilium cilium/cilium -f values.yaml --namespace kube-system

# Restart the operator and agents to pick up the change
kubectl rollout restart deploy cilium-operator -n kube-system
kubectl rollout restart ds cilium -n kube-system
Label the nodes where BGP should run
In this demo BGP runs only on worker1 and worker2. Add a label that the cluster config will match on (e.g. bgp=true):
kubectl label node worker1 bgp=true
kubectl label node worker2 bgp=true

# Verify labels
kubectl get nodes --show-labels
Cilium BGP CRDs — what to create and why
You will create three Cilium CRD resources. Create a file for each and apply them in the order shown below. Summary of the CRDs:
| Resource | Purpose | Example filename |
| --- | --- | --- |
| CiliumBGPAdvertisement | Define which prefixes to advertise (PodCIDR, Service IPs, route attributes) | bgp-advertisement.yaml |
| CiliumBGPPeerConfig | Configure peer-level BGP parameters (timers, ebgp-multihop, families) | bgp-peer-config.yaml |
| CiliumBGPClusterConfig | Enable BGP instances on a set of selected nodes and define peers | bgp-config.yaml |
  1. bgp-advertisement.yaml — which prefixes and attributes to advertise
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
    - advertisementType: "PodCIDR"
      attributes:
        communities:
          standard: [ "65000:99" ]
        localPreference: 99
    - advertisementType: "Service"
      service:
        addresses:
          - ClusterIP
          - ExternalIP
          - LoadBalancerIP
        selector:
          matchExpressions:
            # this selector will not match any services (it is shown as an example); to select all services omit the selector or use an empty selector
            - { key: somekey, operator: NotIn, values: ['never-used-value'] }
Notes:
  • The first entry advertises each node’s PodCIDR and attaches route attributes (communities, local preference).
  • The second entry advertises service addresses for services matched by the selector. The example selector intentionally matches nothing; to advertise all services omit the selector or use an empty selector. In production prefer a controlled selector to avoid leaking internal service IPs.
  2. bgp-peer-config.yaml — peer-level timers, multihop and address families
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: cilium-peer
spec:
  timers:
    holdTimeSeconds: 9
    keepAliveTimeSeconds: 3
  ebgpMultihop: 5
  families:
    - afi: ipv4
      safi: unicast
Notes:
  • keepAlive (3s) and holdTime (9s) should match the external router’s configuration; BGP negotiates the session hold time down to the lower of the two peers’ advertised values.
  • ebgpMultihop permits a TTL > 1 for eBGP sessions when peers are not directly connected.
  • This demo uses only IPv4 unicast.
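As a rule of thumb (which the values above follow), the hold time is at least three times the keepalive interval, so a single lost keepalive does not tear the session down. A quick sanity check of the manifest's values:

```shell
# Sanity-check the timers from the manifest above: hold time is
# conventionally at least 3x the keepalive interval.
KEEPALIVE=3
HOLD=9
if [ "$HOLD" -ge "$((3 * KEEPALIVE))" ]; then
  echo "timers ok: hold=${HOLD}s keepalive=${KEEPALIVE}s"
else
  echo "warning: hold time below 3x keepalive"
fi
```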
  3. bgp-config.yaml — cluster-level config enabling BGP instances on selected nodes
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: cilium-bgp
spec:
  nodeSelector:
    matchLabels:
      bgp: "true"
  bgpInstances:
    - name: "instance-64000"
      localASN: 64000
      peers:
        - name: "router1"
          peerASN: 65000
          peerAddress: 5.5.5.5
          peerConfigRef:
            name: "cilium-peer"
Notes:
  • nodeSelector chooses nodes labeled with bgp=true.
  • localASN is the ASN used by the node’s BGP instance (here 64000). For eBGP your peer ASN should differ (here 65000).
  • peerConfigRef binds this peer to the peer settings defined earlier.
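Since the session type follows from the ASN pair, a trivial check of the manifest's values (local 64000, peer 65000) confirms this is eBGP rather than iBGP:

```shell
# eBGP sanity check: local and peer ASN must differ (values taken from
# bgp-config.yaml above).
LOCAL_ASN=64000
PEER_ASN=65000
if [ "$LOCAL_ASN" -ne "$PEER_ASN" ]; then
  echo "eBGP session: local ASN ${LOCAL_ASN}, peer ASN ${PEER_ASN}"
else
  echo "ASNs match: this would be an iBGP session"
fi
```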
Apply the resources
Apply the three manifests in the following order:
kubectl apply -f bgp-advertisement.yaml
kubectl apply -f bgp-peer-config.yaml
kubectl apply -f bgp-config.yaml
Validate BGP peering and routes
Use the Cilium CLI and other tools to validate peers, sessions, and routes. List BGP peers (shows session state and route counts):
cilium bgp peers
Example output:
Node        Local AS  Peer AS  Peer Address  Session State  Uptime   Family        Received  Advertised
worker1     64000     65000    5.5.5.5       established    11m5s    ipv4/unicast  10        8
worker2     64000     65000    5.5.5.5       established    11m3s    ipv4/unicast  3         8
  • Session State “established” means the BGP neighborship is up.
  • Received & Advertised columns show route counts from/to the peer.
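This output is easy to turn into a scripted health check. The sketch below embeds the sample table in a heredoc so it is self-contained; against a live cluster you would pipe `cilium bgp peers` into the same awk instead (the column positions assume the output format shown above):

```shell
# Fail (non-zero exit, offending peers printed) if any session is not
# "established". $5 is the Session State column in the data rows.
awk 'NR>1 && $5 != "established" {bad=1; print $1": "$5} END {exit bad}' <<'EOF' && echo "all sessions established"
Node        Local AS  Peer AS  Peer Address  Session State  Uptime   Family        Received  Advertised
worker1     64000     65000    5.5.5.5       established    11m5s    ipv4/unicast  10        8
worker2     64000     65000    5.5.5.5       established    11m3s    ipv4/unicast  3         8
EOF
```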
Show routes advertised by the cluster (IPv4 unicast):
cilium bgp routes advertised ipv4 unicast
Example output (truncated):
Node      VRouter  Peer     Prefix           Nexthop             Age    Attrs
worker1   64000    5.5.5.5  10.0.1.0/24      192.168.211.128     11m24s [{Origin: i} {AsPath: 64000} {Nexthop: 192.168.211.128} {Communities: 65000:99}]
worker2   64000    5.5.5.5  10.0.2.0/24      192.168.44.128      11m24s [{Origin: i} {AsPath: 64000} {Nexthop: 192.168.44.128} {Communities: 65000:99}]
# plus service IPs (per service address types) appear here as /32 prefixes
Router-side verification (example using Bird)
On your external router, confirm it has learned the Pod and service routes. Example Bird command and sample result:
sudo birdc show route table master4
Example relevant lines:
10.0.1.0/24           unicast [worker1 00:18:02.463 from 192.168.211.128] * (100/?)[AS64000i]
    via 192.168.223.2 on ens33
10.0.2.0/24           unicast [worker2 00:18:04.466 from 192.168.44.128] * (100/?)[AS64000i]
    via 192.168.223.2 on ens33
172.19.255.121/32     unicast [worker2 ...]  (and also from worker1)  # external LB IP learned from both nodes
  • When the router learns the same service IP from multiple nodes, it can use ECMP (equal-cost multipath) for load distribution.
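You can spot-check this on the router by counting how many route entries it holds for the shared LB IP. The heredoc lines below are illustrative, mirroring the Bird output format above (the worker timestamps are invented for the example); two or more entries means ECMP candidates exist:

```shell
# Count route entries the router learned for the shared LB IP
# 172.19.255.121/32; two or more sources make ECMP possible.
grep -c '^172\.19\.255\.121/32' <<'EOF'
172.19.255.121/32     unicast [worker1 00:18:02.463 from 192.168.211.128] * (100/?)[AS64000i]
172.19.255.121/32     unicast [worker2 00:18:04.466 from 192.168.44.128] * (100/?)[AS64000i]
EOF
```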
Troubleshooting — inspect Cilium agent logs
If peers aren’t establishing, inspect the cilium-agent logs on the node running the BGP control plane and filter for bgp-control-plane entries:
# find a cilium agent pod for the node you want to inspect
kubectl get pods -n kube-system -o wide | grep -i cilium

# show logs and filter for bgp-control-plane messages
kubectl logs <cilium-agent-pod-name> -n kube-system | grep bgp-control-plane
Sample log snippets indicating BGP initialization and peer events:
time="..." level=info msg="Cilium BGP Control Plane Controller now running..." subsys=bgp-control-plane
time="..." level=info msg="Registering BGP instance" instance=instance-64000 subsys=bgp-control-plane
time="..." level=info msg="Adding peer" instance=instance-64000 peer=router1 reconciler=Neighbor subsys=bgp-control-plane
time="..." level=info msg="Peer Up" Key=5.5.5.5 State=BGP_FSM_OPENCONFIRM Topic=Peer asn=64000 subsys=bgp-control-plane
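When the full log is noisy, you can narrow it to peer lifecycle events. The heredoc embeds the sample lines above to keep the sketch self-contained; on a live cluster you would pipe `kubectl logs` output into the same grep instead:

```shell
# Show only peer lifecycle events (peer added, session up) from the
# sample agent log lines above.
grep -E 'Adding peer|Peer Up' <<'EOF'
time="..." level=info msg="Cilium BGP Control Plane Controller now running..." subsys=bgp-control-plane
time="..." level=info msg="Adding peer" instance=instance-64000 peer=router1 reconciler=Neighbor subsys=bgp-control-plane
time="..." level=info msg="Peer Up" Key=5.5.5.5 State=BGP_FSM_OPENCONFIRM Topic=Peer asn=64000 subsys=bgp-control-plane
EOF
```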
Best practices and tips
  • Choose ASNs, BGP timers and ebgp-multihop values consistent with your physical network and external router configuration.
  • Use selectors in CiliumBGPAdvertisement to control which services are announced; avoid advertising all service IPs in production unless intentionally required.
  • Monitor route advertisements and BGP session health with the Cilium CLI and your router tooling (e.g., Bird, FRR).
Wrap-up
  • You enabled Cilium’s BGP control plane, configured node-level BGP instances, established a peer to an external router, and advertised Pod CIDRs and service IPs.
  • With these announcements your physical network can natively route traffic to nodes and support ECMP for service IPs advertised by multiple nodes.