Guide to enabling and configuring the Cilium BGP control plane on Kubernetes to advertise Pod CIDRs and Service IPs to external routers for direct routing and ECMP.
In this lesson you’ll enable and configure BGP on a Cilium-based Kubernetes cluster so the cluster can advertise Pod CIDRs and Service IPs to an external router. This allows your physical network to route directly to nodes and enables ECMP when multiple nodes advertise the same service IP.

Topology overview:
3-node cluster: control-plane, worker1, worker2.
Each node sits on a different L2 network; all are reachable via a central router (the “outside world”).
Router loopback: 5.5.5.5 — worker1 and worker2 will peer with that IP and advertise their Pod CIDRs and service IPs.
Overview of steps
Enable Cilium’s BGP control plane via Helm values.
Restart Cilium operator and agents to pick up the change.
Label nodes to select where BGP instances should run.
Create three Cilium CRDs in order:
CiliumBGPClusterConfig — enables BGP instances on the selected nodes and defines peers.
CiliumBGPAdvertisement — defines what to advertise (PodCIDR, Service types, attributes).
CiliumBGPPeerConfig — sets peer-level timers, multihop and address families.
Apply resources and validate peers, sessions and routes.
Inspect Cilium logs on the agent if troubleshooting is required.
Prerequisites
A running Kubernetes cluster with Cilium installed.
Helm access to upgrade Cilium (if needed).
Network reachability between nodes and the external router (ensure any firewall/NAT allows BGP TCP/179 or multihop TTL as configured).
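A quick sanity check you can run from a worker node before configuring anything, assuming netcat is installed and the router is already listening for BGP on its loopback (the check itself is illustrative, not part of the lesson's required steps):

# confirm TCP/179 to the router loopback is reachable from the node
nc -vz 5.5.5.5 179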
Enable the BGP control plane in Cilium
Edit your Cilium Helm values (values.yaml) and enable the BGP control plane:
# excerpt from values.yaml
bgpControlPlane:
  # -- Enables the BGP control plane.
  enabled: true
  # -- SecretsNamespace is the namespace which BGP support will retrieve secrets from.
Upgrade the Cilium release and restart relevant components so the changes take effect:
# (example; use your release name/context as appropriate)
helm upgrade cilium cilium/cilium -f values.yaml --namespace kube-system

# Restart the operator and agents to pick up the change
kubectl rollout restart deploy cilium-operator -n kube-system
kubectl rollout restart ds cilium -n kube-system
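After the rollout you can confirm the feature flag landed in the agent configuration; a quick check, assuming the standard cilium-config ConfigMap name and the enable-bgp-control-plane key that the Helm chart sets:

kubectl -n kube-system get configmap cilium-config -o yaml | grep -i bgp
# expect to see: enable-bgp-control-plane: "true"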
Label the nodes where BGP should run
In this demo BGP runs only on worker1 and worker2. Add a label that the cluster config will match on (e.g. bgp=true):
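For example, using the node names from the topology above:

kubectl label node worker1 bgp=true
kubectl label node worker2 bgp=true
# verify the label is present
kubectl get nodes -L bgp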
Cilium BGP CRDs — what to create and why
You will create three Cilium CRD resources. Create a file for each and apply them in the order shown below.

Summary table of the CRDs:
Resource                 Purpose                                                                       Example filename
CiliumBGPClusterConfig   Enable BGP instances on a set of selected nodes and define peers              bgp-config.yaml
CiliumBGPAdvertisement   Define what prefixes to advertise (PodCIDR, Service IPs, route attributes)    bgp-advertisement.yaml
CiliumBGPPeerConfig      Set peer-level timers, multihop and address families                          bgp-peer-config.yaml
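bgp-config.yaml — enable BGP instances on the labeled nodes and define the peer

A minimal sketch of the CiliumBGPClusterConfig, assuming the values used elsewhere in this lesson (local AS 64000, router loopback 5.5.5.5 in AS 65000, nodes labeled bgp=true); the resource and instance names are illustrative, and peerConfigRef must match the name of the CiliumBGPPeerConfig created below:

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPClusterConfig
metadata:
  name: bgp-cluster-config        # illustrative name
spec:
  nodeSelector:
    matchLabels:
      bgp: "true"                 # only the nodes labeled earlier run a BGP instance
  bgpInstances:
  - name: instance-64000          # illustrative name
    localASN: 64000
    peers:
    - name: router
      peerASN: 65000
      peerAddress: 5.5.5.5        # router loopback from the topology
      peerConfigRef:
        name: bgp-peer-config     # must match the CiliumBGPPeerConfig below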
bgp-advertisement.yaml — which prefixes and attributes to advertise
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPAdvertisement
metadata:
  name: bgp-advertisements
  labels:
    advertise: bgp
spec:
  advertisements:
  - advertisementType: "PodCIDR"
    attributes:
      communities:
        standard: [ "65000:99" ]
      localPreference: 99
  - advertisementType: "Service"
    service:
      addresses:
      - ClusterIP
      - ExternalIP
      - LoadBalancerIP
    selector:
      matchExpressions:
      # NotIn with a value no service carries matches every service
      # (objects without the key also satisfy NotIn); narrow this in production
      - { key: somekey, operator: NotIn, values: ['never-used-value'] }
Notes:
The first entry advertises each node’s PodCIDR and attaches route attributes (communities, local preference).
The second entry advertises service addresses for services matched by the selector. The NotIn expression uses a value no service carries, so it effectively selects all services (a label selector’s NotIn also matches objects that lack the key). In production, prefer a narrower selector so you don’t announce internal service IPs unintentionally.
bgp-peer-config.yaml — peer-level timers, multihop and address families
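A minimal sketch of a CiliumBGPPeerConfig consistent with this lesson; the timer and multihop values are illustrative assumptions to tune for your network, and the advertisements selector matches the advertise: bgp label set on the CiliumBGPAdvertisement above:

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeerConfig
metadata:
  name: bgp-peer-config           # referenced by peerConfigRef in the cluster config
spec:
  timers:
    holdTimeSeconds: 9            # example values; tune to your network
    keepAliveTimeSeconds: 3
  ebgpMultihop: 4                 # >1 because we peer with the router's loopback, not a directly connected address
  families:
  - afi: ipv4
    safi: unicast
    advertisements:
      matchLabels:
        advertise: bgp            # selects the CiliumBGPAdvertisement defined above

With the three files in place, apply them (the order is not strict, since the resources find each other via labels and names):

kubectl apply -f bgp-config.yaml
kubectl apply -f bgp-advertisement.yaml
kubectl apply -f bgp-peer-config.yaml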
Validate BGP peering and routes
Use the Cilium CLI and other tools to validate peers, sessions, and routes.

List BGP peers (shows session state and route counts):
cilium bgp peers
Example output:
Node      Local AS   Peer AS   Peer Address   Session State   Uptime   Family         Received   Advertised
worker1   64000      65000     5.5.5.5        established     11m5s    ipv4/unicast   10         8
worker2   64000      65000     5.5.5.5        established     11m3s    ipv4/unicast   3          8
Session State “established” means the BGP neighborship is up.
Received & Advertised columns show route counts from/to the peer.
Show routes advertised by the cluster (IPv4 unicast):
cilium bgp routes advertised ipv4 unicast
Example output (truncated):
Node      VRouter   Peer      Prefix        Nexthop           Age      Attrs
worker1   64000     5.5.5.5   10.0.1.0/24   192.168.211.128   11m24s   [{Origin: i} {AsPath: 64000} {Nexthop: 192.168.211.128} {Communities: 65000:99}]
worker2   64000     5.5.5.5   10.0.2.0/24   192.168.44.128    11m24s   [{Origin: i} {AsPath: 64000} {Nexthop: 192.168.44.128} {Communities: 65000:99}]
# service IPs (for the configured service address types) also appear here as /32 prefixes
Router-side verification (example using Bird)
On your external router, confirm it has learned the Pod and service routes. Example Bird command and sample result:
sudo birdc show route table master4
Example relevant lines:
10.0.1.0/24          unicast [worker1 00:18:02.463 from 192.168.211.128] * (100/?) [AS64000i]
        via 192.168.223.2 on ens33
10.0.2.0/24          unicast [worker2 00:18:04.466 from 192.168.44.128] * (100/?) [AS64000i]
        via 192.168.223.2 on ens33
172.19.255.121/32    unicast [worker2 ...] (and also from worker1)   # external LB IP learned from both nodes
When the router learns the same service IP from multiple nodes, it can use ECMP (equal-cost multipath) for load distribution.
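Whether the router actually installs multiple next hops depends on its configuration; with Bird, multipath toward the kernel routing table is opt-in. A bird.conf excerpt showing the relevant knob (a sketch assuming Bird 2 syntax, not this lab's full router configuration):

# kernel protocol excerpt: install equal-cost BGP paths as a single multipath (ECMP) route
protocol kernel {
  ipv4 {
    export all;   # push learned routes into the kernel routing table
  };
  merge paths on; # combine equal-cost routes from multiple peers
}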
Troubleshooting — inspect Cilium agent logs
If peers aren’t establishing, inspect the cilium-agent logs on the node running the BGP control plane and filter for bgp-control-plane entries:
# find a cilium agent pod for the node you want to inspect
kubectl get pods -n kube-system -o wide | grep -i cilium

# show logs and filter for bgp-control-plane messages
kubectl logs <cilium-agent-pod-name> -n kube-system | grep bgp-control-plane
In the output, look for bgp-control-plane entries covering BGP initialization and peer events, such as session state transitions.
Recommendations
Choose ASNs, BGP timers, and eBGP multihop values consistent with your physical network and external router configuration.
Use selectors in CiliumBGPAdvertisement to control which services are announced; avoid advertising all service IPs in production unless intentionally required.
Monitor route advertisements and BGP session health with the Cilium CLI and your router tooling (e.g., Bird, FRR).
Wrap-up
You enabled Cilium’s BGP control plane, configured node-level BGP instances, established a peer to an external router, and advertised Pod CIDRs and service IPs.
With these announcements your physical network can natively route traffic to nodes and support ECMP for service IPs advertised by multiple nodes.