This document explains the routing modes supported by Cilium and how they affect pod-to-pod networking across different physical network topologies. You’ll learn the differences between encapsulation (tunnel) mode and native routing mode, the trade-offs for each, configuration examples, and operational requirements.

Topology overview

Consider a simple two-node cluster where the nodes are on different physical subnets (separated by one or more routers). In this example:
  • Node 1 physical interface (Ethernet0): 172.16.2.0/24, gateway R1 = 172.16.2.1
  • Node 2 physical interface (Ethernet0): 172.16.3.0/24, gateway R2 = 172.16.3.1
  • Pod CIDRs assigned by CNI:
    • Node 1 pod CIDR: 10.244.1.0/24
    • Node 2 pod CIDR: 10.244.2.0/24
  • Pod addresses:
    • Pod 1 (on Node 1): 10.244.1.1
    • Pod 2 (on Node 2): 10.244.2.1
If Pod 1 (10.244.1.1) sends a packet to Pod 2 (10.244.2.1) and the physical router R1 does not know how to reach 10.244.2.0/24, the packet will be dropped. This is a common real-world issue: underlay networks typically do not know cluster-internal pod CIDRs by default.
Figure: Two-node topology with pods on 10.244.1.0/24 and 10.244.2.0/24 connected via routers R1 and R2; the route for 10.244.2.0/24 is missing from R1's routing table, so the packet from 10.244.1.1 to 10.244.2.1 is dropped.
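The drop comes down to an ordinary longest-prefix-match lookup failing. Here is a minimal sketch of R1's lookup using Python's ipaddress module and the addresses from the example (illustrative only; real routers do this in hardware):

```python
import ipaddress

# R1's routing table from the example: it knows the physical
# subnets but has no entry for Node 2's pod CIDR (10.244.2.0/24).
r1_routes = {
    ipaddress.ip_network("172.16.2.0/24"): "directly connected",
    ipaddress.ip_network("172.16.3.0/24"): "via R2",
}

def lookup(dest, routes):
    """Longest-prefix match; None means the packet is dropped."""
    dest = ipaddress.ip_address(dest)
    matches = [net for net in routes if dest in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("172.16.3.2", r1_routes))  # node-to-node traffic: routable
print(lookup("10.244.2.1", r1_routes))  # pod-to-pod traffic: no match, dropped
```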
CNI solutions (including Cilium) address this mismatch between the cluster overlay and the physical underlay using one of two general approaches:
  • Encapsulation (tunnel) mode
  • Native routing mode
We’ll cover encapsulation first, then native routing.

Encapsulation mode (tunnel)

In encapsulation mode, the original pod-originated packet (the inner packet) is wrapped inside an outer packet whose source and destination are the physical node IPs. The underlay only needs to route the outer IPs (the node IPs), not the pod CIDRs.
  • Inner packet: src=10.244.1.1, dst=10.244.2.1
  • Outer packet: src=node1_IP (e.g., 172.16.2.2), dst=node2_IP (e.g., 172.16.3.2)
When the outer packet reaches Node 2, the Cilium agent decapsulates it and forwards the inner packet to the destination pod.
Figure: Encapsulation mode between two nodes; pod-to-pod traffic (10.244.1.1 → 10.244.2.1) is carried inside an outer packet between the node IPs (172.16.2.2 → 172.16.3.2) across routers R1 and R2.
Reply traffic follows the same process in reverse: Pod 2 → Cilium on Node 2 encapsulates (node2_IP → node1_IP) → underlay routes the outer packet → Node 1 decapsulates → Pod 1 receives the inner packet.
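The inner/outer layering can be made concrete by building the headers by hand. This is an illustrative Python sketch of a VXLAN-in-UDP-in-IPv4 packet, not Cilium's actual datapath; the VNI value and UDP source port are arbitrary, and checksums are left at zero for brevity:

```python
import struct

def ipv4_header(src, dst, payload_len, proto):
    """Minimal IPv4 header (20 bytes, checksum left at 0 for brevity)."""
    version_ihl = (4 << 4) | 5
    total_len = 20 + payload_len
    src_b = bytes(int(o) for o in src.split("."))
    dst_b = bytes(int(o) for o in dst.split("."))
    return struct.pack("!BBHHHBBH4s4s", version_ihl, 0, total_len,
                       0, 0, 64, proto, 0, src_b, dst_b)

# Inner packet: pod-to-pod addressing, as produced by Pod 1
# (header only; the 100-byte payload is implied, not appended).
inner = ipv4_header("10.244.1.1", "10.244.2.1", payload_len=100, proto=17)

# VXLAN header: 8 bytes -- flags byte (VNI-valid bit set),
# reserved bytes, then the 24-bit VNI shifted into the top bits.
vni = 42  # hypothetical VXLAN Network Identifier
vxlan = struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)

# Outer UDP header: destination port 4789 (IANA-assigned VXLAN port).
udp_payload = vxlan + inner
udp = struct.pack("!HHHH", 49152, 4789, 8 + len(udp_payload), 0)

# Outer IPv4 header: node-to-node addressing -- all the underlay sees.
outer = ipv4_header("172.16.2.2", "172.16.3.2",
                    payload_len=len(udp) + len(udp_payload), proto=17)

frame = outer + udp + udp_payload
print(len(frame) - len(inner))  # 36 bytes of outer IP + UDP + VXLAN headers
```

Note the 36 bytes counted here cover only the outer IP/UDP/VXLAN headers; the commonly quoted ~50 bytes of overhead also includes the 14-byte inner Ethernet header that VXLAN carries.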

Encapsulation details

Encapsulation adds a tunnel header plus outer IP and UDP headers. Common tunneling protocols used by CNIs include:
  • VXLAN (IANA-assigned UDP port 4789; the Linux kernel default of UDP 8472 is used by some implementations)
  • Geneve (typically UDP port 6081)
Figure: A VXLAN header encapsulating pod-to-pod traffic (10.244.1.1 → 10.244.2.1) between node IPs 172.16.2.2 and 172.16.3.2; the underlay only needs to know the node IP addresses.

Requirements for encapsulation mode

  • Node-to-node IP connectivity: every node IP must be reachable from every other node.
  • Firewall and security groups must allow the tunnel/encapsulation protocol (UDP ports used by VXLAN/Geneve, etc.).
  • Ensure MTU is sized to account for tunnel overhead (or enable jumbo frames).
Figure: Three nodes (pods 10.244.1.1, 10.244.2.1, 10.244.3.1) connected via routers and firewalls; node-to-node connectivity is required.

Encapsulation protocol options

Protocol | Typical UDP port | Notes
VXLAN | 4789 (IANA-assigned; 8472 is the Linux kernel default seen in many deployments) | Widely supported; CNI implementations often default to VXLAN
Geneve | 6081 | Flexible, extensible header options; used by some CNIs for advanced features

Overhead and MTU considerations

Encapsulation typically adds about 50 bytes of overhead; for VXLAN over IPv4 that is a 20-byte outer IP header, an 8-byte UDP header, an 8-byte VXLAN header, and the 14-byte inner Ethernet header carried inside the tunnel. That reduces the effective MTU for the inner packet and can cause fragmentation. Mitigation strategies:
  • Reduce the inner MTU on host interfaces (e.g., subtract tunnel overhead).
  • Use jumbo frames on the underlay.
  • Monitor for fragmentation and adjust accordingly.
Figure: Packet layout in encapsulation mode: outer IP header (172.16.2.2 → 172.16.3.2), VXLAN header, inner IP header (10.244.1.1 → 10.244.2.1) and payload; the extra ~50 bytes of encapsulation reduce the effective MTU by 50.
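The arithmetic behind the MTU reduction, as a small sketch (VXLAN over IPv4, no optional headers; the outer Ethernet header is not counted because MTU is measured above the link layer):

```python
# Per-packet overhead for VXLAN over IPv4:
OUTER_IP = 20     # outer IPv4 header
OUTER_UDP = 8     # outer UDP header
VXLAN_HDR = 8     # VXLAN header
INNER_ETH = 14    # inner Ethernet header carried inside the tunnel

underlay_mtu = 1500  # typical physical-network MTU

# The encapsulated packet must fit within the underlay MTU, so the
# pod-facing (inner) MTU shrinks by the total overhead: 50 bytes.
overhead = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH
pod_mtu = underlay_mtu - overhead
print(pod_mtu)  # 1450
```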

Pros and cons of encapsulation

Pros:
  • Works with simple underlays: only node IP reachability required
  • No special routing configuration required on the physical network
  • Portable across on-prem and cloud environments
  • Lower dependency on provider features
Cons:
  • Adds overhead and slightly higher latency
  • Reduced effective MTU and possible fragmentation
  • Packet visibility and debugging are harder (the tunnel hides the inner packet)
  • More CPU and network processing for encapsulation/decapsulation
Encapsulation is often simpler to deploy because the physical network only needs to reach node IPs. Ensure firewall rules and MTU settings are adjusted for the chosen tunnel protocol (VXLAN/Geneve).

Native routing mode

In native routing mode, pods’ IPs are routed across the physical network without encapsulation. The inner packet travels across the underlay unchanged, so routers must know how to reach each node’s pod CIDRs. If Pod 1 sends to Pod 2, the packet leaves Node 1 with src=10.244.1.1 and dst=10.244.2.1. The physical router R1 must have a route pointing to Node 2’s pod CIDR (10.244.2.0/24) for the packet to be delivered.
Figure: Native routing mode: two nodes (pod subnets 10.244.1.0/24 and 10.244.2.0/24) peer via BGP with edge routers R1 and R2, whose route tables list next hops for the pod networks.

How underlay routing can learn pod CIDRs

To make native routing practical at scale, the underlay must learn pod CIDRs. Common approaches:
  • Static routes — simple but not scalable or fault-tolerant.
  • Dynamic routing (BGP) — Cilium can advertise pod CIDRs into the physical fabric using BGP so routers learn where to forward pod traffic.
  • SDN/orchestration solutions that distribute routes to the underlay.
When Cilium advertises pod routes (for example via BGP), physical routers (R1/R2) learn the pod CIDRs and forward pod-to-pod traffic natively without encapsulation.
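The effect of route advertisement can be sketched with the same longest-prefix-match idea as before: once R1 learns the pod CIDRs (for example via BGP), the lookup that previously failed now succeeds. Illustrative Python only; the next-hop strings are hypothetical:

```python
import ipaddress

def lookup(dest, routes):
    """Longest-prefix match; None means the packet is dropped."""
    dest = ipaddress.ip_address(dest)
    matches = [net for net in routes if dest in net]
    return routes[max(matches, key=lambda n: n.prefixlen)] if matches else None

# Before BGP: R1 only knows the physical subnets.
r1_routes = {
    ipaddress.ip_network("172.16.2.0/24"): "directly connected",
    ipaddress.ip_network("172.16.3.0/24"): "via R2",
}
print(lookup("10.244.2.1", r1_routes))  # None: pod traffic is dropped

# After Cilium advertises pod CIDRs via BGP, R1 learns where each
# node's pod range lives and forwards pod traffic natively.
r1_routes[ipaddress.ip_network("10.244.1.0/24")] = "via 172.16.2.2 (Node 1)"
r1_routes[ipaddress.ip_network("10.244.2.0/24")] = "via R2 toward 172.16.3.2 (Node 2)"
print(lookup("10.244.2.1", r1_routes))  # now routable
```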

Configuring native routing in Cilium

Cilium defaults to encapsulation. To enable native routing, set the routing mode to “native” and configure which CIDR ranges should be treated as natively routable. This allows you to mix modes: some pod CIDRs can be native while others remain encapsulated. Example Cilium configuration snippet:
# cilium-config snippet (example)
routingMode: "native"

# IPv4 CIDR(s) that will use native routing (e.g., your cluster Pod CIDR)
ipv4NativeRoutingCIDR: "10.244.0.0/16"
ipv6NativeRoutingCIDR: ""
Notes about these settings:
  • ipv4NativeRoutingCIDR defines which IP range(s) Cilium advertises or treats as natively routed.
  • Traffic within that CIDR will be forwarded natively by the underlay; traffic outside falls back to encapsulation.
  • You can restrict the range (for example, a /17) to mix native and encapsulated traffic across different pod groups.
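The CIDR-based decision described in these notes can be sketched as a simple membership test (illustrative only; this mirrors the behavior described above, not Cilium's internal code):

```python
import ipaddress

# Hypothetical value mirroring the ipv4NativeRoutingCIDR setting above.
native_cidr = ipaddress.ip_network("10.244.0.0/16")

def forwarding_mode(dst):
    """Native if the destination falls inside the native-routing CIDR,
    otherwise the packet is encapsulated (tunnel mode)."""
    return "native" if ipaddress.ip_address(dst) in native_cidr else "encapsulated"

print(forwarding_mode("10.244.2.1"))   # inside the CIDR: native
print(forwarding_mode("192.168.5.9"))  # outside the CIDR: encapsulated
```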

Pros and cons of native routing

Pros:
  • No encapsulation overhead, so better throughput and lower latency
  • Full visibility into packets (no tunnel headers)
  • No MTU reduction related to encapsulation
Cons:
  • Requires underlay routing for pod CIDRs (static routes or dynamic routing)
  • Adds operational complexity (BGP peering, route management)
  • Less portable where providers restrict route advertisement or BGP
Native routing requires careful underlay configuration. If your physical network lacks routes for the pod CIDRs (or your cloud provider blocks route advertisement), pod-to-pod traffic can be dropped. Plan route distribution (e.g., BGP) before switching to native routing.

Quick comparison

Feature | Encapsulation (tunnel) | Native routing
Underlay requirement | Node-to-node IP reachability only | Underlay must know pod CIDRs
Overhead | Yes: tunnel headers (~50 bytes) | None
MTU impact | May reduce effective MTU | No encapsulation-related MTU reduction
Operational complexity | Lower | Higher (routing configuration, BGP)
Portability | High | Depends on provider/network capabilities
Visibility & debugging | Harder (inner packets hidden) | Easier (packets visible)

Choosing a mode

  • Use encapsulation if you need portability and minimal underlay changes (default for many environments).
  • Use native routing for maximum performance and visibility if you can reliably distribute pod routes into the underlay (for example with BGP).
  • Consider a hybrid approach: advertise only selected CIDRs natively and encapsulate others.

Summary
  • Encapsulation (tunneling) is simpler to deploy and portable, but adds overhead and MTU considerations.
  • Native routing removes encapsulation overhead and improves performance but requires route distribution into the physical network (BGP or static routes).
  • Cilium supports both modes and allows mixing them by CIDR. Choose based on your underlay capabilities, performance goals, and operational constraints.
