This lesson compares Cilium’s sidecarless service mesh architecture with the common sidecar-based model used by many service meshes (for example, Istio). It explains how each model directs traffic, the operational trade-offs, and when you might prefer one approach over the other.

Most service meshes implement a sidecar-based model. In Kubernetes, the mesh injects a proxy container (the sidecar) into every application pod. Each application pod (for example, a Python or Go service) pairs with a local proxy that intercepts and routes all inbound and outbound application traffic.
A diagram titled "Service Mesh Implementation — Sidecar" showing two Kubernetes pods (one with a Python service, one with a Go service), each paired with a "Service Mesh Proxy" sidecar, with arrows indicating traffic flowing between the proxies.
Benefits of the sidecar model:
  • Services don’t need to implement mesh features themselves; sidecars provide observability, traffic control, mTLS, retries, circuit-breaking, etc.
  • Language-agnostic: applications written in any language benefit from the same network features without changing application code.
  • Supports immutable or closed-source applications because features are attached via a proxy, not code changes.
Trade-offs and downsides of sidecar injection:
  • Higher resource usage: a proxy instance runs per pod, increasing CPU and memory consumption cluster-wide.
  • Operational complexity: operators must configure, upgrade, and monitor a separate proxy for every pod in the mesh.
  • Slower lifecycle and startup complexity: pods may wait for sidecar readiness, introducing potential race conditions.
  • Extra network hop: each request typically traverses the application → sidecar → network path, adding latency.
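To make the per-pod overhead concrete, here is a simplified sketch of what a pod can look like after sidecar injection. The image names, annotation, and resource figures are illustrative assumptions, not taken from any particular mesh:

```yaml
# Illustrative only: an application pod after sidecar injection.
# One proxy container is added alongside every application container.
apiVersion: v1
kind: Pod
metadata:
  name: payments
  annotations:
    sidecar.example.io/injected: "true"   # hypothetical injection marker
spec:
  containers:
    - name: app
      image: payments:1.0                 # the actual application
      ports:
        - containerPort: 8080
    - name: mesh-proxy                    # injected sidecar proxy
      image: example-mesh/proxy:1.0       # hypothetical proxy image
      resources:
        requests:
          cpu: 100m                       # overhead paid per pod,
          memory: 64Mi                    # multiplied across the cluster
```

Every pod in the mesh carries a copy of the `mesh-proxy` container, which is where the cluster-wide CPU and memory cost comes from.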
Cilium takes a different approach: a sidecarless model. Instead of running a proxy per pod, Cilium offloads much of the network and security logic to eBPF programs executing in the Linux kernel. This reduces per-pod overhead while providing comparable networking and security capabilities. Key distinctions:
  • eBPF in-kernel processing: Cilium programs operate at L3/L4 (network and transport layers) inside the kernel, allowing efficient packet processing without additional proxies.
  • L7 functionality via Envoy: for application-layer (L7) features—HTTP routing, L7-aware telemetry, or protocol-specific filtering—Cilium forwards traffic to Envoy. The crucial difference is that Envoy is deployed as a node-local instance (one Envoy per node), not one per pod. That typically means far fewer Envoy instances across the cluster (e.g., two nodes → two Envoy instances).
Traffic flow differences:
  • L3/L4 traffic: handled directly by eBPF inside the kernel — no extra proxy hop.
  • L7 traffic: intercepted and forwarded to the node-local Envoy proxy for L7 termination or advanced processing — one proxy hop only for L7 paths.
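As a concrete example of the L7 path, a CiliumNetworkPolicy can carry HTTP rules; when such rules are present, Cilium redirects the matching traffic to the node-local Envoy for enforcement. The labels, port, and path below are illustrative:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: frontend-to-backend-get-only
spec:
  endpointSelector:
    matchLabels:
      app: backend            # pods this policy applies to (illustrative label)
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend     # allowed caller (illustrative label)
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:             # presence of L7 rules triggers the Envoy redirect
              - method: "GET"
                path: "/api/.*"
```

Plain L3/L4 rules (everything above `rules:`) are enforced entirely by eBPF in the kernel; only the HTTP portion involves the node-local proxy.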
A side-by-side diagram comparing network traffic management: the left shows L3/L4 traffic flow handled by cilium/eBPF, and the right shows L7 terminations where an Envoy/service-mesh proxy intercepts traffic before it exits via eth0.
Comparison table — Sidecar vs Sidecarless (Cilium)
| Feature | Sidecar model (e.g., Istio) | Cilium (sidecarless) |
| --- | --- | --- |
| Proxy placement | Per-pod sidecar proxies | Node-local Envoy (when L7 is needed); eBPF in kernel for L3/L4 |
| L3/L4 handling | Sidecar or kernel, depending on implementation | eBPF handles L3/L4 in kernel (no proxy hop) |
| L7 handling | Sidecar handles L7 | Node-local Envoy, invoked only for L7 traffic |
| Resource consumption | Higher (one proxy per pod) | Lower (fewer Envoy instances, eBPF efficiency) |
| Configuration surface | Per-pod proxy configs to manage | Centralized node-level configs plus eBPF policies |
| Startup and lifecycle | Sidecars add startup complexity | Pod startup decoupled from proxy lifecycle |
| Compatibility with immutable apps | Works well (no code change) | Works well; integrates transparently via kernel hooks |
| Observability & L7 features | Full L7 via sidecar | Full L7 via Envoy; L3/L4 observability via eBPF |
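The resource-consumption row can be made concrete with a back-of-the-envelope calculation. The pod count and per-proxy memory figure below are illustrative assumptions, not measurements:

```python
# Toy proxy-count comparison: sidecar (per pod) vs sidecarless (per node).
# All numbers are illustrative assumptions.
nodes = 2
pods = 50
proxy_mem_mib = 60  # hypothetical memory footprint of one proxy instance

sidecar_proxies = pods        # sidecar model: one injected proxy per pod
node_local_proxies = nodes    # Cilium: one node-local Envoy per node

print(f"sidecar:     {sidecar_proxies} proxies, {sidecar_proxies * proxy_mem_mib} MiB")
print(f"sidecarless: {node_local_proxies} proxies, {node_local_proxies * proxy_mem_mib} MiB")
```

The gap widens as pod density grows: proxy overhead scales with pod count in the sidecar model but with node count in the sidecarless model.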
When to choose which approach:
  • Use a sidecar model when you need per-pod L7 controls tightly coupled with the application, or when an existing ecosystem relies on per-pod proxies.
  • Choose Cilium (sidecarless) when you want lower per-pod resource overhead, high-performance L3/L4 processing, and a simplified operational model while still supporting L7 functionality through node-local Envoy instances.
  • Cilium can integrate with other meshes if you need hybrid setups or incremental migration.
Links and references:
  • Cilium — eBPF-based networking and security for cloud-native environments
  • eBPF — extended Berkeley Packet Filter: safe kernel-level programs
  • Envoy — high-performance edge and service proxy
  • Istio — example sidecar-based service mesh
Cilium’s approach reduces per-pod overhead by using eBPF for L3/L4 while still providing L7 capabilities via node-local Envoy instances—combining performance with feature completeness.
