This article explains how Cilium implements Ingress in Kubernetes, comparing its approach with traditional controllers and showing how to enable and configure Cilium’s Ingress features via Helm. Cilium combines eBPF-based kernel integration for high-performance L3/L4 routing with Envoy for selective L7 and policy enforcement. It also supports the Kubernetes Gateway API (overview only; Gateway API specifics are out of scope here).
[Slide: "Cilium Ingress" — a multicolored hexagon cluster on the left funneling into two Kubernetes icons on the right labeled "gateway api" and "Ingress", illustrating Cilium's integration with both routing APIs.]
Key advantages of Cilium Ingress
  • Programs the Linux kernel datapath with eBPF for most routing and load-balancing work, avoiding the need for a separate Ingress-controller pod for L3/L4.
  • Uses Envoy only where required: L7 processing, TLS termination (if configured), and policy enforcement.
  • Supports both traditional Kubernetes Ingress and the Gateway API for modern traffic routing.
When you enable Cilium’s ingress controller (ingressController.enabled = true), Cilium automatically configures Envoy where needed. eBPF handles most routing and load-balancing tasks for better performance and lower resource usage.

Enabling Cilium Ingress (Helm values)

To enable Cilium Ingress in a Helm-managed installation, enable the NodePort service implementation and the ingress controller in your values.yaml. You can also make Cilium the default Ingress controller and choose a load balancer mode (dedicated or shared).

values.yaml (example):
nodePort:
  # Enable the Cilium NodePort service implementation.
  enabled: true

ingressController:
  # Enable cilium ingress controller.
  # This will automatically set enable-envoy-config as well.
  enabled: true

  # Set cilium ingress controller to be the default ingress controller.
  # This will let cilium route Ingress entries without an ingressClassName set.
  default: true

  # Default ingress load balancer mode: "dedicated" or "shared"
  # Can also be set via annotation: ingress.cilium.io/loadbalancer-mode: dedicated
  loadbalancerMode: dedicated
Apply and reload Cilium components
helm upgrade cilium cilium/cilium -n kube-system -f values.yaml

# Restart Cilium to pick up config changes
kubectl -n kube-system rollout restart daemonset/cilium
kubectl -n kube-system rollout restart deployment/cilium-operator

Load balancer modes: dedicated vs shared

Cilium supports two modes for provisioning external cloud load balancers for Ingress resources. Choose the mode that fits your isolation, cost, and routing needs.
  • dedicated: Each Ingress resource provisions its own external load balancer (e.g., one AWS ELB/ALB per Ingress). Use when you need strict isolation per Ingress or separate public endpoints.
  • shared: Multiple Ingress resources share a single external load balancer; Cilium programs forwarding rules so the one LB routes traffic to many Ingresses. Use to minimize cloud LB count and cost, or to centralize routing and certificates.
Using dedicated mode increases the number of cloud load balancers (and cost). Use shared mode to consolidate LB resources, and prefer dedicated only when endpoint isolation or separate LB features are required.
You can also control the LB mode per-Ingress with an annotation:
  • ingress.cilium.io/loadbalancer-mode: shared
  • ingress.cilium.io/loadbalancer-mode: dedicated
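
For example, an Ingress can opt into shared mode regardless of the controller-wide default by carrying the annotation in its metadata (a sketch; the name app2-ingress, host, and backend Service are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app2-ingress   # illustrative name
  annotations:
    # Override the controller-wide loadbalancerMode for this Ingress only.
    ingress.cilium.io/loadbalancer-mode: shared
spec:
  ingressClassName: cilium
  rules:
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc   # illustrative backend Service
                port:
                  number: 80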

Example: Minimal Ingress manifest

If you set ingressController.default: true, Cilium will claim Ingresses without an ingressClassName. Otherwise, you can explicitly set the Cilium ingress class.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
spec:
  ingressClassName: cilium
  rules:
    - host: app1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-svc
                port:
                  number: 80
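
With ingressController.default: true, the same manifest works without an ingressClassName, since Cilium claims unclassed Ingresses (a sketch; names are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # No ingressClassName needed below when Cilium is the default controller.
  name: ingress2   # illustrative name
spec:
  rules:
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app2-svc   # illustrative backend Service
                port:
                  number: 80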

Behavior examples

Dedicated mode
  • Each Ingress provisions an external LB, so kubectl get ingress will show unique addresses per Ingress:
$ kubectl get ingress
NAME        CLASS   HOSTS              ADDRESS                                                                                   PORTS   AGE
ingress1    cilium  app1.example.com   a3c7be5e725ce4a62ab7b5deefb2bf20-415562864.us-east-1.elb.amazonaws.com               80      22s
ingress2    cilium  app2.example.com   a732aea6d7c9f482aa4d3029753fb5f3-1501246429.us-east-1.elb.amazonaws.com               80      22s
test        cilium  *                  a96165cf1bb584356859b46890b117ac-685623129.us-east-1.elb.amazonaws.com               80      22s
Shared mode
  • All Ingress resources present the same external address because a single shared LB is used and Cilium programs routing rules to direct traffic to the correct Ingress backend.

Verification and troubleshooting tips

  • Check Cilium pods and operator in the kube-system namespace to confirm the ingress controller is enabled:
    • kubectl -n kube-system get pods | grep cilium
    • kubectl -n kube-system get deployment cilium-operator
  • Validate default controller behavior:
    • Create an Ingress without ingressClassName if ingressController.default: true and confirm Cilium has claimed it.
  • Change mode per-Ingress via annotation: ingress.cilium.io/loadbalancer-mode: shared or dedicated.
  • Inspect the Envoy configuration (when used) in the Cilium-managed Envoy instances for L7 policy issues.
  • Consult logs for cilium-agent and cilium-operator when troubleshooting provisioning or LB interactions.

Conclusion

This concludes the Cilium Ingress overview. Gateway API integration and advanced Envoy configuration are covered in separate Cilium documentation and guides.
