This guide walks through installing Cilium (a Kubernetes CNI powered by eBPF) on a cluster using the Cilium CLI. It covers creating a local example cluster with kind (optional), installing the cilium CLI, installing Cilium into the cluster, verifying the installation, and previewing the manifests that the CLI applies.

Target audience: Kubernetes users who want to install or replace a CNI with Cilium, and anyone validating Cilium deployments in local or cloud clusters.
Prerequisites:

- No existing CNI: the cluster must not already have a CNI installed (kubelet nodes will otherwise report NotReady).
- kubectl configured: kubectl must point to the target cluster context.
- (Optional) kind: useful for local testing. The examples below show how to create a kind cluster with the default CNI disabled.
You can use any Kubernetes distribution (kind, Minikube, EKS, GKE, etc.). The one hard requirement for this walkthrough is that the cluster has no CNI installed yet; kubelet nodes report NotReady until a CNI is present.
If you want to follow along locally, create a kind cluster with the default CNI disabled so that Cilium can be installed manually. Save this example as kind.config:
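A kind.config along the following lines works; disableDefaultCNI turns off kind's built-in kindnet CNI, and the node roles and cluster name here are illustrative (tlv-cluster matches the cluster name that appears in the install log later):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
# Cluster name; kind prefixes it with "kind-" in the kubectl context
name: tlv-cluster
networking:
  # Do not install kindnet; Cilium will be installed as the CNI instead
  disableDefaultCNI: true
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Then create the cluster with kind create cluster --config kind.config. Until Cilium is installed, the nodes will report NotReady, which is expected.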
On a Linux host you can download and install the cilium CLI with the following script. It detects CPU architecture, validates the tarball checksum, and installs the binary to /usr/local/bin:
Running cilium version confirms the CLI works:

cilium-cli: v0.18.2 compiled with go1.24.0 on linux/amd64
cilium image (default): v1.17.0
cilium image (stable): v1.17.2
cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found
Note: the “cilium image (running)” field is unknown until Cilium is installed and running in the cluster.
By default, the CLI installs Cilium into the cluster referenced by the current kubectl context. Install with:
cilium install
If you need custom settings, prepare a Helm values file (the CLI passes these through as Helm chart values) and supply it with --values. Example values.yaml (enable IPv6):
# Enable IPv6 support
ipv6:
  enabled: true
Install with the custom values:
cilium install --values values.yaml
The CLI auto-detects the cluster type and existing components (for example, whether kube-proxy is installed). If kube-proxy is already present, the default behavior is to run Cilium alongside it. To replace kube-proxy with Cilium's eBPF-based implementation instead, enable the corresponding Helm value or CLI option for kube-proxy replacement. A sample installer log (abridged):
Auto-detected Kubernetes kind: kind
Using Cilium version 1.17.0
Auto-detected cluster name: kind-tlv-cluster
Auto-detected kube-proxy has been installed
Once the installer finishes, run cilium status --wait to confirm that the Cilium DaemonSet and operator are running and ready across the nodes.
Replacing kube-proxy with Cilium's eBPF-based proxy is a configuration choice: by default, when the installer detects kube-proxy, it runs Cilium alongside it.
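As a sketch, in recent Cilium releases the relevant Helm value is kubeProxyReplacement (older releases expected a string mode such as "strict"); confirm the exact spelling against the docs for your version. A values.yaml fragment enabling it:

```yaml
# Replace kube-proxy's service load-balancing with Cilium's eBPF datapath
# (older Cilium releases expected a string mode such as "strict" here)
kubeProxyReplacement: true
```

Pass it at install time with cilium install --values values.yaml, or the equivalent --set kubeProxyReplacement=true.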