This lesson explains what Kubernetes resources Cilium creates during installation and why each one exists. Later lessons will cover installation methods in depth (Cilium CLI vs. Helm) and guidance for choosing the right option for your environment. At a high level, installing Cilium will create:
  • A DaemonSet to run the Cilium agent on every node
  • A DaemonSet to run Envoy (per-node) for L7 functionality
  • A Deployment for the cilium-operator (cluster-scoped controller)
  • ConfigMaps for agent and Envoy configuration
  • ServiceAccounts for each component
  • RBAC ClusterRoles and ClusterRoleBindings
  • CustomResourceDefinitions (CRDs) used by Cilium (for policies and observability)
Below are representative Kubernetes manifests and concise explanations for the primary components so you can recognize what gets deployed and why.

Table: primary Cilium resources and their purpose

| Resource Type | Purpose | Example / Notes |
| --- | --- | --- |
| DaemonSet (cilium) | Runs one Cilium agent per node to manage datapath, identity, and policy enforcement | See DaemonSet example below |
| DaemonSet (cilium-envoy) | Runs Envoy on each node for L7 proxying and telemetry | Envoy provides HTTP/gRPC L7 functionality |
| Deployment (cilium-operator) | Cluster-scoped controller for CRD reconciliation, garbage collection, and node management | Typically deployed with multiple replicas for HA |
| ConfigMap | Holds agent and Envoy configuration consumed at runtime | cilium-config, cilium-envoy-config |
| ServiceAccount | Isolates permissions for each component | One per component (agent, envoy, operator) |
| RBAC (ClusterRole/Binding) | Grants cluster-wide permissions required by Cilium components | Watches/gets/lists nodes, pods, endpoints, etc. |
| CRDs | Enable Cilium-specific APIs (policies, endpoints, telemetry) | CiliumNetworkPolicy, CiliumEndpoint, etc. |
DaemonSet: Cilium agent
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cilium
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: cilium
  template:
    metadata:
      labels:
        k8s-app: cilium
    spec:
      containers:
      - name: cilium-agent
        image: "quay.io/cilium/cilium:latest"  # pin a specific release tag in production (applies to all images in this lesson)
        # additional args, env, volume mounts, and securityContext omitted for brevity
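Some of the omitted fields matter operationally. The following sketch shows pod-spec settings that are typically present in a rendered Cilium agent manifest; treat the exact values as illustrative rather than copied from a specific release:

```yaml
# Hedged sketch of commonly present pod-spec fields (verify against your rendered manifest)
spec:
  template:
    spec:
      serviceAccountName: cilium          # dedicated ServiceAccount (covered later in this lesson)
      hostNetwork: true                   # the agent programs the node's own network stack
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists                  # schedule onto every node, including tainted ones
```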
This DaemonSet ensures a Cilium agent runs on every node. The agent manages datapath programming, identity allocation, policy enforcement, node-local telemetry, and other responsibilities tied to the node’s networking stack.

DaemonSet: Cilium Envoy (L7 proxy)
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cilium-envoy
  namespace: kube-system
  labels:
    k8s-app: cilium-envoy
spec:
  selector:
    matchLabels:
      k8s-app: cilium-envoy
  template:
    metadata:
      labels:
        k8s-app: cilium-envoy
    spec:
      containers:
      - name: cilium-envoy
        image: "quay.io/cilium/cilium-envoy:latest"
        # Envoy configuration, args, mounts, and readiness/liveness probes omitted
The Envoy DaemonSet provides per-node L7 proxying for HTTP/gRPC and richer telemetry. Envoy runs alongside the Cilium agent and handles L7 policies and observability when those features are enabled.

Deployment: Cilium operator
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cilium-operator
  namespace: kube-system
  labels:
    io.cilium/app: operator
spec:
  replicas: 2
  selector:
    matchLabels:
      io.cilium/app: operator
  template:
    metadata:
      labels:
        io.cilium/app: operator
    spec:
      containers:
      - name: cilium-operator
        image: "quay.io/cilium/operator-generic:latest"
        # operator args, env, and probes omitted for brevity; the operator's
        # ServiceAccount must be granted matching RBAC permissions (see below)
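With replicas: 2, you usually also want the replicas scheduled on different nodes so a single node failure cannot take out both. A hedged sketch of the kind of anti-affinity rule commonly used for this (exact fields may differ in your rendered manifest):

```yaml
# Spread operator replicas across nodes (sketch; verify against your rendered manifest)
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                io.cilium/app: operator
            topologyKey: kubernetes.io/hostname
```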
The operator is a cluster-scoped controller that reconciles Cilium CRDs, performs garbage collection, coordinates node state, and manages cluster-wide workflows. Running multiple replicas improves availability.

ConfigMap: cilium-config (agent settings)
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  identity-allocation-mode: crd
  identity-heartbeat-timeout: "30m0s"
  identity-gc-interval: "15m0s"
  cilium-endpoint-gc-interval: "5m0s"
  nodes-gc-interval: "5m0s"
  debug: "false"
  enable-policy: "default"
This ConfigMap holds agent configuration values (identity mode, GC intervals, debugging, policy mode, datapath/tunneling options, etc.). You can customize many more settings depending on your cluster topology and feature needs.

ConfigMap: cilium-envoy-config (Envoy bootstrap)
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-envoy-config
  namespace: kube-system
data:
  # Keep the key name as bootstrap-config.json to avoid breaking changes
  bootstrap-config.json: |
    {
      "admin": {
        "address": {
          "pipe": {
            "path": "/var/run/cilium/envoy/sockets/admin.sock"
          }
        },
        "applicationLogConfig": {
          "logFormat": {
            "textFormat": "[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v"
          }
        }
      },
      "bootstrapExtensions": [
        {
          "name": "envoy.bootstrap.internal_listener",
          "typedConfig": {}
        }
      ]
    }
Envoy requires a bootstrap configuration (JSON/YAML). The key name must remain bootstrap-config.json for compatibility; the snippet above shows a minimal structure for Envoy’s admin interface and bootstrap extensions.

ServiceAccounts
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "cilium"
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "cilium-envoy"
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: "cilium-operator"
  namespace: kube-system
Each component uses a dedicated ServiceAccount so you can grant the minimal RBAC permissions required for its tasks.

RBAC: ClusterRole and Binding (example for the agent)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cilium
  labels:
    app.kubernetes.io/part-of: cilium
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "endpoints", "namespaces", "services"]
  verbs: ["get", "list", "watch"]
  # further rules omitted for brevity: the full role also covers Cilium CRDs,
  # discovery resources, and node status updates

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cilium
  labels:
    app.kubernetes.io/part-of: cilium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cilium
subjects:
- kind: ServiceAccount
  name: "cilium"
  namespace: kube-system
ClusterRole and ClusterRoleBinding grant cluster-wide privileges to the component ServiceAccounts. Similar RBAC objects are created for the operator and other components.
Cilium requires elevated cluster privileges to watch and manipulate Kubernetes objects (nodes, pods, endpoints, CRDs). Review the RBAC rules before applying manifests in production clusters and follow your security policy for least-privilege access.
CustomResourceDefinitions (CRDs)

Cilium installs several CRDs that enable high-level APIs for policy, visibility, and endpoint resources (for example: CiliumNetworkPolicy, CiliumClusterwideNetworkPolicy, CiliumEndpoint). These CRDs allow controllers and the operator to persist and reconcile Cilium-specific state. For more on Kubernetes CRDs, see: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

Putting it together — what to expect after installation
  • Node-local pods: cilium and cilium-envoy run across nodes via DaemonSets.
  • Cluster controller: cilium-operator runs as a Deployment with replicas.
  • Configuration: cilium-config and cilium-envoy-config ConfigMaps are created.
  • Security: ServiceAccounts and RBAC ClusterRoles/Bindings grant required permissions.
  • APIs: Cilium CRDs are registered for policy and resource management.
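As a small illustration of what those CRDs enable, a minimal CiliumNetworkPolicy might look like the following. The names, namespace, and labels here are placeholders for illustration, not values from any real cluster:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-backend   # placeholder name
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: backend                  # placeholder label: pods this policy applies to
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend               # placeholder label: allowed client pods
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
```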
If you inspect a cluster after installing Cilium, look in the kube-system namespace for DaemonSets (cilium, cilium-envoy), the cilium-operator Deployment, ConfigMaps (cilium-config, cilium-envoy-config), component ServiceAccounts, and multiple Cilium-related CRDs.
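That inspection might look like the following commands. These are illustrative and require a live cluster with Cilium installed; exact resource names can vary by version and install options:

```shell
# Node-local and cluster-scoped workloads
kubectl -n kube-system get daemonsets cilium cilium-envoy
kubectl -n kube-system get deployment cilium-operator

# Configuration and identities
kubectl -n kube-system get configmaps cilium-config cilium-envoy-config
kubectl -n kube-system get serviceaccounts cilium cilium-envoy cilium-operator

# RBAC and CRDs
kubectl get clusterrole cilium -o yaml
kubectl get crds | grep cilium.io
```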
Installation methods — CLI vs. Helm

There are two primary ways to install Cilium on Kubernetes:
  1. Cilium CLI
  2. Helm
Note: The Cilium CLI internally uses Helm templates to render manifests. Whether you install via the CLI or Helm directly, Helm templates are involved in producing the manifests applied to the cluster.
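For orientation, the two methods look roughly like this; later lessons cover the flags and version selection in depth, so treat this as a sketch rather than a recommended invocation:

```shell
# Option 1: Cilium CLI (renders Helm templates under the hood)
cilium install

# Option 2: Helm directly
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system
```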