In this guide you’ll learn how to configure a Cilium Cluster Mesh across two Kubernetes clusters (kind-cluster1 and kind-cluster2). The walkthrough covers:
  • Preparing and customizing Cilium Helm values for Cluster Mesh
  • Installing Cilium on both clusters
  • Enabling the clustermesh-apiserver and connecting clusters
  • Deploying a test application and validating global service behavior (shared and affinity modes)
  • Cleaning up
This tutorial assumes familiarity with kubectl contexts and Helm. Key prerequisites: before enabling Cluster Mesh, make sure each cluster has a unique cluster ID and non-overlapping pod CIDR ranges.
Table of Cilium annotations used in this guide:
Annotation                  | Purpose                                                  | Example values
service.cilium.io/global    | Mark a Service as a global (mesh-advertised) service     | "true"
service.cilium.io/shared    | Control whether a cluster advertises its local backends  | "true", "false"
service.cilium.io/affinity  | Prefer local or remote backends for a global service     | "local", "remote", "none"
1) Verify clusters and node readiness

Confirm your kubectl contexts and verify node status on both clusters. Nodes will show NotReady until a CNI (Cilium) is installed.
# List kubectl contexts
kubectx
# Example output:
# arn:aws:eks:us-east-1:195275640053:cluster/telepresence
# kind-cluster1
# Switch to a cluster and check nodes
kubectx kind-cluster1
kubectl get nodes
# Example output (before CNI is installed):
# NAME                      STATUS     ROLES          AGE   VERSION
# cluster1-control-plane    NotReady   control-plane  34h   v1.32.2
kubectx kind-cluster2
kubectl get nodes
# Example output (before CNI is installed):
# NAME                      STATUS     ROLES          AGE   VERSION
# cluster2-control-plane    NotReady   control-plane  34h   v1.32.2
# cluster2-worker           NotReady   <none>         34h   v1.32.2

2) Prepare Cilium Helm values for Cluster Mesh

Fetch the default Cilium Helm values and edit them to set a unique cluster name/id and non-overlapping pod CIDR pools.
# Requires the cilium Helm repo (added in step 3 below)
helm show values cilium/cilium > values.yaml
Important fields to set in values.yaml (only relevant snippets shown). Each cluster must have a unique cluster.name and cluster.id, and the pod CIDR pools under ipam.operator (clusterPoolIPv4PodCIDRList, and the IPv6 counterpart if used) must not overlap between clusters.
  • Example for cluster1:
cluster:
  name: cluster1
  id: 1

ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList: ["11.0.0.0/8"]
    clusterPoolIPv4MaskSize: 24
    clusterPoolIPv6PodCIDRList: ["fd00::/104"]
    clusterPoolIPv6MaskSize: 120
  • Example for cluster2:
cluster:
  name: cluster2
  id: 2

ipam:
  mode: cluster-pool
  operator:
    clusterPoolIPv4PodCIDRList: ["12.0.0.0/8"]
    clusterPoolIPv4MaskSize: 24
    clusterPoolIPv6PodCIDRList: ["fd01::/104"]
    clusterPoolIPv6MaskSize: 120
Note: Do not reuse any IPv4/IPv6 CIDR ranges across clusters when using Cluster Mesh.
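The non-overlap requirement for the IPv4 pools can be double-checked before installing with a small bash helper (a sketch; the function names here are ours, not part of any Cilium tooling):

```shell
#!/usr/bin/env bash
# cidr_overlap: report whether two IPv4 CIDR blocks overlap.
# Two blocks overlap iff their network bits agree under the shorter prefix.

ip4_to_int() {            # dotted quad -> 32-bit integer
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

cidr_overlap() {          # usage: cidr_overlap 11.0.0.0/8 12.0.0.0/8
  local n1="${1%/*}" l1="${1#*/}" n2="${2%/*}" l2="${2#*/}"
  local len=$(( l1 < l2 ? l1 : l2 ))
  local mask=$(( len == 0 ? 0 : (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  if (( ($(ip4_to_int "$n1") & mask) == ($(ip4_to_int "$n2") & mask) )); then
    echo overlap
  else
    echo disjoint
  fi
}

cidr_overlap 11.0.0.0/8 12.0.0.0/8   # -> disjoint: safe for the two clusters
```

Run it against each IPv4 pair you plan to use; anything reporting overlap must be changed before Cluster Mesh is enabled.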

3) Install Cilium on each cluster

Add the Cilium Helm repo and install (or upgrade) Cilium on each cluster using the modified values.yaml.
# Ensure Helm repo is set
helm repo add cilium https://helm.cilium.io/
helm repo update

# On kind-cluster1 (ensure kubectl context is set)
kubectx kind-cluster1
helm install cilium cilium/cilium -n kube-system -f values.yaml
# or, if already installed:
# helm upgrade --install cilium cilium/cilium -n kube-system -f values.yaml
Repeat after updating values.yaml for kind-cluster2 with the cluster2 name/id/CIDR ranges. After installation, nodes should transition to Ready because Cilium provides the CNI:
kubectx kind-cluster1
kubectl get nodes
# NAME                      STATUS   ROLES           AGE   VERSION
# cluster1-control-plane    Ready    control-plane   34h   v1.32.2
# cluster1-worker           Ready    <none>          34h   v1.32.2
# cluster1-worker2          Ready    <none>          34h   v1.32.2

4) Enable Cluster Mesh API server on each cluster

Use the Cilium CLI to enable the clustermesh-apiserver on each cluster. On some environments (like kind) the CLI cannot auto-detect Service type; specify --service-type=LoadBalancer if you run into auto-detection errors.
# On each cluster:
kubectx kind-cluster1
cilium clustermesh enable --context kind-cluster1 --service-type=LoadBalancer

# Repeat on cluster2:
kubectx kind-cluster2
cilium clustermesh enable --context kind-cluster2 --service-type=LoadBalancer
Verify the clustermesh-apiserver Service in kube-system and note its EXTERNAL-IP (LoadBalancer IP), which will be used for cluster-to-cluster communication:
kubectl get svc -n kube-system
# Example relevant line on cluster1:
# clustermesh-apiserver         LoadBalancer   10.96.52.31      172.19.255.91   2379:31802/TCP
# Example relevant line on cluster2:
# clustermesh-apiserver         LoadBalancer   10.96.68.254     172.19.255.121  2379:31316/TCP
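If you want that EXTERNAL-IP in a script, a jsonpath query is the direct route; when all you have is the table output, a small awk helper works too (the helper name is ours):

```shell
# Pull the LoadBalancer IP directly with kubectl's jsonpath output:
#   kubectl --context kind-cluster1 -n kube-system get svc clustermesh-apiserver \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Or extract the EXTERNAL-IP column (4th) for a named Service from table output:
extract_external_ip() {   # usage: kubectl get svc -n kube-system | extract_external_ip NAME
  awk -v svc="$1" '$1 == svc { print $4 }'
}
```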

5) Connect the clusters into the Cluster Mesh

You only need to run the cilium clustermesh connect command once from any machine that has access to both kubectl contexts. This configures both sides.
# Connect cluster1 to cluster2:
cilium clustermesh connect --context kind-cluster1 --destination-context kind-cluster2
Example diagnostic output (trimmed):
✨ Extracting access information of cluster cluster1...
🔑 Extracting secrets from cluster cluster1...
i Found ClusterMesh service IPs: [172.19.255.91]
✨ Extracting access information of cluster cluster2...
🔑 Extracting secrets from cluster cluster2...
i Found ClusterMesh service IPs: [172.19.255.121]
⚠️ Cilium CA certificates do not match between clusters. Multicluster features will be limited!
i Configuring Cilium in cluster kind-cluster1 to connect to cluster kind-cluster2
i Configuring Cilium in cluster kind-cluster2 to connect to cluster kind-cluster1
✅ Connected cluster kind-cluster1 <=> kind-cluster2!
The CA warning above appears because each cluster generated its own Cilium CA; for full multicluster mTLS, install both clusters with a shared CA. Check Cluster Mesh status:
cilium clustermesh status --context kind-cluster1
# Example success summary:
# ✅ Service "clustermesh-apiserver" of type "LoadBalancer" found
# ✅ Cluster access information is available: - 172.19.255.91:2379
# ✅ Deployment clustermesh-apiserver is ready
# ℹ️ KVStoreMesh is enabled
#
# ✅ All 3 nodes are connected to all clusters [min:1 / avg:1.0 / max:1]
# Cluster Connections:
# - cluster2: 3/3 configured, 3/3 connected
# Global services: [ min:0 / avg:0.0 / max:0 ]
Once status shows connected, proceed to deploy and test global services.

6) Deploy the test application into both clusters

Create a simple HTTP echo deployment and service on both clusters using hashicorp/http-echo. Use the same service name and namespace on both clusters and set the echo text to indicate the cluster identity. Template file: deploy-clusterX.yaml — change the -text value per cluster before applying.
# deploy-clusterX.yaml (change the -text value per cluster before applying)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: hashicorp/http-echo
        args:
        - -listen=:80
        - -text="This is Cluster1"  # change to "This is Cluster2" for the other cluster
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
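Rather than editing the template by hand for the second cluster, a sed substitution can derive it (the function name is ours; this assumes the template carries the Cluster1 text shown above):

```shell
# make_cluster_manifest N: rewrite the echoed -text for cluster N, reading the
# template on stdin and writing the result to stdout.
make_cluster_manifest() {
  sed "s/This is Cluster1/This is Cluster$1/"
}
# Usage:
#   make_cluster_manifest 2 < deploy-clusterX.yaml > deploy-cluster2.yaml
```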
Apply the manifest on each cluster (swap the -text value accordingly):
# On cluster1
kubectx kind-cluster1
kubectl apply -f deploy-cluster1.yaml

# On cluster2 (after editing the -text to "This is Cluster2")
kubectx kind-cluster2
kubectl apply -f deploy-cluster2.yaml
Create a troubleshooting pod (netshoot) on each cluster to test connectivity from within a pod:
kubectx kind-cluster1
kubectl run test --image=nicolaka/netshoot -- sleep infinity

kubectx kind-cluster2
kubectl run test --image=nicolaka/netshoot -- sleep infinity
From outside the cluster you can curl the Service EXTERNAL-IP. From inside the cluster, DNS resolves the service name (e.g., curl myapp-service).

7) Default behavior (no global service annotation)

By default (no global annotation), services are local-only. A pod querying the service sees only local endpoints. From a test pod in cluster1:
kubectx kind-cluster1
kubectl exec test -- curl myapp-service
# Output:
# "This is Cluster1"
From a test pod in cluster2:
kubectx kind-cluster2
kubectl exec test -- curl myapp-service
# Output:
# "This is Cluster2"
This default is expected even with Cluster Mesh enabled — no global advertisement occurs without the annotation.

8) Enable a Global Service (shared across clusters)

To advertise a Service globally across the mesh so backends in all clusters are available, annotate the Service with service.cilium.io/global: "true". Apply the updated Service manifest to both clusters using the same name and namespace. Service snippet (add the annotation under metadata.annotations):
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    service.cilium.io/global: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80
Apply on both clusters:
kubectx kind-cluster1
kubectl apply -f deploy-cluster1.yaml  # ensure annotation is present

kubectx kind-cluster2
kubectl apply -f deploy-cluster2.yaml  # ensure annotation is present
Verify Cluster Mesh now recognizes the global service:
cilium clustermesh status --context kind-cluster1
# Global services: [ min:1 / avg:1.0 / max:1 ]
Behavior: requests to myapp-service from any cluster will be load-balanced across pods in both clusters.
kubectx kind-cluster1
kubectl exec test -- curl myapp-service
kubectx kind-cluster2
kubectl exec test -- curl myapp-service
# Output examples: "This is Cluster1" or "This is Cluster2"
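To see the cross-cluster distribution rather than a single response, you can tally repeated probes (count_responses is our helper; the kubectl line assumes the netshoot pod from step 6):

```shell
# count_responses N CMD...: run CMD N times and tally the distinct responses,
# most frequent first.
count_responses() {
  local n="$1"; shift
  for ((i = 0; i < n; i++)); do "$@"; done | sort | uniq -c | sort -rn
}
# Against the mesh, from either cluster's context:
#   count_responses 20 kubectl exec test -- curl -s myapp-service
# With global load balancing you should see both "This is Cluster1" and
# "This is Cluster2" in the tally.
```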

9) Disable sharing on a specific cluster (service.cilium.io/shared)

If you want a cluster to keep its local backends private (not advertised to the mesh), annotate its Service with service.cilium.io/shared: "false". Apply this annotation only on the cluster you want to stop sharing from. Example:
metadata:
  name: myapp-service
  annotations:
    service.cilium.io/global: "true"
    service.cilium.io/shared: "false"  # This cluster will NOT share this service to the mesh
Key behaviors:
  • The cluster that sets shared: "false" will not advertise its local endpoints to other clusters.
  • Pods in that cluster still use the global service (they can consume remote backends if available) — shared controls advertising, not consumption.

10) Service affinity: local vs remote

Cilium supports per-cluster affinity to prefer local or remote backends for a global service. Use the annotation service.cilium.io/affinity with values:
  • local — prefer local backends; fallback to remote only if no local backends exist
  • remote — prefer remote backends; fallback to local only if remote backends are unavailable
  • none — default global load balancing (no affinity)
Add the annotation together with service.cilium.io/global: "true":
metadata:
  name: myapp-service
  annotations:
    service.cilium.io/global: "true"
    service.cilium.io/affinity: "local"   # or "remote" or "none"
Example scenario for affinity: "local":
  1. With local backends present, pods in the cluster will hit local pods.
  2. If local backends are scaled to zero, requests automatically fail over to remote cluster backends.
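The fallback rules in the scenario above can be modeled as a tiny selection function (a toy model of the documented behavior, not Cilium code; the endpoint IPs are made up):

```shell
# pick_backends MODE LOCAL_EPS REMOTE_EPS: echo the endpoint set a client in
# this cluster would be load-balanced across under the given affinity mode.
pick_backends() {
  local mode="$1" local_eps="$2" remote_eps="$3"
  case "$mode" in
    local)  [ -n "$local_eps" ]  && echo "$local_eps"  || echo "$remote_eps" ;;
    remote) [ -n "$remote_eps" ] && echo "$remote_eps" || echo "$local_eps" ;;
    *)      echo "$local_eps $remote_eps" ;;   # none: all backends, no preference
  esac
}

pick_backends local "" "10.12.0.5 10.12.0.6"   # no local backends left -> remote set
```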
Demonstration (simulate failure by scaling a deployment to zero):
# On cluster2, scale down to simulate failure
kubectx kind-cluster2
kubectl scale deployment myapp-deployment --replicas=0

# From a test pod in cluster2 (with affinity=local applied on the service), curl should fall back to cluster1:
kubectl exec test -- curl myapp-service
# Output:
# "This is Cluster1"
affinity: "remote" has the inverse preference (prefer remote, fallback to local).

11) Clean up

Delete test deployments and troubleshooting pods when finished. If you need to disconnect the Cluster Mesh, use the cilium clustermesh disable command.
# Remove test resources on each cluster
kubectx kind-cluster1
kubectl delete -f deploy-cluster1.yaml
kubectl delete pod test --ignore-not-found

kubectx kind-cluster2
kubectl delete -f deploy-cluster2.yaml
kubectl delete pod test --ignore-not-found

# To disconnect clusters (if required)
# cilium clustermesh disable --context kind-cluster1
# cilium clustermesh disable --context kind-cluster2
Callouts and reminders:
  • Ensure unique cluster IDs (1..255) and unique pod CIDR pools per cluster before enabling Cluster Mesh.
  • When using LoadBalancer service type on kind clusters, a layer that provides an external IP (e.g., a MetalLB deployment) is required to obtain EXTERNAL-IP addresses.