- Create simple test workloads.
- Expose a Service as a global Service.
- Apply a CiliumNetworkPolicy on one cluster and observe behavior.
- Apply policies cluster-wide and learn how to restrict allowed sources to a specific cluster using the special cluster label.
Prerequisites
| Requirement | Purpose / Notes |
|---|---|
| Two clusters connected via Cilium Cluster Mesh | Example contexts: kind-cluster1, kind-cluster2 |
| kubectl and kubectx | Switch contexts and run commands: https://kubernetes.io/docs/reference/kubectl/, https://github.com/ahmetb/kubectx |
| Cilium installed in both clusters | See Cilium docs: https://docs.cilium.io/en/stable/ |
1) Verify cluster mesh and create test frontend/test pods
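A sketch of the verification and pod-creation commands, assuming the contexts kind-cluster1 and kind-cluster2 from the prerequisites; the pod names frontend and test-pod and the label app=test are illustrative choices, not fixed by this guide:

```shell
# Verify the mesh is healthy from each cluster
cilium clustermesh status --context kind-cluster1
cilium clustermesh status --context kind-cluster2

# Debug "frontend" client and a plain "test" client in cluster1
kubectl --context kind-cluster1 run frontend --image=nicolaka/netshoot --labels="app=frontend" -- sleep infinity
kubectl --context kind-cluster1 run test-pod --image=nicolaka/netshoot --labels="app=test" -- sleep infinity

# Same two clients in cluster2
kubectl --context kind-cluster2 run frontend --image=nicolaka/netshoot --labels="app=frontend" -- sleep infinity
kubectl --context kind-cluster2 run test-pod --image=nicolaka/netshoot --labels="app=test" -- sleep infinity
```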
Create a debug frontend pod (nicolaka/netshoot) and a simple test pod in each cluster; these simulate allowed and denied clients.
2) Deploy myapp and mark the Service as global
Deploy the same Deployment in both clusters using the hashicorp/http-echo image, setting the -text argument differently in each cluster so responses identify which cluster served them. Expected results:
- myapp pods in each cluster returning different text.
- myapp-service annotated as global with local affinity.
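A sketch of the manifests, assuming the names myapp and myapp-service and http-echo's default port 5678; the -text value and the affinity annotation value are choices you can adjust:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: hashicorp/http-echo
          args: ["-text=This is Cluster1"]   # use "This is Cluster2" when applying on cluster2
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    service.cilium.io/global: "true"    # share endpoints across the mesh
    service.cilium.io/affinity: "local" # prefer local endpoints when available
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 5678
```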
3) Create a CiliumNetworkPolicy to allow only frontend -> myapp
Goal: only pods labeled app=frontend should be allowed to connect to pods labeled app=myapp. Expected results after applying the policy on cluster1:
- The frontend pod on cluster1 can curl myapp-service and receives “This is Cluster1”.
- The test pod on cluster1 cannot reach myapp pods in cluster1 (the policy blocks it).
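A sketch of the policy, assuming the label scheme above; the policy name is an illustrative choice:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-to-myapp
spec:
  endpointSelector:
    matchLabels:
      app: myapp          # endpoints this policy protects
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend # only these sources may connect
```

Because a CiliumNetworkPolicy is namespaced, apply it in the namespace where the myapp pods run.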
4) Cluster-scoped policy behavior in a cluster mesh
Key observations you should take away:
- CiliumNetworkPolicy is enforced only on the cluster where it is applied. A policy applied on cluster1 affects only endpoints in cluster1.
- fromEndpoints matchLabels (e.g., app=frontend) do not include cluster identity by default. A frontend pod in cluster2 that carries app=frontend therefore matches the ingress rule on cluster1 and is allowed to reach myapp pods in cluster1.
CiliumNetworkPolicy is cluster-scoped. To enforce the same network policy across all clusters in a mesh, apply the policy manifest to every cluster that hosts endpoints you want protected.
| Policy applied on | Endpoints affected | Cross-cluster label match |
|---|---|---|
| cluster1 only | Only myapp pods in cluster1 | Frontend pods in any cluster with label app=frontend may match and be allowed to reach cluster1 endpoints |
| cluster1 + cluster2 | myapp pods in both clusters | Label-based allow applies only where policy is present; both clusters enforce the rule locally |
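The cross-cluster match can be demonstrated from cluster2, a sketch assuming the pod names from step 1 and http-echo's port 5678 (with local service affinity, curling a cluster1 pod IP directly makes the cross-cluster path explicit):

```shell
# Pick a myapp pod IP in cluster1
MYAPP_IP=$(kubectl --context kind-cluster1 get pod -l app=myapp -o jsonpath='{.items[0].status.podIP}')

# The cluster2 frontend matches app=frontend, so cluster1's policy allows it
kubectl --context kind-cluster2 exec frontend -- curl -s "http://${MYAPP_IP}:5678"

# The cluster2 test pod does not match and is dropped
kubectl --context kind-cluster2 exec test-pod -- curl -s --max-time 3 "http://${MYAPP_IP}:5678"
```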
5) Apply the same policy on cluster2
To enforce the same restriction across clusters, apply the same CiliumNetworkPolicy to cluster2. Expected results:
- test pods on both clusters are blocked from talking to their local myapp pods.
- frontend pods on either cluster are allowed to reach their local myapp pods (label-based allow), since both clusters now enforce the same rule.
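Assuming the policy manifest from step 3 is saved as allow-frontend-to-myapp.yaml (an illustrative filename), rolling it out everywhere is one apply per context:

```shell
kubectl --context kind-cluster1 apply -f allow-frontend-to-myapp.yaml
kubectl --context kind-cluster2 apply -f allow-frontend-to-myapp.yaml
```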
6) Restrict allowed sources to a specific cluster
If you want to allow only frontend pods from a specific cluster (for example, only frontends running in cluster1), add the special label io.cilium.k8s.policy.cluster with the cluster’s configured name to the fromEndpoints matchLabels.
Notes:
- io.cilium.k8s.policy.cluster must match the cluster name configured in Cilium’s Helm values (cluster.name) when you installed Cilium.
- The cluster name is case-sensitive and must match exactly.
Expected behavior:
- Frontend pods on cluster1 are allowed, since their io.cilium.k8s.policy.cluster value matches cluster1.
- Frontend pods on cluster2 are denied, because the policy restricts sources to cluster1.
- test pods remain denied.
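A sketch of the cluster-restricted policy, assuming the Cilium cluster name is cluster1; the policy name is an illustrative choice:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-frontend-from-cluster1
spec:
  endpointSelector:
    matchLabels:
      app: myapp
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
            io.cilium.k8s.policy.cluster: cluster1 # only sources from this cluster
```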
Summary / Best practices
- Cilium network policies are enforced locally in each cluster. In a Cluster Mesh, apply the same policies to every cluster where you want consistent enforcement.
- Label-based fromEndpoints rules (e.g., app=frontend) match pods in any connected cluster whose labels match, since Cluster Mesh shares endpoint identities across clusters, unless you restrict by cluster identity.
- To limit allowed sources to a single cluster, use io.cilium.k8s.policy.cluster: <cluster-name> in the fromEndpoints matchLabels.
- Verify your Cilium cluster name configuration used at install time if you plan to use cluster-scoped label matching.
If you use global Services (service.cilium.io/global: "true"), remember: traffic to endpoints in a remote cluster is subject to the policies applied on that remote cluster. Ensure policies are deployed in every cluster where you expect access controls to be enforced.
Links and References
- Cilium Cluster Mesh guide
- Cilium policy language docs
- Cilium Helm installation and values.yaml
- Kubernetes kubectl documentation
- kubectx repository
- hashicorp/http-echo image
- nicolaka/netshoot (debug image)